WorldWideScience

Sample records for high-performance physics simulations

  1. A High Performance Computing Framework for Physics-based Modeling and Simulation of Military Ground Vehicles

    Science.gov (United States)

    2011-03-25

    Supported in part by the National Science Foundation under grant NSF-CMMI-0840442. (Only fragmented reference-list text was recovered for this record; the full abstract appears in record 2 below.)

  2. A high performance computing framework for physics-based modeling and simulation of military ground vehicles

    Science.gov (United States)

    Negrut, Dan; Lamb, David; Gorsich, David

    2011-06-01

    This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Four components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping, i.e., the spatial subdivision discussed above (pre-processing); (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution, components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid-body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to gain two orders of magnitude in efficiency on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise does not imply its endorsement, recommendation, or favoring by the United States Army.
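
    As an illustration of component (a), the spatial subdivision step, the following minimal sketch (an assumption-laden illustration, not code from the HCT itself) bins rigid bodies into uniform subdomains, each of which would be assigned to one CPU/GPU pair. The function and variable names are hypothetical.

        import numpy as np

        def partition_bodies(positions, domain_min, domain_max, grid=(2, 2, 2)):
            """Map each body to the subdomain (CPU/GPU pair) that owns it."""
            grid = np.asarray(grid)
            extent = (np.asarray(domain_max) - np.asarray(domain_min)) / grid
            cells = np.floor((positions - domain_min) / extent).astype(int)
            cells = np.clip(cells, 0, grid - 1)   # keep boundary bodies in range
            # linear subdomain index for each body
            owner = cells[:, 0] * grid[1] * grid[2] + cells[:, 1] * grid[2] + cells[:, 2]
            return {s: np.where(owner == s)[0] for s in range(grid.prod())}

        positions = np.random.uniform(-1.0, 1.0, size=(10_000, 3))
        subdomains = partition_bodies(positions, domain_min=-1.0, domain_max=1.0)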

  3. Flow simulation and high performance computing

    Science.gov (United States)

    Tezduyar, T.; Aliabadi, S.; Behr, M.; Johnson, A.; Kalro, V.; Litke, M.

    1996-10-01

    Flow simulation is a computational tool for exploring science and technology involving flow applications. It can provide cost-effective alternatives or complements to laboratory experiments, field tests and prototyping. Flow simulation relies heavily on high performance computing (HPC). We view HPC as having two major components. One is advanced algorithms capable of accurately simulating complex, real-world problems. The other is advanced computer hardware and networking with sufficient power, memory and bandwidth to execute those simulations. While HPC enables flow simulation, flow simulation motivates development of novel HPC techniques. This paper focuses on demonstrating that flow simulation has come a long way and is being applied to many complex, real-world problems in different fields of engineering and applied sciences, particularly in aerospace engineering and applied fluid mechanics. Flow simulation has come a long way because HPC has come a long way. This paper also provides a brief review of some of the recently developed HPC methods and tools that have played a major role in bringing flow simulation to where it is today. A number of 3D flow simulations are presented in this paper as examples of the level of computational capability reached with recent HPC methods and hardware. These examples are: flow around a fighter aircraft, flow around two trains passing in a tunnel, large ram-air parachutes, flow over hydraulic structures, contaminant dispersion in a model subway station, airflow past an automobile, multiple spheres falling in a liquid-filled tube, and the dynamics of a paratrooper jumping from a cargo aircraft.

  4. High-performance computing MRI simulations.

    Science.gov (United States)

    Stöcker, Tony; Vahedipour, Kaveh; Pflugfelder, Daniel; Shah, N Jon

    2010-07-01

    A new open-source software project is presented, JEMRIS, the Jülich Extensible MRI Simulator, which provides an MRI sequence development and simulation environment for the MRI community. The development was driven by the desire to achieve generality of simulated three-dimensional MRI experiments reflecting modern MRI systems hardware. The accompanying computational burden is overcome by means of parallel computing. Many aspects are covered that have not hitherto been simultaneously investigated in general MRI simulations, such as parallel transmit and receive, important off-resonance effects, nonlinear gradients, and arbitrary spatiotemporal parameter variations at different levels. The latter can be used to simulate various types of motion, for instance. The JEMRIS user interface is simple to use, yet it imposes few limitations. MRI sequences with arbitrary waveforms and complex interdependent modules are modeled in a graphical user interface-based environment requiring no further programming. This manuscript describes the concepts, methods, and performance of the software. Examples of novel simulation results in active fields of MRI research are given.

  5. High performance simulations for transformational earthquake risk assessments

    Science.gov (United States)

    McCallen, D. B.; Larsen, S. C.

    2009-07-01

    Earthquakes occurring around the world are responsible for extensive loss of life and infrastructure damage. On average, 1100 earthquakes with significant damage potential occur world-wide per year, and a major societal challenge is to design a human environment that contains appropriate earthquake resistance. Design of critical infrastructure such as large buildings, bridges, industrial facilities and nuclear power plants in seismically active regions is a significant scientific and engineering challenge that encompasses the multiple disciplines of geophysics, geotechnical and structural engineering. Because of the great complexities in earthquake physical processes, traditional approaches to seismic hazard assessment have relied heavily on historical earthquake observations. In this approach, observational data from many locations is homogenized into an empirical assessment of earthquake hazard at any specific site of interest. With major advancements in high performance computing platforms and algorithms, it is now possible to utilize physics-based predictive models to gain enhanced insight about site-specific earthquake ground motions and infrastructure response. This paper discusses recent advancements in geophysics and infrastructure simulations and future challenges in implementing advanced simulations for both earthquake hazard (future ground motions) and earthquake risk (infrastructure response and damage) assessments.

  6. High-Performance Beam Simulator for the LANSCE Linac

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Xiaoying [Los Alamos National Laboratory; Rybarcyk, Lawrence J. [Los Alamos National Laboratory; Baily, Scott A. [Los Alamos National Laboratory

    2012-05-14

    A high performance multiparticle tracking simulator is currently under development at Los Alamos. The heart of the simulator is based upon the beam dynamics simulation algorithms of the PARMILA code, but implemented in C++ on Graphics Processing Unit (GPU) hardware using NVIDIA's CUDA platform. Linac operating set points are provided to the simulator via the EPICS control system so that changes in the real-time linac parameters are tracked and the simulation results updated automatically. This simulator will provide valuable insight into the beam dynamics along a linac in pseudo real-time, especially where direct measurements of the beam properties do not exist. Details regarding the approach, benefits and performance are presented.

  7. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility to run simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
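
    As a toy illustration of the Monte Carlo simulation style surveyed in the paper (this is not code from the paper, and the physical numbers are invented), the sketch below estimates the fraction of unstable particles surviving past a detector plane under an exponential decay law, together with its statistical uncertainty.

        import numpy as np

        rng = np.random.default_rng(seed=1)
        decay_length = 3.1        # assumed mean decay length in metres
        detector_z = 5.0          # assumed detector distance in metres
        n_events = 1_000_000

        # sample decay positions z ~ Exp(mean=decay_length) and count survivors
        z = rng.exponential(decay_length, size=n_events)
        survival = np.mean(z > detector_z)
        # binomial (Monte Carlo) uncertainty of the estimated fraction
        error = np.sqrt(survival * (1.0 - survival) / n_events)

        print(f"survival fraction = {survival:.5f} +/- {error:.5f}")
        print(f"analytic value    = {np.exp(-detector_z / decay_length):.5f}")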

  8. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S) users. Performing large-scale, massively...

  9. High performance computing system for flight simulation at NASA Langley

    Science.gov (United States)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1991-01-01

    The computer architecture and components used in the NASA Langley Advanced Real-Time Simulation System (ARTSS) are briefly described and illustrated with diagrams and graphs. Particular attention is given to the advanced Convex C220 processing units, the UNIX-based operating system, the software interface to the fiber-optic-linked Computer Automated Measurement and Control system, configuration-management and real-time supervisor software, ARTSS hardware modifications, and the current implementation status. Simulation applications considered include the Transport Systems Research Vehicle, the Differential Maneuvering Simulator, the General Aviation Simulator, and the Visual Motion Simulator.

  10. High-performance simulations for atmospheric pressure plasma reactor

    Science.gov (United States)

    Chugunov, Svyatoslav

    Plasma-assisted processing and deposition of materials is an important component of modern industrial applications, with plasma reactors accounting for 30% to 40% of manufacturing steps in microelectronics production. The development of new flexible electronics increases demands for efficient high-throughput deposition methods and roll-to-roll processing of materials. The current work represents an attempt at practical design and numerical modeling of a plasma enhanced chemical vapor deposition system. The system utilizes plasma at standard pressure and temperature to activate a chemical precursor for protective coatings. A specially designed linear plasma head, consisting of two parallel plates with electrodes placed in a parallel arrangement, is used to resolve the clogging issues of currently available commercial plasma heads, as well as to increase the flow rate of the processed chemicals and to enhance the uniformity of the deposition. A test system is built and discussed in this work. In order to improve the operating conditions of the setup and the quality of the deposited material, we perform numerical modeling of the plasma system. The theoretical and numerical models presented in this work comprehensively describe plasma generation, recombination, and advection in a channel of arbitrary geometry. The number density of plasma species, their energy content, the electric field, and rate parameters are accurately calculated and analyzed in this work. Some interesting engineering outcomes are discussed in connection with the proposed setup. The numerical model is implemented with high-performance parallel techniques and evaluated on a cluster for parallel calculations. The typical performance increase, calculation speed-up, parallel fraction of the code, and overall efficiency of the parallel implementation are discussed in detail.
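
    The speed-up and parallel-fraction figures mentioned above are conventionally related through Amdahl's law, S(n) = 1 / ((1 - p) + p/n). The sketch below (an illustration with invented timings, not the author's analysis code) infers the parallel fraction p from one serial and one parallel run and then bounds the achievable speed-up on larger clusters.

        def parallel_fraction(t_serial, t_parallel, n_procs):
            """Infer Amdahl's parallel fraction p from two timed runs."""
            speedup = t_serial / t_parallel
            # solve S = 1 / ((1 - p) + p / n) for p
            return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n_procs)

        t1, t16 = 3600.0, 410.0   # assumed wall-clock times in seconds
        p = parallel_fraction(t1, t16, 16)
        print(f"speed-up on 16 cores: {t1 / t16:.2f}x, parallel fraction: {p:.3f}")
        for n in (64, 256, 1024):
            # Amdahl's law bound on the speed-up achievable with n cores
            print(f"predicted S({n}) = {1.0 / ((1 - p) + p / n):.1f}x")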

  11. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while the less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
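
    The paper's performance-critical layer is Fortran; purely to illustrate the kind of kernel that lives in such a layer, the sketch below implements the core PIC step, cloud-in-cell charge deposition, in NumPy. Names and parameters are illustrative assumptions, not the paper's code.

        import numpy as np

        def deposit_charge(x, q, n_cells, dx):
            """1D cloud-in-cell deposition: each particle's charge is shared
            linearly between its two neighbouring grid points (periodic grid)."""
            rho = np.zeros(n_cells)
            cell = np.floor(x / dx).astype(int) % n_cells
            w = x / dx - np.floor(x / dx)           # fractional offset in the cell
            np.add.at(rho, cell, q * (1.0 - w))
            np.add.at(rho, (cell + 1) % n_cells, q * w)
            return rho / dx                          # charge density

        x = np.random.uniform(0.0, 1.0, size=100_000)   # particle positions
        rho = deposit_charge(x, q=1.0 / x.size, n_cells=64, dx=1.0 / 64)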

  12. High performance Python for direct numerical simulations of turbulent flows

    Science.gov (United States)

    Mortensen, Mikael; Langtangen, Hans Petter

    2016-06-01

    Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an invaluable research tool in fluid dynamics. Still, there are few publicly available research codes and, due to the heavy number crunching implied, available codes are usually written in low-level languages such as C/C++ or Fortran. In this paper we describe a pure scientific Python pseudo-spectral DNS code that nearly matches the performance of C++ for thousands of processors and billions of unknowns. We also describe a version optimized through Cython that is found to match the speed of C++. The solvers are written from scratch in Python, including the mesh, the MPI domain decomposition, and the temporal integrators. The solvers have been verified and benchmarked on the Shaheen supercomputer at the KAUST supercomputing laboratory, and we are able to show very good scaling up to several thousand cores. A very important part of the implementation is the mesh decomposition (we implement both slab and pencil decompositions) and 3D parallel Fast Fourier Transforms (FFT). The mesh decomposition and FFT routines have been implemented in Python using serial FFT routines (NumPy, pyFFTW or any other serial FFT module), NumPy array manipulations, and MPI communications handled by MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT in Python for a slab mesh decomposition using 4 lines of compact Python code, for which the parallel performance on Shaheen is found to be slightly better than similar routines provided through the FFTW library. For a pencil mesh decomposition, 7 lines of code are required to execute a transform.
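
    A hedged reconstruction of the slab-decomposed parallel FFT idea (this follows the abstract's description, not necessarily the authors' exact four lines): each rank holds N/P planes of an N^3 grid, performs 2D FFTs over its two complete axes, exchanges chunks with Alltoall, and finishes with a 1D FFT along the now-local axis. Run under MPI, e.g. mpiexec -n 4 python slab_fft.py; N is assumed divisible by the number of ranks.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        P = comm.Get_size()             # number of slabs (MPI ranks)
        N = 64                          # global grid size (assumed divisible by P)
        Np = N // P                     # slab thickness per rank

        u = np.random.rand(Np, N, N)    # local slab of the real-space field

        def fftn_slab(u):
            """3D FFT via slab decomposition: local 2D FFTs, a global
            transpose, then the final 1D FFT along the redistributed axis."""
            u_hat = np.fft.fftn(u, axes=(1, 2))
            # split axis 1 into P chunks and exchange them, so each rank
            # ends up owning complete lines along axis 0
            send = np.ascontiguousarray(np.moveaxis(u_hat.reshape(Np, P, Np, N), 1, 0))
            recv = np.empty_like(send)
            comm.Alltoall(send, recv)
            return np.fft.fft(recv.reshape(N, Np, N), axis=0)

        u_hat = fftn_slab(u)            # transposed layout: axis 0 now complete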

  13. Advanced modeling and simulation to design and manufacture high performance and reliable advanced microelectronics and microsystems.

    Energy Technology Data Exchange (ETDEWEB)

    Nettleship, Ian (University of Pittsburgh, Pittsburgh, PA); Hinklin, Thomas; Holcomb, David Joseph; Tandon, Rajan; Arguello, Jose Guadalupe, Jr.; Dempsey, James Franklin; Ewsuk, Kevin Gregory; Neilsen, Michael K.; Lanagan, Michael (Pennsylvania State University, University Park, PA)

    2007-07-01

    An interdisciplinary team of scientists and engineers having broad expertise in materials processing and properties, materials characterization, and computational mechanics was assembled to develop science-based modeling/simulation technology to design and reproducibly manufacture high performance and reliable, complex microelectronics and microsystems. The team's efforts focused on defining and developing a science-based infrastructure to enable predictive compaction, sintering, stress, and thermomechanical modeling in "real systems", including: (1) developing techniques for determining the materials properties and constitutive behavior required for modeling; (2) developing new, improved/updated models and modeling capabilities; (3) ensuring that models are representative of the physical phenomena being simulated; and (4) assessing existing modeling capabilities to identify advances necessary to facilitate the practical application of Sandia's predictive modeling technology.

  14. Reusable Object-Oriented Solutions for Numerical Simulation of PDEs in a High Performance Environment

    Directory of Open Access Journals (Sweden)

    Andrea Lani

    2006-01-01

    Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability, in order to ease the integration of new functionalities and algorithms. While designing such frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms, respectively.

  15. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    KAUST Repository

    Bao, Kai

    2015-10-26

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example for coupling reservoir simulation and MD, although the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well-demonstrated with several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of

  16. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources

    Science.gov (United States)

    Boswell, Paul G.; Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.

    2013-01-01

    High-performance liquid chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet many of the existing HPLC simulators are either too expensive, outdated, or lack many important features necessary to make them widely useful for educational purposes. Here, a free, open-source HPLC simulator is…

  17. Student Engagement in High-Performing Schools: Relationships to Mental and Physical Health

    Science.gov (United States)

    Conner, Jerusha O.; Pope, Denise

    2014-01-01

    This chapter examines how the three most common types of engagement found among adolescents attending high-performing high schools relate to indicators of mental and physical health. [This article originally appeared as NSSE Yearbook Vol. 113, No. 1.]

  18. High performance MRI simulations of motion on multi-GPU systems

    Science.gov (United States)

    2014-01-01

    Background: MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Methods: Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Results: Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. Conclusions: MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer

  19. High performance MRI simulations of motion on multi-GPU systems.

    Science.gov (United States)

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. The incorporation

  20. Power grid simulation applications developed using the GridPACK™ high performance computing framework

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shuangshuang; Chen, Yousu; Diao, Ruisheng; Huang, Zhenyu (Henry); Perkins, William; Palmer, Bruce

    2016-12-01

    This paper describes the GridPACK™ software framework for developing power grid simulations that can run on high performance computing platforms, with several example applications (dynamic simulation, static contingency analysis, and dynamic contingency analysis) that have been developed using GridPACK.

  1. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    Science.gov (United States)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are the two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
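
    For concreteness, the 13-point stencil referred to above is the fourth-order finite-difference stencil in 3D: a five-point second difference per axis, sharing the centre point. The sketch below (a plain NumPy illustration, not AWP-ODC's tuned GPU kernel) applies it as a Laplacian.

        import numpy as np

        def laplacian_13pt(u, h):
            """Fourth-order 3D Laplacian on interior points (13-point stencil)."""
            c = slice(2, -2)                          # interior of the padded grid
            m1, p1 = slice(1, -3), slice(3, -1)       # +/-1 offsets
            m2, p2 = slice(None, -4), slice(4, None)  # +/-2 offsets
            lap = -3.0 * (30.0 / 12.0) * u[c, c, c]
            lap += (16.0 / 12.0) * (u[m1, c, c] + u[p1, c, c] +
                                    u[c, m1, c] + u[c, p1, c] +
                                    u[c, c, m1] + u[c, c, p1])
            lap -= (1.0 / 12.0) * (u[m2, c, c] + u[p2, c, c] +
                                   u[c, m2, c] + u[c, p2, c] +
                                   u[c, c, m2] + u[c, c, p2])
            return lap / (h * h)

        u = np.random.rand(68, 68, 68)   # 64^3 interior plus a 2-cell halo
        lap = laplacian_13pt(u, h=1.0 / 64)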

  2. H5hut: A High-Performance I/O Library for Particle-based Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Adelmann, Andreas; Bethel, E. Wes; Gsell, Achim; Oswald, Benedikt; Prabhat,

    2010-09-24

    Particle-based simulations running on large high-performance computing systems over many time steps can generate an enormous amount of particle- and field-based data for post-processing and analysis. Achieving high-performance I/O for this data, effectively managing it on disk, and interfacing it with analysis and visualization tools can be challenging, especially for domain scientists who do not have I/O and data management expertise. We present the H5hut library, an implementation of several data models for particle-based simulations that encapsulates the complexity of HDF5 and is simple to use, yet does not compromise performance.
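
    H5hut itself is a C/C++ library layered on HDF5; purely to illustrate the per-timestep particle data model it encapsulates, here is an equivalent layout written with h5py (the group and dataset names are illustrative assumptions, not H5hut's exact schema).

        import h5py
        import numpy as np

        n_particles = 100_000
        with h5py.File("particles.h5", "w") as f:
            for step in range(10):
                g = f.create_group(f"Step#{step}")   # one group per time step
                # one flat 1D dataset per particle attribute
                g.create_dataset("x", data=np.random.rand(n_particles))
                g.create_dataset("y", data=np.random.rand(n_particles))
                g.create_dataset("z", data=np.random.rand(n_particles))
                g.create_dataset("id", data=np.arange(n_particles))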

  3. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    Science.gov (United States)

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

    NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphics Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction has been carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other with contact, friction, and cohesional forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time.

  4. LIAR -- A computer program for the modeling and simulation of high performance linacs

    Energy Technology Data Exchange (ETDEWEB)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Amongst others, it addresses the needs of state-of-the-art linear colliders where low emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straight-forward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm.

  5. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    Science.gov (United States)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications". The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to be able to effectively model antenna problems; utilizing lessons learned in the high-order/spectral solution of swirling 3D jets in the electromagnetics project; transitioning a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; developing and demonstrating improved radiation-absorbing boundary conditions for high-order CEM; and extending the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  6. A High-Performance Method for Simulating Surface Rainfall-Runoff Dynamics Using Particle System

    Science.gov (United States)

    Zhang, Fangli; Zhou, Qiming; Li, Qingquan; Wu, Guofeng; Liu, Jun

    2016-06-01

    The simulation of the rainfall-runoff process is essential for disaster emergency response and sustainable development. One common disadvantage of the existing conceptual hydrological models is that they are highly dependent upon specific spatial-temporal contexts. Meanwhile, due to the inter-dependence of adjacent flow paths, it is still difficult for RS- or GIS-supported distributed hydrological models to achieve high-performance application in the real world. As an attempt to improve the performance efficiencies of those models, this study presents a high-performance rainfall-runoff simulating framework based on the flow path network and a separate particle system. The vector-based flow path lines are topologically linked to constrain the movements of independent raindrop particles. A separate particle system, representing surface runoff, is used to model the precipitation process and simulate surface flow dynamics. The trajectory of each particle is constrained by the flow path network and can be tracked by concurrent processors in a parallel cluster system. The results of a speedup experiment show that the proposed framework can significantly improve simulation performance simply by adding independent processors. By separating the catchment elements and the accumulated water, this study provides an extensible solution for improving the existing distributed hydrological models. Further, a parallel modeling and simulation platform needs to be developed and validated before being applied to the monitoring of real-world hydrologic processes.
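
    A minimal sketch of this idea (an illustration under assumed data, not the authors' code): rain drops are independent particles whose movement is constrained to a topologically linked flow-path network, so every particle can be advanced concurrently.

        import numpy as np

        # flow-path network: node -> downstream node (-1 marks the outlet)
        downstream = np.array([1, 2, 5, 4, 5, -1])

        def advance(particles, steps=1):
            """Move every rain-drop particle one hop along its flow path per step."""
            for _ in range(steps):
                nxt = downstream[particles]
                particles = np.where(nxt >= 0, nxt, particles)  # outlets hold water
            return particles

        rng = np.random.default_rng(0)
        particles = rng.integers(0, len(downstream), size=1_000_000)  # rainfall
        particles = advance(particles, steps=3)
        runoff_at_outlet = np.count_nonzero(particles == 5)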

  7. Usage of the Reduced Basis Method and High-Performance Simulations in Geosciences

    Science.gov (United States)

    Degen, Denise; Veroy, Karen; Wellmann, Florian

    2017-04-01

    The field of Computational Geosciences often encounters the "curse" of dimensionality, since it aims at analyzing complex coupled processes over a large domain in space and time. These high-dimensional problems are computationally intensive, requiring High-Performance Computing infrastructures. However, constructing parallelized problems is often not trivial. Therefore, we present a software implementation within the Multiphysics Object-Oriented Simulation Environment (MOOSE) offering built-in parallelization. Even with the computational potential of High-Performance Computers, it may be prohibitive to perform model calibrations or inversions for a reasonably large number of parameters, since the geoscientific forward simulations can be very demanding. Hence, one desires a method that reduces the dimensionality of the problem while retaining the accuracy within a certain tolerance. Model order reduction techniques are a way to achieve this. We present the Reduced Basis (RB) Method, a model order reduction technique aiming at considerably reducing the number of degrees of freedom. We show how the reduction in dimension results in a significant speed-up, which in turn allows one to perform sensitivity analyses and parameter estimations, to analyze more complicated structures, or to obtain results in real time. In order to demonstrate the powerful combination of the Reduced Basis Method and High-Performance Computing, we investigate the method's scalability and parallel efficiency, two measures of cluster performance, using the example of a geothermal conduction problem.
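
    The essence of the Reduced Basis idea can be shown in a few lines (a generic illustration, not the MOOSE implementation): full-order "snapshot" solutions computed offline span a small basis V, and online one solves only the tiny projected system V^T A(mu) V.

        import numpy as np

        n = 2000                          # full-order degrees of freedom
        def A(mu):                        # parameterized conduction-type operator
            d = np.full(n, 2.0 + mu)
            off = np.full(n - 1, -1.0)
            return np.diag(d) + np.diag(off, 1) + np.diag(off, -1)
        b = np.ones(n)

        # offline: snapshots at a few training parameters -> orthonormal basis V
        snaps = np.column_stack([np.linalg.solve(A(mu), b) for mu in (0.1, 1.0, 10.0)])
        V, _ = np.linalg.qr(snaps)        # V has shape n x 3

        # online: only a 3 x 3 solve for a new parameter value
        mu_new = 3.0
        x_rb = V @ np.linalg.solve(V.T @ A(mu_new) @ V, V.T @ b)
        err = np.linalg.norm(x_rb - np.linalg.solve(A(mu_new), b)) / np.linalg.norm(b)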

  8. High Performance Wideband CMOS CCI and its Application in Inductance Simulator Design

    Directory of Open Access Journals (Sweden)

    ARSLAN, E.

    2012-08-01

    In this paper, a new, differential-pair-based, low-voltage, high performance and wideband CMOS first generation current conveyor (CCI) is proposed. The proposed CCI has high voltage swings on ports X and Y and very low equivalent impedance on port X due to a super source follower configuration. It also has high voltage swings (close to the supply voltages) on its input and output ports and wideband current and voltage transfer ratios. Furthermore, two novel grounded inductance simulator circuits are proposed as application examples. Using HSpice, it is shown that the simulation results of the proposed CCI, and also of the presented inductance simulators, are in very good agreement with the expected ones.

  9. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation.

    Science.gov (United States)

    Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S

    2013-01-08

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
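
    A minimal usage sketch of OpenMM's layered Python API (module paths follow modern releases; in the version 4 era the package lived under simtk): two particles joined by a harmonic bond, integrated without any hardware-specific code.

        import openmm
        import openmm.unit as unit

        system = openmm.System()
        system.addParticle(12.0)                      # masses in amu
        system.addParticle(12.0)
        bond = openmm.HarmonicBondForce()             # an "energy term" component
        bond.addBond(0, 1, 0.15 * unit.nanometer,
                     1000.0 * unit.kilojoule_per_mole / unit.nanometer**2)
        system.addForce(bond)

        integrator = openmm.VerletIntegrator(1.0 * unit.femtosecond)
        context = openmm.Context(system, integrator)  # selects an available platform
        context.setPositions([openmm.Vec3(0.0, 0.0, 0.0),
                              openmm.Vec3(0.2, 0.0, 0.0)] * unit.nanometer)

        integrator.step(1000)
        state = context.getState(getEnergy=True)
        print(state.getPotentialEnergy())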

  10. High Performance Computing and Storage Requirements for Nuclear Physics: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wasserman, Harvey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-04-30

    In April 2014, NERSC, ASCR, and the DOE Office of Nuclear Physics (NP) held a review to characterize high performance computing (HPC) and storage requirements for NP research through 2017. This review is the 12th in a series of reviews held by NERSC and Office of Science program offices that began in 2009. It is the second for NP, and the final in the second round of reviews that covered the six Office of Science program offices. This report is the result of that review.

  11. A Queue Simulation Tool for a High Performance Scientific Computing Center

    Science.gov (United States)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
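
    The NCCS tool is locally developed and not public; the skeleton below is only a toy of the same flavour: a discrete event simulation in which jobs arrive, queue for free CPUs, run, and release them, driven by a time-ordered event heap. All rates and sizes are invented.

        import heapq, random

        random.seed(0)
        TOTAL_CPUS = 512
        free_cpus = TOTAL_CPUS
        events, t = [], 0.0
        for _ in range(200):                     # job arrivals: (time, kind, cpus)
            t += random.expovariate(1 / 30.0)    # mean 30 s between arrivals
            events.append((t, "arrive", random.choice((64, 128, 256))))
        heapq.heapify(events)
        waiting, finished = [], 0

        while events:
            t, kind, cpus = heapq.heappop(events)
            if kind == "arrive":
                waiting.append(cpus)
            else:                                # "finish": release the CPUs
                free_cpus += cpus
                finished += 1
            while waiting and waiting[0] <= free_cpus:   # start queued jobs (FIFO)
                req = waiting.pop(0)
                free_cpus -= req
                run = random.expovariate(1 / 3600.0)     # mean 1 h runtime
                heapq.heappush(events, (t + run, "finish", req))

        print(f"jobs completed: {finished}, still queued: {len(waiting)}")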

  12. Comparison of High-Performance Fiber Materials Properties in Simulated and Actual Space Environments

    Science.gov (United States)

    Finckernor, M. M.

    2017-01-01

    A variety of high-performance fibers, including Kevlar, Nomex, Vectran, and Spectra, have been tested for durability in the space environment, mostly the low Earth orbital environment. These materials have been tested in yarn, tether/cable, and fabric forms. Some material samples were tested in a simulated space environment, such as the Atomic Oxygen Beam Facility and solar simulators in the laboratory. Other samples were flown on the International Space Station as part of the Materials on International Space Station Experiment. Mass loss due to atomic oxygen erosion and optical property changes due to ultraviolet radiation degradation are given. Tensile test results are also presented, including where moisture loss in a vacuum had an impact on tensile strength.

  13. Methodology and application of high performance electrostatic field simulation in the KATRIN experiment

    Science.gov (United States)

    Corona, Thomas

    The Karlsruhe Tritium Neutrino (KATRIN) experiment is a tritium beta decay experiment designed to make a direct, model-independent measurement of the electron neutrino mass. The experimental apparatus employs strong (O(1 T)) magnetostatic and (O(10^5 V/m)) electrostatic fields in regions of ultra-high (O(10^-11 mbar)) vacuum in order to obtain precise measurements of the electron energy spectrum near the endpoint of tritium beta decay. The electrostatic fields in KATRIN are formed by multiscale electrode geometries, necessitating the development of high performance field simulation software. To this end, we present a Boundary Element Method (BEM) with analytic boundary integral terms in conjunction with the Robin Hood linear algebraic solver, a nonstationary successive subspace correction (SSC) method. We describe an implementation of these techniques for high performance computing environments in the software KEMField, along with the geometry modeling and discretization software KGeoBag. We detail the application of KEMField and KGeoBag to KATRIN's spectrometer and detector sections, and demonstrate their use in furthering several of KATRIN's scientific goals. Finally, we present the results of a measurement designed to probe the electrostatic profile of KATRIN's main spectrometer in comparison to simulated results.

  14. Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation

    Science.gov (United States)

    2016-11-01

    …multicomponent fluids such as n-dodecane. The long-term goal of this research is to incorporate these models into future simulations of turbulent jet… Keywords: high performance computing, fuel, spray, large eddy simulation, computational fluid dynamics.

  15. A configurable distributed high-performance computing framework for satellite's TDI-CCD imaging simulation

    Science.gov (United States)

    Xue, Bo; Mao, Bingjing; Chen, Xiaomei; Ni, Guoqiang

    2010-11-01

    This paper presents a configurable distributed high performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low expense. Imaging simulation for a satellite-mounted TDI-CCD comprises four processes: 1) degradation caused by the atmosphere, 2) degradation caused by the optical system, 3) degradation and re-sampling caused by the TDI-CCD electronics, and 4) data integration. Processes 1) to 3) use a variety of data-intensive algorithms such as FFT, convolution and Lagrange interpolation, which require powerful CPUs. Even with an Intel Xeon X5550 processor, the conventional serial processing method takes more than 30 hours for a simulation whose result image size is 1500 x 1462. A literature survey found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation, based on WCF[1], that uses a Client/Server (C/S) architecture and harnesses the free CPU resources in a LAN. The server pushes the tasks of processes 1) to 3) onto that free computing capacity, achieving HPC at low cost. In a computing experiment with 4 symmetric nodes and 1 server, this framework reduced simulation time by about 74%; adding more asymmetric nodes to the computing network decreased the time correspondingly. In conclusion, this framework can provide essentially unlimited computing capacity provided the network and the task-management server can sustain it, and it offers a new HPC solution for TDI-CCD imaging simulation and similar applications.

  16. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Top, Philip [Lawrence Livermore National Laboratories; Smith, Steve [Lawrence Livermore National Laboratories; Daily, Jeff [Pacific Northwest National Laboratory; Fuller, Jason [Pacific Northwest National Laboratory

    2017-10-12

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.

  17. Towards High Performance Discrete-Event Simulations of Smart Electric Grids

    Energy Technology Data Exchange (ETDEWEB)

    Perumalla, Kalyan S [ORNL; Nutaro, James J [ORNL; Yoginath, Srikanth B [ORNL

    2011-01-01

    Future electric grid technology is envisioned on the notion of a smart grid in which responsive end-user devices play an integral part of the transmission and distribution control systems. Detailed simulation is often the primary choice in analyzing small network designs, and the only choice in analyzing large-scale electric network designs. Here, we identify and articulate the high-performance computing needs underlying high-resolution discrete event simulation of smart electric grid operation in large network scenarios such as the entire Eastern Interconnect. We focus on the simulator's most computationally intensive operation, namely, the dynamic numerical solution for the electric grid state, for both time-integration and event-detection. We explore solution approaches using general-purpose dense and sparse solvers, and propose a scalable solver specialized for the sparse structures of actual electric networks. Based on experiments with an implementation in the THYME simulator, we identify performance issues and possible solution approaches for smart grid experimentation in the large.

  18. A Grid-Based Cyber Infrastructure for High Performance Chemical Dynamics Simulations

    Directory of Open Access Journals (Sweden)

    Khadka Prashant

    2008-10-01

    Chemical dynamics simulation is an effective means to study atomic-level motions of molecules, collections of molecules, liquids, surfaces, interfaces of materials, and chemical reactions. To make chemical dynamics simulations globally accessible to a broad range of users, a cyber infrastructure was recently developed that provides an online portal to VENUS, a popular chemical dynamics simulation program package, allowing people to submit simulation jobs to be executed on the web server machine. In this paper, we report new developments of the cyber infrastructure that improve its quality of service: the submitted simulation jobs are now dispatched from the web server machine onto a cluster of workstations for execution, and an animation tool optimized for animating simulation results has been added. Separating the server machine from the simulation-running machines improves service quality by increasing the capacity to serve more requests simultaneously with reduced web response time, and allows the execution of large-scale, time-consuming simulation jobs on the powerful workstation cluster. With the animation tool, the cyber infrastructure automatically converts, upon the user's selection, simulation results into an animation file that can be viewed in ordinary web browsers without installing any special software on the user's computer. Since animation is essential for understanding the results of chemical dynamics simulations, this animation capability provides a better way to understand the details of the simulated chemical dynamics. By combining computing resources at locations under different administrative controls, this cyber infrastructure constitutes a grid environment providing physically and administratively distributed functionalities through a single easy-to-use online portal.

  19. GPU-based high performance Monte Carlo simulation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPUs) are high performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computation, their application was extended to other fields beyond computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single- and multicore) approaches were developed and applied to a simple, but time-consuming, problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  20. Using Shared Memory As A Cache In High Performance Cellular Automata Water Flow Simulations

    Directory of Open Access Journals (Sweden)

    Paweł Topa

    2013-01-01

    Graphics processors (GPUs, Graphics Processing Units) have recently gained a lot of interest as an efficient platform for general-purpose computation. The Cellular Automata approach, which is inherently parallel, gives the opportunity to implement high performance simulations. This paper presents how shared memory in a GPU can be used to improve performance for Cellular Automata models. In our previous works, we proposed algorithms for a Cellular Automata model that use only the GPU's global memory. Using a profiling tool, we found bottlenecks in our approach. We introduce modifications that take advantage of fast shared memory. The modified algorithm is presented in detail, and the results of profiling and performance tests are demonstrated. Our unique achievement is comparing the efficiency of the same algorithm working with global and with shared memory.
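
    An illustrative Numba CUDA kernel (a sketch under assumptions, not the paper's code) of the technique described above: each thread block first stages its tile of the automaton, plus a one-cell halo, in fast shared memory, then performs all neighbour reads from there instead of from slow global memory.

        import numpy as np
        from numba import cuda, float32

        TPB = 16  # tile width; blocks are TPB x TPB threads

        @cuda.jit
        def ca_step(grid_in, grid_out):
            tile = cuda.shared.array(shape=(TPB + 2, TPB + 2), dtype=float32)
            x, y = cuda.grid(2)
            tx, ty = cuda.threadIdx.x + 1, cuda.threadIdx.y + 1
            nx, ny = grid_in.shape
            inside = x < nx and y < ny
            if inside:
                tile[tx, ty] = grid_in[x, y]             # cooperative tile load
                if cuda.threadIdx.x == 0 and x > 0:
                    tile[0, ty] = grid_in[x - 1, y]       # halo cells
                if cuda.threadIdx.x == TPB - 1 and x < nx - 1:
                    tile[TPB + 1, ty] = grid_in[x + 1, y]
                if cuda.threadIdx.y == 0 and y > 0:
                    tile[tx, 0] = grid_in[x, y - 1]
                if cuda.threadIdx.y == TPB - 1 and y < ny - 1:
                    tile[tx, TPB + 1] = grid_in[x, y + 1]
            cuda.syncthreads()                            # every thread reaches this
            if inside and 0 < x < nx - 1 and 0 < y < ny - 1:
                # toy CA rule: water level relaxes toward the 4-neighbour mean
                grid_out[x, y] = 0.25 * (tile[tx - 1, ty] + tile[tx + 1, ty] +
                                         tile[tx, ty - 1] + tile[tx, ty + 1])

        water = np.random.rand(512, 512).astype(np.float32)
        out = np.zeros_like(water)
        ca_step[(32, 32), (TPB, TPB)](water, out)         # 32x32 blocks cover 512x512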

  1. Multi-scale high-performance fluid flow: Simulations through porous media

    KAUST Repository

    Perović, Nevena

    2016-08-03

    Computational fluid dynamics (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier-Stokes equations and Darcy's law is described, followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ), are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses, are discussed in detail.

  2. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    Science.gov (United States)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance computing code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structures, implemented the data initialization and exchange between the computing nodes and the core solving module, and employed hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from the parallel computation with those from the sequential computation (original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance achieved with THC-MP on parallel computing facilities.
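
    The communication pattern at the heart of such domain-decomposed codes can be sketched with mpi4py (a hypothetical illustration, not THC-MP's actual data structures): each rank owns a strip of the grid and exchanges one layer of ghost cells with its neighbours before every solver step.

        # run with e.g.: mpiexec -n 4 python halo_exchange.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_local = 100                          # interior cells owned by this rank
        u = np.zeros(n_local + 2)              # two extra ghost cells
        u[1:-1] = float(rank)                  # toy initial condition

        left = rank - 1 if rank > 0 else MPI.PROC_NULL
        right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # fill the left ghost from the left neighbour, sending our rightmost cell on
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        # and the mirror exchange for the right ghost
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)

        # one explicit diffusion step using the freshly filled ghost cells
        u[1:-1] += 0.1 * (u[:-2] - 2.0 * u[1:-1] + u[2:])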

  3. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    KAUST Repository

    Bao, Kai

    2013-01-01

    The present work describes a parallel computational framework for CO2 sequestration simulation that couples reservoir simulation and molecular dynamics (MD) on massively parallel HPC systems. In this framework, a parallel reservoir simulator, the Reservoir Simulation Toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, while molecular dynamics simulations are performed to provide the required physical parameters. Numerous technologies from different fields are employed to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted reservoirs and deep saline aquifers, which has been proposed as one of the most attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. To solve such problems effectively, fine grids and accurate prediction of the properties of fluid mixtures are essential. In this work, CO2 sequestration is presented as a first example of coupling reservoir simulation and molecular dynamics, while the framework can be extended naturally to full multiphase, multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed on the massively parallel HPC systems. The performance and capacity of the proposed framework are well demonstrated with several experiments with hundreds of millions to a billion cells. To the best of our knowledge, the work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Due to the complexity of the subsurface systems

  4. Investigating the Mobility of Light Autonomous Tracked Vehicles using a High Performance Computing Simulation Capability

    Science.gov (United States)

    Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco

    2012-01-01

    This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles that weigh less than 100 lb and move on deformable, rough terrain that is feature-rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high-mobility lightweight robot, where objects such as a boulder or a ditch that could be considered small for a truck or tank become major obstacles that can impede the mobility of the light autonomous vehicle and negatively impact the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine. Chrono::Engine relies on parallel execution on Graphics Processing Unit (GPU) cards.

  5. High Fidelity Simulation of Liquid Jet in Cross-flow Using High Performance Computing

    Science.gov (United States)

    Soteriou, Marios; Li, Xiaoyi

    2011-11-01

    High-fidelity, first-principles simulation of the atomization of a liquid jet by a fast cross-flowing gas can help reveal the controlling physics of this complicated two-phase flow of engineering interest. The turn-around execution time of such a simulation is prohibitively long using the computational resources typically available today (i.e. parallel systems with ~O(100) CPUs), due to the multiscale nature of the problem, which requires the use of fine grids and time steps. In this work we present results from such a simulation performed on a state-of-the-art massively parallel system available at the Oak Ridge Leadership Computing Facility (OLCF). Scalability of the computational algorithm to ~2000 CPUs is demonstrated on grids of up to 200 million nodes. As a result, a simulation at intermediate Weber number becomes possible on this system. Results are in agreement with detailed experimental measurements of liquid column trajectory, breakup location, surface wavelength, onset of surface stripping, as well as droplet size and velocity after primary breakup. Moreover, this uniform-grid simulation is used as a base case for further code enhancement by evaluating the feasibility of employing Adaptive Mesh Refinement (AMR) near the liquid-gas interface as a means of mitigating computational cost.

  6. Simulating the Physical World

    Science.gov (United States)

    Berendsen, Herman J. C.

    2004-06-01

    The simulation of physical systems requires a simplified, hierarchical approach which models each level from the atomistic to the macroscopic scale. From quantum mechanics to fluid dynamics, this book systematically treats the broad scope of computer modeling and simulations, describing the fundamental theory behind each level of approximation. Berendsen evaluates each stage in relation to its applications, giving the reader insight into the possibilities and limitations of the models. Practical guidance for applications and sample programs in Python are provided. With a strong emphasis on molecular models in chemistry and biochemistry, this book will be suitable for advanced undergraduate and graduate courses on molecular modeling and simulation within physics, biophysics, physical chemistry and materials science. It will also be a useful reference to all those working in the field. Additional resources for this title, including solutions for instructors and programs, are available online at www.cambridge.org/9780521835275. It is the first book to cover the range of modeling and simulations from the atomistic to the macroscopic scale in a systematic fashion; providing a wealth of background material, it does not assume advanced knowledge and is eminently suitable for course use; and it contains practical examples and sample programs in Python.

  7. libRoadRunner: a high performance SBML simulation and analysis library.

    Science.gov (United States)

    Somogyi, Endre T; Bouteiller, Jean-Marie; Glazier, James A; König, Matthias; Medley, J Kyle; Swat, Maciej H; Sauro, Herbert M

    2015-10-15

    This article presents libRoadRunner, an extensible, high-performance, cross-platform, open-source software library for the simulation and analysis of models expressed using the Systems Biology Markup Language (SBML). SBML is the most widely used standard for representing dynamic networks, especially biochemical networks. libRoadRunner is fast enough to support large-scale problems such as tissue models, studies that require large numbers of repeated runs, and interactive simulations. libRoadRunner is a self-contained library, able to run both as a component inside other tools via its C++ and C bindings, and interactively through its Python interface. Its Python Application Programming Interface (API) is similar to the APIs of MATLAB (www.mathworks.com) and SciPy (http://www.scipy.org/), making it fast and easy to learn. libRoadRunner uses a custom Just-In-Time (JIT) compiler built on the widely used LLVM JIT compiler framework. It compiles SBML-specified models directly into native machine code for a variety of processors, making it appropriate for solving extremely large models or repeated runs. libRoadRunner is flexible, supporting the bulk of the SBML specification (except for delay and non-linear algebraic equations) including several SBML extensions (composition and distributions). It offers multiple deterministic and stochastic integrators, as well as tools for steady-state analysis, stability analysis and structural analysis of the stoichiometric matrix. libRoadRunner binary distributions are available for Mac OS X, Linux and Windows, and the library is also available for ARM-based computers such as the Raspberry Pi. The library is licensed under Apache License Version 2.0. http://www.libroadrunner.org provides online documentation, full build instructions, binaries and a git source repository. Contact: hsauro@u.washington.edu or somogyie@indiana.edu. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2015.
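
    Typical usage of the Python API is very compact; in the sketch below, "model.xml" is a placeholder for any valid SBML file:

        import roadrunner

        rr = roadrunner.RoadRunner("model.xml")   # SBML is JIT-compiled to machine code
        result = rr.simulate(0, 10, 101)          # integrate t = 0..10 with 101 points
        print(result[:5, :])                      # columns: time followed by species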

  8. Parallel Processing of Numerical Tsunami Simulations on a High Performance Cluster based on the GDAL Library

    Science.gov (United States)

    Schroeder, Matthias; Jankowski, Cedric; Hammitzsch, Martin; Wächter, Joachim

    2014-05-01

    Thousands of numerical tsunami simulations allow the computation of inundation and run-up along the coast for vulnerable areas over time. A so-called Matching Scenario Database (MSDB) [1] contains this large number of simulations in text file format. In order to visualize these wave propagations, the scenarios have to be reprocessed automatically. In the TRIDEC project, funded by the Seventh Framework Programme of the European Union, a Virtual Scenario Database (VSDB) and a Matching Scenario Database (MSDB) were established, amongst others, by the working group of the University of Bologna (UniBo) [1]. One part of TRIDEC was the development of a new generation of Decision Support System (DSS) for tsunami Early Warning Systems (TEWS) [2]. A working group of the GFZ German Research Centre for Geosciences was responsible for developing the Command and Control User Interface (CCUI), the central software application that supports operator activities, incident management and message dissemination. For integration and visualization in the CCUI, the numerical tsunami simulations from the MSDB must be converted into the shapefile format. The use of shapefiles enables much easier integration into standard Geographic Information Systems (GIS); the CCUI itself is based on two widely used open-source products (the GeoTools library and uDig), which provide shapefile integration out of the box. In this case, several thousand tsunami variations were processed for an example area around the Western Iberian margin. Due to the mass of data, only a program-controlled process was conceivable, and in order to optimize computing effort and operating time an existing GFZ High Performance Computing (HPC) cluster was chosen. Thus, geospatial software capable of parallel processing was sought. The FOSS tool Geospatial Data Abstraction Library (GDAL/OGR) was used to match the coordinates with the wave heights and generate the
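
    The conversion step can be sketched with GDAL/OGR's Python bindings (a hypothetical fragment with toy coordinates and a single wave-height attribute; the actual matching of MSDB records is not shown):

        from osgeo import ogr, osr

        driver = ogr.GetDriverByName("ESRI Shapefile")
        ds = driver.CreateDataSource("wave_heights.shp")

        srs = osr.SpatialReference()
        srs.ImportFromEPSG(4326)                      # WGS84 lon/lat

        layer = ds.CreateLayer("waves", srs, ogr.wkbPoint)
        layer.CreateField(ogr.FieldDefn("height", ogr.OFTReal))

        for lon, lat, height in [(-9.5, 38.7, 1.2), (-9.6, 38.8, 0.8)]:  # toy data
            feature = ogr.Feature(layer.GetLayerDefn())
            point = ogr.Geometry(ogr.wkbPoint)
            point.AddPoint(lon, lat)
            feature.SetGeometry(point)
            feature.SetField("height", height)
            layer.CreateFeature(feature)
            feature = None                            # release the feature

        ds = None                                     # close and flush the data source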

  9. High Performance Hybrid RANS-LES Simulation Framework for Turbulent Combusting Flows Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation proposed here is a computational framework for high performance, high fidelity computational fluid dynamics (CFD) to enable accurate, fast and robust...

  10. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.

  11. A High Performance Chemical Simulation Preprocessor and Source Code Generator Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are a critical component of aerospace research, Earth systems research, and energy research. These simulations enable a...

  12. The rise of games and high-performance computing for modeling and simulation

    National Research Council Canada - National Science Library

    Committee on Modeling, Simulation, and Games; Standing Committee on Technology Insight--Gauge, Evaluate, and Review; National Research Council

    2010-01-01

    "The technical and cultural boundaries between modeling, simulation, and games are increasingly blurring, providing broader access to capabilities in modeling and simulation and further credibility...

  13. A lattice-particle approach for the simulation of fracture processes in fiber-reinforced high-performance concrete

    NARCIS (Netherlands)

    Montero-Chacón, F.; Schlangen, H.E.J.G.; Medina, F.

    2013-01-01

    The use of fiber-reinforced high-performance concrete (FRHPC) is becoming more widespread; it is therefore necessary to develop tools to simulate and better understand its behavior. In this work, a discrete model for the analysis of fracture mechanics in FRHPC is presented. The plain concrete matrix,

  14. High Performance Simulations of Accretion Disk Dynamics and Jet Formations Around Kerr Black Holes

    Science.gov (United States)

    Nishikawa, Ken-Ichi; Mizuno, Yosuke; Watson, Michael

    2007-01-01

    We investigate jet formation in black-hole systems using 3-D General Relativistic Particle-In-Cell (GRPIC) and 3-D GRMHD simulations. GRPIC simulations, which allow charge separation in a collisionless plasma, do not need to invoke the frozen-in condition as GRMHD simulations do. 3-D GRPIC simulations show that jets are launched from Kerr black holes as in 3-D GRMHD simulations, but jet formation in the two cases may not be identical. A comparative study of black-hole systems with GRPIC and GRMHD simulations, with the inclusion of radiative transfer, will further clarify the mechanisms that drive the evolution of disk-jet systems.

  15. Virtualization in High-Performance Computing: An Analysis of Physical and Virtual Node Performance

    OpenAIRE

    Jungels, Glendon M

    2012-01-01

    The process of virtualizing computing resources allows an organization to make more efficient use of its resources. In addition, this process enables flexibility that deployment on raw hardware does not. Virtualization, however, comes with a performance penalty. This study examines the performance of virtualization technology for use in high-performance computing, to determine the suitability of this technology. It makes use of a small (4 node) virtual cluster as well as a ...

  16. Optimized Parallel Discrete Event Simulation (PDES) for High Performance Computing (HPC) Clusters

    National Research Council Canada - National Science Library

    Abu-Ghazaleh, Nael

    2005-01-01

    The aim of this project was to study the communication subsystem performance of state of the art optimistic simulator Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES...

  17. Effects of cold-water immersion on physical performance between successive matches in high-performance junior male soccer players.

    Science.gov (United States)

    Rowsell, Greg J; Coutts, Aaron J; Reaburn, Peter; Hill-Haas, Stephen

    2009-04-01

    In this study, we investigated the effect of water immersion on physical test performance and perception of fatigue/recovery during a 4-day simulated soccer tournament. Twenty high-performance junior male soccer players (age 15.9 ± 0.6 years) played four matches in 4 days and undertook either cold-water immersion (10 ± 0.5 °C) or thermoneutral water immersion (34 ± 0.5 °C) after each match. Physical performance tests (countermovement jump height, heart rate, and rating of perceived exertion after a standard 5-min run and a 12 × 20-m repeated sprint test), intracellular proteins, and inflammatory markers were recorded approximately 90 min before each match and 22 h after the final match. Perceptual measures of recovery (physical, mental, leg soreness, and general fatigue) were recorded 22 h after each match. There were non-significant reductions in countermovement jump height (1.7-7.3%, P = 0.74, η² = 0.34) and repeated sprint ability (1.0-2.1%, P = 0.41, η² = 0.07) over the 4-day tournament with no differences between groups. Post-shuttle run rating of perceived exertion increased over the tournament in both groups (P < 0.001, η² = 0.48), whereas the perceptions of leg soreness (P = 0.004, η² = 0.30) and general fatigue (P = 0.007, η² = 0.12) were lower in the cold-water immersion group than the thermoneutral immersion group over the tournament. Creatine kinase (P = 0.004, η² = 0.26) and lactate dehydrogenase (P < 0.001, η² = 0.40) concentrations increased in both groups but there were no changes over time for any inflammatory markers. These results suggest that immediate post-match cold-water immersion does not affect physical test performance or indices of muscle damage and inflammation but does reduce the perception of general fatigue and leg soreness between matches in tournaments.

  18. Semiconductor device physics and simulation

    CERN Document Server

    Yuan, J S

    1998-01-01

    This volume provides thorough coverage of modern semiconductor devices, including hetero- and homo-junction devices, using a two-dimensional simulator (MEDICI) to perform the analysis and generate simulation results. Each device is examined in terms of dc, ac, and transient simulator results; relevant device physics; and implications for design and analysis. Two hundred forty-four useful figures illustrate the physical mechanisms and characteristics of the devices simulated. Comprehensive and carefully organized, Semiconductor Device Physics and Simulation is the ideal bridge from device physics to practical device design.

  19. High performance hybrid functional Petri net simulations of biological pathway models on CUDA.

    Science.gov (United States)

    Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug-testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance acceleration through efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) have enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating cell boundary formation by Delta-Notch signaling on a CUDA-enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.

  20. The UPSF code: a metaprogramming-based high-performance automatically parallelized plasma simulation framework

    Science.gov (United States)

    Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao

    2017-10-01

    UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility using cutting-edge techniques supported by the C++17 standard. Through metaprogramming techniques, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, Fokker-Planck, and their variants and hybrid methods, so that a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structures and accelerate matrix and tensor operations via BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability, for the electrostatic and electromagnetic situations respectively, are presented to show the validity and performance of the UPSF code.

  1. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    Science.gov (United States)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  2. Physical intelligence at work: Servant-leadership development for high performance

    Science.gov (United States)

    Jim Saveland

    2001-01-01

    In October 2000, the RMRS Leadership Team attended a one-day seminar on leadership presented by Stephen Covey (1990). Covey talked about the role of a leader being respecting, integrating and developing body, heart, mind, and spirit. Integrating our physical, emotional, mental and spiritual selves is a popular theme (e.g. Leonard and Murphy 1995, Levey and Levey 1998,...

  3. High-performance gravitational N-body simulations on a planet-wide-distributed supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Groen, Derek; Zwart, Simon Portegies [Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden (Netherlands); Ishiyama, Tomoaki; Makino, Jun, E-mail: djgroen@strw.leidenuniv.nl [National Astronomical Observatory, Mitaka, Tokyo 181-8588 (Japan)

    2011-01-15

    We report on the performance of our cold dark matter cosmological N-body simulation that was carried out concurrently using supercomputers across the globe. We ran simulations on 60-750 cores distributed over a variety of supercomputers in Amsterdam (The Netherlands, Europe), Tokyo (Japan, Asia), Edinburgh (UK, Europe) and Espoo (Finland, Europe). Despite the network latency of 0.32 s and the communication over 30 000 km of optical network cable, we are able to achieve approximately 87% of the performance obtained with an equal number of cores on a single supercomputer. We argue that using widely distributed supercomputers in order to acquire more compute power is technically feasible and that the largest obstacle is introduced by local scheduling and reservation policies.

  4. Investigating the Mobility of Light Autonomous Tracked Vehicles using a High Performance Computing Simulation Capability

    Science.gov (United States)

    2012-08-01

    Supported by funding provided by the National Science Foundation under NSF Project CMMI-0840442 and through TARDEC grant W911NF-11-D-0001-0048. Cited references include: Hall, Englewood Cliffs, New Jersey, 1989; HEYN, T., Simulation of Tracked Vehicles on Granular Terrain Leveraging GPU Computing, M.S. thesis, Department of Mechanical Engineering, University of Wisconsin-Madison; and ... Dynamics on Graphics Processing Unit (GPU) Cards, M.S. thesis, Department of Mechanical Engineering, University of Wisconsin-Madison.

  5. Designing a compact high performance brain PET scanner—simulation study

    Science.gov (United States)

    Gong, Kuang; Majewski, Stan; Kinahan, Paul E.; Harrison, Robert L.; Elston, Brian F.; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V.; Brefczynski-Lewis, Julie A.; Qi, Jinyi

    2016-05-01

    The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be light weight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér-Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging.
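
    The comparison rests on the standard Cramér-Rao inequality, which lower-bounds the variance of any unbiased estimator of the ROI parameters by the inverse of the Fisher information matrix:

        \operatorname{Var}(\hat{\theta}_{i}) \;\geq\; \left[F(\theta)^{-1}\right]_{ii},
        \qquad
        F_{jk}(\theta) = \mathbb{E}\!\left[
            \frac{\partial \ln L(\mathbf{y};\theta)}{\partial \theta_{j}}\,
            \frac{\partial \ln L(\mathbf{y};\theta)}{\partial \theta_{k}}
        \right]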

  6. Simulation-Driven Development and Optimization of a High-Performance Six-Dimensional Wrist Force/Torque Sensor

    Directory of Open Access Journals (Sweden)

    Qiaokang LIANG

    2010-05-01

    Full Text Available This paper describes the Simulation-Driven Development and Optimization (SDDO) of a high-performance six-dimensional force/torque sensor. Through the implementation of the SDDO, the developed sensor simultaneously possesses high sensitivity, linearity, stiffness and repeatability, which is hard to achieve with traditional force/torque sensors. An integrated approach provided by the ANSYS software was used to streamline and speed up the process chain and thereby deliver results significantly faster than traditional approaches. The calibration experiments show impressive characteristics, so the developed force/torque sensor can be used effectively in industry, and the design methods can also be applied to the development of industrial products.

  7. High Performance Computation of a Jet in Crossflow by Lattice Boltzmann Based Parallel Direct Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Jiang Lei

    2015-01-01

    Full Text Available Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction, multiple thread) characteristic of the GPU matches the parallelism of the LBM well, which leads to high efficiency of the GPU-based LBM solver. With the present GPU settings (6 Nvidia Tesla K20M cards), the DNS simulation can be completed in several hours. A grid system of 1.5 × 10⁸ nodes is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set to 3.3, with the jet orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures (the counter-rotating vortex pair, shear-layer vortices and horseshoe vortices) are presented and analyzed based on velocity fields and vorticity distributions, turbulent statistical quantities of the Reynolds stress are displayed, and coherent structures are revealed at very fine resolution based on the second invariant of the velocity gradients.

  8. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Energy Technology Data Exchange (ETDEWEB)

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  9. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    Directory of Open Access Journals (Sweden)

    Mark James Abraham

    2015-09-01

    Full Text Available GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. The latest best-in-class compressed trajectory storage format is supported.

  10. Performance of space charge simulations using High Performance Computing (HPC) cluster

    CERN Document Server

    Bartosik, Hannes; CERN. Geneva. ATS Department

    2017-01-01

    In 2016 a collaboration agreement was signed between CERN and the Istituto Nazionale di Fisica Nucleare (INFN), through its Centro Nazionale Analisi Fotogrammi (CNAF, Bologna) [1], which foresaw the purchase and installation at CNAF of a cluster of 20 nodes with 32 cores each, connected with InfiniBand, for use by CERN members to develop parallelized codes as well as to conduct massive simulation campaigns with the already available parallelized tools. As outlined in [1], after the installation and set-up of the first 12 nodes, the green light to proceed with the procurement and installation of the next 8 nodes could be given only after successfully passing an acceptance test based on two specific benchmark runs. This condition is necessary to consider the first batch of the cluster operational and compliant with the desired performance specifications. In this brief note, we report the results of the above-mentioned acceptance test.

  11. High-Performance Kinetic Plasma Simulations with GPUs and load balancing

    Science.gov (United States)

    Germaschewski, Kai; Ahmadi, Narges; Abbott, Stephen; Lin, Liwei; Wang, Liang; Bhattacharjee, Amitava; Fox, Will

    2014-10-01

    We will describe the Plasma Simulation Code (PSC), a modern particle-in-cell code with GPU support and dynamic load-balancing capabilities. For 2-d problems, we achieve a speed-up of up to 6× on the Cray XK7 "Titan" using its GPUs over the well-known VPIC code, which has been optimized for conventional CPUs with SIMD support. Our load-balancing algorithm employs a space-filling Hilbert-Peano curve to maintain locality and has been shown to keep the load balanced within approximately 10% in production runs that otherwise slow down by up to 5× with only static load balancing. PSC is based on the libmrc computational framework, which also supports explicit and implicit time integration of fluid plasma models. Applications include magnetic reconnection in HED plasmas, particle acceleration in space plasmas and the nonlinear evolution of anisotropy-based kinetic instabilities like the mirror mode.
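
    The idea behind curve-based balancing can be sketched in a few lines of Python (hypothetical, not PSC's implementation): patches are sorted by their index along a Hilbert curve, which keeps spatial neighbours close in the ordering, and the ordered sequence is then cut into chunks of roughly equal load.

        def xy2d(n, x, y):
            """Hilbert index of cell (x, y) on an n-by-n grid; n a power of two."""
            d, s = 0, n // 2
            while s > 0:
                rx = 1 if (x & s) > 0 else 0
                ry = 1 if (y & s) > 0 else 0
                d += s * s * ((3 * rx) ^ ry)
                if ry == 0:                    # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                s //= 2
            return d

        def balance(patches, loads, n_ranks, grid=64):
            """Cut the Hilbert-ordered patch list into chunks of roughly equal load."""
            order = sorted(range(len(patches)), key=lambda i: xy2d(grid, *patches[i]))
            target = sum(loads) / n_ranks
            assignment, rank, acc = {}, 0, 0.0
            for i in order:
                if acc >= target and rank < n_ranks - 1:
                    rank, acc = rank + 1, 0.0
                assignment[i] = rank           # contiguous curve segment -> locality
                acc += loads[i]
            return assignment

        patches = [(x, y) for x in range(8) for y in range(8)]
        loads = [1.0 + (x + y) % 3 for x, y in patches]   # toy non-uniform load
        print(balance(patches, loads, n_ranks=4))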

  12. Hazards Caused by UV Rays of Xenon Light Based High Performance Solar Simulators.

    Science.gov (United States)

    Dibowski, Gerd; Esser, Kai

    2017-09-01

    Solar furnaces are used worldwide to conduct experiments that demonstrate the feasibility of solar-chemical processes with the aid of concentrated sunlight, or to qualify high-temperature-resistant components. In recent years, high-flux solar simulators (HFSSs) based on short-arc xenon lamps have been used more frequently. The emitted spectrum is very similar to natural sunlight, but contains dangerous portions of ultraviolet light as well. Owing to the particular benefits of solar simulators, increasing HFSS construction activity can be observed worldwide. Hence, it is quite important to protect employees against serious injuries caused by ultraviolet radiation (UVR) in the range of 100 nm to 400 nm. The UV measurements were made at the German Aerospace Center (DLR), Cologne, and the Paul Scherrer Institute (PSI), Switzerland, during normal operations of the HFSS, with a high-precision UV-A/B radiometer, using different experimental setups at different power levels. Thus, the measurement results represent UV emissions that are typical when operating an HFSS. The biological effects on people exposed to UVR were therefore investigated systematically to identify the existing hazard potential. It should be noted that the permissible workplace exposure limits for UV emissions were significantly exceeded after a few seconds; one critical value was exceeded by a factor of 770. The prevention of emissions must first and foremost be achieved by structural measures. Furthermore, unambiguous protocols have to be defined and compliance must be monitored. For short-term activities in the hazard area, measures for the protection of eyes and skin must be taken.

  13. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Science.gov (United States)

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    One of the most common challenges in hydrodynamic modelling is the trade-off one must make between highly resolved simulations and the time required for their computation. In the particular case of urban floods, modelers are often forced to simplify the complex geometries of the problem, or to include some of its hydrodynamic effects only implicitly, due to the typically very large spatial scales involved and limited computational resources. At CEris - Instituto Superior Técnico, Universidade de Lisboa - the STAV-2D shallow-water model, particularly suited for strong transient flows in complex and dynamic geometries, has been under development in recent years (Canelas et al., 2013 & Conde et al., 2013). The model is based on an explicit, first-order 2DH finite-volume discretization scheme for unstructured triangular meshes, in which a flux-splitting technique is paired with a reviewed Roe-Riemann solver, yielding a model applicable to discontinuous flows over time-evolving geometries. STAV-2D features solid transport in both Eulerian and Lagrangian forms, the former aimed at describing the transport of fine natural sediments and the latter aimed at large individual debris. The model has been validated against theoretical solutions and laboratory experiments (Canelas et al., 2013 & Conde et al., 2015). This work presents our most recent effort in STAV-2D: the re-design of the code in a modern object-oriented parallel framework for heterogeneous computations on CPUs and GPUs. The programming language of choice for this re-design was C++, due to its wide support of established and emerging parallel programming interfaces. The current implementation of STAV-2D provides two different levels of parallel granularity: inter-node and intra-node. Inter-node parallelism is achieved by distributing a simulation across a set of worker nodes, with communication between nodes being explicitly managed through MPI. At this level, the main difficulty is associated with the
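
    For context, models of this class solve the 2DH shallow-water equations in conservative form (standard notation: h is the flow depth, (u, v) the depth-averaged velocity, g gravity, and S collects bed-slope and friction source terms):

        \frac{\partial \mathbf{U}}{\partial t}
          + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
          + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S},
        \qquad
        \mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\;
        \mathbf{F} = \begin{pmatrix} hu \\ hu^{2} + \tfrac{1}{2}gh^{2} \\ huv \end{pmatrix},\;
        \mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^{2} + \tfrac{1}{2}gh^{2} \end{pmatrix}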

  14. The High performance of nanocrystalline CVD diamond coated hip joints in wear simulator test.

    Science.gov (United States)

    Maru, M M; Amaral, M; Rodrigues, S P; Santos, R; Gouvea, C P; Archanjo, B S; Trommer, R M; Oliveira, F J; Silva, R F; Achete, C A

    2015-09-01

    The superior biotribological performance of nanocrystalline diamond (NCD) coatings grown by a chemical vapor deposition (CVD) method, demonstrating high wear resistance in ball-on-plate experiments under physiological liquid lubrication, has already been shown. However, tests with a close-to-real approach were missing, and this constitutes the aim of the present work. Hip-joint wear simulator tests were performed with cups and heads made of silicon nitride coated with NCD ~10 μm in thickness. Five million testing cycles (Mc) were run, representing nearly five years of hip-joint implant activity in a patient. For the wear analysis, gravimetry, profilometry, scanning electron microscopy and Raman spectroscopy techniques were used. After 0.5 Mc of wear testing, truncation of the protruding regions of the NCD film occurred as a result of a fine-scale abrasive wear mechanism, evolving to extensive plateau regions and a highly polished surface condition (Ra ...), without cracking, grain pullouts or delamination of the coatings. A steady-state volumetric wear rate of 0.02 mm³/Mc, equivalent to a linear wear of 0.27 μm/Mc, compares favorably with the best performance reported in the literature for the fourth-generation alumina ceramic (0.05 mm³/Mc). Also, squeaking, a quite common phenomenon in hard-on-hard systems, was absent in the present all-NCD system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Simulation of cardiac electrophysiology on next-generation high-performance computers.

    Science.gov (United States)

    Bordas, Rafel; Carpentieri, Bruno; Fotia, Giorgio; Maggio, Fabio; Nobes, Ross; Pitt-Francis, Joe; Southern, James

    2009-05-28

    Models of cardiac electrophysiology consist of a system of partial differential equations (PDEs) coupled with a system of ordinary differential equations representing cell membrane dynamics. Current software to solve such models does not provide the required computational speed for practical applications. One reason for this is that little use is made of recent developments in adaptive numerical algorithms for solving systems of PDEs. Studies have suggested that a speedup of up to two orders of magnitude is possible by using adaptive methods. The challenge lies in the efficient implementation of adaptive algorithms on massively parallel computers. The finite-element (FE) method is often used in heart simulators as it can encapsulate the complex geometry and small-scale details of the human heart. An alternative is the spectral element (SE) method, a high-order technique that provides the flexibility and accuracy of FE, but with a reduced number of degrees of freedom. The feasibility of implementing a parallel SE algorithm based on fully unstructured all-hexahedra meshes is discussed. A major computational task is solution of the large algebraic system resulting from FE or SE discretization. Choice of linear solver and preconditioner has a substantial effect on efficiency. A fully parallel implementation based on dynamic partitioning that accounts for load balance, communication and data movement costs is required. Each of these methods must be implemented on next-generation supercomputers in order to realize the necessary speedup. The problems that this may cause, and some of the techniques that are beginning to be developed to overcome these issues, are described.
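
    A common concrete instance of such a model is the monodomain formulation: a reaction-diffusion PDE for the transmembrane potential V coupled to a system of cell-state ODEs (χ is the surface-to-volume ratio, C_m the membrane capacitance, σ the conductivity tensor):

        \chi \left( C_{m} \frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V, \mathbf{w}) \right)
          = \nabla \cdot \left( \boldsymbol{\sigma} \nabla V \right) + I_{\mathrm{stim}},
        \qquad
        \frac{d\mathbf{w}}{dt} = \mathbf{f}(V, \mathbf{w})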

  16. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Top, Philip [Lawrence Livermore National Laboratories; Smith, Steve [Lawrence Livermore National Laboratories; Daily, Jeff [Pacific Northwest National Laboratory; Fuller, Jason [Pacific Northwest National Laboratory

    2017-09-12

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.

  17. Applying GIS and high performance agent-based simulation for managing an Old World Screwworm fly invasion of Australia.

    Science.gov (United States)

    Welch, M C; Kwan, P W; Sajeev, A S M

    2014-10-01

    Agent-based modelling has proven to be a promising approach for developing rich simulations of complex phenomena that provide decision-support functions across a broad range of areas, including the biological, social and agricultural sciences. This paper demonstrates how high-performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national-scale, agent-based simulation of an incursion of the Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries. Copyright © 2014 International Atomic Energy Agency. Published by Elsevier B.V. All rights reserved.

  18. The High Performance Computing Initiative

    Science.gov (United States)

    Holcomb, Lee B.; Smith, Paul H.; Macdonald, Michael J.

    1991-01-01

    The paper discusses NASA High Performance Computing Initiative (HPCI), an essential component of the Federal High Performance Computing Program. The HPCI program is designed to provide a thousandfold increase in computing performance, and apply the technologies to NASA 'Grand Challenges'. The Grand Challenges chosen include integrated multidisciplinary simulations and design optimizations of aerospace vehicles throughout the mission profiles; the multidisciplinary modeling and data analysis of the earth and space science physical phenomena; and the spaceborne control of automated systems, handling, and analysis of sensor data and real-time response to sensor stimuli.

  19. Direct first-principles simulation of a high-performance electron emitter: Lithium-oxide-coated diamond surface

    Energy Technology Data Exchange (ETDEWEB)

    Miyamoto, Yoshiyuki, E-mail: yoshi-miyamoto@aist.go.jp; Miyazaki, Takehide [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Takeuchi, Daisuke; Yamasaki, Satoshi [Energy Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); JST, ALCA, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan)

    2014-09-28

    We examined the field emission properties of lithium(Li)/oxygen(O)-co-terminated diamond (001) surface [C(001)-LiO] through real-time electron dynamics simulation under an applied field. The current emitted from this surface was found to be more than four-fold that emitted by an H-terminated (001) surface, the latter being a typical negative electron affinity system. This high performance is attributed to the Li layer, which bends the potential wall of O-induced electron pockets down in the direction of vacuum, thus facilitating electron emission. Detailed analysis of the emitted electrons and the profile of the self-consistent potential elucidated that the role of O atoms changes from an electron barrier on OH-terminated diamond surfaces to an outlet for electron emission on C(001)-LiO.

  20. Direct first-principles simulation of a high-performance electron emitter: Lithium-oxide-coated diamond surface

    Science.gov (United States)

    Miyamoto, Yoshiyuki; Miyazaki, Takehide; Takeuchi, Daisuke; Yamasaki, Satoshi

    2014-09-01

    We examined the field emission properties of lithium(Li)/oxygen(O)-co-terminated diamond (001) surface [C(001)-LiO] through real-time electron dynamics simulation under an applied field. The current emitted from this surface was found to be more than four-fold that emitted by an H-terminated (001) surface, the latter being a typical negative electron affinity system. This high performance is attributed to the Li layer, which bends the potential wall of O-induced electron pockets down in the direction of vacuum, thus facilitating electron emission. Detailed analysis of the emitted electrons and the profile of the self-consistent potential elucidated that the role of O atoms changes from an electron barrier on OH-terminated diamond surfaces to an outlet for electron emission on C(001)-LiO.

  1. A High Performance Computing Approach to the Simulation of Fluid-Solid interaction Problems with Rigid and Flexible Components

    Directory of Open Access Journals (Sweden)

    Pazouki Arman

    2014-08-01

    Full Text Available This work outlines a unified multi-threaded, multi-scale High Performance Computing (HPC) approach for the direct numerical simulation of Fluid-Solid Interaction (FSI) problems. The simulation algorithm relies on the extended Smoothed Particle Hydrodynamics (XSPH) method, which treats the fluid flow in a Lagrangian framework consistent with the Lagrangian tracking of the solid phase. General 3D rigid body dynamics and an Absolute Nodal Coordinate Formulation (ANCF) are implemented to model rigid and flexible multibody dynamics. The two-way coupling of the fluid and solid phases is supported through the use of Boundary Condition Enforcing (BCE) markers that capture the fluid-solid coupling forces by enforcing a no-slip boundary condition. The solid-solid short-range interaction, which has a crucial impact on the small-scale behavior of fluid-solid mixtures, is resolved via a lubrication force model. The collective system states are integrated in time using an explicit, multi-rate scheme. To alleviate the heavy computational load, the overall algorithm leverages parallel computing on Graphics Processing Unit (GPU) cards. Performance and scaling analyses are provided for simulation scenarios involving one or multiple phases with up to tens of thousands of solid objects. The software implementation of the approach, called Chrono:Fluid, is part of the Chrono project and is available as open-source software.
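
    The core SPH operation underlying such solvers can be sketched in a few lines (a hypothetical illustration, not Chrono:Fluid itself): each particle's density is a kernel-weighted sum over its neighbours. Production codes replace the O(N²) summation below with binned neighbour search on the GPU.

        import numpy as np

        def cubic_spline_kernel(r, h):
            """Standard cubic spline smoothing kernel in 3D (support radius 2h)."""
            q = r / h
            sigma = 1.0 / (np.pi * h**3)
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return sigma * w

        def densities(positions, masses, h):
            """Brute-force density summation over all particle pairs."""
            diff = positions[:, None, :] - positions[None, :, :]
            r = np.linalg.norm(diff, axis=-1)
            return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

        pos = np.random.rand(100, 3)                       # toy particle cloud
        rho = densities(pos, masses=np.full(100, 0.01), h=0.1)
        print(rho[:5])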

  2. IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.

    Science.gov (United States)

    Ha, Vi Q; Lykotrafitis, George

    2016-12-08

    We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations - an object-oriented, easy-to-use, high-performance C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space, while a network facilitates long-range particle interactions. The Message Passing Interface is used for inter-processor communication in all simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Physically Cross-linked Polymer Binder Induced by Reversible Acid-Base Interaction for High-Performance Silicon Composite Anodes.

    Science.gov (United States)

    Lim, Sanghyun; Chu, Hodong; Lee, Kukjoo; Yim, Taeeun; Kim, Young-Jun; Mun, Junyoung; Kim, Tae-Hyun

    2015-10-28

    Silicon is a greatly promising high-capacity anode material for lithium-ion batteries (LIBs) due to its exceptionally high theoretical capacity. However, it poses the big challenge of severe volume changes during charge and discharge, which substantially deteriorate the electrode and restrict its practical application. This conflict requires a novel binder system that enables reliable cyclability by holding the silicon particles together without severe disintegration of the electrode. Here, a physically cross-linked polymer binder induced by reversible acid-base interaction is reported for high-performance silicon anodes. Chemical cross-linking of polymer binders, mainly based on acidic polymers such as poly(acrylic acid) (PAA), has been suggested as an effective way to accommodate the volume expansion of Si-based electrodes. Unlike common chemical cross-linking, which causes gradual and non-reversible fracturing of the cross-linked network, a physically cross-linked binder based on PAA-PBI (poly(benzimidazole)) efficiently holds the Si particles even after large volume changes, owing to its ability to reversibly reconstruct ionic bonds. The PBI-containing binder, PAA-PBI-2, exhibited large capacity (1376.7 mAh g(-1)), high Coulombic efficiency (99.1%) and excellent cyclability (751.0 mAh g(-1) after 100 cycles). This simple yet efficient method promises to solve the failures related to pulverization and isolation arising from the severe volume changes of the Si electrode, and to advance the realization of high-capacity LIBs.

  4. pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.

    Science.gov (United States)

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2014-01-01

    This work presents pWeb, a new language and compiler for the parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled the creation of unprecedented applications on the web. The low performance of the web browser, however, remains the bottleneck for computationally intensive applications, including visualization of complex scenes, real-time physical simulation and image processing, compared to native applications. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides the fundamental functionality of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.

  5. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  6. Physical Simulation: Testing the PHYSICALITY of Phenomena

    Science.gov (United States)

    Srivastava, Jagdish

    2004-05-01

    Theories of Quantum Mechanics in which 'consciousness' plays a role have been around for decades. For example, Wheeler maintains that no phenomenon is a real phenomenon unless it has been observed. Also, the von Neumann chain, where the wave function is said to collapse when the chain reaches the mind of a conscious observer, is well known. The author's theory of Quantum Reality (denoted by TK) goes a bit further, saying that at the fundamental levels, all phenomena are logical-mathematical objects only, and the experience of their 'physicality' is due to the consciousness of the observer. This paper addresses the question of how TK (and the other related theories) could be tested. A procedure for this, termed 'Physical Simulation', is proposed. The idea is to create logical-mathematical objects through a computer. Various aspects of this methodology are discussed.

  7. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Directory of Open Access Journals (Sweden)

    Dębski Roman

    2014-09-01

    Full Text Available Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied, with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example this is the minimum-time ski line, represented as a piecewise-linear function (a method for eliminating unfeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity), and possible extensions of the basic algorithm are also described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy- and cost-efficient (which is crucial in embedded systems, e.g., trajectory planners of autonomous robots). The presented algorithms can be applied to many trajectory optimization problems, including those having a black-box performance measure.
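
    The graph-search core of such an algorithm can be sketched as follows (hypothetical Python, with a toy straight-segment cost standing in for the full run simulation between two points):

        import math

        def cost(p, q, speed=10.0):
            """Toy edge cost: traversal time of the straight segment p -> q."""
            return math.dist(p, q) / speed

        def min_time_path(layers):
            """layers: one list of candidate (x, y) points per 'gate' of the course."""
            best = [0.0] * len(layers[0])      # best arrival time at each node
            back = []                          # backpointers for path recovery
            for prev, curr in zip(layers, layers[1:]):
                # each layer depends only on the previous one, so this loop over
                # curr is trivially parallel (what the OpenCL version exploits)
                times = [[best[i] + cost(p, q) for i, p in enumerate(prev)]
                         for q in curr]
                ptr = [min(range(len(t)), key=t.__getitem__) for t in times]
                best = [t[j] for t, j in zip(times, ptr)]
                back.append(ptr)
            # recover the minimum-time piecewise-linear trajectory
            j = min(range(len(best)), key=best.__getitem__)
            path = [layers[-1][j]]
            for layer, ptr in zip(reversed(layers[:-1]), reversed(back)):
                j = ptr[j]
                path.append(layer[j])
            return min(best), path[::-1]

        gates = [[(0, y) for y in range(3)], [(5, y) for y in range(3)], [(10, 1)]]
        t, path = min_time_path(gates)
        print(f"{t:.2f} s along {path}")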

  8. A Mesoscopic Simulation for the Early-Age Shrinkage Cracking Process of High Performance Concrete in Bridge Engineering

    Directory of Open Access Journals (Sweden)

    Guodong Li

    2017-01-01

    Full Text Available On a mesoscopic level, high performance concrete (HPC) was assumed to be a heterogeneous composite material consisting of aggregates, mortar, and pores. A concrete mesoscopic structure model was established based on CT image reconstruction. By combining this model with continuum mechanics, damage mechanics, and fracture mechanics, a relatively complete system for concrete mesoscopic mechanics analysis was established to simulate the process of early-age shrinkage cracking in HPC, based on the dispersion crack model. The results indicated that the interface between the aggregate and the mortar is where shrinkage cracking in HPC initiates. The locations of early-age shrinkage cracks in HPC were associated with the spacing and the size of the aggregate particles. The shrinkage deformation of the mortar, however, was related to the extent of concrete cracking and was independent of the crack position. Whereas lower water-to-cement ratios can improve the early strength of concrete, they cannot control early-age shrinkage cracks in HPC.

  9. Hadron therapy physics and simulations

    CERN Document Server

    d’Ávila Nunes, Marcos

    2014-01-01

    This brief provides an in-depth overview of the physics of hadron therapy, ranging from the history to the latest contributions to the subject. It covers the mechanisms of protons and carbon ions at the molecular level (DNA breaks and proteins 53BP1 and RPA), the physics and mathematics of accelerators (Cyclotron and Synchrotron), microdosimetry measurements (with new results so far achieved), and Monte Carlo simulations in hadron therapy using FLUKA (CERN) and MCHIT (FIAS) software. The text also includes information about proton therapy centers and carbon ion centers (PTCOG), as well as a comparison and discussion of both techniques in treatment planning and radiation monitoring. This brief is suitable for newcomers to medical physics as well as seasoned specialists in radiation oncology.

  10. Simulation of the Physics of Flight

    Science.gov (United States)

    Lane, W. Brian

    2013-01-01

    Computer simulations continue to prove to be a valuable tool in physics education. Based on the needs of an Aviation Physics course, we developed the PHYSics of FLIght Simulator (PhysFliS), which numerically solves Newton's second law for an airplane in flight based on standard aerodynamics relationships. The simulation can be used to pique…
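
    The kind of computation such a simulator performs, integrating Newton's second law with standard lift and drag relations, can be sketched as follows. The aircraft parameters, aerodynamic coefficients, and the simple Euler integrator are assumptions for illustration, not values from PhysFliS.

      import math

      # Illustrative constants, not PhysFliS's actual parameters.
      m, g, rho, S = 1000.0, 9.81, 1.225, 16.0  # mass, gravity, air density, wing area
      CL, CD = 0.8, 0.05                        # assumed lift and drag coefficients
      T = 3000.0                                # constant thrust along the velocity

      def step(state, dt=0.01):
          x, y, vx, vy = state
          v = math.hypot(vx, vy)
          q = 0.5 * rho * v * v * S             # dynamic pressure times wing area
          # Thrust and drag act along the velocity; lift is perpendicular to it.
          ax = ((T - CD * q) * vx / v - CL * q * vy / v) / m
          ay = ((T - CD * q) * vy / v + CL * q * vx / v) / m - g
          return (x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt)

      state = (0.0, 100.0, 50.0, 0.0)           # level flight at 50 m/s, 100 m up
      for _ in range(1000):                     # ten seconds of flight
          state = step(state)
      print(f"x = {state[0]:.1f} m, y = {state[1]:.1f} m")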

  11. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever tried. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from distributed storage and the large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  12. High-performance sports medicine

    National Research Council Canada - National Science Library

    Speed, Cathy

    2013-01-01

    High performance sports medicine involves the medical care of athletes, who are extraordinary individuals and who are exposed to intensive physical and psychological stresses during training and competition...

  13. A High Performance Computing Approach to the Simulation of Fluid Solid Interaction Problems with Rigid and Flexible Components (Open Access Publisher’s Version)

    Science.gov (United States)

    2014-08-01

    This work outlines a unified ... speed and architecture, memory layout and capacity, and power efficiency have motivated a trend of re-evaluating simulation algorithms with an eye ... leverage any multi-threaded architecture, the CUDA library [26] was employed for the execution of all solution components on the GPU, with negligible

  14. Partnership for Edge Physics Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kritz, Arnold H. [Lehigh Univ., Bethlehem, PA (United States). Dept. of Physics; Rafiq, Tariq [Lehigh Univ., Bethlehem, PA (United States). Dept. of Physics

    2017-07-31

    A major goal of our participation in the Edge Physics Simulation project has been to contribute to the understanding of the self-organization of tokamak turbulence fluctuations resulting in the formation of a staircase structure in the ion temperature. A second important goal is to demonstrate how small scale turbulence in plasmas self-organizes with dynamically driven quasi-stationary flow shear. These goals have been accomplished through analyses of the statistical properties of XGC1 flux-driven gyrokinetic electrostatic ion temperature gradient (ITG) turbulence simulation data in which neutrals are included. The ITG turbulence data, and in particular the fluctuation data, were obtained from a massively parallel flux-driven gyrokinetic full-f particle-in-cell simulation of a DIII-D-like equilibrium. Some of the findings are summarized below. It was observed that the emergence of the staircase structure is related to variations in the normalized temperature gradient length (R/LT) and the poloidal flow shear. Average turbulence intensity is found to be large in the vicinity of minima in R/LTi, where ITG growth is expected to be lower. The distributions of the occurrences of potential fluctuations are found to be Gaussian away from the staircase-step locations, but non-Gaussian in the vicinity of staircase-step locations. The results of analytically derived expressions for the distribution of the occurrences of turbulence intensity and intensity flux were compared with the corresponding quantities computed from XGC1 simulation data, and good agreement is found. The derived expressions predict inward and outward propagation of turbulence intensity flux in an intermittent fashion. The outward propagation of turbulence intensity flux occurs at staircase-step locations and is related to the change in poloidal flow velocity shear and to the change in the ion temperature gradient. The standard deviation, skewness and kurtosis for turbulence quantities

  15. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    Science.gov (United States)

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All of these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
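
    Copernicus itself is a full platform, but the principle described here, declaring each task's dependencies so that execution order and parallelism follow from the dataflow graph, can be miniaturized. The sketch below is a generic topological-order executor over a hypothetical two-simulation workflow, not Copernicus's API.

      from concurrent.futures import ThreadPoolExecutor
      from graphlib import TopologicalSorter

      # Hypothetical workflow: two independent simulations feed one analysis.
      def run_sim(name):
          return f"trajectory-{name}"

      def combine(*trajs):
          return f"model({', '.join(trajs)})"

      # Dependencies are declared explicitly, as in a dataflow program.
      deps = {"sim_a": set(), "sim_b": set(), "analysis": {"sim_a", "sim_b"}}
      tasks = {"sim_a": lambda r: run_sim("a"),
               "sim_b": lambda r: run_sim("b"),
               "analysis": lambda r: combine(r["sim_a"], r["sim_b"])}

      results = {}
      ts = TopologicalSorter(deps)
      ts.prepare()
      with ThreadPoolExecutor() as pool:
          while ts.is_active():
              # Every task whose dependencies are satisfied runs in parallel.
              ready = list(ts.get_ready())
              futures = {n: pool.submit(tasks[n], results) for n in ready}
              for n, f in futures.items():
                  results[n] = f.result()
                  ts.done(n)
      print(results["analysis"])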

  16. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to
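
    The record is truncated, but the kernel that such GPU implementations accelerate, the all-pairs gravitational force evaluation, is standard. The vectorized NumPy version below is a CPU illustration of the same O(N^2) computation, not the authors' CUDA code.

      import numpy as np

      def accelerations(pos, mass, eps=1e-2):
          # All-pairs gravitational acceleration with Plummer softening eps,
          # in N-body units (G = 1): a_i = sum_j m_j r_ij / (r_ij^2 + eps^2)^1.5
          dr = pos[None, :, :] - pos[:, None, :]       # r_ij = pos_j - pos_i
          r2 = (dr ** 2).sum(axis=-1) + eps ** 2
          inv_r3 = r2 ** -1.5
          np.fill_diagonal(inv_r3, 0.0)                # no self-interaction
          return (dr * (mass[None, :] * inv_r3)[:, :, None]).sum(axis=1)

      rng = np.random.default_rng(0)
      pos = rng.standard_normal((256, 3))              # a small random cloud
      mass = np.full(256, 1.0 / 256)
      print(accelerations(pos, mass)[0])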

  17. High performance shallow water kernels for parallel overland flow simulations based on FullSWOF2D

    KAUST Repository

    Wittmann, Roland

    2017-01-25

    We describe code optimization and parallelization procedures applied to the sequential overland flow solver FullSWOF2D. A major difficulty when simulating overland flows is dealing with high-resolution datasets of large-scale areas, which cannot be computed on a single node, either due to the limited amount of memory or due to the large number of (time step) iterations resulting from the CFL condition. We address these issues in terms of two major contributions. First, we demonstrate a generic step-by-step transformation of the second-order finite volume scheme in FullSWOF2D towards MPI parallelization. Second, the computational kernels are optimized by the use of templates and a portable vectorization approach. We discuss the load imbalance of the flux computation due to dry and wet cells and propose a solution using an efficient cell-counting approach. Finally, scalability results are shown for different test scenarios along with a flood simulation benchmark using the Shaheen II supercomputer.
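
    The load-imbalance remedy mentioned in the abstract, counting wet cells so that each rank receives a comparable share of the flux work, suggests a simple weighted partitioning. The row-slab heuristic below is an assumption about the general idea, not FullSWOF2D's actual cell-counting scheme.

      def partition_rows(wet_counts, n_ranks):
          # Split grid rows into slabs so each MPI rank gets a similar
          # wet-cell (flux computation) load; dry cells are nearly free.
          parts, start = [], 0
          remaining = sum(wet_counts)
          for rank in range(n_ranks - 1):
              target = remaining / (n_ranks - rank)    # rebalance what is left
              acc, i = 0, start
              while i < len(wet_counts) and acc < target:
                  acc += wet_counts[i]
                  i += 1
              parts.append((start, i))
              remaining -= acc
              start = i
          parts.append((start, len(wet_counts)))
          return parts

      # Example: a flood front concentrated in the middle rows of the grid.
      wet = [0, 0, 5, 40, 80, 75, 30, 5, 0, 0]
      print(partition_rows(wet, 4))          # [(0, 5), (5, 6), (6, 7), (7, 10)]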

  18. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    OpenAIRE

    Dębski Roman

    2014-01-01

    Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied with reference to the problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approxi...

  19. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    Science.gov (United States)

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-06-01

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. High performance pseudo-analytical simulation of multi-object adaptive optics over multi-GPU systems

    KAUST Repository

    Abdelfattah, Ahmad

    2014-01-01

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique dedicated to the special case of wide-field multi-object spectrographs (MOS). It applies dedicated wavefront corrections to numerous independent tiny patches spread over a large field of view (FOV). The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. The output of this study helps the design of a new instrument called MOSAIC, a multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT). We have developed a novel hybrid pseudo-analytical simulation scheme that allows us to accurately simulate the tomographic problem in detail. The main challenge resides in the computation of the tomographic reconstructor, which involves pseudo-inversion of a large dense symmetric matrix. The pseudo-inverse is computed using an eigenvalue decomposition, based on the divide and conquer algorithm, on multicore systems with multiple GPUs. Thanks to a new symmetric matrix-vector product (SYMV) multi-GPU kernel, our overall implementation scores significant speedups over standard numerical libraries on multicore, like Intel MKL, and up to 60% speedups over the standard MAGMA implementation on 8 Kepler K20c GPUs. At 40,000 unknowns, this appears to be, to our knowledge, the largest-scale tomographic AO matrix solver computed to date, and it opens new research directions for extreme-scale AO simulations. © 2014 Springer International Publishing Switzerland.
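
    The numerical core described here, pseudo-inverting a large dense symmetric matrix through an eigenvalue decomposition, is compact to state. The NumPy sketch below shows only the mathematical operation, not the paper's multi-GPU divide-and-conquer solver or its SYMV kernel.

      import numpy as np

      def sym_pinv(A, rcond=1e-12):
          # Pseudo-inverse of a symmetric matrix via eigendecomposition:
          # A = V diag(w) V^T  =>  A^+ = V diag(1/w, for |w| > tol) V^T
          w, V = np.linalg.eigh(A)                 # exploits symmetry
          tol = rcond * np.abs(w).max()
          inv_w = np.zeros_like(w)
          mask = np.abs(w) > tol
          inv_w[mask] = 1.0 / w[mask]
          return (V * inv_w) @ V.T

      rng = np.random.default_rng(1)
      B = rng.standard_normal((6, 4))
      A = B @ B.T                                  # symmetric, rank 4
      print(np.allclose(A @ sym_pinv(A) @ A, A))   # Moore-Penrose identity holds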

  1. High performance computing equipment for environmental remediation modeling and first principles simulation of materials properties. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Glimm, J.; Lindquist, W.B.

    1994-08-01

    A 56-node Intel Paragon parallel computer was purchased with major support provided by this grant, and installed in July 1993 in the Center for Scientific Computing, Department of Applied Mathematics and Statistics, SUNY - Stony Brook. The targeted research funded by this proposal consists of work to support the Stony Brook and Brookhaven National Laboratory contributions to the Partnership in Computational Science (PICS) program; namely, environmental remediation modeling of ground water transport, Car-Parrinello first principles molecular dynamics calculations, and the supporting development of the parallelized VolVis graphics package. Research accomplishments to date for this targeted research are discussed in Section 2. This computer has also enabled or enhanced many other projects conducted both by the Center for Scientific Computing and by the Department of Applied Mathematics and Statistics. These other projects include two- and three-dimensional gas dynamics using front tracking, other molecular dynamics applications, kidney modeling, and global optimization techniques applied to DNA-protein interactions. Technical summaries of these additional projects are presented in Section 3. The targeted research includes users from the Departments of Applied Mathematics and Computer Science at SUNY - Stony Brook, as well as staff scientists from the Departments of Physics and Applied Sciences at Brookhaven National Laboratory. The additional projects involve university faculty from the above departments as well as the Departments of Physics and Chemistry. Regular users of this machine currently include 10 faculty members, 8 postdoctoral fellows, more than 12 PhD students and approximately 8 staff members from BNL.

  2. Multi-physics corrosion modeling for sustainability assessment of steel reinforced high performance fiber reinforced cementitious composites

    DEFF Research Database (Denmark)

    Lepech, M.; Michel, Alexander; Geiker, Mette

    2016-01-01

    Using a newly developed multi-physics transport, corrosion, and cracking model, which treats these phenomena as coupled physicochemical processes, the role of HPFRCC crack control and formation in regulating steel reinforcement corrosion is investigated. This model describes the transport of water and chemical species, the electric potential distribution in the HPFRCC, the electrochemical propagation of steel corrosion, and the role of microcracks in the HPFRCC material. Numerical results show that the reduction in anode and cathode size on the reinforcing steel surface, due to multiple crack formation and widespread depassivation, is the mechanism behind experimental results of HPFRCC steel corrosion studies found in the literature. Such results provide an indication of the fundamental mechanisms by which steel reinforced HPFRCC materials may be more durable than traditional reinforced concrete and other

  3. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deep water, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet a turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered-grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
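
    The solver class mentioned, a time-domain finite-difference scheme on a staggered velocity-pressure grid, can be illustrated in one dimension. This is a generic textbook scheme with assumed material parameters, not the production 3D code run on Shaheen II.

      import numpy as np

      # 1D staggered-grid acoustics: pressure on integer nodes, velocity on
      # half nodes, leapfrog in time (assumed water-like medium).
      nx, dx, dt, nt = 400, 10.0, 1e-3, 800        # CFL: c*dt/dx = 0.15 < 1
      rho, c = 1000.0, 1500.0                      # density, wave speed
      K = rho * c * c                              # bulk modulus

      p = np.zeros(nx)
      v = np.zeros(nx - 1)
      for it in range(nt):
          p[nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)  # Gaussian source
          v += (dt / (rho * dx)) * (p[1:] - p[:-1])              # velocity update
          p[1:-1] += (K * dt / dx) * (v[1:] - v[:-1])            # pressure update
      print(f"max |p| after {nt} steps: {np.abs(p).max():.3e}")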

  4. A DFN-based High Performance Computing Approach to the Simulation of Radionuclide Transport in Mineralogically Heterogeneous Fractured Rocks

    Science.gov (United States)

    Gylling, B.; Trinchero, P.; Molinero, J.; Deissmann, G.; Svensson, U.; Ebrahimi, H.; Hammond, G. E.; Bosbach, D.; Puigdomenech, I.

    2016-12-01

    Geological repositories for nuclear waste are based on multi-barrier concepts using engineered and natural barriers. In fractured crystalline rocks, the efficiency of the host rock as a transport barrier is governed by three processes: advection along fractures, diffusion into the rock matrix and retention onto the available sorption sites. Anomalous matrix penetration profiles were observed in experiments (i.e. REPRO, carried out by Posiva at the ONKALO underground facility in Finland, and the Long Term Sorption Diffusion Experiment, LTDE-SD, carried out by SKB at the Äspö Hard Rock Laboratory in Sweden). The textural and mineralogical heterogeneity of the rock matrix was offered as a plausible explanation for these anomalous penetration profiles. The heterogeneous structure of the rock matrix was characterised at the grain scale using a micron-scale Discrete Fracture Network (DFN), which is then represented on a micron-scale structured grid. Matrix fracture free volumes are identified as reactive biotite-bearing grains whereas the rest of the matrix domain constitutes the inter-granular regions. The reactive transport problem mimics the ingress of cesium along a single transmissive fracture. Part of the injected mass diffuses into the matrix where it might eventually sorb onto the surface of reactive grains. The reactive transport calculations are carried out using iDP (interface between DarcyTools and PFLOTRAN). The generation of the DFN is done by DarcyTools, which also takes care of solving the groundwater flow problem. Computed Darcy velocities are extracted and used as input for PFLOTRAN. All the simulation runs are carried out on the supercomputer JUQUEEN at the Jülich Supercomputing Centre. The results are compared with those derived with an alternative model, where biotite abundance is averaged over the whole matrix volume. The analysis of the cesium breakthrough computed at the fracture outlet shows that the averaged model provides later first-arrival time

  5. Viscoelastic Waves Simulation in a Blocky Medium with Fluid-Saturated Interlayers Using High-Performance Computing

    Science.gov (United States)

    Sadovskii, Vladimir; Sadovskaya, Oxana

    2017-04-01

    A thermodynamically consistent approach to the description of linear and nonlinear wave processes in a blocky medium, which consists of a large number of elastic blocks interacting with each other via pliant interlayers, is proposed. The mechanical properties of the interlayers are defined by means of rheological schemes of different levels of complexity. Elastic interaction between the blocks is considered in the framework of the linear elasticity theory [1]. The effects of viscoelastic shear in the interblock interlayers are taken into consideration using the Poynting-Thomson rheological scheme. The model of an elastic porous material is used in the interlayers, where the pores collapse if an abrupt compressive stress is applied. On the basis of the Biot equations for a fluid-saturated porous medium, a new mathematical model of a blocky medium is worked out, in which the interlayers provide a convective fluid motion due to external perturbations. The collapse of pores is modeled within the generalized rheological approach, wherein the mechanical properties of a material are simulated using four rheological elements. Three of them are the traditional elastic, viscous and plastic elements; the fourth element is the so-called rigid contact [2], which is used to describe the behavior of materials with different resistance to tension and compression. Thermodynamic consistency of the equations in the interlayers with the equations in the blocks guarantees fulfillment of the energy conservation law for the blocky medium as a whole, i.e. the kinetic and potential energy of the system is the sum of the kinetic and potential energies of the blocks and interlayers. As a result of the discretization of the equations of the model, a robust computational algorithm is constructed, which is stable because of the thermodynamic consistency of the finite difference equations at the discrete level. The splitting method with respect to the spatial variables and the Godunov gap decay scheme are used in the blocks, the

  6. High performance computing applied to simulation of the flow in pipes; Computacao de alto desempenho aplicada a simulacao de escoamento em dutos

    Energy Technology Data Exchange (ETDEWEB)

    Cozin, Cristiane; Lueders, Ricardo; Morales, Rigoberto E.M. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Dept. de Engenharia Mecanica

    2008-07-01

    In recent years, computer clusters have emerged as a real alternative for the solution of problems that require high performance computing, driving the development of new applications. Among them, flow simulation represents a real computational burden, especially for large systems. This work presents a study of the use of parallel computing for the numerical simulation of fluid flow in pipelines. A mathematical flow model is solved numerically. In general, this procedure leads to a tridiagonal system of equations suitable to be solved by a parallel algorithm. In this work, this is accomplished by a parallel odd-even reduction method found in the literature, implemented in the Fortran programming language. A computational platform composed of twelve processors was used. Measurements of CPU time for different tridiagonal system sizes and numbers of processors were obtained, highlighting the communication time between processors as an important issue to be considered when evaluating the performance of parallel applications. (author)
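
    The odd-even (cyclic) reduction method referenced can be sketched serially. The didactic Python version below assumes the convention a[0] = c[-1] = 0; every equation within one reduction level is independent of the others, which is what makes the method parallelizable. It is not the authors' Fortran implementation.

      import numpy as np

      def cyclic_reduction(a, b, c, d):
          # Solve a tridiagonal system by odd-even (cyclic) reduction.
          # a: sub-diagonal (a[0] = 0), b: main diagonal,
          # c: super-diagonal (c[-1] = 0), d: right-hand side.
          n = len(b)
          if n == 1:
              return np.array([d[0] / b[0]])
          # Eliminate even-indexed unknowns from the odd-indexed equations;
          # each i below is independent of the others, hence parallelizable.
          na, nb, nc, nd = [], [], [], []
          for i in range(1, n, 2):
              al = a[i] / b[i - 1]
              be = c[i] / b[i + 1] if i + 1 < n else 0.0
              na.append(-al * a[i - 1])
              nb.append(b[i] - al * c[i - 1] - (be * a[i + 1] if i + 1 < n else 0.0))
              nc.append(-be * c[i + 1] if i + 1 < n else 0.0)
              nd.append(d[i] - al * d[i - 1] - (be * d[i + 1] if i + 1 < n else 0.0))
          x = np.empty(n)
          x[1::2] = cyclic_reduction(np.array(na), np.array(nb),
                                     np.array(nc), np.array(nd))
          # Back-substitute the even-indexed unknowns.
          for i in range(0, n, 2):
              left = a[i] * x[i - 1] if i > 0 else 0.0
              right = c[i] * x[i + 1] if i + 1 < n else 0.0
              x[i] = (d[i] - left - right) / b[i]
          return x

      rng = np.random.default_rng(2)
      n = 9
      b = rng.uniform(2.0, 3.0, n)                  # diagonally dominant system
      a = rng.uniform(-1.0, 1.0, n); a[0] = 0.0
      c = rng.uniform(-1.0, 1.0, n); c[-1] = 0.0
      d = rng.standard_normal(n)
      A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d)))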

  7. Nuclear physics from lattice simulations

    CERN Document Server

    Doi, Takumi

    2012-01-01

    We review recent lattice QCD activities with emphasis on the impact on nuclear physics. In particular, the progress toward the determination of nuclear and baryonic forces (potentials) using Nambu-Bethe-Salpeter (NBS) wave functions is presented. We discuss major challenges for multi-baryon systems on the lattice: (i) the signal-to-noise issue and (ii) the computational cost issue. We argue that the former issue can be avoided by extracting energy-independent (non-local) potentials from time-dependent NBS wave functions without relying on ground state saturation, and the latter cost is drastically reduced by developing a novel "unified contraction algorithm." The lattice QCD results for nuclear forces, hyperon forces and three-nucleon forces are presented, and physical insights are discussed. A comparison to results from the traditional Lüscher method is given, and open issues to be resolved are addressed as well.

  8. Physics-Based Simulator for NEO Exploration Analysis & Simulation

    Science.gov (United States)

    Balaram, J.; Cameron, J.; Jain, A.; Kline, H.; Lim, C.; Mazhar, H.; Myint, S.; Nayar, H.; Patton, R.; Pomerantz, M.

    2011-01-01

    As part of the Space Exploration Analysis and Simulation (SEAS) task, the National Aeronautics and Space Administration (NASA) is using physics-based simulations at NASA's Jet Propulsion Laboratory (JPL) to explore potential surface and near-surface mission operations at Near Earth Objects (NEOs). The simulator is under development at JPL and can be used to provide detailed analysis of various surface and near-surface NEO robotic and human exploration concepts. In this paper we describe the SEAS simulator and provide examples of recent mission systems and operations concepts investigated using the simulation. We also present related analysis work and tools developed for both the SEAS task and general modeling, analysis and simulation capabilities for asteroid/small-body objects.

  9. Design and Study of Cognitive Network Physical Layer Simulation Platform

    Directory of Open Access Journals (Sweden)

    Yongli An

    2014-01-01

    Full Text Available Cognitive radio technology has received wide attention for its ability to sense and use idle frequencies. IEEE 802.22 WRAN, the first standard to build on cognitive radio technology, is characterized by spectrum sensing and wireless data transmission. As far as wireless transmission is concerned, the availability and implementation of a mature and robust physical layer algorithm are essential to high performance. For the physical layer of WRAN using OFDMA technology, this paper proposes a synchronization algorithm and at the same time provides a public platform for the improvement and verification of new algorithms. The simulation results show that the performance of the platform is highly close to the theoretical value.

  10. Numerical simulation of instability and transition physics

    Science.gov (United States)

    Streett, C. L.

    1990-01-01

    The study deals with the algorithm technology used in instability and transition simulations. Discretization methods are outlined, and attention is focused on high-order finite-difference methods and high-order centered-difference formulas. One advantage of finite-difference methods over spectral methods is thought to be the implementation of nonrigorous boundary conditions. It is suggested that the next significant advances in the understanding of transition physics and in the ability to predict transition will come with more physically realistic simulations. Compressible-flow algorithms are discussed, and it is noted that with further development, exploration of bypass mechanisms on simple bodies at high speed would be possible.

  11. Computer simulation in physics and engineering

    CERN Document Server

    Steinhauser, Martin Oliver

    2013-01-01

    This work is a needed reference for the widely used techniques and methods of computer simulation in physics and other disciplines, such as materials science. It conveys both the theoretical foundations of computer simulation and the applications and "tricks of the trade" that are often scattered across various papers, and thus meets a need and fills a gap for every scientist who needs computer simulations for the task at hand. In addition to being a reference, case studies and exercises for use as course reading are included.

  12. Physical Characterization of Florida International University Simulants

    Energy Technology Data Exchange (ETDEWEB)

    HANSEN, ERICHK.

    2004-08-19

    Florida International University (FIU) shipped Laponite, clay (bentonite and kaolin blend), and Quality Assurance Requirements Document AZ-101 simulants to the Savannah River Technology Center for physical characterization. The objective of the task was to measure the physical properties of the fluids provided by FIU and to report the results. The physical properties were measured using the approved River Protection Project Waste Treatment Plant characterization procedure [Ref. 1]. This task was conducted in response to the work outlined in CCN066794 [Ref. 2], authored by Gary Smith and William Graves of RPP-WTP.

  13. Complex Langevin simulation in condensed matter physics

    CERN Document Server

    Yamamoto, Arata

    2015-01-01

    The complex Langevin method is one hopeful candidate to tackle the sign problem. This method is applicable not only to QCD but also to nonrelativistic field theory, such as condensed matter physics. We present the simulation results of a rotating Bose gas and an imbalanced Fermi-Hubbard model.

  14. Morphology of Gas Release in Physical Simulants

    Energy Technology Data Exchange (ETDEWEB)

    Daniel, Richard C.; Burns, Carolyn A.; Crawford, Amanda D.; Hylden, Laura R.; Bryan, Samuel A.; MacFarlan, Paul J.; Gauglitz, Phillip A.

    2014-07-03

    This report documents testing activities conducted as part of the Deep Sludge Gas Release Event Project (DSGREP). The testing described in this report focused on evaluating the potential retention and release mechanisms of hydrogen bubbles in underground radioactive waste storage tanks at Hanford. The goal of the testing was to evaluate the rate, extent, and morphology of gas release events in simulant materials. Previous, undocumented scoping tests had revealed dramatically different gas release behavior in simulants with similar physical properties. Specifically, previous gas release tests evaluated the extent of release from 30 Pa kaolin and 30 Pa bentonite clay slurries. While both materials are clays and both have equivalent shear strength as measured with a shear vane, it was found that upon stirring, gas was released immediately and completely from the bentonite clay slurry, while little if any gas was released from the kaolin slurry. The motivation for the current work was to replicate these tests in a quality-controlled test environment and to evaluate the release behavior of another simulant used in DSGREP testing. Three simulant materials were evaluated: 1) a 30 Pa kaolin clay slurry, 2) a 30 Pa bentonite clay slurry, and 3) a Rayleigh-Taylor (RT) simulant (a simulant designed to support DSGREP RT instability testing). Entrained gas was generated in these simulant materials using two methods: 1) application of vacuum over about a 1-minute period to nucleate dissolved gas within the simulant, and 2) addition of hydrogen peroxide to generate gas by peroxide decomposition in the simulants over about a 16-hour period. Bubble release was effected by vibrating the test material using an external vibrating table. When testing with hydrogen peroxide, gas release was also accomplished by stirring of the simulant.

  15. Simulation of General Physics laboratory exercise

    Science.gov (United States)

    Aceituno, P.; Hernández-Aceituno, J.; Hernández-Cabrera, A.

    2015-01-01

    Laboratory exercises are an important part of general physics teaching, both during the last years of high school and the first year of college education. Given the cost of acquiring enough laboratory equipment for all students and the widespread access to computer rooms in teaching, we propose the development of computer-simulated laboratory exercises. A representative exercise in general physics is the determination of the gravitational acceleration from the free-fall motion of a metal ball. Using a model of the real exercise, we have developed an interactive system that allows students to alter the starting height of the ball to obtain different fall times. The simulation was programmed in ActionScript 3 so that it can be freely executed on any operating system; to ensure the accuracy of the calculations, all input parameters of the simulations were modelled using digital measurement units, and to allow statistical treatment of the resulting data, measurement errors are simulated through limited randomization.
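
    The exercise the simulation models, estimating g from free-fall times with limited-randomization measurement errors, translates directly into a few lines of Python. The error magnitude and the drop heights below are illustrative assumptions, not values from the authors' ActionScript tool.

      import random

      def measured_fall_time(h, g=9.81, timer_error=0.01):
          # Simulate timing a ball dropped from height h; the Gaussian term
          # plays the role of the tool's "limited randomization" of errors.
          t = (2 * h / g) ** 0.5
          return t + random.gauss(0.0, timer_error)

      random.seed(42)
      heights = [0.5, 1.0, 1.5, 2.0]                 # student-chosen heights (m)
      data = [(h, measured_fall_time(h)) for h in heights for _ in range(10)]
      # Least squares for h = (g/2) t^2  =>  g = 2 * sum(h t^2) / sum(t^4)
      g_est = 2 * sum(h * t * t for h, t in data) / sum(t ** 4 for _, t in data)
      print(f"estimated g = {g_est:.3f} m/s^2")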

  16. High Performance Marine Vessels

    CERN Document Server

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from fast ferries to the latest high speed Navy craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data on the range of HPMVs to date. Included is a comparison of all HPMV craft and the differences between them, and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book focuses on technology at the aero-marine interface, covers the full range of high performance marine vessel concepts, explains the historical development of various HPMVs, and discusses ferries, racing and pleasure craft, as well as utility and military missions. High Performance Marine Vessels is an ideal book for student...

  17. PREFACE: VII Brazilian Meeting on Simulational Physics

    Science.gov (United States)

    Plascak, Joao Antonio; Rosas, Alexandres

    2014-03-01

    This special issue includes invited and selected articles from the VIIth Brazilian Meeting on Simulational Physics (BMSP), held in João Pessoa, Paraíba, Brazil, from the 5th to the 10th of August, 2013. This is the seventh such meeting, and the first to have contributed papers published in the Journal of Physics: Conference Series. The previous meetings in the BMSP series took place in the mountains of Minas Gerais and in the region of the Brazilian Pantanal. Now, for the first time, the Meeting was held on the pleasant shores of João Pessoa, the capital of the state of Paraíba. The VIIth BMSP brought together more than 50 researchers from all over the world for a vibrant and productive period. As in the previous meetings, the talks and posters highlighted recent advances in applications, algorithms, and implementations of computer simulation methods for the study of condensed matter, materials, out-of-equilibrium, quantum and biologically motivated systems. We are sure that this meeting series will continue to provide a valuable venue for people working in simulational physics to exchange ideas and discuss the state of the art of this always expanding field. We are very glad to present this special issue, and are most grateful to the editors of the Journal of Physics: Conference Series for making this publication possible. We are grateful for the outstanding work of the João Pessoa team, and for the financial support of the Brazilian agencies CNPq, CAPES, FAPESQ, and of the Federal Universities UFPB and UFMG. Last, but not least, we would like to acknowledge all of the authors of this special issue for their contributions. João Antonio Plascak Alexandre Rosas Guest Editors Conference photograph

  18. Simulation and analysis of physical mapping

    Energy Technology Data Exchange (ETDEWEB)

    Sirotkin, K.; Loehr, J.J.

    1988-01-01

    The current talk involves objects smaller than those that are macro-restriction mapped but larger than the bases that are sequenced. Specifically, we describe simulations of the alignment of recombinant lambdoid and cosmid clones by fingerprinting methods. The purpose of the simulation is to compare methods, as realistically as desired, while preparing for the analysis of actual physical mapping data. Furthermore, we will eventually begin to "submit" data to the Human Genome Information Resource (HGIR) to exercise its database. A simulation has advantages over a formal mathematical analysis. Not only can a simulation be as realistic as desired (for example, by using actual sequences from GenBank™), but if designed properly, much of the code can be used on actual data when finished. Furthermore, a simulation can be designed to utilize any degree of parameterization, while analyses usually must make simplifying assumptions to minimize the number of parameters. For example, the way this simulation is designed, one could, by simply adding a short module, mimic rearrangements that might occur during cloning in order to discover the effect they would have on the contig-generating algorithms and to learn how to recognize and deal with such rearrangements. This talk describes the structure and announces the availability of the code for the simulation modules. We tested the method for aligning clones based upon oligonucleotide hybridizing sites, comparing its efficacy on actual human DNA sequences from GenBank to its efficacy on random, completely uncorrelated sequence. Surprisingly, its performance was about the same on both sequences.

  19. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability to cover the range of wave propagation required for nuclear explosion monitoring (NEM), from the buried nuclear device to the seismic sensor, and thereby to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM, including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of a UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNEs in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak

  20. Gettering simulator: physical basis and algorithm

    Science.gov (United States)

    Hieslmair, H.; Balasubramanian, S.; Istratov, A. A.; Weber, E. R.

    2001-07-01

    The basic physical principles and mechanisms of gettering of metal impurities in silicon are well established. However, a predictive model of gettering that would enable one to determine what fraction of contaminants will be gettered in a particular process and how the existing process should be modified to optimize gettering is lacking. Predictive gettering of transition metals in silicon requires development of a robust algorithm to model diffusion and precipitation of transition metals in silicon, and material parameters to describe the kinetics of defect reactions and the stable equilibrium state of the formed complexes. This paper describes the algorithm of a gettering simulator, capable of modelling relaxation and segregation gettering of interstitially diffusing transition metal impurities in silicon wafers. The basic physical equations used to model gettering are differential equations for diffusion, precipitation and segregation. These equations are solved using the implicit finite-difference algorithm, based on the underlying physics of the problem. The material parameters required as input for the gettering simulator such as segregation coefficient, precipitation site density and precipitation radius, which need to be obtained experimentally, are briefly discussed.
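
    The implicit finite-difference core of such a simulator, advancing the impurity concentration profile by one backward-Euler diffusion step per time step, can be sketched generically. The parameters and boundary conditions below are assumed for illustration and are not taken from the paper.

      import numpy as np

      def implicit_diffusion_step(u, D, dx, dt):
          # One backward-Euler step of 1D diffusion u_t = D u_xx: solve
          # (I - dt*D*L) u_new = u, with L the second-difference operator
          # and zero-flux (Neumann) boundaries.
          n, r = len(u), D * dt / dx ** 2
          A = np.zeros((n, n))
          for i in range(n):
              A[i, i] = 1 + 2 * r
              if i > 0:
                  A[i, i - 1] = -r
              if i < n - 1:
                  A[i, i + 1] = -r
          A[0, 0] = A[-1, -1] = 1 + r             # mirror the missing neighbor
          return np.linalg.solve(A, u)

      # Assumed, illustrative parameters (cm, s); not values from the paper.
      u = np.zeros(100)
      u[:5] = 1.0                                 # contaminated surface layer
      for _ in range(50):
          u = implicit_diffusion_step(u, D=1e-6, dx=1e-4, dt=1.0)
      print(f"peak concentration after anneal: {u.max():.3f}")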

  1. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. Consideration of all these factors is a precondition for a truly integrated practice and, as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy efficient to operate and valuable for building communities. Herein discussed are two successful examples of low energy prefabricated housing projects built in Copenhagen, Denmark, which embraced both the constraints and possibilities offered by prefabrication.

  2. High-performance sports medicine.

    Science.gov (United States)

    Speed, Cathy

    2013-02-01

    High performance sports medicine involves the medical care of athletes, who are extraordinary individuals and who are exposed to intensive physical and psychological stresses during training and competition. The physician has a broad remit and acts as a 'medical guardian' to optimise health while minimising risks. This review describes this interesting field of medicine, its unique challenges and priorities for the physician in delivering best healthcare.

  3. Physics and simulation of communication lasers

    Energy Technology Data Exchange (ETDEWEB)

    Kazarinov, R.F. [AT&T Bell Labs., Murray Hill, NJ (United States)

    1994-12-31

    The authors review AT&T work on the physics of semiconductor lasers, including a method for calculating the electronic states and optical properties of semiconductor quantum structures which is applicable to bulk, quantum well, quantum wire and quantum dot lasers. They also include two-dimensional numerical simulation of carrier transport in laser structures, which allows calculation of the efficiency of injected carrier consumption by the active region and the dependence of the laser current on applied voltage. Calculations of quantum efficiency and threshold current for bulk InGaAsP lasers are supported by experimental data.

  4. Analyzing Virtual Physics Simulations with Tracker

    Science.gov (United States)

    Claessens, Tom

    2017-12-01

    In the physics teaching community, Tracker is well known as user-friendly open-source video analysis software, authored by Douglas Brown. With this tool, the user can trace markers indicated on a video or on stroboscopic photos and perform kinematic analyses. Tracker also includes a data modeling tool that allows one to fit theoretical equations of motion onto experimentally obtained data. In the field of particle mechanics, Tracker has been used effectively for learning and teaching about projectile motion, "toss up" and free-fall vertical motion, and to explain the principle of mechanical energy conservation. Tracker has also been used successfully in rigid body mechanics to interpret the results of experiments with rolling/slipping cylinders and moving rods. In this work, I propose an original method in which Tracker is used to analyze virtual computer simulations created with a physics-based motion solver, instead of analyzing video recordings or stroboscopic photos. This could be an interesting approach to studying kinematics and dynamics problems in physics education, in particular when there is no or limited access to physical labs. I demonstrate the working method with a typical (but quite challenging) problem in classical mechanics: a slipping/rolling cylinder on a rough surface.
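
    The closing example, a cylinder that first slips and then rolls without slipping, can be generated synthetically before being analyzed in Tracker. The minimal Python solver below uses assumed parameters; for a solid cylinder the transition occurs analytically at v = 2*v0/3, which the loop reproduces.

      # Solid cylinder launched sliding (v0 = 4 m/s, omega0 = 0) on a rough floor.
      mu, g, R = 0.3, 9.81, 0.05        # assumed friction coefficient and radius
      v, omega, t, dt = 4.0, 0.0, 0.0, 1e-4

      while v > omega * R:              # slipping phase: kinetic friction acts
          v -= mu * g * dt              # friction decelerates the translation
          omega += 2 * mu * g / R * dt  # and spins the cylinder up (I = m R^2 / 2)
          t += dt
      print(f"rolling without slipping from t = {t:.3f} s at v = {v:.3f} m/s")
      # Analytic check: the transition happens at v = 2*v0/3 = 2.667 m/s.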

  5. INL High Performance Building Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  6. Load management strategy for Particle-In-Cell simulations in high energy physics

    DEFF Research Database (Denmark)

    Beck, Arnaud; Frederiksen, Jacob Trier; Derouillat, Julien

    2016-01-01

    In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. By comparing the results given by different codes, it is possible to point out algorithmic limitations both in terms of physical accuracy and computational performance. In this paper we... towards a modern, accurate high-performance PIC code for high energy physics.

  7. Danish High Performance Concretes

    DEFF Research Database (Denmark)

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University...

  8. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs given at the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  9. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  10. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  11. High-Performance Networking

    CERN Document Server

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time, and how that developed into what is today known as standard computer network communication. It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols, as well as some necessary details of the relevant Wide Area Network (WAN) standards, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  12. High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Traian Oneţ

    2009-01-01

    Full Text Available The paper presents the latest studies and research accomplished in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to examine the advantages and inconveniences of using a particular concrete type. Two concrete recipes are presented, namely one for the concrete used in rigid pavements for roads and another one for self-compacting concrete.

  13. High performance AC drives

    CERN Document Server

    Ahmad, Mukhtar

    2010-01-01

    This book presents a comprehensive view of high performance AC drives. It may be considered both a textbook for graduate students and an up-to-date monograph. It may also be used by R&D professionals involved in improving the performance of drives in industry. The book will also be beneficial to researchers pursuing work on multiphase drives as well as sensorless and direct torque control of electric drives, since up-to-date references on these topics are provided. It also provides a few examples of the modeling, analysis and control of electric drives using MATLAB/SIMULIN

  14. High Performance Liquid Chromatography

    Science.gov (United States)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  15. Clojure high performance programming

    CERN Document Server

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code. This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  16. High Performance Tools And Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors, and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we present this report describing our findings, along with an associated spreadsheet outlining the current capabilities and characteristics of leading and emerging tools in the high performance computing arena. The first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available on which these tools and technologies can be used for software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included with this paper at the end is a table of the discussed development tools and their operational environments.

  17. High performance data transfer

    Science.gov (United States)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way, while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved between clusters almost 200 Gbps memory-to-memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000-mile 100 Gbps link.

  18. Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes

    DEFF Research Database (Denmark)

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-01-01

    Methods for computing the exciton dynamics within a density-matrix formalism are known, but are restricted to small systems with fewer than ten sites due to their computational complexity. To study the excitonic energy transfer in larger systems, we adapt and extend the exact hierarchical equations of motion (HEOM) method to various high-performance many-core platforms using the Open Computing Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from the predictions of approximate theories and clarify the time-scale of the transfer process. We investigate the impact of resonantly coupled vibrational modes...

  19. Teaching Physics (and Some Computation) Using Intentionally Incorrect Simulations

    Science.gov (United States)

    Cox, Anne J.; Junkin, William F.; Christian, Wolfgang; Belloni, Mario; Esquembre, Francisco

    2011-05-01

    Computer simulations are widely used in physics instruction because they can aid student visualization of abstract concepts, they can provide multiple representations of concepts (graphical, trajectories, charts), they can approximate real-world examples, and they can engage students interactively, all of which can enhance student understanding of physics concepts. For these reasons, we create and use simulations to teach physics [1,2], but we also want students to recognize that the simulations are only as good as the physics behind them, so we have developed a series of simulations that are intentionally incorrect, where the task is for students to find and correct the errors [3].

  20. Exploring Space Physics Concepts Using Simulation Results

    Science.gov (United States)

    Gross, N. A.

    2008-05-01

    The Center for Integrated Space Weather Modeling (CISM), a Science and Technology Center (STC) funded by the National Science Foundation, has the goal of developing a suite of integrated, physics-based computer models of the space environment that can follow the evolution of a space weather event from the Sun to the Earth. In addition to the research goals, CISM is also committed to training the next generation of space weather professionals who are imbued with a system view of space weather. This view should include an understanding of both heliospheric and geospace phenomena. To this end, CISM offers a yearly Space Weather Summer School targeted at first-year graduate students, although advanced undergraduates and space weather professionals have also attended. This summer school uses a number of innovative pedagogical techniques, including devoting each afternoon to computer lab exercises that use results from research-quality simulations and visualization techniques, along with ground-based and satellite data, to explore concepts introduced during the morning lectures. These labs are suitable for use in a wide variety of educational settings, from formal classroom instruction to outreach programs. The goal of this poster is to outline the goals and content of the lab materials so that instructors may evaluate their potential use in the classroom or other settings.

  1. High Performance Parallel Processing (HPPP) Finite Element Simulation of Fluid Structure Interactions Final Report CRADA No. TC-0824-94-A

    Energy Technology Data Exchange (ETDEWEB)

    Couch, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ziegler, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2018-01-24

    This project was a multi-partner CRADA, a partnership between Alcoa and LLNL. Alcoa developed a system of numerical simulation modules that provided accurate and efficient three-dimensional modeling of combined fluid dynamics and structural response.

  2. High Performance Network Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, Jesse E [Los Alamos National Laboratory

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues with the clusters to monitoring services. The InfiniBand infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1. Ten filters are currently implemented for ibmon2 using Python. The filters watch thresholds on port counters; once a counter exceeds its threshold, a filter reports the error to the on-call system administrators and updates the grid to show the local host with the issue.
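
    To make the threshold-filter idea concrete, here is a minimal sketch in Python. The counter names, threshold values, and alert hook are illustrative assumptions; the abstract does not describe ibmon2's actual filter interface.

        # Sketch of a port-counter threshold filter in the spirit of the ibmon2
        # filters described above. Names and limits are hypothetical.
        THRESHOLDS = {
            "symbol_error_counter": 100,   # allowed errors per polling interval
            "link_downed_counter": 1,
            "port_rcv_errors": 50,
        }

        def check_port_counters(host, counters):
            """Return alert messages for every counter over its threshold."""
            alerts = []
            for name, limit in THRESHOLDS.items():
                value = counters.get(name, 0)
                if value > limit:
                    alerts.append(f"{host}: {name}={value} exceeds limit {limit}")
            return alerts

        def report(alerts):
            # Stand-in for paging the on-call administrator / updating the grid.
            for message in alerts:
                print("ALERT:", message)

        if __name__ == "__main__":
            sample = {"symbol_error_counter": 250, "link_downed_counter": 0}
            report(check_port_counters("node042", sample))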

  3. Animating virtual characters using physics-based simulation

    NARCIS (Netherlands)

    Geijtenbeek, T.|info:eu-repo/dai/nl/357514564

    2013-01-01

    Over the past decades, physics-based simulation has become an established method for the animation of passive phenomena, such as cloth, water and rag-doll characters. The conception that physics-based simulation can also be used for animating actively controlled characters dates back to the early...

  4. High-performing physician executives.

    Science.gov (United States)

    Brown, M; Larson, S R; McCool, B P

    1988-01-01

    Physician leadership extends beyond traditional clinical disciplines to hospital administration, group practice management, health policy making, management of managed care programs, and many business positions. What kind of person makes a good physician executive? What stands out as the most important motivations, attributes, and interests of high-performing physician executives? How does this compare with non-physician health care executives? Such questions have long been high on the agenda of executives in other industries. This article builds on existing formal assessments of leadership attributes of high-performing business, government, and educational executives and on closer examination of health care executives. Previous studies looked at the need for innovative, entrepreneurial, energetic, community-oriented leaders for positions throughout health care. Traits that distinguish excellence and leadership were described by Brown and McCool.* That study characterized successful leaders in terms of physical strengths (high energy, good health, and propensity for hard work), mental strengths (creativity, intuition, and innovation), and organizational strengths (mission orientation, vision, and entrepreneurial spirit). In this investigation, a subset of health care executives, including physician executives, was examined more closely. It was initially assumed that successful physician executives exhibit many of the same positive traits as do nonphysician executives. This assumption was tested with physician leaders in a range of administrative and managerial positions. We also set out to identify key differences between physician and nonphysician executives. Even with our limited exploration, it seems to us that physician executives probably do differ from nonphysician executives.

  5. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware – in the form of Field Programmable Gate Arrays (FPGAs) – in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  6. Simulating superluminal physics with superconducting circuit technology

    Science.gov (United States)

    Sabín, Carlos; Peropadre, Borja; Lamata, Lucas; Solano, Enrique

    2017-09-01

    We provide tools for the quantum simulation of superluminal motion with superconducting circuits. We show that it is possible to simulate the motion of a superconducting qubit at constant velocities that exceed the speed of light in the electromagnetic medium and the subsequent emission of Ginzburg radiation. We also consider possible setups for simulating the superluminal motion of a mirror, finding a link with the super-radiant phase transition of the Dicke model.

  7. Interaction and Impact Studies for Distributed Energy Resource, Transactive Energy, and Electric Grid, using High Performance Computing-based Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kelley, B M

    2017-02-10

    The electric utility industry is undergoing significant transformations in its operation model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While these changes and new technologies drive greater efficiency and reliability, they may also introduce new cyber attack vectors. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of the controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool to address and better understand the problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model, to better understand the operational impact and performance of cybersecurity controls.

  8. Development of a fast high performance liquid chromatographic screening system for eight antidiabetic drugs by an improved methodology of in-silico robustness simulation.

    Science.gov (United States)

    Mokhtar, Hatem I; Abdel-Salam, Randa A; Haddad, Ghada M

    2015-06-19

    Robustness of RP-HPLC methods is a crucial method quality attribute that has gained increasing interest throughout the efforts to apply quality-by-design concepts to analytical methodology. Improving design-space modeling approaches to represent method robustness has been the goal of many previous works, since modeling design spaces with respect to robustness fulfils the quality-by-design principle of ensuring method validity throughout the design space. The current work describes an improvement to robustness modeling of design spaces in the context of RP-HPLC method development for the screening of eight antidiabetic drugs. The described improvement consists of in-silico simulation of practical robustness-testing procedures, and thus has the advantage of modeling design spaces with higher confidence in the estimates of method robustness. The proposed in-silico robustness test was performed as a full factorial design of deliberate shifts in the simulated method conditions for each predicted point in the knowledge space, with modeling-error propagation. The design space was then calculated as the zones exceeding a threshold probability of passing the simulated robustness testing. Potential design spaces were mapped for three different stationary phases as a function of gradient elution parameters, pH and ternary solvent ratio. A robust and fast separation of the eight compounds within less than 6 min was selected and confirmed through experimental robustness testing. The effectiveness of this approach for defining design spaces with ensured robustness and desired objectives was demonstrated.
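
    As a sketch of the in-silico robustness test described above: for each candidate point, a full factorial of deliberate parameter shifts is simulated, and the point is kept in the design space only if a sufficient fraction of the perturbed conditions still meets the separation criterion. The predictive model, shift sizes and thresholds below are illustrative assumptions, not the paper's fitted models, and modeling-error propagation is omitted for brevity.

        import itertools

        # Deliberate shifts applied to each method parameter, mimicking a
        # practical robustness test (values are illustrative, not the paper's).
        SHIFTS = {"pH": 0.1, "gradient_time": 1.0, "ternary_ratio": 2.0}

        def predict_resolution(c):
            # Toy stand-in for a fitted chromatographic model: resolution falls
            # off away from a nominal optimum (purely illustrative).
            return (2.0 - abs(c["pH"] - 3.0)
                        - 0.05 * abs(c["gradient_time"] - 20.0)
                        - 0.02 * abs(c["ternary_ratio"] - 10.0))

        def passes_robustness(conditions, threshold=1.5):
            """Full factorial of -/0/+ shifts around a candidate point; return
            the fraction of perturbed conditions whose predicted critical
            resolution still meets the threshold."""
            passed = total = 0
            for signs in itertools.product((-1, 0, 1), repeat=len(SHIFTS)):
                perturbed = dict(conditions)
                for (name, delta), s in zip(SHIFTS.items(), signs):
                    perturbed[name] += s * delta
                total += 1
                if predict_resolution(perturbed) >= threshold:
                    passed += 1
            return passed / total

        # A point belongs to the design space if its pass probability exceeds
        # a chosen cutoff, e.g. passes_robustness(point) >= 0.95.
        point = {"pH": 3.0, "gradient_time": 20.0, "ternary_ratio": 10.0}
        print(passes_robustness(point))   # 1.0 at this toy model's optimum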

  9. High-performance dual-energy imaging with a flat-panel detector: imaging physics from blackboard to benchtop to bedside

    Science.gov (United States)

    Siewerdsen, J. H.; Shkumat, N. A.; Dhanantwari, A. C.; Williams, D. B.; Richard, S.; Daly, M. J.; Paul, N. S.; Moseley, D. J.; Jaffray, D. A.; Yorkston, J.; Van Metter, R.

    2006-03-01

    The application of high-performance flat-panel detectors (FPDs) to dual-energy (DE) imaging offers the potential for dramatically improved detection and characterization of subtle lesions through reduction of "anatomical noise," with applications ranging from thoracic imaging to image-guided interventions. In this work, we investigate DE imaging performance from first principles of image science through preclinical implementation, including: (1) a generalized task-based formulation of NEQ and detectability as a guide to system optimization; (2) measurements of imaging performance on a DE imaging benchtop; and (3) a preclinical system developed in our laboratory for cardiac-gated DE chest imaging in a research cohort of 160 patients. Theoretical and benchtop studies directly guide the clinical implementation, including the advantages of double-shot versus single-shot DE imaging, the value of differential added filtration between low- and high-kVp projections, and the optimal selection of kVp pairs, filtration, and dose allocation. Evaluation of task-based NEQ indicates that the detectability of subtle lung nodules in double-shot DE imaging can exceed that of single-shot DE imaging by a factor of 4 or greater. Filter materials are investigated that not only harden the high-kVp beam (e.g., Cu or Ag) but also soften the low-kVp beam (e.g., Ce or Gd), leading to significantly increased contrast in DE images. A preclinical imaging system suitable for human studies has been constructed based upon insights gained from these theoretical and experimental studies. An important component of the system is a simple and robust means of cardiac-gated DE image acquisition, implemented here using a fingertip pulse oximeter. Timing schemes that provide cardiac-gated image acquisition on the same or successive heartbeats are described. Preclinical DE images acquired under the research protocol will afford valuable testing of optimal deployment, facilitate the development of DE CAD, and support...
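
    For orientation, the arithmetic at the core of DE imaging is a weighted log subtraction of the low- and high-kVp projections; the minimal sketch below uses this textbook formulation, which is an assumption and not necessarily the exact decomposition used by the authors.

        import numpy as np

        def dual_energy_subtraction(low_kvp, high_kvp, w=0.5):
            """Weighted log subtraction of low- and high-kVp projections.

            Choosing w to cancel soft tissue yields a bone image; choosing it
            to cancel bone yields a soft-tissue image. Inputs are detector
            signals normalized to the unattenuated beam (values in (0, 1])."""
            eps = 1e-6  # guard against log(0)
            return np.log(high_kvp + eps) - w * np.log(low_kvp + eps)

        # Example with synthetic 2x2 "projections":
        low = np.array([[0.20, 0.35], [0.50, 0.80]])
        high = np.array([[0.30, 0.45], [0.60, 0.85]])
        bone_image = dual_energy_subtraction(low, high, w=0.6)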

  10. High-performance perovskite CH3NH3PbI3 thin films for solar cells prepared by single-source physical vapour deposition

    OpenAIRE

    Fan, Ping; Gu, Di; Liang, Guang-Xing; Luo, Jing-Ting; Chen, Ju-Long; Zheng, Zhuang-Hao; Zhang, Dong-Ping

    2016-01-01

    In this work, an alternative route to fabricating high-quality CH3NH3PbI3 thin films is proposed. Single-source physical vapour deposition (SSPVD) without a post-heat-treating process was used to prepare CH3NH3PbI3 thin films at room temperature. This new process enabled complete surface coverage and moisture stability in a non-vacuum solution. Moreover, the challenges of simultaneously controlling evaporation processes of the organic and inorganic sources via dual-source vapour evaporation a...

  11. Deployment of physics simulation apps using Easy JavaScript Simulations

    OpenAIRE

    Clemente, Félix J. García; Esquembre, Francisco; Wee, Loo Kang

    2017-01-01

    Physics simulations are widely used to improve the learning process in science and engineering education. Deployment of a computational physics simulation/model is extremely complex, given that both knowledge of the science equations and computational and programming skills are required for a fully functional simulation, typically requiring a science educator and a computer scientist/developer to work together. However, when using Easy JavaScript Simulation (EjsS) mode...

  12. Developing iPad-Based Physics Simulations That Can Help People Learn Newtonian Physics Concepts

    Science.gov (United States)

    Lee, Young-Jin

    2015-01-01

    The aims of this study are: (1) to develop iPad-based computer simulations called iSimPhysics that can help people learn Newtonian physics concepts; and (2) to assess its educational benefits and pedagogical usefulness. To facilitate learning, iSimPhysics visualizes abstract physics concepts, and allows for conducting a series of computer…

  13. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
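
    A schematic sketch of the two key ideas, dynamic assignment of MOC tracks to worker threads and a wide, vectorizable inner loop over energy groups, is given below; the per-track attenuation update is a simplified stand-in for the full MOC kernel, and all sizes are illustrative.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        N_GROUPS = 64        # energy groups: a wide inner loop suited to SIMD
        N_TRACKS = 10_000

        rng = np.random.default_rng(0)
        # Simplified per-track data: segment optical thickness per energy group.
        tracks = rng.random((N_TRACKS, N_GROUPS))

        def sweep_track(track):
            """Toy stand-in for a MOC track sweep: the loop over energy groups
            is expressed as one NumPy vector operation, mirroring the wide
            vectorizable inner loop described in the paper."""
            psi = np.ones(N_GROUPS)          # incoming angular flux
            return psi * np.exp(-track)      # attenuate across the segment

        # Task-based parallelism: tracks are handed to threads dynamically, so
        # the pool balances load even when tracks have unequal cost.
        with ThreadPoolExecutor() as pool:
            fluxes = list(pool.map(sweep_track, tracks))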

  14. High performance polymer concrete

    Directory of Open Access Journals (Sweden)

    Frías, M.

    2007-06-01

    Full Text Available This paper studies the performance of concrete whose chief components are natural aggregate and an organic binder, a thermosetting polyester resin, denominated polymer concrete or PC. The material was examined macro- and microscopically, and its basic physical and mechanical properties were determined using mercury porosimetry, scanning electron microscopy (SEM-EDAX), X-ray diffraction (XRD) and strength tests (modulus of elasticity, stress-strain curves and ultimate strengths). According to the results of these experimental studies, the PC exhibited a low (4.8% closed porosity and a concomitantly continuous internal microstructure. This would at least partially explain its mechanical out-performance of traditional concrete, with average compressive and flexural strength values of 100 MPa and over 20 MPa, respectively. In the absence of standard criteria, the bending test was found to be a useful supplement to compressive strength tests for establishing PC strength classes.

  15. Physics Educators as Designers of Simulation using Easy Java Simulation (Ejs) Part 2

    CERN Document Server

    Wee, Loo Kang

    2012-01-01

    To deepen do-it-yourself (DIY) technology in the physics classroom, we seek to highlight the Open Source Physics (OSP) community of educators that engages, enables and empowers teachers as learners, so that we create DIY technology tools and simulations for inquiry learning. We learn through Web 2.0 online collaborative means to develop simulations together with reputable physicists through the open source digital library. By examining the open source code of a simulation through the Easy Java Simulation (EJS) toolkit, we are able to make sense of the physics from the computational models created by practicing physicists. We share newer (2010-present) simulations that we have remixed from the existing library of simulation models into learning environments suitable for physics inquiry. We hope other teachers will find these simulations useful, remix them to suit their own contexts, and contribute back to benefit all humankind, becoming citizens for the world.

  16. Chemical Adsorption and Physical Confinement of Polysulfides with the Janus-faced Interlayer for High-performance Lithium-Sulfur Batteries.

    Science.gov (United States)

    Chiochan, Poramane; Kaewruang, Siriroong; Phattharasupakun, Nutthaphon; Wutthiprom, Juthaporn; Maihom, Thana; Limtrakul, Jumras; Nagarkar, Sanjog; Horike, Satoshi; Sawangphruk, Montree

    2017-12-18

    We design a Janus-like interlayer with two different functional faces to suppress the shuttling of soluble lithium polysulfides (LPSs) in lithium-sulfur batteries (LSBs). On the front face, conductive functionalized carbon fiber paper (f-CFP) bearing oxygen-containing groups (i.e., -OH and -COOH) on its surface is placed face to face with the sulfur cathode, serving as the first barrier: it accommodates the volume expansion during the cycling process, and its oxygen-containing groups also adsorb the soluble LPSs via lithium bonds. On the back face, a crystalline coordination network of [Zn(H2PO4)2(TzH)2]n (ZnPTz) is coated on the back side of the f-CFP, serving as the second barrier that retards the remaining LPSs passing through the front face via both physical confinement and chemical adsorption (i.e., Li bonding). The LSB using the Janus-like interlayer exhibits a high reversible discharge capacity of 1416 mAh g-1 at 0.1C with a low capacity fading of 0.05% per cycle, 92% capacity retention after 200 cycles, and ca. 100% coulombic efficiency. The fully charged LSB cell can practically supply electricity to a spinning motor with a nominal voltage of 3.0 V for 28 min, demonstrating many potential applications.

  17. Physical and electrical characterization of high-performance Cu{sub 2}ZnSnSe{sub 4} based thin film solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Oueslati, S., E-mail: oueslatisouhaib@hotmail.fr [KACST-Intel Consortium Center of Excellence in Nano-manufacturing Applications (CENA), Riyadh (Saudi Arabia); Imec-partner in Solliance, Kapeldreef 75, 3001 Leuven (Belgium); Department of Physics, Faculty of Sciences of Tunis, Tunis El Manar University (Tunisia); Research Laboratory MMA, University of Carthage, National Institute of Applied Sciences and Technology, INSAT (Tunisia); Brammertz, G. [Imec Division IMOMEC — Partner in Solliance, Wetenschapspark 1, 3590 Diepenbeek (Belgium); Institute for Material Research (IMO), Hasselt University, Wetenschapspark 1, 3590 Diepenbeek (Belgium); Buffière, M. [Imec-partner in Solliance, Kapeldreef 75, 3001 Leuven (Belgium); Department of Electrical Engineering, KU Leuven, Kasteelpark Arenberg 10, 3001 Heverlee (Belgium); ElAnzeery, H. [KACST-Intel Consortium Center of Excellence in Nano-manufacturing Applications (CENA), Riyadh (Saudi Arabia); Imec-partner in Solliance, Kapeldreef 75, 3001 Leuven (Belgium); Microelectronics System Design Department, Nile University, Cairo (Egypt); Touayar, O. [Research Laboratory MMA, University of Carthage, National Institute of Applied Sciences and Technology, INSAT (Tunisia); Köble, C. [Helmholtz-Zentrum Berlin für Materialien und Energie GmbH, Hahn-Meitner-Platz 1, 14109 Berlin (Germany); Bekaert, J. [Condensed Matter Theory Group, Department of Physics, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerpen (Belgium); Meuris, M. [Imec Division IMOMEC — Partner in Solliance, Wetenschapspark 1, 3590 Diepenbeek (Belgium); Institute for Material Research (IMO), Hasselt University, Wetenschapspark 1, 3590 Diepenbeek (Belgium); and others

    2015-05-01

    We report on the electrical, optical and physical properties of Cu{sub 2}ZnSnSe{sub 4} solar cells using an absorber layer fabricated by selenization of sputtered Cu, Zn and Cu{sub 10}Sn{sub 90} multilayers. A maximum active-area conversion efficiency of 10.4% under AM1.5G was measured, with a maximum short circuit current density of 39.7 mA/cm{sup 2}, an open circuit voltage of 394 mV and a fill factor of 66.4%. We performed electrical and optical characterization using photoluminescence spectroscopy, external quantum efficiency, current-voltage and admittance-versus-temperature measurements, in order to derive information about possible causes of the low open circuit voltage values observed. The main defects derived from these measurements are strong potential fluctuations in the absorber layer, as well as a potential barrier of the order of 133 meV at the back side contact. - Highlights: • We have fabricated 10.4% total-area-efficient Cu{sub 2}ZnSnSe{sub 4} solar cells. • An activation energy corresponding to a barrier at the back side was extracted. • Based on the admittance spectrum, no peaks related to deep defects could be observed.

  18. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics. For all readers interested in developing programming habits in the context of doing phy...

  19. Explore Effective Use of Computer Simulations for Physics Education

    Science.gov (United States)

    Lee, Yu-Fen; Guo, Yuying

    2008-01-01

    The dual purpose of this article is to provide a synthesis of the findings related to the use of computer simulations in physics education and to present implications for teachers and researchers in science education. We try to establish a conceptual framework for the utilization of computer simulations as a tool for learning and instruction in…

  20. Control of complex physically simulated robot groups

    Science.gov (United States)

    Brogan, David C.

    2001-10-01

    Actuated systems such as robots take many forms and sizes but each requires solving the difficult task of utilizing available control inputs to accomplish desired system performance. Coordinated groups of robots provide the opportunity to accomplish more complex tasks, to adapt to changing environmental conditions, and to survive individual failures. Similarly, groups of simulated robots, represented as graphical characters, can test the design of experimental scenarios and provide autonomous interactive counterparts for video games. The complexity of writing control algorithms for these groups currently hinders their use. A combination of biologically inspired heuristics, search strategies, and optimization techniques serve to reduce the complexity of controlling these real and simulated characters and to provide computationally feasible solutions.

  1. Electrical Storm Simulation to Improve the Learning Physics Process

    Science.gov (United States)

    Martínez Muñoz, Miriam; Jiménez Rodríguez, María Lourdes; Gutiérrez de Mesa, José Antonio

    2013-01-01

    This work is part of a research project whose main objective is to understand the impact that the use of Information and Communication Technology (ICT) has on the teaching and learning process in the subject of physics. We will show that, with the use of a storm simulator, physics students improve their learning process: on the one hand, they understand…

  2. Preservice Teachers' Theory Development in Physical and Simulated Environments

    Science.gov (United States)

    Marshall, Jill A.; Young, Erica Slate

    2006-01-01

    We report a study of three prospective secondary science teachers' development of theories-in-action as they worked together in a group to explore collisions using both physical manipulatives and a computer simulation (Interactive Physics). Analysis of their investigations using an existing theoretical framework indicates that, as the group moved…

  3. Sensitivity of tropical cyclone Jal simulations to physics ...

    Indian Academy of Sciences (India)

    ... to physics parameterizations is carried out with a view to determining the best set of physics options for the prediction of cyclones originating in the north Indian Ocean. For this purpose, the tropical cyclone Jal has been simulated by the advanced (or state-of-the-science) mesoscale Weather Research and Forecasting (WRF) model ...

  4. APPLICATION OF INTERACTIVE ONLINE SIMULATIONS IN THE PHYSICS LABORATORY ACTIVITIES

    Directory of Open Access Journals (Sweden)

    Nina P. Dementievska

    2013-09-01

    Full Text Available Physics teachers need professional competences in the use of online technologies associated with physical experiments. The lack of teaching materials in the Ukrainian language leads teachers to use virtual laboratories and computer simulations with traditional methods of education rather than with the latest innovative educational technologies, which may limit their use and greatly reduce their effectiveness. The Ukrainian teaching literature contains practically no information about assessing students' competencies and research skills in laboratory activities. The aim of the article is to describe some components of instructional design for a website with simulations for school physics experiments, and their evaluation.

  5. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and from chemistry to computer science, with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  6. Enriching Triangle Mesh Animations with Physically Based Simulation.

    Science.gov (United States)

    Li, Yijing; Xu, Hongyi; Barbic, Jernej

    2017-10-01

    We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.
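
    As a rough illustration of the space-time weighting idea, the sketch below pulls a simulated state toward the input animation with a per-vertex weight; the actual system uses constrained FEM dynamics rather than this simple penalty-style blend, and all parameters here are assumptions.

        import numpy as np

        def step_with_animation_constraint(x_sim, v_sim, x_anim, w,
                                           dt=1 / 60, k=200.0):
            """One explicit step of a toy penalty-style blend.

            x_sim, v_sim : simulated vertex positions/velocities, shape (n, 3)
            x_anim       : the input animation's target positions for this frame
            w            : per-vertex weights in [0, 1]; w=0 follows the
                           animation tightly, w=1 evolves by pure simulation
            The penalty force toward the animation is scaled by (1 - w),
            echoing the idea of turning physics on and off over space/time."""
            f_constraint = k * (x_anim - x_sim) * (1.0 - w)[:, None]
            v_new = v_sim + dt * f_constraint    # unit mass, no other forces
            x_new = x_sim + dt * v_new
            return x_new, v_new

        # Example: 4 vertices, half fully physical, half pinned to the animation.
        n = 4
        x = np.zeros((n, 3)); v = np.zeros((n, 3))
        target = np.ones((n, 3))
        weights = np.array([1.0, 1.0, 0.0, 0.0])
        x, v = step_with_animation_constraint(x, v, target, weights)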

  7. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers classical as well as quantum Monte Carlo methods. Furthermore, a new chapter on the sampling of free-energy landscapes has been added. To help students in their work, a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...
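
    To make the blurb concrete: the Metropolis algorithm at the heart of such simulations accepts a proposed move with probability min(1, exp(-dE/kT)). A minimal sketch for the two-dimensional Ising model follows (standard textbook material, not code from the book).

        import math, random

        def metropolis_ising(L=16, T=2.27, steps=100_000, seed=1):
            """Sample a 2D Ising model with single-spin-flip Metropolis updates
            (units with J = k_B = 1; T near the critical temperature)."""
            rng = random.Random(seed)
            spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
            for _ in range(steps):
                i, j = rng.randrange(L), rng.randrange(L)
                # Energy change of flipping spin (i, j), periodic neighbours.
                nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * spins[i][j] * nb
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    spins[i][j] *= -1          # accept the flip
            m = sum(map(sum, spins)) / L**2    # magnetization per spin
            return spins, m

        lattice, magnetization = metropolis_ising()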

  8. Impact of detector simulation in particle physics collider experiments

    Science.gov (United States)

    Daniel Elvira, V.

    2017-06-01

    Over the last three decades, accurate simulation of the interactions of particles with matter and modeling of detector geometries have proven to be of critical importance to the success of the international high-energy physics (HEP) experimental programs. For example, the detailed detector modeling and accurate physics of the Geant4-based simulation software of the CMS and ATLAS particle physics experiments at the European Organization for Nuclear Research (CERN) Large Hadron Collider (LHC) was a determining factor in these collaborations delivering physics results of outstanding quality faster than any hadron collider experiment before. This review article highlights the impact of detector simulation on particle physics collider experiments. It presents numerous examples of the use of simulation, from detector design and optimization, through software and computing development and testing, to cases where the use of simulation samples made a difference in the precision of the physics results and in publication turnaround, from data-taking to submission. It also presents estimates of the cost and economic impact of simulation in the CMS experiment. Future experiments will collect orders of magnitude more data with increasingly complex detectors, taxing heavily the performance of simulation and reconstruction software. Consequently, exploring solutions to speed up simulation and reconstruction software to satisfy the growing demand for computing resources in a time of flat budgets is a matter that deserves immediate attention. The article ends with a short discussion of the potential solutions under consideration, based on leveraging core-count growth in multicore machines, using new-generation coprocessors, and re-engineering HEP code for concurrency and parallel computing.

  9. High Performance Monopropellants for Future Planetary Ascent Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Physical Sciences Inc. proposes to design, develop, and demonstrate a novel high performance monopropellant for application in future planetary ascent vehicles. Our...

  10. Using high performance Fortran for magnetohydrodynamic simulations

    NARCIS (Netherlands)

    Keppens, R.; Toth, G.

    2000-01-01

    Two scientific application programs, the Versatile Advection Code (VAC) and the HEating by Resonant Absorption (HERA) code, are adapted to parallel computer platforms. Both programs can solve the time-dependent nonlinear partial differential equations of magnetohydrodynamics (MHD) with different...

  11. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  12. APPLICATION OF INTERACTIVE ONLINE SIMULATIONS FOR DEMONSTRATION EXPERIMENT IN PHYSICS

    Directory of Open Access Journals (Sweden)

    Nina P. Dementievska

    2014-06-01

    Full Text Available The development of the modern school physics experiment is related to the extensive use of ICT, and not only for data processing and visualization. Interactive computer simulations of processes and phenomena, developed by the scientists and methodologists behind the PhET site, help to improve physics demonstration experiments with the support of modern pedagogical technologies, changing the traditional procedures for forming students' understanding of processes and phenomena and encouraging active cognitive activity. International educators as well as Ukrainian scientists and physics teachers have been involved in studying how to integrate interactive computer simulations for a better understanding by students of physical processes, phenomena and laws. The aim of the article is to present the research results on the development and testing of individual components of this educational technology in performing physical experiments in secondary school.

  13. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems, with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed of compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  14. PREFACE: International conference on Computer Simulation in Physics and beyond (CSP2015)

    Science.gov (United States)

    2016-02-01

    The International Conference on Computer Simulations in Physics and beyond (CSP2015) was held from 6-10 September 2015 at the campus of the Moscow Institute for Electronics and Mathematics (MIEM), National Research University Higher School of Economics, Moscow. Computer simulation is an increasingly popular tool for scientific research, supplementing experimental and analytical research. The main goal of the conference is to contribute to the development of methods and algorithms that take into account trends in hardware development and may help with computationally intensive research. The conference also gave senior scientists and students the opportunity to speak with each other and exchange ideas and views on developments in the area of high-performance computing in science. We would like to take this opportunity to thank our sponsors: the Russian Foundation for Basic Research, the Federal Agency of Scientific Organizations, and the Higher School of Economics.

  15. Physics-related epistemic uncertainties in proton depth dose simulation

    CERN Document Server

    Pia, Maria Grazia; Lechner, Anton; Quintieri, Lina; Saracco, Paolo

    2010-01-01

    A set of physics models and parameters pertaining to the simulation of proton energy deposition in matter are evaluated in the energy range up to approximately 65 MeV, based on their implementations in the Geant4 toolkit. The analysis assesses several features of the models and the impact of their associated epistemic uncertainties, i.e. uncertainties due to lack of knowledge, on the simulation results. Possible systematic effects deriving from uncertainties of this kind are highlighted; their relevance in relation to the application environment and different experimental requirements are discussed, with emphasis on the simulation of radiotherapy set-ups. By documenting quantitatively the features of a wide set of simulation models and the related intrinsic uncertainties affecting the simulation results, this analysis provides guidance regarding the use of the concerned simulation tools in experimental applications; it also provides indications for further experimental measurements addressing the sources of s...

  16. High Performance Space Pump Project

    Data.gov (United States)

    National Aeronautics and Space Administration — PDT is proposing a High Performance Space Pump based upon an innovative design using several technologies. The design will use a two-stage impeller, high temperature...

  17. Physics validation of detector simulation tools for LHC

    CERN Document Server

    Beringer, J

    2004-01-01

    Extensive studies aimed at validating the physics processes built into the detector simulation tools Geant4 and Fluka are in progress within all Large Hadron Collider (LHC) experiments, within the collaborations developing these tools, and within the LHC Computing Grid (LCG) Simulation Physics Validation Project, which has become the primary forum for these activities. This work includes detailed comparisons with test beam data, as well as benchmark studies of simple geometries and materials with single incident particles of various energies for which experimental data is available. We give an overview of these validation activities with emphasis on the latest results.

  18. A physically-based approach for lens flare simulation

    OpenAIRE

    Keshmirian, Arash

    2008-01-01

    In this thesis, we present a physically-based method for the computer graphics simulation of lens flare phenomena in photographic lenses. The proposed method can be used to render lens flares from nearly all types of lenses regardless of optical construction. The method described in this thesis utilizes the photon mapping technique (Jensen, 2001) to simulate the flow of light within the lens, and captures the visual effects of internal reflections and scattering within (and between) the optic...
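
    The internal reflections that produce flare ghosts are governed at each air-glass interface by the Fresnel equations; the sketch below computes the unpolarized reflectance a renderer might use to decide how much light a photon reflects at a lens surface (an illustrative building block, not code from the thesis).

        import math

        def fresnel_reflectance(cos_i, n1=1.0, n2=1.5):
            """Unpolarized Fresnel reflectance at a dielectric interface.

            cos_i : cosine of the angle of incidence
            n1,n2 : refractive indices (air -> glass by default)
            Photons reflected with this probability, bouncing between lens
            element surfaces, are what produce visible flare ghosts."""
            sin_t = n1 / n2 * math.sqrt(max(0.0, 1.0 - cos_i * cos_i))
            if sin_t >= 1.0:
                return 1.0                      # total internal reflection
            cos_t = math.sqrt(1.0 - sin_t * sin_t)
            rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
            rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
            return 0.5 * (rs + rp)

        # About 4% of light reflects at normal incidence on uncoated glass:
        print(fresnel_reflectance(1.0))   # ~0.04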

  19. RavenDB high performance

    CERN Document Server

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial that developers can use to... This book is for developers and software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  20. THREE-DIMENSIONAL WEB-BASED PHYSICS SIMULATION APPLICATION FOR PHYSICS LEARNING TOOL

    Directory of Open Access Journals (Sweden)

    William Salim

    2012-10-01

    Full Text Available The purpose of this research is to present a multimedia application for doing simulations in physics. The application is a web-based simulator implemented with HTML5, WebGL, and JavaScript. The objects and the environment are rendered in three-dimensional views. It is hoped that this application will become a substitute for practicum activities. At the current stage of development, the application covers only Newtonian mechanics. Questionnaires and a literature study were used as the data collection methods, while the waterfall method was used as the design method. The result is the Three-Dimensional Physics Simulator, an online web application. Three-dimensional design and a mentor-mentee relationship are the key features of this application. The conclusion drawn is that the Three-Dimensional Physics Simulator already satisfies users in both design and functionality, and that it helps them to understand Newtonian mechanics through simulation. Improvements are needed, because the application covers only Newtonian mechanics; in the future the simulation could also cover other physics topics, such as optics, energy, or electricity. Keywords: Simulation, Physics, Learning Tool, HTML5, WebGL

  1. Route complexity and simulated physical ageing negatively influence wayfinding

    NARCIS (Netherlands)

    Zijlstra, Emma; Hagedoorn, Mariet; Krijnen, Wim P.; Schans, van der Cornelis; Mobach, Mark P.

    The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one wayfinding task), ...

  2. Visible Light Communication Physical Layer Design for Jist Simulation

    Directory of Open Access Journals (Sweden)

    Tomaš Boris

    2014-12-01

    Full Text Available Current advances in computer networking consider using the visible light spectrum to encode and decode digital data. This approach is relatively inexpensive. However, designing an appropriate MAC or any other upper-layer protocol for Visible Light Communication (VLC) requires appropriate hardware. This paper proposes and implements such a hardware simulation (a physical layer) that is compatible with the existing network stack.
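
    For intuition, a physical layer for simulation purposes mainly has to turn bits into a received signal and back under noise; a toy on-off keying (OOK) model of a VLC link might look like the following (written in Python for illustration, whereas the paper targets the Java-based JiST environment; all parameters are assumptions).

        import random

        def transmit_ook(bits, amplitude=1.0):
            """Map bits to light intensity levels (on-off keying)."""
            return [amplitude if b else 0.0 for b in bits]

        def channel(signal, noise_std=0.2, rng=random.Random(7)):
            """Add Gaussian receiver noise to the optical signal."""
            return [s + rng.gauss(0.0, noise_std) for s in signal]

        def receive_ook(samples, threshold=0.5):
            """Threshold detector recovering the bit stream."""
            return [1 if s > threshold else 0 for s in samples]

        rng = random.Random(1)
        bits = [rng.randrange(2) for _ in range(10_000)]
        rx = receive_ook(channel(transmit_ook(bits)))
        ber = sum(a != b for a, b in zip(bits, rx)) / len(bits)
        print(f"bit error rate: {ber:.4f}")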

  3. Hygrothermal Numerical Simulation Tools Applied to Building Physics

    CERN Document Server

    Delgado, João M P Q; Ramos, Nuno M M; Freitas, Vasco Peixoto

    2013-01-01

    This book presents a critical review of the development and application of hygrothermal analysis methods to simulate the coupled transport processes of Heat, Air, and Moisture (HAM) transfer for one- or multidimensional cases. During the past few decades there has been relevant development in this field of study and an increase in the professional use of tools that simulate some of the physical phenomena that are involved in heat, air and moisture conditions in building components or elements. Although a significant number of hygrothermal models are referred to in the literature, the vast majority of them are not easily available to the public outside the institutions where they were developed, which restricts the analysis in this book to only 14 hygrothermal modelling tools. The special features of this book are (a) a state-of-the-art review of numerical simulation tools applied to building physics, (b) the importance of boundary conditions, and (c) the material properties, namely, experimental methods for the measuremen...
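
    As a flavour of what such tools compute, the simplest HAM building block is one-dimensional transient diffusion through a wall layer; the sketch below treats the heat part alone with an explicit finite-difference scheme (moisture and air transport, and their coupling, are what the reviewed tools add on top; all material values are illustrative).

        import numpy as np

        def heat_diffusion_1d(T0, alpha=7e-7, dx=0.01, dt=30.0, steps=2880):
            """Explicit FTCS scheme for dT/dt = alpha * d2T/dx2 across a wall.

            T0    : initial temperature profile including boundary nodes (degC)
            alpha : thermal diffusivity (m^2/s), here a masonry-like value
            Stability requires alpha * dt / dx**2 <= 0.5."""
            r = alpha * dt / dx**2
            assert r <= 0.5, "explicit scheme unstable for this dt/dx"
            T = np.array(T0, dtype=float)
            for _ in range(steps):                 # 2880 * 30 s = one day
                T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
                # boundary nodes stay at the indoor/outdoor temperatures
            return T

        # 20 cm wall: indoors 20 degC, outdoors 0 degC, initially 10 degC.
        profile = [20.0] + [10.0] * 19 + [0.0]
        print(heat_diffusion_1d(profile).round(1))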

  4. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2017-01-01

    This textbook presents basic numerical methods and applies them to a large variety of physical models in multiple computer experiments. Classical algorithms and more recent methods are explained. Partial differential equations are treated generally, comparing important methods, and equations of motion are solved by a large number of simple as well as more sophisticated methods. Several modern algorithms for quantum wavepacket motion are compared. The first part of the book discusses the basic numerical methods, while the second part simulates classical and quantum systems. Simple but non-trivial examples from a broad range of physical topics offer readers insights into the numerical treatment as well as the simulated problems. Rotational motion is studied in detail, as are simple quantum systems. A two-level system in an external field demonstrates elementary principles from quantum optics and the simulation of a quantum bit. Principles of molecular dynamics are shown. Modern boundary element methods are presented ...
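
    As a taste of the equations-of-motion material such texts cover, the velocity Verlet integrator is one of the simple but robust methods commonly compared; a minimal harmonic-oscillator example follows (generic textbook code, not taken from the book).

        def velocity_verlet(x, v, accel, dt, steps):
            """Integrate dx/dt = v, dv/dt = accel(x) with velocity Verlet."""
            a = accel(x)
            trajectory = [(x, v)]
            for _ in range(steps):
                x = x + v * dt + 0.5 * a * dt * dt   # position update
                a_new = accel(x)
                v = v + 0.5 * (a + a_new) * dt       # velocity update, averaged
                a = a_new
                trajectory.append((x, v))
            return trajectory

        # Harmonic oscillator a(x) = -omega^2 x: energy stays bounded even
        # over long runs, which is why such symplectic methods are favoured.
        omega = 1.0
        path = velocity_verlet(1.0, 0.0, lambda x: -omega**2 * x,
                               dt=0.05, steps=1000)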

  5. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, the Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program "Complex Processes: Modeling, Simulation and Optimization", and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  6. High Performance Computing at NASA

    Science.gov (United States)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  7. Learning from physics-based earthquake simulators: a minimal approach

    Science.gov (United States)

    Artale Harris, Pietro; Marzocchi, Warner; Melini, Daniele

    2017-04-01

    Physics-based earthquake simulators aim to generate synthetic seismic catalogs of arbitrary length, accounting for fault interaction, elastic rebound, realistic fault networks, and some simple earthquake nucleation process such as rate-and-state friction. Through comparison of synthetic and real catalogs, seismologists can gain insights into the earthquake occurrence process. Moreover, earthquake simulators can be used to infer some aspects of the statistical behavior of earthquakes within the simulated region, by analyzing timescales not accessible through observations. The development of earthquake simulators is commonly led by the approach "the more physics, the better", pushing seismologists toward ever more earth-like simulators. However, despite its immediate attractiveness, we argue that this kind of approach makes it more and more difficult to understand which physical parameters are really relevant to describing the features of the seismic catalog in which we are interested. For this reason, here we take the opposite, minimal approach and analyze the behavior of a purposely simple earthquake simulator applied to a set of California faults. The idea is that a simple model may be more informative than a complex one for some specific scientific objectives, because it is more understandable. The model has three main components: the first is a realistic tectonic setting, i.e., a fault dataset of California; the other two are quantitative laws for earthquake generation on each single fault, and the Coulomb Failure Function for modeling fault interaction. The final goal of this work is twofold. On one hand, we aim to identify the minimum set of physical ingredients that can satisfactorily reproduce the features of the real seismic catalog, such as short-term seismic clustering, and to investigate the hypothetical long-term behavior and fault synchronization. On the other hand, we want to investigate the limits of predictability of the model itself.
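
    The fault-interaction ingredient mentioned above is usually expressed through the change in the Coulomb Failure Function, delta_CFF = delta_tau + mu * delta_sigma_n; the toy sketch below shows how an event on one fault nudges the others toward or away from failure (the stress-transfer coefficients are random placeholders for a real elastic dislocation calculation).

        import numpy as np

        MU = 0.6            # effective friction coefficient
        N_FAULTS = 5

        rng = np.random.default_rng(3)
        # Hypothetical stress-transfer matrices: shear (tau) and normal
        # (sigma_n) stress change on fault j per unit slip on fault i. In a
        # real simulator these come from elastic dislocation solutions for the
        # actual fault geometry.
        d_tau = rng.normal(0.0, 0.1, (N_FAULTS, N_FAULTS))
        d_sigma = rng.normal(0.0, 0.05, (N_FAULTS, N_FAULTS))

        def coulomb_update(stress, source_fault, slip):
            """Apply delta_CFF = delta_tau + MU * delta_sigma_n to all faults."""
            dcff = (d_tau[source_fault] + MU * d_sigma[source_fault]) * slip
            return stress + dcff

        stress = np.zeros(N_FAULTS)     # Coulomb stress relative to failure
        stress = coulomb_update(stress, source_fault=2, slip=1.5)
        print(stress.round(3))          # positive entries moved toward failure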

  8. High performance flexible heat pipes

    Science.gov (United States)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt-meters at an operating temperature from 0 to 100 C.

  9. Engineering uses of physics-based ground motion simulations

    Science.gov (United States)

    Baker, Jack W.; Luco, Nicolas; Abrahamson, Norman A.; Graves, Robert W.; Maechling, Phillip J.; Olsen, Kim B.

    2014-01-01

    This paper summarizes validation methodologies focused on enabling ground motion simulations to be used with confidence in engineering applications such as seismic hazard analysis and dynamic analysis of structural and geotechnical systems. Numerical simulation of ground motion from large earthquakes, utilizing physics-based models of earthquake rupture and wave propagation, is an area of active research in the earth science community. Refinement and validation of these models require collaboration between earthquake scientists and engineering users, along with testing/rating methodologies that allow simulated ground motions to be used with confidence in engineering applications. This paper provides an introduction to this field and an overview of current research activities being coordinated by the Southern California Earthquake Center (SCEC). These activities are related both to advancing the science and computational infrastructure needed to produce ground motion simulations, and to engineering validation procedures. Current research areas and anticipated future achievements are also discussed.

  10. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  11. High Performance Bulk Thermoelectric Materials

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field-emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  12. Research of Simulation in Character Animation Based on Physics Engine

    Directory of Open Access Journals (Sweden)

    Yang Yu

    2017-01-01

    Full Text Available Computer 3D character animation is essentially a product of computer graphics combined with robotics, physics, mathematics, and the arts; it builds on computer hardware, graphics algorithms, and new technologies rapidly developed in the related sciences. At present, mainstream character animation is based either on the artificial production of keyframes or on the capture of frames using motion capture devices. 3D character animation is widely used not only in commercial areas such as film and animation production, but also in virtual reality, computer-aided education, flight simulation, engineering simulation, military simulation, and other fields. In this paper, we study physics-based character animation to address problems such as poor real-time interaction of the character, low utilization rates, and complex production. The paper studies in depth the kinematics and dynamics techniques and production based on motion data. At the same time, it analyzes ODE, PhysX, Bullet, and other mainstream physics engines, and studies OBB and AABB hierarchical bounding-box trees and other collision detection algorithms. Finally, character animation based on ODE is implemented, simulating the motion and collision of a tricycle.
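
    The collision-detection machinery mentioned in this record (OBB and AABB hierarchies) rests on one small primitive, the box-overlap test. A minimal sketch in Python follows, with an assumed node layout rather than ODE's actual data structures.

```python
# Minimal AABB overlap test plus the root-down hierarchy traversal it
# enables. The node attributes (box, is_leaf, children) are an assumed
# layout for illustration, not ODE's API.
from dataclasses import dataclass

@dataclass
class AABB:
    min_pt: tuple  # (x, y, z) lower corner
    max_pt: tuple  # (x, y, z) upper corner

def aabb_overlap(a, b):
    """Two axis-aligned boxes overlap iff their intervals overlap on every axis."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

def tree_collide(node_a, node_b, pairs):
    """Collect overlapping leaf pairs; subtrees whose parent boxes are
    disjoint are pruned without ever visiting their children."""
    if not aabb_overlap(node_a.box, node_b.box):
        return
    if node_a.is_leaf and node_b.is_leaf:
        pairs.append((node_a, node_b))
    elif node_a.is_leaf:
        for child in node_b.children:
            tree_collide(node_a, child, pairs)
    else:
        for child in node_a.children:
            tree_collide(child, node_b, pairs)
```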

  13. EDITORIAL: High performance under pressure High performance under pressure

    Science.gov (United States)

    Demming, Anna

    2011-11-01

    nanoelectromechanical systems. Researchers in China exploit the coupling between piezoelectric and semiconducting properties of ZnO in an optimised diode device design [6]. They used a Schottky rather than an ohmic contact to depress the off current. In addition they used ZnO nanobelts that have dominantly polar surfaces instead of [0001] ZnO nanowires to enhance the on current under the small applied forces obtained by using an atomic force microscopy tip. The nanobelts have potential for use in random access memory devices. Much of the success in applying piezoresistivity in device applications stems from a deepening understanding of the mechanisms behind the process. A collaboration of researchers in the USA and China have proposed a new criterion for identifying the carrier type of individual ZnO nanowires based on the piezoelectric output of a nanowire when it is mechanically deformed by a conductive atomic force microscopy tip in contact mode [7]. The p-type/n-type shell/core nanowires give positive piezoelectric outputs, while the n-type nanowires produce negative piezoelectric outputs. In this issue Zhong Lin Wang and colleagues in Italy and the US report theoretical investigations into the piezoresistive behaviour of ZnO nanowires for energy harvesting. The work develops previous research on the ability of vertically aligned ZnO nanowires under uniaxial compression to power a nanodevice, in particular a pH sensor [8]. Now the authors have used finite element simulations to study the system. Among their conclusions they find that, for typical geometries and donor concentrations, the length of the nanowire does not significantly influence the maximum output piezopotential because the potential mainly drops across the tip. This has important implications for low-cost, CMOS- and microelectromechanical-systems-compatible fabrication of nanogenerators. The simulations also reveal the influence of the dielectric surrounding the nanowire on the output piezopotential, especially for

  14. An Integrated Simulation Module for Cyber-Physical Automation Systems

    Directory of Open Access Journals (Sweden)

    Francesco Ferracuti

    2016-05-01

    Full Text Available The integration of Wireless Sensor Networks (WSNs) into Cyber-Physical Systems (CPSs) is an important research problem to solve in order to increase the performance, safety, reliability and usability of wireless automation systems. Due to the complexity of real CPSs, emulators and simulators are often used to replace the real control devices and physical connections during the development stage. The most widespread simulators are free, open source, expandable, flexible and fully integrated into mathematical modeling tools; however, the connection at a physical level and the direct interaction with the real process via the WSN are only marginally tackled; moreover, the simulated wireless sensor motes are not able to generate the analogue output typically required for control purposes. A new simulation module for the control of a wireless cyber-physical system is proposed in this paper. The module integrates the COntiki OS JAva Simulator (COOJA), a cross-level wireless sensor network simulator, and the LabVIEW system design software from National Instruments. The proposed software module has been called “GILOO” (Graphical Integration of Labview and cOOja). It allows one to develop and debug control strategies over the WSN using either virtual or real hardware modules, such as the National Instruments Real-Time Module platform, the CompactRio, the Supervisory Control And Data Acquisition (SCADA), etc. To test the proposed solution, we decided to integrate it with one of the most popular simulators, i.e., the Contiki OS, and wireless motes, i.e., the Sky mote. As a further contribution, the Contiki Sky DAC driver and a new “Advanced Sky GUI” have been proposed and tested in the COOJA Simulator in order to provide the possibility to develop control over the WSN. To test the performance of the proposed GILOO software module, several experimental tests have been carried out, and interesting preliminary results are reported. The GILOO module has been

  15. An Integrated Simulation Module for Cyber-Physical Automation Systems.

    Science.gov (United States)

    Ferracuti, Francesco; Freddi, Alessandro; Monteriù, Andrea; Prist, Mariorosario

    2016-05-05

    The integration of Wireless Sensor Networks (WSNs) into Cyber-Physical Systems (CPSs) is an important research problem to solve in order to increase the performance, safety, reliability and usability of wireless automation systems. Due to the complexity of real CPSs, emulators and simulators are often used to replace the real control devices and physical connections during the development stage. The most widespread simulators are free, open source, expandable, flexible and fully integrated into mathematical modeling tools; however, the connection at a physical level and the direct interaction with the real process via the WSN are only marginally tackled; moreover, the simulated wireless sensor motes are not able to generate the analogue output typically required for control purposes. A new simulation module for the control of a wireless cyber-physical system is proposed in this paper. The module integrates the COntiki OS JAva Simulator (COOJA), a cross-level wireless sensor network simulator, and the LabVIEW system design software from National Instruments. The proposed software module has been called "GILOO" (Graphical Integration of Labview and cOOja). It allows one to develop and debug control strategies over the WSN using either virtual or real hardware modules, such as the National Instruments Real-Time Module platform, the CompactRio, the Supervisory Control And Data Acquisition (SCADA), etc. To test the proposed solution, we decided to integrate it with one of the most popular simulators, i.e., the Contiki OS, and wireless motes, i.e., the Sky mote. As a further contribution, the Contiki Sky DAC driver and a new "Advanced Sky GUI" have been proposed and tested in the COOJA Simulator in order to provide the possibility to develop control over the WSN. To test the performance of the proposed GILOO software module, several experimental tests have been carried out, and interesting preliminary results are reported. The GILOO module has been applied to a smart home

  16. Computational Physics Simulation of Classical and Quantum Systems

    CERN Document Server

    Scherer, Philipp O. J

    2010-01-01

    This book encapsulates the coverage for a two-semester course in computational physics. The first part introduces the basic numerical methods while omitting mathematical proofs but demonstrating the algorithms by way of numerous computer experiments. The second part specializes in simulation of classical and quantum systems with instructive examples spanning many fields in physics, from a classical rotor to a quantum bit. All program examples are realized as Java applets ready to run in your browser and do not require any programming skills.

  17. Coupled Multi-physical Simulations for the Assessment of Nuclear Waste Repository Concepts: Modeling, Software Development and Simulation

    Science.gov (United States)

    Massmann, J.; Nagel, T.; Bilke, L.; Böttcher, N.; Heusermann, S.; Fischer, T.; Kumar, V.; Schäfers, A.; Shao, H.; Vogel, P.; Wang, W.; Watanabe, N.; Ziefle, G.; Kolditz, O.

    2016-12-01

    As part of the German site selection process for a high-level nuclear waste repository, different repository concepts in the geological candidate formations rock salt, clay stone and crystalline rock are being discussed. An open assessment of these concepts using numerical simulations requires physical models capturing the individual particularities of each rock type and associated geotechnical barrier concept to a comparable level of sophistication. In a joint work group of the Helmholtz Centre for Environmental Research (UFZ) and the German Federal Institute for Geosciences and Natural Resources (BGR), scientists of the UFZ are developing and implementing multiphysical process models while BGR scientists apply them to large scale analyses. The advances in simulation methods for waste repositories are incorporated into the open-source code OpenGeoSys. Here, recent application-driven progress in this context is highlighted. A robust implementation of visco-plasticity with temperature-dependent properties into a framework for the thermo-mechanical analysis of rock salt will be shown. The model enables the simulation of heat transport along with its consequences on the elastic response as well as on primary and secondary creep or the occurrence of dilatancy in the repository near field. Transverse isotropy, non-isothermal hydraulic processes and their coupling to mechanical stresses are taken into account for the analysis of repositories in clay stone. These processes are also considered in the near field analyses of engineered barrier systems, including the swelling/shrinkage of the bentonite material. The temperature-dependent saturation evolution around the heat-emitting waste container is described by different multiphase flow formulations. For all mentioned applications, we illustrate the workflow from model development and implementation, over verification and validation, to repository-scale application simulations using methods of high performance computing.

  18. Introduction to statistical physics and to computer simulations

    CERN Document Server

    Casquilho, João Paulo

    2015-01-01

    Rigorous and comprehensive, this textbook introduces undergraduate students to simulation methods in statistical physics. The book covers a number of topics, including the thermodynamics of magnetic and electric systems; the quantum-mechanical basis of magnetism; ferrimagnetism, antiferromagnetism, spin waves and magnons; liquid crystals as a non-ideal system of technological relevance; and diffusion in an external potential. It also covers hot topics such as cosmic microwave background, magnetic cooling and Bose-Einstein condensation. The book provides an elementary introduction to simulation methods through algorithms in pseudocode for random walks, the 2D Ising model, and a model liquid crystal. Any formalism is kept simple and derivations are worked out in detail to ensure the material is accessible to students from subjects other than physics.
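
    The book's entry-level algorithms (random walks, the 2D Ising model) translate directly into a few lines of code. Here is a random-walk sketch of our own, in Python rather than the book's pseudocode:

```python
# A 2D lattice random walk, the classic first simulation exercise.
import random

def random_walk_2d(steps, rng=random):
    x = y = 0
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
    return x, y

# Diffusive scaling: the mean squared displacement grows linearly in time.
walks = [random_walk_2d(1000) for _ in range(2000)]
msd = sum(x * x + y * y for x, y in walks) / len(walks)
print(msd)  # close to 1000 for a 1000-step walk
```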

  19. Enhanced Verification Test Suite for Physics Simulation Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code be evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of

  20. Neo4j high performance

    CERN Document Server

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  1. Enhanced verification test suite for physics simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.; Cotrell, David L.; Johnson, Bryan; Knupp, Patrick; Rider, William J.; Trucano, Timothy G.; Weirs, V. Gregory

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.

  2. Population annealing: Massively parallel simulations in statistical physics

    Science.gov (United States)

    Weigel, Martin; Barash, Lev Yu.; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.

    2017-11-01

    The canonical technique for Monte Carlo simulations in statistical physics is importance sampling via a suitably constructed Markov chain. While such approaches are quite successful, they are not particularly well suited for parallelization, as the chain dynamics is sequential, and if replicated chains are used to increase statistics, each of them relaxes into equilibrium with an intrinsic time constant that cannot be reduced by parallel work. Population annealing is a sequential Monte Carlo method that simulates an ensemble of system replicas under a cooling protocol. The population element makes it naturally well suited for massively parallel simulations, and bias can be systematically reduced by increasing the population size. We present an implementation of population annealing on graphics processing units and discuss its behavior for different systems undergoing continuous and first-order phase transitions.
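
    The structure of population annealing described here (reweight, resample, then equilibrate each replica independently) fits in a short sketch. The following Python version uses a toy energy function and an invented schedule; it mirrors the generic method, not the authors' GPU implementation.

```python
# Population annealing for a toy continuous system. Population size,
# temperature schedule, and the double-well energy are placeholders.
import math
import random

def population_annealing(energy, propose, pop_size=1000,
                         betas=(0.1, 0.3, 0.6, 1.0), sweeps=10):
    pop = [propose(None) for _ in range(pop_size)]  # random initial replicas
    beta_old = 0.0
    for beta in betas:
        # Reweight by exp(-(beta - beta_old) * E) and resample; resampling
        # keeps the ensemble at the new temperature and controls the bias.
        weights = [math.exp(-(beta - beta_old) * energy(x)) for x in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
        # Equilibrate every replica with Metropolis sweeps; each replica is
        # independent here, which is what makes the method so parallel.
        for i in range(pop_size):
            x = pop[i]
            for _ in range(sweeps):
                y = propose(x)
                if random.random() < math.exp(-beta * (energy(y) - energy(x))):
                    x = y
            pop[i] = x
        beta_old = beta
    return pop

# Example: double-well energy E(x) = (x^2 - 1)^2
E = lambda x: (x * x - 1.0) ** 2
prop = lambda x: random.uniform(-2, 2) if x is None else x + random.gauss(0, 0.3)
final_population = population_annealing(E, prop)
```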

  3. Physics Simulations of fluids - a brief overview of Phoenix FD

    CERN Multimedia

    CERN. Geneva; Nikolov, Svetlin

    2014-01-01

    The presentation will briefly describe the simulation and rendering of fluids with Phoenix FD, and then proceed to implementation details. We will present our methods of parallelizing the core simulation algorithms and our utilization of the GPU. We will also show how we take advantage of computational fluid dynamics specifics in order to speed up the preview and final rendering, thus achieving a quick pipeline for the creation of various visual effects. About the speakers: Ivaylo Iliev is a Senior Software Developer at Chaos Group and is the creator of the Phoenix FD simulator for fluid effects. He has a strong interest in physics and worked on military simulators before focusing on visual effects. He has a Master's degree from the Varna Technical University. Svetlin Nikolov is a Senior Software Developer at Chaos Group with a keen interest in physics and artificial intelligence and 7 years of experience in the software industry. He comes from a game development background with a focu...

  4. Grand Challenges 1993: High Performance Computing and Communications. A Report by the Committee on Physical, Mathematical, and Engineering Sciences. The FY 1993 U.S. Research and Development Program.

    Science.gov (United States)

    Office of Science and Technology Policy, Washington, DC.

    This report presents the United States research and development program for 1993 for high performance computing and computer communications (HPCC) networks. The first of four chapters presents the program goals and an overview of the federal government's emphasis on high performance computing as an important factor in the nation's scientific and…

  5. Learning From Where Students Look While Observing Simulated Physical Phenomena

    Science.gov (United States)

    Demaree, Dedra

    2005-04-01

    The Physics Education Research (PER) Group at the Ohio State University (OSU) has developed Virtual Reality (VR) programs for teaching introductory physics concepts. In Winter 2005, the PER group worked with OSU's cognitive science eye-tracking lab to probe which features students look at while using our VR programs. We see distinct differences in the features students fixate on depending upon whether or not they have formally studied the related physics. Students who first make predictions seem to fixate more on the relevant features of the simulation than those who do not, regardless of their level of education. It is known that students sometimes perform an experiment and report results consistent with their misconceptions but inconsistent with the experimental outcome. We see direct evidence of one student holding onto misconceptions despite fixating frequently on the information needed to understand the correct answer. Future studies using these technologies may prove valuable for tackling difficult questions regarding student learning.

  6. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2013-01-01

    This textbook presents basic and advanced computational physics in a very didactic style, with clear and simple mathematical descriptions of many of the most important algorithms and techniques used in computational physics. The first part of the book discusses the basic numerical methods; a large number of exercises and computer experiments allows the reader to study the properties of these methods. The second part concentrates on the simulation of classical and quantum systems. It uses a rather general concept for the equation of motion which can be applied to ordinary and partial differential equations. Several classes of integration methods are discussed, including not only the standard Euler and Runge-Kutta methods but also multistep methods and the class of Verlet methods, which is introduced by studying the motion in Liouville space. Besides the classical methods, inverse interpolation is discussed, together with the p...

  7. Enabling Gravity Physics by Inquiry using Easy Java Simulation

    CERN Document Server

    Wee, Loo Kang; Chew, Charles

    2013-01-01

    Studying physics at very large scales, such as the solar system, is difficult in real life, requiring telescope observations on clear skies over many years. We are probably the first in the world to create four well-designed gravity computer models, based on real data, that serve as powerful pedagogical tools for students' active inquiry. These models are customized to the syllabus, free, and rapidly prototyped with Open Source Physics researchers and educators. A pilot study suggests that students' enactment of investigative learning like scientists is now possible, where gravity physics comes alive. We are continually improving the features of these computer models through feedback from students and teachers, and the models can be downloaded from the internet. We hope more teachers will find the simulations useful in their own classes and will further customize them so that others find them more intelligible, contributing back to the wider educational fraternity to benefit all humankind.

  8. COMPUTER EMULATORS AND SIMULATORS OF MEASURING INSTRUMENTS IN PHYSICS LESSONS

    Directory of Open Access Journals (Sweden)

    Yaroslav Yu. Dyma

    2010-10-01

    Full Text Available A prominent feature of educational physics experiments at the present stage is the use of computer equipment and special software, namely virtual measuring instruments. The purpose of this article is to explain when virtual instruments can be used to carry out real experiments (in which case they are emulators) and when only virtual ones (in which case they are simulators). For better understanding, the implementation of one laboratory experiment using software of both types is described. Since, in learning physics, preference should be given to natural experiments that study real phenomena and measure real physical quantities, the most promising direction is the examination of instrument-emulator programs for their further introduction into the educational process.

  9. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. To perform precision measurements of the Higgs boson properties, fast and efficient tools for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One of the possibilities to address this shortfall of computing resources is the use of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of Higgs boson physics and of Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and the Kurchatov Institute Data Processing Center, including Tier...

  10. Multi-physics simulations using a hierarchical interchangeable software interface

    Science.gov (United States)

    Portegies Zwart, Simon F.; McMillan, Stephen L. W.; van Elteren, Arjen; Pelupessy, F. Inti; de Vries, Nathan

    2013-03-01

    We introduce a general-purpose framework for interconnecting scientific simulation programs using a homogeneous, unified interface. Our framework is intrinsically parallel, and conveniently separates all component numerical modules in memory. This strict separation allows automatic unit conversion, distributed execution of modules on different cores within a cluster or grid, and orderly recovery from errors. The framework can be efficiently implemented and incurs an acceptable overhead. In practice, we measure the time spent in the framework to be less than 1% of the wall-clock time. Due to the unified structure of the interface, incorporating multiple modules addressing the same physics in different ways is relatively straightforward. Different modules may be advanced serially or in parallel. Despite initial concerns, we have encountered relatively few problems with this strict separation between modules, and the results of our simulations are consistent with earlier results using more traditional monolithic approaches. This framework provides a platform to combine existing simulation codes or develop new physical solver codes within a rich “ecosystem” of interchangeable modules.
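
    The "homogeneous, unified interface" idea lends itself to a compact illustration: wrap each solver behind the same small facade and let a driver advance all modules in lockstep, converting units at each exchange. The Python sketch below is schematic; the class and method names are invented for illustration and are not the framework's actual API.

```python
# Schematic coupling driver over a unified solver facade.
class SolverInterface:
    """Facade every wrapped simulation code is assumed to implement."""
    def evolve_model(self, t_end):       # advance internal state to t_end
        raise NotImplementedError
    def get_boundary_data(self):         # export state other modules need
        raise NotImplementedError
    def set_boundary_data(self, data):   # import state produced elsewhere
        raise NotImplementedError

def run_coupled(modules, t_end, dt, to_common_units=lambda d: d):
    """Advance all modules together; exchange data through common units."""
    t = 0.0
    while t < t_end:
        t += dt
        for m in modules:                # could be distributed over cores
            m.evolve_model(t)
        # after each step, exchange boundary data in a common unit system
        shared = [to_common_units(m.get_boundary_data()) for m in modules]
        for m in modules:
            m.set_boundary_data(shared)
```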

  11. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam, on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  12. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  13. Physics-Based Haptic Simulation of Bone Machining.

    Science.gov (United States)

    Arbabtafti, M; Moghaddam, M; Nahvi, A; Mahvash, M; Richardson, B; Shirinzadeh, B

    2011-01-01

    We present a physics-based training simulator for bone machining. Based on experimental studies, the energy required to remove a unit volume of bone is a constant for every particular bone material. We use this physical principle to obtain the forces required to remove bone material with a milling tool rotating at high speed. The rotating blades of the tool are modeled as a set of small cutting elements. The force of interaction between a cutting element and bone is calculated from the energy required to remove a bone chip with an estimated thickness and known material stiffness. The total force acting on the cutter at a particular instant is obtained by integrating the differential forces over all cutting elements engaged. A voxel representation is used to represent the virtual bone and removed chips for calculating forces of machining. We use voxels that carry bone material properties to represent the volumetric haptic body and to apply underlying physical changes during machining. Experimental results of machining samples of a real bone confirm the force model. A real-time haptic implementation of the method in a dental training simulator is described.
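
    The force model stated in this record (a constant specific cutting energy per unit volume of bone, summed over engaged cutting elements) is simple to write down. A hedged Python sketch with invented names and bookkeeping:

```python
# Cutting force from a constant specific energy u (energy per unit volume):
# removing a chip of cross-section A per unit advance costs a force u * A.
# Names and the vector bookkeeping are ours, for illustration only.
def element_force(u, chip_area):
    """Force on one cutting element: specific energy times chip cross-section."""
    return u * chip_area

def total_force(u, engaged_elements):
    """Sum contributions of all elements currently engaged in bone.

    engaged_elements: iterable of (chip_area, (dx, dy, dz)) tuples,
    with (dx, dy, dz) the unit cutting direction of the element.
    """
    fx = fy = fz = 0.0
    for area, (dx, dy, dz) in engaged_elements:
        f = element_force(u, area)
        fx += f * dx
        fy += f * dy
        fz += f * dz
    return (fx, fy, fz)
```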

  14. Physical Mapping Using Simulated Annealing and Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Vesterstrøm, Jacob Svaneborg

    2003-01-01

    Physical mapping (PM) is a method of bioinformatics that assists in DNA sequencing. The goal is to determine the order of a collection of fragments taken from a DNA strand, given knowledge of certain unique DNA markers contained in the fragments. Simulated annealing (SA) is the most widely used optimization method when searching for an ordering of the fragments in PM. In this paper, we applied an evolutionary algorithm to the problem, and compared its performance to that of SA and local search on simulated PM data, in order to determine the important factors in finding a good ordering of the segments. The analysis highlights the importance of a good PM model, a well-correlated fitness function, and high quality hybridization data. We suggest that future work in PM should focus on the design of more reliable fitness functions and on developing error-screening algorithms.

  15. Dynamic simulation of flash drums using rigorous physical property calculations

    Directory of Open Access Journals (Sweden)

    F. M. Gonçalves

    2007-06-01

    Full Text Available The dynamics of flash drums is simulated using a formulation adequate for phase modeling with equations of state (EOS). The energy and mass balances are written as differential equations for the internal energy and the number of moles of each species. The algebraic equations of the model, solved at each time step, are those of a flash with specified internal energy, volume and mole numbers (UVN flash). A new aspect of our dynamic simulations is the use of direct iterations in the phase volumes (instead of pressure) for solving the algebraic equations. It was also found that an iterative procedure previously suggested in the literature for UVN flashes becomes unreliable close to phase boundaries, and a new alternative is proposed. Another unusual aspect of this work is that the model expressions, including the physical properties and their analytical derivatives, were quickly implemented using computer algebra.
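
    The solution structure described here alternates differential balances with an algebraic UVN flash at every step. A schematic Python step function follows; the flash solver is only stubbed, and all stream names, keys, and the explicit-Euler choice are our assumptions.

```python
# One explicit-Euler step of the differential/algebraic structure described
# above: integrate energy and mole balances, then re-solve the UVN flash.
def step(state, feed, outflow, Q, dt, uvn_flash):
    """state: dict with U (internal energy), V (fixed volume), n (moles/species)."""
    # Energy balance: accumulation = enthalpy in - enthalpy out + heat duty
    state["U"] += dt * (feed["F"] * feed["h"] - outflow["F"] * outflow["h"] + Q)
    # Mole balances, one per species
    state["n"] = [ni + dt * (feed["F"] * zi - outflow["F"] * xi)
                  for ni, zi, xi in zip(state["n"], feed["z"], outflow["x"])]
    # Algebraic part: phase split at the newly specified U, V and n
    state["phases"] = uvn_flash(state["U"], state["V"], state["n"])
    return state
```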

  16. Finding the Missing Physics: Simulating Polydisperse Polymer Melts

    Science.gov (United States)

    Rorrer, Nichoals; Dorgan, John

    2014-03-01

    A Monte Carlo algorithm has been developed to model polydisperse polymer melts. For the first time, this enables the specification of a predetermined molecular weight distribution for lattice-based simulations. It is demonstrated how to map an arbitrary probability distribution onto a discrete number of chains residing on an fcc lattice. The resulting algorithm is able to simulate a wide variety of behaviors of polydisperse systems, including confinement effects, shear flow, and parabolic flow. The dynamic version of the algorithm accurately captures Rouse dynamics for short polymer chains, and reptation-like dynamics for longer chain lengths [1]. When polydispersity is introduced, smaller Rouse times and a broadened transition between different scaling regimes are observed. Rouse times also decrease under confinement for both polydisperse and monodisperse systems, and chain-length-dependent migration effects are observed. The steady-state version of the algorithm enables the simulation of flow, and when polydisperse systems are subject to parabolic (Poiseuille) flow, a migration phenomenon based on chain length is again present. These and other phenomena highlight the importance of including polydispersity in obtaining physically realistic simulations of polymeric melts. [1] Dorgan, J.R.; Rorrer, N.A.; Maupin, C.M., Macromolecules 2012, 45(21), 8833-8840. Work funded by the Fluid Dynamics program of the National Science Foundation under grant CBET-1067707.
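
    Mapping a target molecular weight distribution onto a finite set of chains is the step this record highlights. As an illustration (our choice of distribution and helper names, not necessarily the authors'), here is a sketch that samples chain lengths from the Flory most-probable distribution:

```python
# Discretize a target molecular weight distribution into chain lengths.
# The Flory (most probable) distribution is used as an example target.
import random

def flory_number_fraction(p, n_max):
    """Number fraction of N-mers: (1 - p) * p**(N - 1), for N = 1..n_max."""
    return [(1 - p) * p ** (n - 1) for n in range(1, n_max + 1)]

def sample_chain_lengths(p, n_chains, n_max=500, rng=random):
    """Draw a discrete set of chain lengths matching the target distribution."""
    probs = flory_number_fraction(p, n_max)
    return rng.choices(range(1, n_max + 1), weights=probs, k=n_chains)

# Example: 1000 chains at extent of reaction p = 0.95; the number-average
# degree of polymerization should come out near 1 / (1 - p) = 20.
chains = sample_chain_lengths(0.95, 1000)
print(sum(chains) / len(chains))
```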

  17. gemcWeb: A Cloud Based Nuclear Physics Simulation Software

    Science.gov (United States)

    Markelon, Sam

    2017-09-01

    gemcWeb allows users to run nuclear physics simulations from the web. Because it is completely device agnostic, scientists can run simulations from anywhere with an Internet connection. Having a full user system, gemcWeb allows users to revisit and revise their projects, and to share configurations and results with collaborators. gemcWeb is based on the simulation software gemc, which in turn is based on standard Geant4. gemcWeb requires no C++, gemc, or Geant4 knowledge. A simple but powerful GUI allows users to configure their project from geometries and configurations stored on the deployment server. Simulations are then run on the server, with results posted to the user and then securely stored. Python-based and open-source, the main version of gemcWeb is hosted internally at Jefferson National Laboratory and used by the CLAS12 and Electron-Ion Collider Project groups. However, as the software is open-source and hosted as a GitHub repository, an instance can be deployed on the open web or on any institution's intranet. An instance can be configured to host experiments specific to an institution, and the code base can be modified by any individual or group. Special thanks to: Maurizio Ungaro, PhD, creator of gemc; Markus Diefenthaler, PhD, advisor; and Kyungseon Joo, PhD, advisor.

  18. Physically-based, Hydrologic Simulations Driven by Three Precipitation Products

    Science.gov (United States)

    Chintalapudi, S.; Sharif, H. O.; Yeggina, S.; El Hassan, A.

    2011-12-01

    This study evaluates the model-simulated stream discharge over the Guadalupe River basin in central Texas driven by three precipitation products: the Guadalupe-Blanco River Authority (GBRA) rain gauge network, the Next Generation Weather Radar (NEXRAD) Stage III precipitation product, and the Tropical Rainfall Measuring Mission (TRMM) 3B42 product. Focus is on results from the Upper Guadalupe River sub-basin. This sub-basin is more prone to flooding due to its geological properties (thin soils, exposed bedrock, and sparse vegetation) and the impact of the Balcones Escarpment on the moisture coming from the Gulf of Mexico. The physically based, distributed-parameter Gridded Surface Subsurface Hydrologic Analysis (GSSHA) hydrologic model was used to simulate the June 2002 flooding event. Simulations driven by NEXRAD Stage III 15-min precipitation yielded better results, with low RMSE (88.3%), high NSE (0.6), high R2 (0.73), low RSR (0.63) and low PBIAS (-17.3%), compared to simulations driven by the other products.
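
    The goodness-of-fit statistics used in this record (NSE, PBIAS, RSR) are standard in hydrology and easy to state in code; a minimal sketch:

```python
# Standard hydrologic goodness-of-fit metrics for observed vs. simulated
# discharge series of equal length.
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def pbias(obs, sim):
    """Percent bias (positive values indicate model underestimation)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rsr(obs, sim):
    """RMSE standardized by the standard deviation of the observations."""
    n = len(obs)
    mean_obs = sum(obs) / n
    rmse = (sum((o - s) ** 2 for o, s in zip(obs, sim)) / n) ** 0.5
    sd = (sum((o - mean_obs) ** 2 for o in obs) / n) ** 0.5
    return rmse / sd
```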

  19. High performance computing applications in neurobiological research

    Science.gov (United States)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  20. NUMERICAL SIMULATION OF PHYSICAL SYSTEMS IN AGRI-FOOD ENGINEERING

    Directory of Open Access Journals (Sweden)

    Angelo Fabbri

    2012-06-01

    Full Text Available In agri-food engineering many complex problems arise in plant and process design. Specifically, the designer has to deal with fluid dynamics, thermal, or mechanical problems, often characterized by coupled physics, non-linearity, irregular geometry, anisotropy, and, ultimately, rather high complexity. In recent years, the ever-growing availability of computational power at low cost has meant that these problems are more often approached with numerical simulation techniques, mainly finite element and finite volume methods. In this paper the fundamentals of numerical methods are briefly recalled, and their possible applications in food and agricultural engineering are discussed.

  1. High Performance Flexible Thermal Link

    Science.gov (United States)

    Sauer, Arne; Preller, Fabian

    2014-06-01

    The paper deals with the design and performance verification of a high performance, flexible carbon fibre thermal link. The project goal was to design a space-qualified thermal link combining low mass, flexibility and high thermal conductivity, with new approaches regarding the selected materials and processes. The idea was to combine the flexibility of existing metallic links with the thermal performance of high-conductivity carbon pitch fibres. Special focus is laid on improving the thermal performance of the matrix systems by means of nano-scaled carbon materials, in order to improve the thermal performance perpendicular to the direction of the unidirectional fibres as well. One of the main challenges was to establish a manufacturing process which allows handling the stiff and brittle fibres, applying the matrix, and performing the integration into an interface component using unconventional process steps such as thermal bonding of the fibres after metallisation. This research was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi).

  2. High Performance Perovskite Solar Cells

    Science.gov (United States)

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long‐term stable all‐solid‐state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost‐effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole‐transporting materials (HTMs) and electron‐transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction. PMID:27774402

  3. High Performance Perovskite Solar Cells.

    Science.gov (United States)

    Tong, Xin; Lin, Feng; Wu, Jiang; Wang, Zhiming M

    2016-05-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long-term stable all-solid-state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost-effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole-transporting materials (HTMs) and electron-transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.

  4. High Performance Proactive Digital Forensics

    Science.gov (United States)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and to offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know there is almost no research on HPC-DF except for a few papers. As such, in this work we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
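
    The record names an "iterative z algorithm" for proactive detection. As a hedged illustration of the general family this suggests (iterative z-score pruning, which may well differ from the authors' exact method), consider:

```python
# Iterative z-score outlier pruning: flag points more than k standard
# deviations from the mean, drop them, and repeat until nothing changes.
# This illustrates the general idea only; it is not the paper's algorithm.
def iterative_z_outliers(values, k=3.0):
    data = list(values)
    outliers = []
    while len(data) > 1:
        mean = sum(data) / len(data)
        sd = (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5
        if sd == 0:
            break
        flagged = [x for x in data if abs(x - mean) / sd > k]
        if not flagged:
            break
        outliers.extend(flagged)
        data = [x for x in data if abs(x - mean) / sd <= k]
    return outliers, data
```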

  5. Petascale computation of multi-physics seismic simulations

    Science.gov (United States)

    Gabriel, Alice-Agnes; Madden, Elizabeth H.; Ulrich, Thomas; Wollherr, Stephanie; Duru, Kenneth C.

    2017-04-01

    Capturing the observed complexity of earthquake sources in concurrence with seismic wave propagation simulations is an inherently multi-scale, multi-physics problem. In this presentation, we present simulations of earthquake scenarios resolving high-detail dynamic rupture evolution and high-frequency ground motion. The simulations combine a multitude of representations of model complexity, such as non-linear fault friction, thermal and fluid effects, heterogeneous initial conditions for fault stress and fault strength, fault curvature and roughness, and on- and off-fault non-elastic failure to capture dynamic rupture behavior at the source; and seismic wave attenuation, 3D subsurface structure and bathymetry impacting seismic wave propagation. Performing such scenarios at the necessary spatio-temporal resolution requires highly optimized and massively parallel simulation tools which can efficiently exploit HPC facilities. Our simulations, which reach multiple PetaFLOPS, are performed with SeisSol (www.seissol.org), an open-source software package based on an ADER-Discontinuous Galerkin (DG) scheme solving the seismic wave equations in velocity-stress formulation in elastic, viscoelastic, and viscoplastic media with high-order accuracy in time and space. Our flux-based implementation of frictional failure remains free of spurious oscillations. Tetrahedral unstructured meshes allow for complicated model geometry. SeisSol has been optimized on all software levels, including: assembler-level DG kernels which obtain 50% peak performance on some of the largest supercomputers worldwide; an overlapping MPI-OpenMP parallelization shadowing the multiphysics computations; use of local time stepping; parallel input and output schemes; and direct interfaces to community-standard data formats. All these factors serve to minimise the time-to-solution. The results presented highlight the fact that modern numerical methods and hardware-aware optimization for modern supercomputers are essential

  6. Physical simulation for low-energy astrobiology environmental scenarios.

    Science.gov (United States)

    Gormly, Sherwin; Adams, V D; Marchand, Eric

    2003-01-01

    Speculations about the extent of life of independent origin and the potential for sustaining Earth-based life in subsurface environments on both Europa and Mars are of current and relevant interest. Theoretical modeling based on chemical energetics has demonstrated potential options for viable biochemical metabolism (metabolic pathways) in these types of environments. Also, similar environments on Earth show microbial activity. However, actual physical simulation testing of specific environments is required to confidently determine the interplay of various physical and chemical parameters on the viability of relevant metabolic pathways. This testing is required to determine the potential to sustain life in these environments on a scenario-by-scenario basis. This study examines the justification, design, and fabrication of, as well as the culture selection and screening for, a psychrophilic/halophilic/anaerobic digester. This digester is specifically designed to conform to the physical testing needs of research relating to potentially extant physical environments on Europa and other planetary bodies in the Solar System. The study is a long-term effort and is currently in an early phase, with only screening-level data available at this time. Full study results will likely take an additional 2 years. However, researchers in electromagnetic biosignature and in situ instrument development should be aware of the study at this time, as they are invited to participate in planning for future applications of the digester facility.

  7. Implementation of interactive virtual simulation of physical systems

    Science.gov (United States)

    Sanchez, H.; Escobar, J. J.; Gonzalez, J. D.; Beltran, J.

    2014-03-01

    Considering the limited availability of laboratories for physics teaching and the difficulties this causes in the learning of school students in Santa Marta, Colombia, we have developed software to generate greater student interaction with physical phenomena and improve their understanding. The system is built on the Model-View-ViewModel (MVVM) architecture, which shares the benefits of MVC. Basically, this pattern consists of three parts. The Model is responsible for the business logic. The View is the part the user sees and is most familiar with; its role is to display data to the user and to allow manipulation of the application's data. The ViewModel sits between the Model and the View (analogous to the Controller in the MVC pattern); it implements the behavior of the view in response to user actions and exposes the model's data in a form that is easy to bind to in the view. The .NET Framework 4.0, the Silverlight 4 and 5 packages, and a web browser (Internet Explorer, Mozilla Firefox or Chrome) are the main requirements for deploying the physical simulations hosted in the web application. The implementation of this innovative application in educational institutions has shown that students improved their contextualization of physical phenomena.
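
    The three-way split described above is easiest to see in a toy example. Below is a minimal MVVM sketch in Python (the actual application uses .NET and Silverlight; all names here are illustrative):

```python
# Minimal MVVM split for a pendulum simulation: physics in the Model,
# presentation logic in the ViewModel, widgets (not shown) in the View.
import math

class PendulumModel:                      # Model: the physics
    def __init__(self, length=1.0):
        self.length = length
    def period(self):
        return 2 * math.pi * math.sqrt(self.length / 9.81)

class PendulumViewModel:                  # ViewModel: mediates Model and View
    def __init__(self, model):
        self.model = model
    def set_length(self, text):           # handles a user action from the View
        self.model.length = float(text)
    @property
    def period_label(self):               # view-ready data the View binds to
        return "T = {:.2f} s".format(self.model.period())

# The View layer would bind a label to `period_label` and forward text-field
# input to `set_length`; no physics lives in the View itself.
vm = PendulumViewModel(PendulumModel())
vm.set_length("2.0")
print(vm.period_label)  # T = 2.84 s
```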

  8. Physics validation studies for muon collider detector background simulations

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Aaron Owen; /Northern Illinois U.

    2011-07-01

    Within the broad discipline of physics, the study of the fundamental forces of nature and the most basic constituents of the universe belongs to the field of particle physics. While frequently referred to as 'high-energy physics,' or by the acronym 'HEP,' particle physics is not driven just by the quest for ever-greater energies in particle accelerators. Rather, particle physics is seen as having three distinct areas of focus: the cosmic, intensity, and energy frontiers. These three frontiers all provide different, but complementary, views of the basic building blocks of the universe. Currently, the energy frontier is the realm of hadron colliders like the Tevatron at Fermi National Accelerator Laboratory (Fermilab) or the Large Hadron Collider (LHC) at CERN. While the LHC is expected to be adequate for explorations up to 14 TeV for the next decade, the long development lead time for modern colliders necessitates research and development efforts in the present for the next generation of colliders. This paper focuses on one such next-generation machine: a muon collider. Specifically, this paper focuses on Monte Carlo simulations of beam-induced backgrounds vis-a-vis detector region contamination. Initial validation studies of a few muon collider physics background processes using G4beamline have been undertaken and results presented. While these investigations have revealed a number of hurdles to getting G4beamline up to the level of more established simulation suites, such as MARS, the close communication between us, as users, and the G4beamline developer, Tom Roberts, has allowed for rapid implementation of user-desired features. The main example of user-desired feature implementation, as it applies to this project, is Bethe-Heitler muon production. Regarding the neutron interaction issues, we continue to study the specifics of how GEANT4 implements nuclear interactions. The GEANT4 collaboration has been contacted regarding the minor

  9. Multi-Physics Simulation of TREAT Kinetics using MAMMOTH

    Energy Technology Data Exchange (ETDEWEB)

    DeHart, Mark; Gleicher, Frederick; Ortensi, Javier; Alberti, Anthony; Palmer, Todd

    2015-11-01

    With the advent of next generation reactor systems and new fuel designs, the U.S. Department of Energy (DOE) has identified the need for the resumption of transient testing of nuclear fuels. DOE has decided that the Transient Reactor Test Facility (TREAT) at Idaho National Laboratory (INL) is best suited for future testing. TREAT is a thermal-neutron-spectrum nuclear test facility designed to test nuclear fuels in transient scenarios. These fuel transient tests range from simple temperature transients to full fuel-melt accidents. The current TREAT core is driven by highly enriched uranium (HEU) dispersed in a graphite matrix (1:10000 U-235/C atom ratio). At the center of the core, fuel is removed to allow the insertion of an experimental test vehicle. TREAT’s design provides experimental flexibility and inherent safety during neutron pulsing. This safety stems from the graphite in the driver fuel having a strong negative temperature coefficient of reactivity, resulting from a thermal Maxwellian shift with increased leakage, as well as from the graphite acting as a temperature sink. Air cooling is available, but is generally used post-transient for heat removal. DOE and INL have expressed a desire to develop a simulation capability that will accurately model the experiments before they are irradiated at the facility, with an emphasis on effective and safe operation while minimizing experimental time and cost. At INL, the Multi-physics Object Oriented Simulation Environment (MOOSE) has been selected as the model development framework for this work. This paper describes the results of preliminary simulations of a TREAT fuel element under transient conditions using the MOOSE-based MAMMOTH reactor physics tool.

  10. Quantum simulations and many-body physics with light.

    Science.gov (United States)

    Noh, Changsuk; Angelakis, Dimitris G

    2017-01-01

    In this review we discuss work in the area of quantum simulation and many-body physics with light, from the early proposals on equilibrium models to the more recent work on driven dissipative platforms. We start by describing the founding works on the Jaynes-Cummings-Hubbard model and the corresponding photon-blockade-induced Mott transitions, and continue by discussing proposals to simulate effective spin models and fractional quantum Hall states in coupled resonator arrays (CRAs). We also analyse the recent efforts to study out-of-equilibrium many-body effects using driven CRAs, including the predictions of photon fermionisation and crystallisation in driven rings of CRAs as well as other dynamical and transient phenomena. We try to summarise some of the relatively recent results predicting exotic phases such as super-solidity and Majorana-like modes, and then shift our attention to developments involving 1D nonlinear slow-light setups. There, the simulation of strongly correlated phases characterising Tonks-Girardeau gases, Luttinger liquids, and interacting relativistic fermionic models is described. We review the major theoretical results and also briefly outline recent developments in ongoing experimental efforts involving different platforms in circuit QED, photonic crystals and nanophotonic fibres interfaced with cold atoms.

  11. Chewing simulation with a physically accurate deformable model.

    Science.gov (United States)

    Pascale, Andra Maria; Ruge, Sebastian; Hauth, Steffen; Kordaß, Bernd; Linsen, Lars

    2015-01-01

    Nowadays, CAD/CAM software is used to compute the optimal shape and position of a new tooth model for a patient. With this possible future application in mind, we present in this article an independent, stand-alone interactive application that simulates the human chewing process and the deformation it produces in the food substrate. Chewing motion sensors are used to produce an accurate representation of the jaw movement. The substrate is represented by a deformable elastic model based on the linear finite element method, which preserves physical accuracy. Collision detection based on spatial partitioning is used to calculate the forces acting on the deformable model. Based on the calculated information, geometry elements are added to the scene to enhance the information available to the user. The goal of the simulation is to present a complete scene to the dentist, highlighting the points where the teeth came into contact with the substrate and reporting how much force acted at these points, which makes it possible to indicate whether a tooth is being used incorrectly in the mastication process. Real-time interactivity is desired and is achieved within limits, depending on the complexity of the employed geometric models. The presented simulation is a first step towards the overall project goal of interactively optimizing tooth position and shape under the investigation of a virtual chewing process using real patient data.
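    As an illustration of the spatial-partitioning idea behind the collision step (a generic sketch with a hypothetical point-based interface, not the article's implementation), a uniform grid hash reduces the all-pairs proximity test from O(n²) toward linear cost:

```python
from collections import defaultdict

def build_grid(points, cell_size):
    """Map each point index to the grid cell that contains it."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[key].append(i)
    return grid

def candidate_pairs(points, cell_size):
    """Return index pairs close enough to need an exact (narrow-phase) test."""
    grid = build_grid(points, cell_size)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        # Gather everything in this cell and its 26 neighbours.
        neighbours = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    neighbours.extend(grid.get((cx + dx, cy + dy, cz + dz), ()))
        for i in members:
            for j in neighbours:
                if i < j:                 # i < j deduplicates symmetric pairs
                    pairs.add((i, j))
    return pairs
```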

  12. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers and other network equipment, computer emulations (e.g., virtual machines) and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches into integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost, multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  13. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  14. Physics and 3D in Flash Simulations: Open Source Reality

    Science.gov (United States)

    Harold, J. B.; Dusenbery, P.

    2009-12-01

    Over the last decade our ability to deliver simulations over the web has steadily advanced. The improvements in speed of the Adobe Flash engine, and the development of open source tools to extend it, allow us to deliver increasingly sophisticated simulation-based games through the browser, with no additional downloads required. In this paper we will present activities we are developing as part of two asteroids education projects: Finding NEO (funded through NSF and NASA SMD), and Asteroids! (funded through NSF). The first activity is Rubble!, an asteroid-deflection game built on the open source Box2D physics engine. This game challenges players to push asteroids into safe orbits before they crash into the Earth. The Box2D engine allows us to go well beyond simple 2-body orbital calculations and incorporate “rubble piles”. These objects, which are representative of many asteroids, are composed of 50 or more individual rocks which gravitationally bind and separate in realistic ways. Even bombs can be modeled with sufficient physical accuracy to convince players of the hazards of trying to “blow up” incoming asteroids. The ability to easily build games based on underlying physical models allows us to address physical misconceptions in a natural way: by having the player operate in a world that directly collides with those misconceptions. Rubble! provides a particularly compelling example of this due to the variety of well documented misconceptions regarding gravity. The second activity is a Light Curve challenge, which uses the open source PaperVision3D tools to analyze 3D asteroid models. The goal of this activity is to introduce the player to the concept of “light curves”, measurements of asteroid brightness over time which are used to calculate the asteroid's period. These measurements can even be inverted to generate three-dimensional models of asteroids that are otherwise too small and distant to directly image. Through the use of the Paper
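    The “rubble pile” behaviour described above amounts to summing mutual gravity over all rock pairs at every physics step. A minimal Python rendering of that force pass follows (the actual game runs in Flash on Box2D; the body layout, softening term, and use of SI units here are assumptions for illustration):

```python
import math

G = 6.674e-11  # gravitational constant (SI); a game would use scaled units

def gravity_forces(bodies):
    """Accumulate mutual gravitational forces on a list of 2D bodies.
    Each body is a dict with 'm' (mass), 'x', 'y', 'fx', 'fy'."""
    for b in bodies:
        b['fx'] = b['fy'] = 0.0
    for i, a in enumerate(bodies):
        for b in bodies[i + 1:]:
            dx, dy = b['x'] - a['x'], b['y'] - a['y']
            r2 = dx * dx + dy * dy + 1e-9        # softening avoids divide-by-zero
            r = math.sqrt(r2)
            f = G * a['m'] * b['m'] / r2
            fx, fy = f * dx / r, f * dy / r
            a['fx'] += fx; a['fy'] += fy         # equal and opposite forces
            b['fx'] -= fx; b['fy'] -= fy
    # A physics engine such as Box2D would then apply these as per-step forces,
    # letting rocks clump into, or escape from, a gravitationally bound pile.
```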

  15. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  16. Holistic simulation of geotechnical installation processes numerical and physical modelling

    CERN Document Server

    2015-01-01

    The book provides suitable methods for the simulation of boundary value problems of geotechnical installation processes, with reliable prediction of the deformation behavior of structures in static or dynamic interaction with the soil. It summarizes the basic research of a group of scientists dealing with constitutive relations of soils and their implementation, as well as contact element formulations, in FE codes. Numerical and physical experiments are presented that provide benchmarks for future developments in this field. Boundary value problems have been formulated and solved with the developed tools in order to show the effectiveness of the methods. Parametric studies of geotechnical installation processes to identify the governing parameters for optimization of the process are given in such a way that the findings can be recommended to practice for further use. For many design engineers in practice the assessment of the serviceability of nearby structures due to geotechnical installat...

  17. Physical model simulations of seawater intrusion in unconfined aquifer

    Directory of Open Access Journals (Sweden)

    Tanapol Sriapai

    2012-12-01

    Full Text Available The objective of this study is to simulate seawater intrusion into an unconfined aquifer near the shoreline and to assess the effectiveness of methods for controlling it, using scaled-down physical models. The intrusion-control methods studied here include fresh water injection, salt water extraction, and subsurface barriers. The results indicate that, under natural dynamic equilibrium between the recharge of fresh water and the intrusion, the observations agree well with the Ghyben-Herzberg mathematical solution. Fresh water pumping from the aquifer notably moves the fresh-salt water interface toward the pumping well, depending on the pumping rate and the head difference (h) between the aquifer recharge and the salt water level. The fresh water injection method is more favorable than salt water extraction or a subsurface barrier. A fresh water injection rate of about 10% of the usage rate can effectively push the interface toward the shoreline, keeping the pumping well free of salinity.
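    The Ghyben-Herzberg relation invoked above follows from the hydrostatic balance of fresh and salt water columns:

```latex
z \;=\; \frac{\rho_f}{\rho_s - \rho_f}\, h \;\approx\; 40\,h,
\qquad \rho_f \approx 1.000~\mathrm{g/cm^3},\quad \rho_s \approx 1.025~\mathrm{g/cm^3}
```

    where h is the fresh water head above sea level and z is the depth of the fresh-salt interface below sea level, so each metre of head depresses the interface by roughly forty metres.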

  18. Petascale Kinetic Simulations in Space Sciences: New Simulations and Data Discovery Techniques and Physics Results

    Science.gov (United States)

    Karimabadi, Homa

    2012-03-01

    Recent advances in simulation technology and hardware are enabling breakthrough science in which many longstanding problems can now be addressed for the first time. In this talk, we focus on kinetic simulations of the Earth's magnetosphere and the magnetic reconnection process, the key mechanism that breaks the protective shield of the Earth's dipole field and allows the solar wind to enter the Earth's magnetosphere. This leads to so-called space weather, in which storms on the Sun can affect space-borne and ground-based technological systems on Earth. The talk consists of three parts: (a) an overview of a new multi-scale simulation technique in which each computational grid is updated based on its own unique timestep; (b) a presentation of a new approach to data analysis that we refer to as physics mining, which entails combining data mining and computer vision algorithms with scientific visualization to extract physics from the resulting massive data sets; and (c) a presentation of several recent discoveries in studies of space plasmas, including the role of vortex formation and the resulting turbulence in magnetized plasmas.
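    The per-grid timestep idea in (a) can be caricatured in a few lines: give every cell its own stable step and always advance whichever cell is furthest behind in time. The decoupled decay equations below are purely illustrative stand-ins; a production plasma code must also synchronise fluxes between neighbouring cells, which this sketch omits.

```python
import heapq, math

# Per-cell ("local") time stepping: a priority queue always advances the cell
# that is furthest behind, each with its own stable step size.
cells = [  # stiff cells need small steps, slow cells take large ones
    {"rate": 100.0, "u": 1.0}, {"rate": 1.0, "u": 1.0}, {"rate": 0.01, "u": 1.0},
]
t_end = 1.0
queue = [(0.0, i) for i in range(len(cells))]   # (local time, cell index)
heapq.heapify(queue)
while queue:
    t, i = heapq.heappop(queue)
    c = cells[i]
    dt = min(0.1 / c["rate"], t_end - t)        # per-cell stable step
    c["u"] -= c["rate"] * c["u"] * dt           # explicit Euler on du/dt = -rate*u
    if t + dt < t_end:
        heapq.heappush(queue, (t + dt, i))

for i, c in enumerate(cells):
    print("cell %d: u = %.4f (exact %.4f)" % (i, c["u"], math.exp(-c["rate"] * t_end)))
```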

  19. Structural, Physical, and Compositional Analysis of Lunar Simulants and Regolith

    Science.gov (United States)

    Greenberg, Paul; Street, Kenneth W.; Gaier, James

    2008-01-01

    Relative to the prior manned Apollo and unmanned robotic missions, planned Lunar initiatives are comparatively complex and longer in duration. Individual crew rotations are envisioned to span several months, and various surface systems must function in the Lunar environment for periods of years. As a consequence, an increased understanding of the surface environment is required to engineer and test the materials, components, and systems necessary to sustain human habitation and surface operations. The effort described here concerns the analysis of existing simulant materials, with application to Lunar return samples. The interplay between these analyses serves the objective of ascertaining the critical properties of the regolith itself, and the parallel objective of developing suitable simulant materials for a variety of engineering applications. Presented here are measurements of the basic physical attributes, i.e., particle size distributions and general shape factors. Also discussed are structural and chemical properties, as determined through a variety of techniques, such as optical microscopy, SEM and TEM microscopy, Mössbauer spectroscopy, X-ray diffraction, Raman microspectroscopy, inductively coupled argon plasma emission spectroscopy, and energy-dispersive X-ray fluorescence mapping. A comparative description of currently available simulant materials is given, with implications for more detailed analyses, as well as the requirements for continued refinement of methods for simulant production.

  20. Space physics games and simulations for informal education

    Science.gov (United States)

    Harold, J.; Dusenbery, P.

    2008-12-01

    We will demonstrate and discuss several game- and simulation-based plasma physics education products. Developed using NSF education supplements and the long-running Space Weather Outreach Program at the Space Science Institute, these activities range from a "mini-golf" game that uses research-grade particle pushing algorithms, to a "whack the Earth" coronal mass ejection activity. These games have their roots in "informal" education settings: as a result they assume a short interaction time by the visitor (as compared to traditional classroom experiences), and they cannot assume a particular level of prior knowledge. On the other hand, as web-based activities they have a tremendous reach, and are easily available to any instructor interested in using them in classroom environments. Several of the activities have also been programmed to collect data on visitors' interactions, giving us a window into both visitor engagement and the degree to which the activities accomplish their learning goals. In addition to exploring these results, we will discuss the next stage in the Space Weather Outreach Program, where we will explore the ability of a series of short games to build the prior knowledge needed for a firm grasp of basic space physics concepts.
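    The canonical "research grade particle pushing algorithm" in this setting is the Boris pusher, which wraps an exact-magnitude magnetic rotation between two half electric kicks. A minimal sketch under assumed uniform fields follows (the field strengths and proton-like parameters are illustrative, not necessarily those used in the games):

```python
import numpy as np

def boris_push(x, v, E, B, q_m, dt):
    """One step of the standard Boris particle pusher for charge-to-mass q_m."""
    v_minus = v + 0.5 * q_m * E * dt            # first half electric kick
    t = 0.5 * q_m * B * dt                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)    # magnetic rotation, part 1
    v_plus = v_minus + np.cross(v_prime, s)     # magnetic rotation, part 2
    v_new = v_plus + 0.5 * q_m * E * dt         # second half electric kick
    return x + v_new * dt, v_new

# Example: proton-like particle gyrating in a uniform magnetic field.
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])      # m, m/s
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0e-8])     # V/m, T (magnetosphere-ish)
q_m = 9.58e7                                         # proton charge/mass (C/kg)
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q_m, dt=1.0e-3)
```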

  1. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  2. Numerical Simulations of Granular Physics in the Solar System

    Science.gov (United States)

    Ballouz, Ronald

    2017-08-01

    Granular physics is a sub-discipline of physics that attempts to combine principles developed for both solid-state physics and engineering (such as soil mechanics) with fluid dynamics, in order to formulate a coherent theory for the description of granular materials, which are found in both terrestrial (e.g., earthquakes, landslides, and pharmaceuticals) and extra-terrestrial settings (e.g., asteroid surfaces, asteroid interiors, and planetary ring systems). In the case of our solar system, the growth of this sub-discipline has been key in helping to interpret the formation, structure, and evolution of both asteroids and planetary rings. It is difficult to develop a deterministic theory for granular materials because granular systems are composed of a large number of elements that interact through a non-linear combination of various forces (mechanical, gravitational, and electrostatic, for example), leading to a high degree of stochasticity. Hence, we study these environments using an N-body code, pkdgrav, that is able to simulate the gravitational, collisional, and cohesive interactions of grains. Using pkdgrav, I have studied size segregation on asteroid surfaces due to seismic shaking (the Brazil-nut effect), the interaction of the OSIRIS-REx asteroid sample-return mission sampling head, TAGSAM, with the surface of the asteroid Bennu, the collisional disruption of rubble-pile asteroids, and the formation of structure in Saturn's rings. In all of these scenarios, I have found that the evolution of a granular system depends sensitively on the intrinsic properties of the individual grains (size, shape, and surface roughness). For example, through our simulations, we have been able to determine relationships between regolith properties and the amount of surface penetration a spacecraft achieves upon landing. Furthermore, we have demonstrated that this relationship also depends on the strength of the local gravity. By comparing our
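    The collisional side of such an N-body granular code is typically a soft-sphere contact law: overlapping grains feel a repulsive normal spring plus damping. A toy two-dimensional version follows (not pkdgrav's actual contact model; the stiffness and damping constants are placeholders):

```python
import math

def contact_force(p1, p2, k_n=1.0e5, c_n=10.0):
    """Soft-sphere (spring-dashpot) normal contact force between two grains.
    Each grain is a dict with x, y, vx, vy, r (radius). Returns force on p2."""
    dx, dy = p2['x'] - p1['x'], p2['y'] - p1['y']
    dist = math.hypot(dx, dy)
    overlap = p1['r'] + p2['r'] - dist
    if overlap <= 0.0 or dist == 0.0:
        return 0.0, 0.0                       # grains are not touching
    nx, ny = dx / dist, dy / dist             # contact normal (p1 -> p2)
    # Normal relative velocity: positive when the grains are separating.
    vn = (p2['vx'] - p1['vx']) * nx + (p2['vy'] - p1['vy']) * ny
    f = k_n * overlap - c_n * vn              # spring repulsion plus damping
    return f * nx, f * ny                     # force on p2 (p1 gets the negative)
```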

  3. Physics Basis and Simulation of Burning Plasma Physics for the Fusion Ignition Research Experiment (FIRE)

    Energy Technology Data Exchange (ETDEWEB)

    C.E. Kessel; D. Meade; S.C. Jardin

    2002-01-18

    The FIRE [Fusion Ignition Research Experiment] design for a burning plasma experiment is described in terms of its physics basis and engineering features. Systems analysis indicates that the device has a wide operating space to accomplish its mission, both for the ELMing H-mode reference and the high bootstrap current/high beta advanced tokamak regimes. Simulations with 1.5D transport codes reported here both confirm and constrain the systems projections. Experimental and theoretical results are used to establish the basis for successful burning plasma experiments in FIRE.

  4. Indoor Air Quality in High Performance Schools

    Science.gov (United States)

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  5. Carpet Aids Learning in High Performance Schools

    Science.gov (United States)

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  6. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  7. Seventeenth Workshop on Computer Simulation Studies in Condensed-Matter Physics

    CERN Document Server

    Landau, David P; Schütler, Heinz-Bernd; Computer Simulation Studies in Condensed-Matter Physics XVI

    2006-01-01

    This status report features the most recent developments in the field, spanning a wide range of topical areas in the computer simulation of condensed matter/materials physics. Both established and new topics are included, ranging from the statistical mechanics of classical magnetic spin models to electronic structure calculations, quantum simulations, and simulations of soft condensed matter. The book presents new physical results as well as novel methods of simulation and data analysis. Highlights of this volume include various aspects of non-equilibrium statistical mechanics, studies of properties of real materials using both classical model simulations and electronic structure calculations, and the use of computer simulations in teaching.

  8. Evaluation of static physics performance of the jPET-D4 by Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Tomoyuki [Allied Health Sciences, Kitasato University, Kitasato 1-15-1, Sagamihara, Kanagawa, 228-8555 (Japan); Yoshida, Eiji [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan); Kobayashi, Ayako [Graduate School of Human Health Sciences, Tokyo Metropolitan University, Arakawa, Tokyo, 116-8551 (Japan); Shibuya, Kengo [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan); Nishikido, Fumihiko [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan); Kobayashi, Tetsuya [Graduate School of Science and Technology, Chiba University, 1-33 Yayoi, Inage, Chiba, 263-8522 (Japan); Suga, Mikio [Graduate School of Science and Technology, Chiba University, 1-33 Yayoi, Inage, Chiba, 263-8522 (Japan); Yamaya, Taiga [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan); Kitamura, Keishi [Shimadzu Corporation, 1 Nishinokyo-kuwabara-cho, Nakagyo-ku, Kyoto, 604-8511 (Japan); Maruyama, Koichi [Allied Health Sciences, Kitasato University, Kitasato 1-15-1, Sagamihara, Kanagawa, 228-8555 (Japan); Murayama, Hideo [Molecular Imaging Centre, National Institute of Radiological Sciences, Anagawa 4-9-1, Inage, Chiba, 263-8555 (Japan)

    2007-01-07

    The jPET-D4 is the first PET scanner to introduce a unique four-layer depth-of-interaction (DOI) detector scheme in order to achieve high sensitivity and uniform, high spatial resolution. This paper compares measurement and Monte Carlo simulation results for the static physics performance of this prototype research PET scanner. Measurement results include single and coincidence energy spectra, point and line source sensitivities, the axial sensitivity profile (slice profile) and the scatter fraction. We use GATE (Geant4 application for tomographic emission) as a Monte Carlo radiation transport model. Experimental results are reproduced well by the simulation model with reasonable assumptions on the characteristic responses of the DOI detectors. In a previous study, the jPET-D4 was shown to provide a uniform spatial resolution as good as 3 mm (FWHM). In the present study, we demonstrate that a high sensitivity, 11.3 ± 0.5%, is obtained at the FOV centre. However, about three-fourths of this sensitivity is related to multiple-crystal events, for which some misidentification of the crystal cannot be avoided. Therefore, it is crucial to develop a more efficient way to identify the crystal of interaction and to reduce misidentification in order to make use of these high performance values simultaneously. We expect that the effective sensitivity can be improved by replacing the GSO crystals with more absorptive crystals such as BGO and LSO. The results we describe here are essential to take full advantage of next-generation PET systems that have DOI recognition capability.

  9. Evaluation of static physics performance of the jPET-D4 by Monte Carlo simulations.

    Science.gov (United States)

    Hasegawa, Tomoyuki; Yoshida, Eiji; Kobayashi, Ayako; Shibuya, Kengo; Nishikido, Fumihiko; Kobayashi, Tetsuya; Suga, Mikio; Yamaya, Taiga; Kitamura, Keishi; Maruyama, Koichi; Murayama, Hideo

    2007-01-07

    The jPET-D4 is the first PET scanner to introduce a unique four-layer depth-of-interaction (DOI) detector scheme in order to achieve high sensitivity and uniform, high spatial resolution. This paper compares measurement and Monte Carlo simulation results for the static physics performance of this prototype research PET scanner. Measurement results include single and coincidence energy spectra, point and line source sensitivities, the axial sensitivity profile (slice profile) and the scatter fraction. We use GATE (Geant4 application for tomographic emission) as a Monte Carlo radiation transport model. Experimental results are reproduced well by the simulation model with reasonable assumptions on the characteristic responses of the DOI detectors. In a previous study, the jPET-D4 was shown to provide a uniform spatial resolution as good as 3 mm (FWHM). In the present study, we demonstrate that a high sensitivity, 11.3 ± 0.5%, is obtained at the FOV centre. However, about three-fourths of this sensitivity is related to multiple-crystal events, for which some misidentification of the crystal cannot be avoided. Therefore, it is crucial to develop a more efficient way to identify the crystal of interaction and to reduce misidentification in order to make use of these high performance values simultaneously. We expect that the effective sensitivity can be improved by replacing the GSO crystals with more absorptive crystals such as BGO and LSO. The results we describe here are essential to take full advantage of next-generation PET systems that have DOI recognition capability.

  10. A federation of simulations based on cellular automata in cyber-physical systems

    Directory of Open Access Journals (Sweden)

    Hoang Van Tran

    2016-02-01

    Full Text Available In a cyber-physical system (CPS), cooperation between a variety of computational and physical elements usually poses difficulties for current modelling and simulation tools. Although much research has been proposed to address those challenges, most solutions do not completely cover the uncertain interactions in a CPS. In this paper, we present a new approach to federating simulations for CPS. A federation is a combination of, and coordination between, simulations built on a common communication standard. In addition, a mixed simulation is defined as several parallel simulations federated under a common time progression. Such simulations run on models of physical systems built on cellular automata theory. Experiments were performed on a federation of three simulations: forest fire spread, river pollution diffusion and a wireless sensor network. The obtained results can be utilized to observe and predict the behaviours of the physical systems in their interactions.
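    As a minimal example of the cellular-automata style of physical model being federated here, the following sketch implements a bare-bones forest-fire spread rule (the paper's actual transition rules and federation protocol are not reproduced):

```python
import random

EMPTY, TREE, FIRE = 0, 1, 2

def step(grid):
    """One synchronous CA update: a burning cell ignites tree neighbours,
    then burns out. A toy stand-in for the paper's forest-fire model."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] == FIRE:
                new[i][j] = EMPTY
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and grid[ni][nj] == TREE:
                        new[ni][nj] = FIRE
    return new

# Random forest with a single ignition point at the centre.
grid = [[TREE if random.random() < 0.6 else EMPTY for _ in range(20)] for _ in range(20)]
grid[10][10] = FIRE
for _ in range(30):
    grid = step(grid)
```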

  11. A model for cosmological simulations of galaxy formation physics

    Science.gov (United States)

    Vogelsberger, Mark; Genel, Shy; Sijacki, Debora; Torrey, Paul; Springel, Volker; Hernquist, Lars

    2013-12-01

    We present a new comprehensive model of the physics of galaxy formation designed for large-scale hydrodynamical simulations of structure formation using the moving-mesh code AREPO. Our model includes primordial and metal-line cooling with self-shielding corrections, stellar evolution and feedback processes, gas recycling, chemical enrichment, a novel subgrid model for the metal loading of outflows, black hole (BH) seeding, BH growth and merging procedures, quasar- and radio-mode feedback, and a prescription for radiative electromagnetic (EM) feedback from active galactic nuclei (AGN). Our stellar evolution and chemical enrichment scheme follows nine elements (H, He, C, N, O, Ne, Mg, Si, Fe) independently. Stellar feedback is realized through kinetic outflows. The metal mass loading of outflows can be adjusted independently of the wind mass loading. This is required to simultaneously reproduce the stellar mass content of low-mass haloes and their gas oxygen abundances. Radiative EM AGN feedback is implemented assuming an average spectral energy distribution and a luminosity-dependent scaling of obscuration effects. This form of feedback suppresses star formation more efficiently than continuous thermal quasar-mode feedback alone, but is less efficient than mechanical radio-mode feedback in regulating star formation in massive haloes. We contrast simulation predictions for different variants of our galaxy formation model with key observations, allowing us to constrain the importance of different modes of feedback and their uncertain efficiency parameters. We identify a fiducial best match model and show that it reproduces, among other things, the cosmic star formation history, the stellar mass function, the stellar mass-halo mass relation, g-, r-, i- and z-band SDSS galaxy luminosity functions, and the Tully-Fisher relation. We can achieve this success only if we invoke very strong forms of stellar and AGN feedback such that star formation is adequately reduced in

  12. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  13. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  14. High performance computing and communications program

    Science.gov (United States)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  15. Simulation-based Education for Endoscopic Third Ventriculostomy : A Comparison Between Virtual and Physical Training Models

    NARCIS (Netherlands)

    Breimer, Gerben E.; Haji, Faizal A.; Bodani, Vivek; Cunningham, Melissa S.; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M.

    BACKGROUND: The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." OBJECTIVE: To compare and identify the relative utility of a physical and VR ETV simulation model for use in

  16. Simulation-Based Performance Assessment: An Innovative Approach to Exploring Understanding of Physical Science Concepts

    Science.gov (United States)

    Gale, Jessica; Wind, Stefanie; Koval, Jayma; Dagosta, Joseph; Ryan, Mike; Usselman, Marion

    2016-01-01

    This paper illustrates the use of simulation-based performance assessment (PA) methodology in a recent study of eighth-grade students' understanding of physical science concepts. A set of four simulation-based PA tasks were iteratively developed to assess student understanding of an array of physical science concepts, including net force,…

  17. High Performance Spaceflight Computing (HPSC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In 2012, the NASA Game Changing Development Program (GCDP), residing in the NASA Space Technology Mission Directorate (STMD), commissioned a High Performance...

  18. High performance carbon nanocomposites for ultracapacitors

    Science.gov (United States)

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  19. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of the process of innovation, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management and the simulation of complex processes in a wide variety of industries. (Author)

  20. High performance hand-held gas chromatograph

    Energy Technology Data Exchange (ETDEWEB)

    Yu, C.M.

    1998-04-28

    The Microtechnology Center of Lawrence Livermore National Laboratory has developed a high performance, hand-held, real-time detection gas chromatograph (HHGC) using Micro-Electro-Mechanical-System (MEMS) technology. The total weight of this hand-held gas chromatograph is about five lbs., with a physical size of 8" x 5" x 3" including carrier gas and battery. It consumes about 12 watts of electrical power with a response time on the order of one to two minutes. This HHGC provides about 40,000 effective theoretical plates on average. Presently, its sensitivity is limited to the ppm level by its thermally sensitive detector. Like a conventional GC, this HHGC consists mainly of three major components: (1) the sample injector, (2) the column, and (3) the detector with related electronics. The present HHGC injector is a modified version of the conventional injector. Its separation column is fabricated completely on silicon wafers by means of MEMS technology. This separation column has a circular cross section with a diameter of 100 µm. The detector developed for this hand-held GC is a thermal conductivity detector fabricated on a silicon nitride window by MEMS technology. A normal Wheatstone bridge is used. The signal is fed into a PC and displayed through LabVIEW software.

  1. Maintaining High-Performance Schools after Construction or Renovation

    Science.gov (United States)

    Luepke, Gary; Ronsivalli, Louis J., Jr.

    2009-01-01

    With taxpayers' considerable investment in schools, it is critical for school districts to preserve their community's assets with new construction or renovation and effective facility maintenance programs. "High-performance" school buildings are designed to link the physical environment to positive student achievement while providing such benefits…

  2. The use of high-performance computing to solve participating media radiative heat transfer problems-results of an NSF workshop

    Energy Technology Data Exchange (ETDEWEB)

    Gritzo, L.A.; Skocypec, R.D. [Sandia National Labs., Albuquerque, NM (United States); Tong, T.W. [Arizona State Univ., Tempe, AZ (United States). Dept. of Mechanical and Aerospace Engineering

    1995-01-11

    Radiation in participating media is an important transport mechanism in many physical systems. The simulation of complex radiative transfer has not effectively exploited high-performance computing capabilities. In response to this need, a workshop attended by members active in the high-performance computing community, members active in the radiative transfer community, and members from closely related fields was held to identify how high-performance computing can be used effectively to solve the transport equation and advance the state-of-the-art in simulating radiative heat transfer. This workshop was held on March 29-30, 1994 in Albuquerque, New Mexico and was conducted by Sandia National Laboratories. The objectives of this workshop were to provide a vehicle to stimulate interest and new research directions within the two communities to exploit the advantages of high-performance computing for solving complex radiative heat transfer problems that are otherwise intractable.
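    For context, the transport equation at issue is the radiative transfer equation for an absorbing, emitting, and scattering medium,

```latex
\frac{dI_\lambda}{ds} \;=\; -(\kappa_\lambda + \sigma_\lambda)\, I_\lambda
  \;+\; \kappa_\lambda I_{b\lambda}
  \;+\; \frac{\sigma_\lambda}{4\pi} \int_{4\pi} I_\lambda(\hat{s}')\, \Phi(\hat{s}',\hat{s})\, d\Omega'
```

    where I_λ is the spectral intensity along direction ŝ, κ_λ and σ_λ are the absorption and scattering coefficients, I_bλ is the blackbody intensity, and Φ is the scattering phase function; the integral over all incoming directions at every point, direction, and wavelength is precisely what makes participating-media radiation a natural candidate for high-performance computing.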

  3. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  4. Impact of the genfit2 Kalman-filter-based algorithms on physics simulations performed with PandaRoot

    Energy Technology Data Exchange (ETDEWEB)

    Prencipe, Elisabetta; Ritman, James [Forschungszentrum Juelich, IKP1, Juelich (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    PANDA is a planned experiment at FAIR (Darmstadt) with a cooled antiproton beam in the range 1.5-15 GeV/c, allowing a wide physics program in nuclear and particle physics. It is the only experiment worldwide that combines a solenoid field (B = 2 T) and a dipole field (B = 2 Tm) in a fixed-target topology in that energy regime. The tracking system of PANDA comprises a high-performance silicon vertex detector, a GEM detector, a straw-tube central tracker, a forward tracking system, and a luminosity monitor. The offline tracking algorithm is developed within the PandaRoot framework, which is part of the FAIRRoot project. The algorithm presented here is based on a tool containing the Kalman filter equations and a deterministic annealing filter (genfit). Kalman-filter-based algorithms have a wide range of applications; in particle physics they can perform extrapolations of track parameters and covariance matrices. The impact on physics simulations performed for the PANDA experiment with the PandaRoot framework is shown for the first time: an improvement of about a factor of 2 is found for channels that require good low-momentum tracking (p_T < 400 MeV/c), i.e., D meson and Λ reconstruction.
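    The predict/update cycle that any Kalman-filter-based track fitter iterates along successive detector layers can be illustrated with a one-dimensional constant-velocity toy model (this is the generic filter, not genfit's API; the hit positions and noise matrices are invented):

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we only measure position
Q = 1e-4 * np.eye(2)                     # process noise (e.g. multiple scattering)
R = np.array([[0.25]])                   # measurement noise (detector resolution)

x = np.array([[0.0], [1.0]])             # initial state estimate
P = np.eye(2)                            # initial covariance
for z in [1.1, 1.9, 3.2, 3.8, 5.1]:      # hypothetical hit positions, one per layer
    # Predict: extrapolate state and covariance to the next layer.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the measured hit.
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("fitted position %.2f, velocity %.2f" % (x[0, 0], x[1, 0]))
```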

  5. Believability in simplifications of large scale physically based simulation

    KAUST Repository

    Han, Donghui

    2013-01-01

    We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large-scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect visual plausibility. The visual plausibility of scenarios simulated with these hypotheses assumed true is measured using subjective ratings from viewers. As expected, analysis of the results supports the truthfulness of the hypotheses under certain simulation environments. However, our analysis discovered four factors which may affect the authenticity of these hypotheses: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the simulated scene to the camera position, and the simulation method used. We also try to find an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.

  6. High-performance liquid chromatography - Ultraviolet method for the determination of total specific migration of nine ultraviolet absorbers in food simulants based on 1,1,3,3-Tetramethylguanidine and organic phase anion exchange solid phase extraction to remove glyceride.

    Science.gov (United States)

    Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun

    2016-06-17

    Glycerides in oily food simulants usually cause serious interference with target analytes and impair the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed, based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glycerides from the olive oil simulant. In contrast to normal ion exchange, which is carried out in an aqueous solution or aqueous phase environment, the OPAE SPE is performed in an organic phase environment, so the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with an aqueous solution can be omitted. The method was shown to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%), and accuracy (recoveries between 91.0% and 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were obtained in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for the quantitative determination of the total specific migration of the nine UV absorbers in both aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples; UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in PE samples in the olive oil simulant. In addition, the OPAE SPE procedure has also been applied to efficiently enrich or purify seven antioxidants in the olive oil simulant. The results indicate that this procedure will find wider application in the enrichment or purification of very weakly acidic compounds bearing phenolic hydroxyl groups that are relatively stable in TMG n-hexane solution and that can barely be extracted from vegetable oil. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. An Associate Degree in High Performance Manufacturing.

    Science.gov (United States)

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  8. Real-Time Animation Using a Mix of Physical Simulation and Kinematics

    NARCIS (Netherlands)

    van Welbergen, H.; Zwiers, Jakob; Ruttkay, Z.M.

    2009-01-01

    Expressive animation (such as gesturing or conducting) is typically generated using procedural animation techniques. These techniques offer precision in both timing and limb placement, but they lack physical realism. On the other hand, physical simulation offers physical realism, but does not

  9. Monte Carlo simulation in statistical physics an introduction

    CERN Document Server

    Binder, Kurt

    1992-01-01

    The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations.
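    The canonical importance-sampling exercise in this tradition is the Metropolis single-spin-flip algorithm for the 2D Ising model. A compact sketch (the lattice size and temperature are arbitrary choices):

```python
import math, random

L, beta = 16, 0.44                        # lattice size, inverse temperature
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def delta_E(i, j):
    """Energy change from flipping spin (i, j); periodic boundaries, J = 1."""
    s = spins[i][j]
    nn = (spins[(i+1) % L][j] + spins[(i-1) % L][j]
          + spins[i][(j+1) % L] + spins[i][(j-1) % L])
    return 2.0 * s * nn

for sweep in range(1000):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = delta_E(i, j)
        # Metropolis criterion: always accept downhill, else accept with
        # probability exp(-beta * dE).
        if dE <= 0.0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

m = abs(sum(map(sum, spins))) / (L * L)   # magnetisation per spin
print("|m| = %.3f at beta = %.2f" % (m, beta))
```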

  10. A semi-physical simulation platform of attitude determination and control system for satellite

    Directory of Open Access Journals (Sweden)

    Yuanjin Yu

    2016-05-01

    Full Text Available A semi-physical simulation platform for an attitude determination and control system is proposed to verify the attitude estimator and controller on the ground. The platform comprises a simulation target, a host PC, attitude sensors, and actuators. The simulation target consists of a central processing unit board running the VxWorks operating system and several input/output boards connected via the Compact Peripheral Component Interconnect (CompactPCI) bus. The executable programs on the target are automatically generated from the simulation models in Simulink, based on the Real-Time Workshop of MATLAB. A three-axis gyroscope, a three-axis magnetometer, a sun sensor, a star tracker, three flywheels, and a Global Positioning System receiver are connected to the simulation target, forming the attitude control loop of a satellite. The simulation models of the attitude determination and control system are described in detail. Finally, the semi-physical simulation platform is used to demonstrate the viability and soundness of the control scheme of a micro-satellite. Comparison of the numerical simulation in Simulink with the semi-physical simulation shows that the platform works as intended and that the control scheme successfully achieves three-axis stabilization.
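    The loop closed on such a platform can be caricatured by a single-axis PD attitude controller commanding a reaction wheel. The inertia, gains, and noiseless rigid-body model below are illustrative assumptions, not the paper's micro-satellite design:

```python
# Single-axis PD attitude control loop (toy model: no sensor noise,
# no wheel dynamics; real systems use 3-axis quaternion control).
I_sat = 1.2               # spacecraft moment of inertia about the axis (kg m^2)
kp, kd = 0.08, 0.6        # proportional and derivative gains
theta, omega = 0.3, 0.0   # initial attitude error (rad) and body rate (rad/s)
dt = 0.1
for step in range(2000):
    torque = -kp * theta - kd * omega     # commanded reaction-wheel torque
    omega += (torque / I_sat) * dt        # rigid-body dynamics: I * d(omega)/dt = torque
    theta += omega * dt
print("residual error after %d s: %.2e rad" % (int(2000 * dt), theta))
```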

  11. Alternative High-Performance Ceramic Waste Forms

    Energy Technology Data Exchange (ETDEWEB)

    Sundaram, S. K. [Alfred Univ., NY (United States)

    2017-02-01

    This final report (M5NU-12-NY-AU # 0202-0410) summarizes the results of the project titled "Alternative High-Performance Ceramic Waste Forms," funded in FY12 by the Nuclear Energy University Program (NEUP Project # 12-3809) and led by Alfred University in collaboration with Savannah River National Laboratory (SRNL). The overall focus of the project is to advance fundamental understanding of crystalline ceramic waste forms and to demonstrate their viability as alternative waste forms to borosilicate glasses. We processed single- and multiphase hollandite waste forms based on simulated waste stream compositions provided by SRNL from the advanced fuel cycle initiative (AFCI) aqueous separation process developed in the Fuel Cycle Research and Development (FCR&D) program. For multiphase simulated waste forms, oxide and carbonate precursors were mixed together via ball milling with deionized water, using zirconia media in a polyethylene jar, for 2 h. The slurry was dried overnight and then separated from the media. The blended powders were then subjected to melting or spark plasma sintering (SPS) processes. Microstructural evolution and phase assemblages of these samples were studied using X-ray diffraction (XRD), scanning electron microscopy (SEM), energy-dispersive X-ray analysis (EDAX), wavelength dispersive spectrometry (WDS), transmission electron microscopy (TEM), selected area X-ray diffraction (SAXD), and electron backscatter diffraction (EBSD). These results showed that the processing methods have a significant effect on the microstructure and thus the performance of these waste forms. Ce substitution into zirconolite and pyrochlore materials was investigated using a combination of experimental (in situ XRD and X-ray absorption near edge structure (XANES)) and modeling techniques to study these single phases independently. In zirconolite materials, a transition from the 2M to the 4M polymorph was observed with increasing Ce content. The resulting

  12. Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Merzari, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, Y. Q. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Obabko, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States); Tautges, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Solberg, Jerome [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ferencz, Robert Mark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whitesides, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-21

    This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in a sodium-cooled fast reactor (SFR). This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully-integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor (ABTR) have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which demonstrates the feasibility of the fully-integrated simulation.

  13. Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

    CERN Multimedia

    2005-01-01

    Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

  14. Workshop on data acquisition and trigger system simulations for high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-31

    This report discusses the following topics: DAQSIM: A data acquisition system simulation tool; Front End and DCC Simulations for the SDC Straw Tube System; Simulation of Non-Blocking Data Acquisition Architectures; Simulation Studies of the SDC Data Collection Chip; Correlation Studies of the Data Collection Circuit & The Design of a Queue for this Circuit; Fast Data Compression & Transmission from a Silicon Strip Wafer; Simulation of SCI Protocols in MODSIM; Visual Design with vVHDL; Stochastic Simulation of Asynchronous Buffers; SDC Trigger Simulations; Trigger Rates, DAQ & Online Processing at the SSC; Planned Enhancements to MODSIM II & SIMOBJECT -- an Overview; DAGAR -- A Synthesis System; Proposed Silicon Compiler for Physics Applications; Timed-LOTOS in a PROLOG Environment: an Algebraic Language for Simulation; Modeling and Simulation of an Event Builder for High Energy Physics Data Acquisition Systems; A Verilog Simulation for the CDF DAQ; Simulation to Design with Verilog; The DZero Data Acquisition System: Model and Measurements; DZero Trigger Level 1.5 Modeling; Strategies for Optimizing Data Load in the DZero Triggers; Simulation of the DZero Level 2 Data Acquisition System; and A Fast Method for Calculating DZero Level 1 Jet Trigger Properties and Physics Input to DAQ Studies.

  15. Strategy Guideline: High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  16. High Performance Grinding and Advanced Cutting Tools

    CERN Document Server

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  17. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
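
    For illustration, the kind of energy-stable time stepping described above can be sketched for the 1D Allen-Cahn equation with a linearly stabilized semi-implicit scheme (a minimal sketch, not the authors' Taylor-series method; all parameters are hypothetical):

    ```python
    # Minimal sketch: linearly stabilized semi-implicit step for the 1D
    # Allen-Cahn equation u_t = eps^2 u_xx - (u^3 - u) on a periodic domain,
    # solved spectrally. For sufficiently large stabilization S the discrete
    # free energy is non-increasing, the property the abstract emphasizes.
    import numpy as np

    N, L, eps, dt, S = 256, 2 * np.pi, 0.1, 0.1, 2.0
    k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi          # spectral wavenumbers
    u = 0.05 * np.random.default_rng(0).standard_normal(N)

    def energy(u):
        ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
        return np.sum(0.5 * eps**2 * ux**2 + 0.25 * (u**2 - 1) ** 2) * (L / N)

    for step in range(201):
        rhs = np.fft.fft(u / dt + S * u - (u**3 - u))
        u = np.real(np.fft.ifft(rhs / (1 / dt + S + eps**2 * k**2)))
        if step % 50 == 0:
            print(f"step {step:4d}  energy {energy(u):.6f}")   # decreases
    ```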

  18. Physical simulations of cavity closure in a creeping material

    Energy Technology Data Exchange (ETDEWEB)

    Sutherland, H.J.; Preece, D.S.

    1985-09-01

    The finite element method has been used extensively to predict the creep closure of underground petroleum storage cavities in rock salt. Even though the numerical modeling requires many simplifying assumptions, the predictions have generally correlated with field data from instrumented wellheads; however, the field data are rather limited. To gain insight into the behavior of three-dimensional arrays of cavities and to obtain a larger data base for the verification of analytical simulations of creep closure, a series of six centrifuge simulation experiments was performed using a cylindrical block of modeling clay, a creeping material. Three of the simulations were conducted with single, centerline cavities, and three were conducted with a symmetric array of three cavities surrounding a central cavity. The models were subjected to body force loading using a centrifuge. For the single cavity experiments, the models were tested at accelerations of 100, 125 and 150 g's for 2 hours. For the multi-cavity experiments, the simulations were conducted at 100 g's for 3.25 hours. The results are analyzed using dimensional analyses. The analyses illustrate that the centrifuge yields self-consistent simulations of the creep closure of fluid-filled cavities and that the interaction of three-dimensional cavity layouts can be investigated using this technique.
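
    The similitude argument behind such centrifuge tests can be stated in one line (a standard scaling relation, given here for orientation; it is implied by, not quoted from, the report): a model reduced in length by a factor N and spun at N g reproduces prototype self-weight stresses at homologous depths.

    ```latex
    % Stress similarity in centrifuge modeling: model depth z/N under
    % acceleration Ng carries the same self-weight stress as prototype
    % depth z under 1 g.
    \sigma_{\mathrm{model}} = \rho\,(N g)\,\frac{z}{N} = \rho\, g\, z = \sigma_{\mathrm{prototype}}
    ```

    Because creep rate in such materials is governed primarily by stress, matching stresses is what allows a few hours in the centrifuge to stand in for the long-term closure of full-scale cavities.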

  19. Physics for JavaScript games, animation, and simulations with HTML5 Canvas

    CERN Document Server

    Dobre, Adrian

    2014-01-01

    Have you ever wanted to include believable physical behaviors in your games and projects to give them that extra edge? Physics for JavaScript Games, Animation, and Simulations teaches you how to incorporate real physics, such as gravity, friction, and buoyancy, into your HTML5 games, animations, and simulations. It also includes more advanced topics, such as particle systems, which are essential for creating effects such as sparks or smoke. The book also addresses the key issue of balancing accuracy and simplicity in your games and simulations, and the final chapters provide you with the infor

  20. Eighteenth Workshop on Recent Developments in Computer Simulation Studies in Condensed Matter Physics

    CERN Document Server

    Landau, David P; Schüttler, Heinz-Bernd; Computer Simulation Studies in Condensed-Matter Physics XVIII

    2006-01-01

    This volume represents a "status report" emanating from presentations made during the 18th Annual Workshop on Computer Simulation Studies in Condensed Matter Physics at the Center for Simulational Physics at the University of Georgia in March 2005. It provides a broad overview of the most recent advances in the field, spanning the range from statistical physics to soft condensed matter and biological systems. Results on nanostructures and materials are included, as are several descriptions of advances in quantum simulations and quantum computing as well as methodological advances.

  1. Game simulates destruction according to the laws of physics

    Science.gov (United States)

    Banks, Michael

    2009-07-01

    Video-game enthusiasts who are usually disappointed by unrealistic physical effects should be delighted with a new game that claims to take into account the actual mass and density of buildings for the first time.

  2. Simulation of Engine Internal Flows Using Digital Physics

    Directory of Open Access Journals (Sweden)

    Halliday J.

    2006-12-01

    Full Text Available This paper presents simulations of engine intake port and cylinder flows performed using PowerFLOW software. The numerical technique behind PowerFLOW, called Digital Physics, is based on statistical kinetic theory and is numerically stable, so divergence does not occur during calculations. Digital Physics uses large numbers of computational cells with a simple turbulence model, giving grid independence and high levels of accuracy. In addition, the technique is explicit in time, so a transient simulation is always obtained. The paper outlines the numerical technique and presents details of an engine port and cylinder simulation.
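
    PowerFLOW's Digital Physics is proprietary, but the flavor of an explicit, kinetic-theory-based update can be conveyed with a generic lattice-Boltzmann BGK step (a sketch only; this is not PowerFLOW's actual algorithm, and all parameters are hypothetical):

    ```python
    # Generic D2Q9 lattice-Boltzmann BGK collide-and-stream step, illustrating
    # the explicit, kinetic style of solver the abstract describes. This is
    # NOT PowerFLOW's proprietary scheme; parameters are hypothetical.
    import numpy as np

    nx, ny, tau = 64, 32, 0.8                     # grid size, relaxation time
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)  # lattice weights

    def equilibrium(rho, ux, uy):
        cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
        return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2
                                         - 1.5 * (ux**2 + uy**2))

    f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
    for step in range(100):
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += (equilibrium(rho, ux, uy) - f) / tau      # explicit BGK collision
        for i in range(9):                             # streaming along links
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    ```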

  3. C++ Toolbox for Object-Oriented Modeling and Dynamic Simulation of Physical Systems

    DEFF Research Database (Denmark)

    Wagner, Falko Jens; Poulsen, Mikael Zebbelin

    1999-01-01

    This paper presents the efforts made in an ongoing project that exploits the advantages of using object-oriented methodologies for describing and simulating dynamical systems. The background for this work is a search for new and better ways to simulate physical systems.
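
    The object-oriented separation the authors pursue (their toolbox is C++; below is a Python analogue with hypothetical class names) typically decouples the description of a physical system from the numerical solver that integrates it:

    ```python
    # Python analogue (hypothetical names) of the object-oriented split the
    # paper advocates: systems describe their own dynamics, and one generic
    # solver integrates any of them.
    import numpy as np

    class DynamicalSystem:
        def derivative(self, t, state):
            raise NotImplementedError

    class Pendulum(DynamicalSystem):
        def __init__(self, length=1.0, g=9.81):
            self.length, self.g = length, g

        def derivative(self, t, state):
            theta, omega = state
            return np.array([omega, -(self.g / self.length) * np.sin(theta)])

    def rk4_step(system, t, state, dt):
        k1 = system.derivative(t, state)
        k2 = system.derivative(t + dt / 2, state + dt / 2 * k1)
        k3 = system.derivative(t + dt / 2, state + dt / 2 * k2)
        k4 = system.derivative(t + dt, state + dt * k3)
        return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    system, state, t, dt = Pendulum(), np.array([0.5, 0.0]), 0.0, 0.01
    for _ in range(1000):
        state, t = rk4_step(system, t, state, dt), t + dt
    print(state)      # angle and angular velocity after 10 s
    ```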

  4. Radiation Hard High Performance Optoelectronic Devices Project

    Data.gov (United States)

    National Aeronautics and Space Administration — High-performance, radiation-hard, widely-tunable integrated laser/modulator chip and large-area avalanche photodetectors (APDs) are key components of optical...

  5. High Performance Methane Thrust Chamber (HPMTC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — ORBITEC proposes to develop a High-Performance Methane Thrust Chamber (HPMTC) to meet the demands of advanced chemical propulsion systems for deep-space mission...

  6. High Performance Liquid Chromatography Method for the ...

    African Journals Online (AJOL)

    High Performance Liquid Chromatography Method for the Determination of Anethole in Rat Plasma. ... Results: GC determination showed that anethole in the essential oil of star anise exhibited a ...

  7. Analog circuit design designing high performance amplifiers

    CERN Document Server

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  8. High performance visual display for HENP detectors

    CERN Document Server

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI etc., to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactiv...

  9. Experimental Comparisons between Tetrakis(dimethylamino)titanium Precursor-Based Atomic-Layer-Deposited and Physical-Vapor-Deposited Titanium-Nitride Gate for High-Performance Fin-Type Metal-Oxide-Semiconductor Field-Effect Transistors

    Science.gov (United States)

    Hayashida, Tetsuro; Endo, Kazuhiko; Liu, Yongxun; O'uchi, Shin-ichi; Matsukawa, Takashi; Mizubayashi, Wataru; Migita, Shinji; Morita, Yukinori; Ota, Hiroyuki; Hashiguchi, Hiroki; Kosemura, Daisuke; Kamei, Takahiro; Tsukada, Junichi; Ishikawa, Yuki; Yamauchi, Hiromi; Ogura, Atsushi; Masahara, Meishoku

    2012-04-01

    In this study, we successfully introduced an atomic-layer-deposited (ALD) titanium nitride (TiN) gate grown with a tetrakis(dimethylamino)titanium (TDMAT) precursor into fin-type metal-oxide-semiconductor field-effect transistor (FinFET) fabrication for the first time, and comparatively investigated the electrical characteristics, including mobility and threshold voltage (Vth) variation, of the fabricated ALD and physical-vapor-deposited (PVD)-TiN gate FinFETs. The ALD-TiN gate FinFETs showed superior conformality to the PVD-TiN gate FinFETs. The electron mobilities of the ALD- and PVD-TiN gate FinFETs were comparable in the small Lg region. It was also confirmed that the ALD-TiN gate FinFETs showed a smaller Vth variation than the PVD-TiN gate FinFETs.

  10. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. With the use of virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  11. CMS: Simulated Physical-Biogeochemical Data, SABGOM Model, Gulf of Mexico, 2005-2010

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains monthly mean ocean surface physical and biogeochemical data for the Gulf of Mexico simulated by the South Atlantic Bight and Gulf of Mexico...

  12. Simulation technology for evaluating student achievement in physical education classes.

    Directory of Open Access Journals (Sweden)

    Тіmoshenko A.V.

    2010-06-01

    Full Text Available The technology for evaluating student progress in physical exercise classes was studied. The possibility of using the modelling method in the educational process to determine student progress was examined. The value of mathematical models in pedagogical activity in the field of physical culture and sport is established. Mathematical models are proposed for evaluating the progress of students during swimming classes. The possibility of developing similar evaluation models is demonstrated for sports games, track-and-field, and gymnastics.

  13. Interferences and events: on epistemic shifts in physics through computer simulations

    CERN Document Server

    Warnke, Martin

    2017-01-01

    Computer simulations are omnipresent media in today's knowledge production. For scientific endeavors such as the detection of gravitational waves and the exploration of subatomic worlds, simulations are essential; however, the epistemic status of computer simulations is rather controversial as they are neither just theory nor just experiment. Therefore, computer simulations have challenged well-established insights and common scientific practices as well as our very understanding of knowledge. This volume contributes to the ongoing discussion on the epistemic position of computer simulations in a variety of physical disciplines, such as quantum optics, quantum mechanics, and computational physics. Originating from an interdisciplinary event, it shows that accounts of contemporary physics can constructively interfere with media theory, philosophy, and the history of science.

  14. Physical simulation of dry microburst using impinging jet model with ...

    African Journals Online (AJOL)

    In this work, an attempt has been made to simulate the dry microburst (microburst not accompanied by rain) experimentally using the impinging jet model for investigating the macroflow dynamics and scale (Reynolds number) dependency of the downburst flow. Flow visualization is done using a smoke generator for ...

  15. Multi-physics Simulation of Thermoelectric Generators through Numerically Modeling

    DEFF Research Database (Denmark)

    Chen, Min; Rosendahl, Lasse; Bach, Inger Palsgaard

    2007-01-01

    The governing equations taken from the assumption of local equilibrium and the heat transfer rate form of Onsager flux have been compared with those based on classical heat transfer formulation by a simplified one dimensional (1-D) thermoelectric generator (TEG) model. In this paper, the simulation...

  16. Perception of realism during mock resuscitations by pediatric housestaff: the impact of simulated physical features.

    Science.gov (United States)

    Donoghue, Aaron J; Durbin, Dennis R; Nadel, Frances M; Stryjewski, Glenn R; Kost, Suzanne I; Nadkarni, Vinay M

    2010-02-01

    Physical signs that can be seen, heard, and felt are one of the cardinal features that convey realism in patient simulations. In critically ill children, physical signs are relied on for clinical management despite their subjective nature. Current technology is limited in its ability to effectively simulate some of these subjective signs; at the same time, data supporting the educational benefit of simulated physical features as a distinct entity are lacking. We surveyed pediatric housestaff as to the realism of scenarios with and without simulated physical signs. Residents at three children's hospitals underwent a before-and-after assessment of performance in mock resuscitations requiring Pediatric Advanced Life Support (PALS), with a didactic review of PALS as the intervention between the assessments. Each subject was randomized to a simulator with physical features either activated (simulator group) or deactivated (mannequin group). Subjects were surveyed as to the realism of the scenarios. Univariate analysis of responses was done between groups. Subjects in the high-fidelity group were surveyed as to the relative importance of specific physical features in enhancing realism. Fifty-one subjects completed all surveys. Subjects in the high-fidelity group rated all scenarios more highly than low-fidelity subjects; the difference achieved statistical significance in scenarios featuring a patient in asystole or pulseless ventricular tachycardia (P < .05). PALS scenarios were rated as highly realistic by pediatric residents. Slight differences existed between subjects exposed to simulated physical features and those not exposed to them; these differences were most pronounced in scenarios involving pulselessness. Specific physical features were rated as more important than others by subjects. Data from these surveys may be informative in designing future simulation technology.

  17. Synergetic approach to simulation of physical wear of engineering technical systems

    Directory of Open Access Journals (Sweden)

    Kirillov Andrey Mikhaylovich

    2015-05-01

    Full Text Available Over time, defects and damage accumulate in the structural elements of engineering technical systems as a result of loading and environmental influence. Defects are any inconsistencies with normative documents, and damage is a discontinuity in the structure. Defects and damage lead to a decrease in the operational properties of structures (their bearing capacity, waterproofing, thermal resistance, etc.. Phenomena of this character are called physical wear. In the article the authors show how the phase trajectories of the physical wear and creep processes, together with the cusp catastrophe, can be used to determine the critical time point corresponding to the beginning of catastrophic damage growth in the system. An alternative approach to describing the physical wear and creep of pavement, consisting in comparing the asphalt concrete creep curve with the curve of the mathematical cusp-catastrophe model, is obtained. The synergetic approach applied here makes it possible to improve existing methods, and create new ones, for forecasting pavement life and assessing the physical wear of any technical constructions.
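
    For reference, the cusp catastrophe invoked by the authors is conventionally written as the one-parameter family of quartic potentials below (standard catastrophe-theory notation, not a formula quoted from the article):

    ```latex
    % Canonical cusp-catastrophe potential: state variable x, control
    % parameters a, b; equilibria satisfy V'(x) = 0.
    V(x) = \tfrac{1}{4}x^{4} + \tfrac{1}{2}a x^{2} + b x,
    \qquad V'(x) = x^{3} + a x + b = 0
    % Fold (bifurcation) set, where equilibria merge and the state can jump
    % discontinuously -- the analogue of catastrophic damage growth:
    4a^{3} + 27b^{2} = 0
    ```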

  18. Co-simulation of cyber-physical systems using HLA

    NARCIS (Netherlands)

    Nagele, T.; Hooman, J.

    2017-01-01

    The development of cyber-physical systems (CPSs) with mechanical, electrical and software components requires a multi-disciplinary approach. Moreover, the use of models is important to support trade-offs and design decisions early in the development process. Since the different engineering

  19. Wavy channel transistor for area efficient high performance operation

    KAUST Repository

    Fahad, Hossain M.

    2013-04-05

    We report a wavy channel FinFET-like transistor where the channel is wavy to increase its width without any area penalty, thereby increasing its drive current. Through simulation and experiments, we show that such a device architecture is capable of high-performance operation compared to conventional FinFETs, with comparatively higher area efficiency, lower chip latency, and lower power consumption.

  20. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  1. Toward high performance in Powder Metallurgy

    Directory of Open Access Journals (Sweden)

    Torralba, José M.

    2014-06-01

    Full Text Available Powder Metallurgy (PM) is a technology well known for the mass production of parts at low cost, but usually with worse mechanical properties than the same parts obtained by alternative routes. Using this technology, however, high performance materials can be obtained, depending on the processing route and the type and amount of porosity. In this paper, a brief review of the capabilities of powder technology is made with the objective of attaining the highest level of mechanical and physical properties. For this purpose, different processing strategies can be chosen: acting on the density/porosity level and the properties of the pores; acting on strengthening mechanisms other than the density of the material (the alloying system, the microstructure, the grain size, etc.); improving the sintering activity by different routes; and using techniques that avoid grain growth during sintering.

  2. Simulation of Forming Process as an Educational Tool Using Physical Modeling

    Science.gov (United States)

    Abdullah, A. B.; Muda, M. R.; Samad, Z.

    2008-01-01

    Metal forming process simulation requires very high costs, including the costs of dies, machines and material, and tight process control, since the process involves very high pressures. A physical modeling technique is developed that initiates a new era of educational tools for simulating the process effectively. Several publications and findings have…

  3. Teaching Harmonic Motion in Trigonometry: Inductive Inquiry Supported by Physics Simulations

    Science.gov (United States)

    Sokolowski, Andrzej; Rackley, Robin

    2011-01-01

    In this article, the authors present a lesson whose goal is to utilise a scientific environment to immerse a trigonometry student in the process of mathematical modelling. The scientific environment utilised during this activity is a physics simulation called "Wave on a String" created by the PhET Interactive Simulations Project at…

  4. The Effect of Metacognitive Training and Prompting on Learning Success in Simulation-Based Physics Learning

    Science.gov (United States)

    Moser, Stephanie; Zumbach, Joerg; Deibl, Ines

    2017-01-01

    Computer-based simulations are of particular interest to physics learning because they allow learners to actively manipulate graphical visualizations of complex phenomena. However, learning with simulations requires supportive elements to scaffold learners' activities. Thus, our motivation was to investigate whether direct or indirect…

  5. Strategy Guideline. Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties involved in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  6. Nuclear Forces and High-Performance Computing: The Perfect Match

    Energy Technology Data Exchange (ETDEWEB)

    Luu, T; Walker-Loud, A

    2009-06-12

    High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We briefly describe the state of the field and describe how progress in this field will impact the greater nuclear physics community. We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field.

  7. Simulation-based Education for Endoscopic Third Ventriculostomy: A Comparison Between Virtual and Physical Training Models.

    Science.gov (United States)

    Breimer, Gerben E; Haji, Faizal A; Bodani, Vivek; Cunningham, Melissa S; Lopez-Rios, Adriana-Lucia; Okrainec, Allan; Drake, James M

    2017-02-01

    The relative educational benefits of virtual reality (VR) and physical simulation models for endoscopic third ventriculostomy (ETV) have not been evaluated "head to head." To compare and identify the relative utility of a physical and VR ETV simulation model for use in neurosurgical training. Twenty-three neurosurgical residents and 3 fellows performed an ETV on both a physical and VR simulation model. Trainees rated the models using 5-point Likert scales evaluating the domains of anatomy, instrument handling, procedural content, and the overall fidelity of the simulation. Paired t tests were performed for each domain's mean overall score and individual items. The VR model has relative benefits compared with the physical model with respect to realistic representation of intraventricular anatomy at the foramen of Monro (4.5, standard deviation [SD] = 0.7 vs 4.1, SD = 0.6; P = .04) and the third ventricle floor (4.4, SD = 0.6 vs 4.0, SD = 0.9; P = .03), although the overall anatomy score was similar (4.2, SD = 0.6 vs 4.0, SD = 0.6; P = .11). For overall instrument handling and procedural content, the physical simulator outperformed the VR model (3.7, SD = 0.8 vs 4.5, SD = 0.5; P < .05). The overall fidelity of the two simulators was not perceived as significantly different. Simulation model selection should be based on educational objectives. Training focused on learning anatomy or decision-making for anatomic cues may be aided with the VR simulation model. A focus on developing manual dexterity and technical skills using endoscopic equipment in the operating room may be better learned on the physical simulation model.

  8. Toward High Performance in Industrial Refrigeration Systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different quality of information/data, are used for fault diagnosis as well as robust control design in industrial refrigeration systems.

  9. Towards High Performance in Industrial Refrigeration Systems

    DEFF Research Database (Denmark)

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different quality of information/data, are used for fault diagnosis as well as robust control design in industrial refrigeration systems.

  10. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  11. High performance computing at Sandia National Labs

    Energy Technology Data Exchange (ETDEWEB)

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  12. A Hybrid Model for Multiscale Laser Plasma Simulations with Detailed Collisional Physics

    Science.gov (United States)

    2017-06-15

    The goal of this effort is to capture as much important physical process as possible with as little computational cost as possible; to that end, the authors (David Bilyeu, Carl Lederman, and Richard Abrantes, Air Force Research Laboratory, Edwards AFB, CA) are in the early stages of characterizing a hybrid model for multiscale laser plasma simulations with detailed collisional physics. Approved for public release; distribution is unlimited (PA# 17383).

  13. Simulating cosmic ray physics on a moving mesh

    Science.gov (United States)

    Pfrommer, C.; Pakmor, R.; Schaal, K.; Simpson, C. M.; Springel, V.

    2017-03-01

    We discuss new methods to integrate the cosmic ray (CR) evolution equations coupled to magnetohydrodynamics on an unstructured moving mesh, as realized in the massively parallel AREPO code for cosmological simulations. We account for diffusive shock acceleration of CRs at resolved shocks and at supernova remnants in the interstellar medium (ISM) and follow the advective CR transport within the magnetized plasma, as well as anisotropic diffusive transport of CRs along the local magnetic field. CR losses are included in terms of Coulomb and hadronic interactions with the thermal plasma. We demonstrate the accuracy of our formalism for CR acceleration at shocks through simulations of plane-parallel shock tubes that are compared to newly derived exact solutions of the Riemann shock-tube problem with CR acceleration. We find that the increased compressibility of the post-shock plasma due to the produced CRs decreases the shock speed. However, CR acceleration at spherically expanding blast waves does not significantly break the self-similarity of the Sedov-Taylor solution; the resulting modifications can be approximated by a suitably adjusted, but constant adiabatic index. In first applications of the new CR formalism to simulations of isolated galaxies and cosmic structure formation, we find that CRs add an important pressure component to the ISM that increases the vertical scaleheight of disc galaxies and thus reduces the star formation rate. Strong external structure formation shocks inject CRs into the gas, but the relative pressure of this component decreases towards halo centres as adiabatic compression favours the thermal over the CR pressure.
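
    The "suitably adjusted, but constant adiabatic index" can be made concrete with the standard two-fluid relation (a textbook expression given for orientation, not a formula quoted from the paper):

    ```latex
    % Pressure-weighted effective adiabatic index of a thermal gas
    % (gamma_th = 5/3) mixed with relativistic cosmic rays (gamma_cr = 4/3):
    \gamma_{\mathrm{eff}}
      = \frac{\gamma_{\mathrm{th}} P_{\mathrm{th}} + \gamma_{\mathrm{cr}} P_{\mathrm{cr}}}
             {P_{\mathrm{th}} + P_{\mathrm{cr}}},
    \qquad \tfrac{4}{3} \le \gamma_{\mathrm{eff}} \le \tfrac{5}{3}
    ```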

  14. Tsunami Simulators in Physical Modelling - Concept to Practical Solutions

    Science.gov (United States)

    Chandler, Ian; Allsop, William; Robinson, David; Rossetto, Tiziana; McGovern, David; Todd, David

    2017-04-01

    Whilst many researchers have conducted simple 'tsunami impact' studies, few engineering tools are available to assess the onshore impacts of tsunami, with no agreed methods available to predict loadings on coastal defences, buildings or related infrastructure. Most previous impact studies have relied upon unrealistic waveforms (solitary or dam-break waves and bores) rather than full-duration tsunami waves, or have used simplified models of nearshore and over-land flows. Over the last 10+ years, pneumatic Tsunami Simulators for the hydraulic laboratory have been developed into an exciting and versatile technology, allowing the forces of real-world tsunami to be reproduced and measured in a laboratory environment for the first time. These devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example coastal defences and infrastructure. They have also reproduced full-duration tsunamis including Mercator 2004 and Tohoku 2011, both at 1:50 scale. Engineering scale models of these tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, pressures / forces on buildings, and scour at idealised buildings. This presentation will describe how these Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facilities within which they operate, and will present research results from three generations of Tsunami Simulators. Highlights of direct importance to natural hazard modellers and coastal engineers include measurements of wave run-up levels, forces on single and multiple buildings and comparison with previous theoretical predictions. Multiple buildings have two malign effects. The density of buildings to flow area (blockage ratio) increases water depths and flow velocities in the 'streets'. But the increased building densities themselves also increase the cost of flow per unit area (both personal and monetary). The most recent study with the Tsunami

  15. The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine

    Science.gov (United States)

    Liu, Yuan; Zhang, Xin; Zhang, Tianhong

    2017-11-01

    A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults using Kalman filter banks and reconstruct the signal using a real-time on-board adaptive model that combines a simplified real-time model with an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform was designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
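
    The core mechanism, fault detection from filter residuals followed by reconstruction from an on-board model, can be sketched in a few lines (a generic scalar illustration with hypothetical dynamics and thresholds; the paper's filter banks and engine model are far more elaborate):

    ```python
    # Generic sketch of residual-based sensor fault detection with analytical
    # redundancy: if the Kalman innovation exceeds a gate, the estimate falls
    # back to the model prediction. Dynamics and thresholds are hypothetical.
    import numpy as np

    a, q, r = 0.95, 0.01, 0.04        # plant x' = a x + w,  sensor y = x + v
    x_hat, p, gate = 0.0, 1.0, 3.0    # filter state, covariance, 3-sigma gate

    def kalman_step(x_hat, p, y):
        x_pred, p_pred = a * x_hat, a * p * a + q
        innov, s = y - x_pred, p_pred + r     # innovation and its variance
        if abs(innov) > gate * np.sqrt(s):
            return x_pred, p_pred, True       # fault: reconstruct from model
        k = p_pred / s
        return x_pred + k * innov, (1 - k) * p_pred, False

    rng, x = np.random.default_rng(1), 1.0
    for t in range(50):
        x = a * x + np.sqrt(q) * rng.standard_normal()
        y = x + np.sqrt(r) * rng.standard_normal() + (5.0 if t >= 30 else 0.0)
        x_hat, p, faulty = kalman_step(x_hat, p, y)   # bias injected at t=30
        if faulty:
            print(f"t={t}: sensor fault flagged, using model estimate {x_hat:.3f}")
    ```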

  16. Quantum simulation of 2D topological physics in a 1D array of optical cavities.

    Science.gov (United States)

    Luo, Xi-Wang; Zhou, Xingxiang; Li, Chuan-Feng; Xu, Jin-Shi; Guo, Guang-Can; Zhou, Zheng-Wei

    2015-07-06

    Orbital angular momentum of light is a fundamental optical degree of freedom characterized by an unlimited number of available angular momentum states. Although this unique property has proved invaluable in diverse recent studies ranging from optical communication to quantum information, it has not been considered useful or even relevant for simulating nontrivial physics problems such as topological phenomena. Contrary to this misconception, we demonstrate the incredible value of orbital angular momentum of light for quantum simulation by showing theoretically how it allows the study of a variety of important 2D topological physics in a 1D array of optical cavities. This application of orbital angular momentum of light not only reduces the required physical resources but also increases the feasible scale of simulation, and thus makes it possible to investigate important topics such as edge-state transport and topological phase transition in a small simulator ready for immediate experimental exploration.

  17. GENASIS Basics: Object-oriented utilitarian functionality for large-scale physics simulations

    Science.gov (United States)

    Cardall, Christian Y.; Budiardja, Reuben D.

    2015-11-01

    Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.
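
    GENASIS Basics itself is Fortran 2003; the fragment below merely illustrates, in Python, the kind of rudimentary physical-units bookkeeping such utilitarian layers provide (a hypothetical class, not the GENASIS API):

    ```python
    # Hypothetical illustration (not the GENASIS API) of physical-units
    # bookkeeping: quantities carry SI dimension exponents and refuse to be
    # added to quantities of incompatible dimension.
    class Quantity:
        def __init__(self, value, dims):       # dims = (m, kg, s) exponents
            self.value, self.dims = value, dims

        def __add__(self, other):
            if self.dims != other.dims:
                raise ValueError("incompatible dimensions")
            return Quantity(self.value + other.value, self.dims)

        def __mul__(self, other):
            return Quantity(self.value * other.value,
                            tuple(a + b for a, b in zip(self.dims, other.dims)))

        def __repr__(self):
            m, kg, s = self.dims
            return f"{self.value} [m^{m} kg^{kg} s^{s}]"

    METER, SECOND = Quantity(1.0, (1, 0, 0)), Quantity(1.0, (0, 0, 1))
    print(METER * METER)        # 1.0 [m^2 kg^0 s^0]
    # METER + SECOND            # would raise ValueError: incompatible dimensions
    ```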

  18. Superman to the rescue: Simulating physical invulnerability attenuates exclusion-related interpersonal biases.

    Science.gov (United States)

    Huang, Julie Y; Ackerman, Joshua M; Bargh, John A

    2013-05-01

    People cope with social exclusion both by seeking reconnection with familiar individuals and by denigrating unfamiliar and disliked others. These reactions can be seen as adaptive responses in ancestral environments where ostracism exposed people to physical dangers and even death. To the extent that reactions to ostracism evolved to minimize exposure to danger, alleviating these foundational concerns with danger may lessen people's need to cope with exclusion. Three studies demonstrate how a novel physical invulnerability simulation lessens both positive and negative reactions to social exclusion. Study 1 found that simulating physical invulnerability lessened exclusion-triggered negative attitudes toward stigmatized groups, and demonstrated that perceived invulnerability to injury (vs. imperviousness to pain) accounted for this effect. Studies 2 and 3 focused on another facet of social bias by revealing that simulating physical invulnerability lessened the desire for social connection.

  19. High Performance Networks for High Impact Science

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  20. An Introduction to High Performance Fortran

    Directory of Open Access Journals (Sweden)

    John Merlin

    1995-01-01

    Full Text Available High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.
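
    HPF's DISTRIBUTE(BLOCK) directive asks the compiler to split an array evenly across processor memories; a hand-rolled analogue in Python with mpi4py (assumed available; the script name is hypothetical) conveys the idea:

    ```python
    # Hand-rolled analogue of HPF's DISTRIBUTE (BLOCK): each rank owns one
    # contiguous block of a global array and works on it independently; a
    # collective reduction plays the role of HPF's implicit global operations.
    # Run with e.g.:  mpiexec -n 4 python block_sum.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1_000_000
    counts = [n // size + (1 if r < n % size else 0) for r in range(size)]
    start = sum(counts[:rank])                       # this rank's block offset
    local = np.arange(start, start + counts[rank], dtype=np.float64)

    total = comm.allreduce(local.sum(), op=MPI.SUM)  # global reduction
    if rank == 0:
        print(total)                                 # sum of 0 .. n-1
    ```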

  1. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
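
    The grouping idea in the claim can be illustrated schematically (the thread data below is made up; this is not the patented apparatus itself):

    ```python
    # Schematic of the disclosed idea: group threads by the addresses of their
    # calling instructions so that outliers (potentially defective threads)
    # stand out. The stack data here is fabricated for illustration.
    from collections import defaultdict

    # thread id -> call-site addresses from its current stack trace
    stacks = {
        0: (0x4005A0, 0x400B10), 1: (0x4005A0, 0x400B10),
        2: (0x4005A0, 0x400B10), 3: (0x4005A0, 0x400C44),   # the odd one out
    }

    groups = defaultdict(list)
    for tid, addrs in stacks.items():
        groups[addrs].append(tid)

    for addrs, tids in sorted(groups.items(), key=lambda kv: len(kv[1])):
        print(f"{len(tids)} thread(s) at {[hex(a) for a in addrs]}: {tids}")
    # The smallest groups are the first place to look for defective threads.
    ```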

  2. High Performance Work Systems for Online Education

    Science.gov (United States)

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  3. Optimization and validation of high performance liquid ...

    African Journals Online (AJOL)

    Optimization and validation of high performance liquid chromatography-ultra violet method for quantitation of metoprolol in rabbit plasma: application to ... Methods: Mobile phase of methanol and 50 mM ammonium dihydrogen phosphate solution (50:50) at pH 3.05 was used for separation of metoprolol on BDS hypersil ...

  4. Project materials [Commercial High Performance Buildings Project

    Energy Technology Data Exchange (ETDEWEB)

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  5. Comparing Dutch and British high performing managers

    NARCIS (Netherlands)

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  6. Performance, Performance System, and High Performance System

    Science.gov (United States)

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  7. Gradient High Performance Liquid Chromatography Method ...

    African Journals Online (AJOL)

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid dosage form. Methods: HPLC determination was carried out on an Agilent XDB C-18 column (4.6 x 150mm, 5 μ particle size) with a gradient ...

  8. Teacher Accountability at High Performing Charter Schools

    Science.gov (United States)

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  9. Technology Leadership in Malaysia's High Performance School

    Science.gov (United States)

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    Headmaster as leader of the school also plays a role as a technology leader. This applies to the high performance schools (HPS) headmaster as well. The HPS excel in all aspects of education. In this study, researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  10. High Performance Computing and Communications Panel Report.

    Science.gov (United States)

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  11. High Performance Liquid Chromatographic Determination of ...

    African Journals Online (AJOL)

    Purpose: To develop a simple, precise and rapid high-performance liquid chromatographic technique coupled with photodiode array detection (DAD) method for the simultaneous determination of rutin, quercetin, luteolin, genistein, galangin and curcumin in propolis. Methods: Ultrasound-assisted extraction was applied to ...

  12. Rapid high performance liquid chromatographic determination of ...

    African Journals Online (AJOL)

    Rapid high performance liquid chromatographic determination of chlorpropamide in human plasma. MTB Odunola, IS Enemali, M Garba, OO Obodozie. Abstract. Samples were extracted with dichloromethane and the organic layer evaporated to dryness. The residue was dissolved in methanol, and 25 µl aliquot injected ...

  13. High Performance Liquid Chromatography Method for the ...

    African Journals Online (AJOL)

    A high performance liquid chromatography (HPLC) technique with UV-VIS detection was developed for the determination of the compound in rat plasma. Keywords: Anethole, High performance liquid chromatography, Star anise, Essential oil, Rat plasma, Illicium verum Hook.

  14. High-performance computing reveals missing genes

    OpenAIRE

    Whyte, Barry James

    2010-01-01

    Scientists at the Virginia Bioinformatics Institute and the Department of Computer Science at Virginia Tech have used high-performance computing to locate small genes that have been missed by scientists in their quest to define the microbial DNA sequences of life.

  15. High performance thermal insulation systems (HiPTI). Vacuum insulated products (VIP). Proceedings of the international conference and workshop

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, M.; Bertschinger, H.

    2001-07-01

    These are the proceedings of the International Conference and Workshop held at EMPA Duebendorf, Switzerland, in January 2001. The papers presented on the conference's first day included contributions on the role of high-performance insulation in energy efficiency - providing an overview of available technologies and reviewing physical aspects of heat transfer and the development of thermal insulation as well as the state of the art of glazing technologies such as high-performance and vacuum glazing. Also, vacuum-insulated products (VIP) with fumed silica, applications of VIP systems in technical building systems, nanogels, VIP packaging materials and technologies, measurement of physical properties, VIP for advanced retrofit solutions for buildings, and existing and future applications for advanced low-energy buildings are discussed. Finally, research and development concerning VIP for buildings is reported on. The workshops held on the second day covered a preliminary study on high-performance thermal insulation materials with gastight porosity, flexible pipes with high-performance thermal insulation, the evaluation of modern insulation systems by simulation methods, and the development of vacuum insulation panels with a stainless steel envelope.

  16. Can We Make Time for Physical Activity? Simulating Effects of Daily Physical Activity on Mortality

    OpenAIRE

    Geoff Rowe; Tremblay, Mark S.; Douglas G. Manuel

    2012-01-01

    Background. The link between physical activity and health outcomes is well established, yet levels of physical activity remain low. This study quantifies effects on mortality of the substitution of low activity episodes by higher activity alternatives using time-use data. Methods. Sample time profiles are representative of the Canadian population (n=19,597). Activity time and mortality are linked using metabolic equivalents (METs). Mortality risk is determined by peak daily METs and hours sp...

  17. Distributed GIS Computing for High Performance Simulation and Visualization Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Today, the ability of sensors to generate geographical data is virtually limitless. Although NASA now provides (together with other agencies such as the USGS) a...

  18. Calculating Fragmentation Functions in Heavy Ion Physics Simulations

    Science.gov (United States)

    Hughes, Charles; Aukerman, Alex; Krobatsch, Thomas; Matyja, Adam; Nattrass, Christine; Neuhaus, James; Sorensen, Soren; Witt, William

    2017-09-01

    A hot dense liquid of quarks and gluons called a Quark Gluon Plasma (QGP) is formed in high energy nuclear collisions at the Relativistic Heavy Ion Collider and the Large Hadron Collider. The high energy partons which scatter during these collisions can serve as probes for measuring QGP bulk properties. The details of how partons lose energy to the QGP medium as they traverse it can be used to constrain models of their energy loss. Specifically, measurements of fragmentation functions in the QGP medium can provide experimental constraints on theoretical parton energy loss mechanisms. However, the high background in heavy ion collisions limits the precision of these measurements. We investigate methods for measuring fragmentation functions in a simple model in order to assess their feasibility. We generate a data-driven heavy ion background based on measurements of charged hadron transverse momentum spectra, charged hadron azimuthal flow, and charged hadron rapidity spectra. We then calculate fragmentation functions in this heavy ion background and compare to calculations in proton-proton simulations. We present the current status of these studies.
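
    A data-driven background of the kind described can be mocked up in a few lines (distribution shapes and parameter values below are illustrative placeholders, not the measured spectra the authors use):

    ```python
    # Toy heavy-ion background: hadron pT drawn from a falling exponential
    # spectrum and azimuthal angles modulated by elliptic flow v2 via
    # accept-reject sampling. All shapes/values are placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    n, mean_pt, v2, psi = 2000, 0.7, 0.08, 0.0    # multiplicity, GeV/c, flow

    pt = rng.exponential(mean_pt, n)              # falling pT spectrum
    phi = []
    while len(phi) < n:                           # dN/dphi ~ 1 + 2 v2 cos(2(phi - psi))
        cand = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * np.cos(2.0 * (cand - psi)):
            phi.append(cand)
    eta = rng.uniform(-0.9, 0.9, n)               # flat mid-rapidity slice

    print(f"<pT> = {pt.mean():.3f} GeV/c over {n} background hadrons")
    ```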

  19. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  20. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  1. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also to the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development; parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  2. Cactus and Visapult: An ultra-high performance grid-distributed visualization architecture using connectionless protocols

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Shalf, John

    2002-08-31

    This past decade has seen rapid growth in the size, resolution, and complexity of Grand Challenge simulation codes. This trend is accompanied by a trend towards multinational, multidisciplinary teams who carry out this research in distributed teams, and the corresponding growth of Grid infrastructure to support these widely distributed Virtual Organizations. As the number and diversity of distributed teams grow, the need for visualization tools to analyze and display multi-terabyte, remote data becomes more pronounced and more urgent. One such tool that has been successfully used to address this problem is Visapult. Visapult is a parallel visualization tool that employs Grid-distributed components, latency-tolerant visualization and graphics algorithms, along with high performance network I/O, in order to achieve effective remote analysis of massive datasets. In this paper we discuss improvements to network bandwidth utilization and responsiveness of the Visapult application that result from using connectionless protocols to move data payload between the distributed Visapult components and a Grid-enabled, high performance physics simulation used to study gravitational waveforms of colliding black holes: the Cactus code. These improvements have boosted Visapult's network efficiency to 88-96 percent of the maximum theoretical available bandwidth on multi-gigabit Wide Area Networks, and greatly enhanced interactivity. Such improvements are critically important for future development of effective interactive Grid applications.
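
    The core transport idea, moving bulk payload over connectionless (UDP) datagrams and tolerating loss rather than paying TCP's round-trip costs, can be sketched as follows. The port, payload size, and packet framing are illustrative assumptions, not Visapult's actual wire protocol.

        # Sketch of the connectionless-transport pattern: bulk payload moves over
        # sequence-numbered UDP datagrams, trading per-packet delivery guarantees
        # for high sustained bandwidth. Port, payload size, and framing are
        # illustrative assumptions, not Visapult's actual protocol.
        import socket
        import struct

        PAYLOAD = 1024                   # bytes of simulation data per datagram
        HEADER = struct.Struct("!II")    # (sequence number, total datagram count)

        def send_block(data, addr=("127.0.0.1", 9999)):
            """Slice a block of bytes into sequence-numbered UDP datagrams."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            chunks = [data[i:i + PAYLOAD] for i in range(0, len(data), PAYLOAD)]
            for seq, chunk in enumerate(chunks):
                sock.sendto(HEADER.pack(seq, len(chunks)) + chunk, addr)
            sock.close()

        def receive_block(port=9999, timeout=2.0):
            """Reassemble whatever datagrams arrive; lost ones stay None."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(("", port))
            sock.settimeout(timeout)
            slots = None
            try:
                while slots is None or any(s is None for s in slots):
                    packet, _ = sock.recvfrom(HEADER.size + PAYLOAD)
                    seq, total = HEADER.unpack_from(packet)
                    if slots is None:
                        slots = [None] * total
                    slots[seq] = packet[HEADER.size:]
            except socket.timeout:
                pass  # a latency-tolerant consumer renders with what arrived
            sock.close()
            return slots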

  3. Physically Based Modeling and Simulation with Dynamic Spherical Volumetric Simplex Splines

    Science.gov (United States)

    Tan, Yunhao; Hua, Jing; Qin, Hong

    2009-01-01

    In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. In this framework, we first develop an accurate and efficient algorithm to reconstruct the high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. Through the tight coupling of Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be directly computed from the object’s geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to the biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with ground truth obtained through intra-operative magnetic resonance imaging and real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique. PMID:20161636

  4. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  5. High performance cloud auditing and applications

    CERN Document Server

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with a focus on technical aspects and the feasibility of auditing issues in federated cloud computing environments. In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  6. Securing Cloud Infrastructure for High Performance Scientific Computations Using Cryptographic Techniques

    OpenAIRE

    Patra, G. K.; Nilotpal Chakraborty

    2014-01-01

    In today's scenario, a large number of engineering and scientific applications require high performance computation power in order to simulate various models. Scientific and engineering models such as Climate Modeling, Weather Forecasting, Large Scale Ocean Modeling, Cyclone Prediction, etc. require parallel processing of data on high performance computing infrastructure. With the rise of cloud computing, it would be great if such high performance computations can be provided as a service to th...

  7. Failure analysis of high performance ballistic fibers

    OpenAIRE

    Spatola, Jennifer S

    2015-01-01

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mo...

  8. Performance tuning for high performance computing systems

    OpenAIRE

    Pahuja, Himanshu

    2017-01-01

    A distributed system is composed of loosely coupled software components integrated with the underlying hardware resources, which can be distributed over the standard internet framework. High performance computing used to involve the utilization of supercomputers which could churn a lot of computing power to process massively complex computational tasks, but it is now evolving across distributed systems, thereby having the ability to utilize geographically distributed computing resources. We...

  9. Nanoparticles for high performance concrete (HPC)

    OpenAIRE

    Torgal, Fernando Pacheco; Miraldo, Sérgio; Ding, Yining; J.A. Labrincha

    2013-01-01

    According to the 2011 ERMCO statistics, only 11% of the production of ready-mixed concrete relates to the high performance concrete (HPC) target. This percentage has remained unchanged since at least 2001, which appears a strange choice on the part of the construction industry, as HPC offers several advantages over normal-strength concrete, specifically those of high strength and durability. It allows for concrete structures requiring less steel reinforcement and offers a longer serviceable life...

  10. Robust High Performance Aquaporin based Biomimetic Membranes

    DEFF Research Database (Denmark)

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

    Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect......% rejection for urea and a water permeability around 10 L/(m2h) with 2M NaCl as draw solution. Our results demonstrate the feasibility of using aquaporin proteins in biomimetic membranes for technological applications....

  11. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  12. High performance work practices, innovation and performance

    DEFF Research Database (Denmark)

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from......, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in HPWP literature and potential variables that can facilitate or hinder the effects of these practices of innovation- and performance...

  13. vSphere high performance cookbook

    CERN Document Server

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  14. High Performance Electronics on Flexible Silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer-based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry-compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits, which include metal-oxide-semiconductor field-effect transistors, the first demonstration of flexible Fin field-effect transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low-cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in-depth studies on the electrical, mechanical, and thermal properties of the fabricated devices.

  15. Supervising the highly performing general practice registrar.

    Science.gov (United States)

    Morgan, Simon

    2014-02-01

    There is extensive literature on the poorly performing learner. In contrast, there is very little written on supervising the highly performing registrar. Outstanding trainees with high-level knowledge and skills can be a challenge for supervisors to supervise and teach. Narrative review and discussion. As with all learners, a learning-needs analysis is fundamental to successful supervision. The key to effective teaching of the highly performing registrar is to contextualise clinical knowledge and skills with the wisdom of accumulated experience. Moreover, supervisors must provide a stimulating learning environment, with regular opportunities for intellectual challenge. The provision of specific, constructive feedback is essential. There are potential opportunities to extend the highly performing registrar in all domains of general practice, namely communication skills and patient-centred care, applied knowledge and skills, population health, professionalism, and organisation and legal issues. Specific teaching strategies include role-play, video-consultation review, random case analysis, posing hypothetical clinical scenarios, role modelling and teaching other learners. © 2014 John Wiley & Sons Ltd.

  16. High Performance with Prescriptive Optimization and Debugging

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo

    Parallel programming is the dominant approach to achieve high performance in computing today. Correctly writing efficient and fast parallel programs is a big challenge mostly carried out by experts. We investigate optimization and debugging of parallel programs. We argue that automatic paralleliz...... analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail, we argue that a hybrid approach can be effective. Using compiler feedback, we propose to use the programmer’s intuition and insight to achieve high performance. Compiler feedback...... the prescriptive debugging model, which is a user-guided model that allows the programmer to use his intuition to diagnose bugs in parallel programs. The model is scalable, yet capable enough to be general-purpose. In our evaluation we demonstrate low run-time overhead and logarithmic scalability. This enable...

  17. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded even the dreams of their protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies that will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  18. Laser additive manufacturing of high-performance materials

    CERN Document Server

    Gu, Dongdong

    2015-01-01

    This book entitled “Laser Additive Manufacturing of High-Performance Materials” covers the specific aspects of laser additive manufacturing of high-performance new materials components based on an unconventional materials incremental manufacturing philosophy, in terms of materials design and preparation, process control and optimization, and theories of physical and chemical metallurgy. This book describes the capabilities and characteristics of the development of new metallic materials components by laser additive manufacturing process, including nanostructured materials, in situ composite materials, particle reinforced metal matrix composites, etc. The topics presented in this book, similar as laser additive manufacturing technology itself, show a significant interdisciplinary feature, integrating laser technology, materials science, metallurgical engineering, and mechanical engineering. This is a book for researchers, students, practicing engineers, and manufacturing industry professionals interested i...

  19. Technical Basis for Physical Fidelity of NRC Control Room Training Simulators for Advanced Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Minsk, Brian S.; Branch, Kristi M.; Bates, Edward K.; Mitchell, Mark R.; Gore, Bryan F.; Faris, Drury K.

    2009-10-09

    The objective of this study is to determine how simulator physical fidelity influences the effectiveness of training the regulatory personnel responsible for examination and oversight of operating personnel and inspection of technical systems at nuclear power reactors. It seeks to contribute to the U.S. Nuclear Regulatory Commission’s (NRC’s) understanding of the physical fidelity requirements of training simulators. The goal of the study is to provide an analytic framework, data, and analyses that inform NRC decisions about the physical fidelity requirements of the simulators it will need to train its staff for assignment at advanced reactors. These staff are expected to come from increasingly diverse educational and experiential backgrounds.

  20. Computational physics an introduction to Monte Carlo simulations of matrix field theory

    CERN Document Server

    Ydri, Badis

    2017-01-01

    This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...

  1. XVI 'Jacques-Louis Lions' Spanish-French School on Numerical Simulation in Physics and Engineering

    CERN Document Server

    Roldán, Teo; Torrens, Juan

    2016-01-01

    This book presents lecture notes from the XVI ‘Jacques-Louis Lions’ Spanish-French School on Numerical Simulation in Physics and Engineering, held in Pamplona (Navarra, Spain) in September 2014. The subjects covered include: numerical analysis of isogeometric methods, convolution quadrature for wave simulations, mathematical methods in image processing and computer vision, modeling and optimization techniques in food processes, bio-processes and bio-systems, and GPU computing for numerical simulation. The book is highly recommended to graduate students in Engineering or Science who want to focus on numerical simulation, either as a research topic or in the field of industrial applications. It can also benefit senior researchers and technicians working in industry who are interested in the use of state-of-the-art numerical techniques in the fields addressed here. Moreover, the book can be used as a textbook for master courses in Mathematics, Physics, or Engineering.

  2. Performance Characteristics of HYDRA - a Multi-Physics simulation code from Lawrence Livermore National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Langer, Steven H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karlin, Ian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-09

    HYDRA is used to simulate a variety of experiments carried out at the National Ignition Facility (NIF) [4] and other high energy density physics facilities. HYDRA has packages to simulate radiation transfer, atomic physics, hydrodynamics, laser propagation, and a number of other physics effects. HYDRA has over one million lines of code and includes both MPI and thread-level (OpenMP and pthreads) parallelism. This paper measures the performance characteristics of HYDRA using hardware counters on an IBM BlueGene/Q system. We report key ratios such as bytes/instruction and memory bandwidth for several different physics packages. The total number of bytes read and written per time step is also reported. We show that none of the packages which use significant time are memory bandwidth limited on a Blue Gene/Q. HYDRA currently issues very few SIMD instructions. The pressure on memory bandwidth will increase if high levels of SIMD instructions can be achieved.
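
    The ratios the paper reports are simple derived quantities over raw counter totals. A minimal sketch of that arithmetic follows; the sample numbers are invented for illustration, not HYDRA or Blue Gene/Q measurements.

        # Sketch of the derived metrics reported above: bytes per instruction and
        # achieved fraction of peak memory bandwidth, computed from raw hardware
        # counter totals. All sample numbers are invented, not HYDRA measurements.

        def derived_metrics(instructions, bytes_read, bytes_written,
                            elapsed_s, peak_bw_bytes_per_s):
            total_bytes = bytes_read + bytes_written
            achieved_bw = total_bytes / elapsed_s
            return {
                "bytes_per_instruction": total_bytes / instructions,
                "achieved_bw_GB_per_s": achieved_bw / 1e9,
                "bandwidth_utilization": achieved_bw / peak_bw_bytes_per_s,
            }

        # Example: 300 billion instructions moving 4 GB in 2 s against an assumed
        # 28.5 GB/s peak uses only ~7% of memory bandwidth, i.e. the package
        # would not be memory-bandwidth limited.
        print(derived_metrics(instructions=3.0e11, bytes_read=2.5e9,
                              bytes_written=1.5e9, elapsed_s=2.0,
                              peak_bw_bytes_per_s=28.5e9))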

  3. Learning and the variation in focus among physics students when using a computer simulation

    Directory of Open Access Journals (Sweden)

    Åke Ingerman

    2012-07-01

    This article presents a qualitative analysis of the essential characteristics of university students’ “focus of awareness” whilst engaged with learning physics related to the Bohr model with the aid of a computer simulation. The research is located within the phenomenographic research tradition, with empirical data comprising audio and video recordings of student discussions and interactions, supplemented by interviews. Analysis of this data resulted in descriptions of four qualitatively distinct focuses: Doing the Assignment, Observing the Presentation, Manipulating the Parameters and Exploring the Physics. The focuses are further elucidated in terms of students’ perceptions of learning and the nature of physics. It is concluded that the learning outcomes possible for the students are dependent on the focus that is adopted in the pedagogical situation. Implications for teaching physics using interactive-type simulations can be drawn through epistemological and meta-cognitive considerations of the kind of mindful interventions appropriate to a specific focus.

  4. Physical and Numerical Simulation of Aerodynamics of Cyclone Heating Device with Distributed Gas Input

    Directory of Open Access Journals (Sweden)

    E. N. Saburov

    2010-01-01

    The paper presents results of physical and numerical simulation of the aerodynamics of a cyclone heating device. Calculation models of axial and radial flow motion at various outlet diameters, and also of the cyclone flow trajectory, have been developed in the paper. The paper considers and compares experimental and calculated distributions of the tangential and axial components of the full flow rate. The comparison of numerical and physical experimental results has revealed good prospects for the use of the CFX® 10.0 software package for simulating the aerodynamics of cyclone heating devices and for further improvement of their aerodynamic calculation methodologies.

  5. Simulating hydrological responses with a physically based model in a mountainous watershed

    Directory of Open Access Journals (Sweden)

    Q. Xu

    2015-06-01

    A physical and distributed approach was proposed by Reggiani et al. (1998) to describe hydrological responses at the catchment scale. The rigorous balance equations for mass, momentum, energy and entropy are applied on divided spatial domains called Representative Elementary Watersheds (REWs). Based on the 2nd law of thermodynamics, Reggiani (1999) put forward several constitutive relations for hydrological processes. Together with the above equations, these establish the framework of a physically based distributed hydrological model. The crucial step for successfully applying this approach is to develop physically based closure relations for these terms and simplify the set of equations. The paper shows how a theoretical hydrological model based on the REW method was applied to hydrological response simulation for a humid watershed. The established model was used to carry out long-term (daily) runoff forecasting and short-term (storm event) hydrological simulation in the studied watershed, and the simulated results were analysed. These results and analysis proved that this physically based distributed hydrological model can produce satisfactory simulation results and describe the hydrological responses correctly. Finally, several aspects for improving the model, indicated by the results and analysis, were put forward to be carried out in the future.
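
    In its simplest form, the REW approach reduces to a per-watershed mass balance closed by a constitutive relation linking storage to outflow. The sketch below assumes a linear-reservoir closure (q = s/k), which is a common textbook simplification and not one of the constitutive relations derived by Reggiani.

        # Minimal illustration of the REW idea: each Representative Elementary
        # Watershed carries a water mass balance, ds/dt = p - q, closed by a
        # relation linking storage s to outflow q. The linear-reservoir closure
        # q = s / k used here is an assumed simplification, not one of the
        # constitutive relations derived by Reggiani.

        def rew_runoff(rainfall_mm, k_days=5.0, storage_mm=20.0, dt_days=1.0):
            """Step the single-REW mass balance over a daily rainfall series."""
            flows = []
            for p in rainfall_mm:
                q = storage_mm / k_days              # closure relation
                storage_mm += dt_days * (p - q)      # mass balance
                flows.append(q)
            return flows

        rain = [0, 12, 30, 5, 0, 0, 8, 0, 0, 0]      # mm/day, synthetic storm
        print([round(q, 1) for q in rew_runoff(rain)])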

  6. Monitoring Change of Body Fluid during Physical Exercise using Bioimpedance Spectroscopy and Finite Element Simulations

    Directory of Open Access Journals (Sweden)

    Lisa Röthlingshöfer

    2011-12-01

    Athletes need a balanced body composition in order to achieve maximum performance. Dehydration especially reduces power and endurance during physical exercise. Monitoring the body composition, with a focus on body fluid, may help to avoid reduction in performance and other health problems. For this, a potential measurement method is bioimpedance spectroscopy (BIS). BIS is a simple, non-invasive measurement method that allows different body compartments (body fluid, fat, fat-free mass) to be determined. However, because many physiological changes occur during physical exercise that can influence impedance measurements and distort results, it cannot be assumed that the BIS data are related to body fluid loss alone. To confirm that BIS can detect body fluid loss due to physical exercise, finite element (FE) simulations were done. Besides impedance, the current density contribution during a BIS measurement was also modeled to evaluate the influence of certain tissues on BIS measurements. Simulations were done using CST EM Studio (Computer Simulation Technology, Germany) and the Visible Human Data Set (National Library of Medicine, USA). In addition to the simulations, BIS measurements were also made on athletes. Comparison between the measured bioimpedance data and simulation data, as well as body weight loss during sport, indicates that BIS measurements are sensitive enough to monitor body fluid loss during physical exercise. doi:10.5617/jeb.178 J Electr Bioimp, vol. 2, pp. 79-85, 2011

  7. Monte Carlo 2000 Conference : Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications

    CERN Document Server

    Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro

    2001-01-01

    This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving, in particular, the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.

  8. Segmentation and Simulation of Objects Represented in Images using Physical Principles

    OpenAIRE

    Patrícia C. T. Gonçalves; Tavares, João Manuel R. S.; Natal Jorge, R.M.

    2008-01-01

    The main goals of the present work are to automatically extract the contour of an object and to simulate its deformation using a physical approach. In this work, to segment an object represented in an image, an initial contour is manually defined for it, which will then automatically evolve until it reaches the border of the desired object. In this approach, the contour is modelled by a physical formulation using the finite element method, and its temporal evolution to the desired final contour...

  9. Learning from avatars: Learning assistants practice physics pedagogy in a classroom simulator

    Directory of Open Access Journals (Sweden)

    Jacquelyn J. Chini

    2016-02-01

    [This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] Undergraduate students are increasingly being used to support course transformations that incorporate research-based instructional strategies. While such students are typically selected based on strong content knowledge and possible interest in teaching, they often do not have previous pedagogical training. The current training models make use of real students or classmates role playing as students as the test subjects. We present a new environment for facilitating the practice of physics pedagogy skills, a highly immersive mixed-reality classroom simulator, and assess its effectiveness for undergraduate physics learning assistants (LAs). LAs prepared, taught, and reflected on a lesson about motion graphs for five highly interactive computer generated student avatars in the mixed-reality classroom simulator. To assess the effectiveness of the simulator for this population, we analyzed the pedagogical skills LAs intended to practice and exhibited during their lessons and explored LAs’ descriptions of their experiences with the simulator. Our results indicate that the classroom simulator created a safe, effective environment for LAs to practice a variety of skills, such as questioning styles and wait time. Additionally, our analysis revealed areas for improvement in our preparation of LAs and use of the simulator. We conclude with a summary of research questions this environment could facilitate.

  10. Learning from avatars: Learning assistants practice physics pedagogy in a classroom simulator

    Science.gov (United States)

    Chini, Jacquelyn J.; Straub, Carrie L.; Thomas, Kevin H.

    2016-06-01

    [This paper is part of the Focused Collection on Preparing and Supporting University Physics Educators.] Undergraduate students are increasingly being used to support course transformations that incorporate research-based instructional strategies. While such students are typically selected based on strong content knowledge and possible interest in teaching, they often do not have previous pedagogical training. The current training models make use of real students or classmates role playing as students as the test subjects. We present a new environment for facilitating the practice of physics pedagogy skills, a highly immersive mixed-reality classroom simulator, and assess its effectiveness for undergraduate physics learning assistants (LAs). LAs prepared, taught, and reflected on a lesson about motion graphs for five highly interactive computer generated student avatars in the mixed-reality classroom simulator. To assess the effectiveness of the simulator for this population, we analyzed the pedagogical skills LAs intended to practice and exhibited during their lessons and explored LAs' descriptions of their experiences with the simulator. Our results indicate that the classroom simulator created a safe, effective environment for LAs to practice a variety of skills, such as questioning styles and wait time. Additionally, our analysis revealed areas for improvement in our preparation of LAs and use of the simulator. We conclude with a summary of research questions this environment could facilitate.

  11. Toward a theory of high performance.

    Science.gov (United States)

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts (including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart) have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  12. High performance HRM: NHS employee perspectives.

    Science.gov (United States)

    Hyde, Paula; Sparrow, Paul; Boaden, Ruth; Harris, Claire

    2013-01-01

    The purpose of this paper is to examine National Health Service (NHS) employee perspectives of how high performance human resource (HR) practices contribute to their performance. The paper draws on an extensive qualitative study of the NHS. A novel two-part method was used; the first part used focus group data from managers to identify high-performance HR practices specific to the NHS. Employees then conducted a card-sort exercise where they were asked how or whether the practices related to each other and how each practice affected their work. In total, 11 high performance HR practices relevant to the NHS were identified. Also identified were four reactions to a range of HR practices, which the authors developed into a typology according to anticipated beneficiaries (personal gain, organisation gain, both gain and no-one gains). Employees were able to form their own patterns (mental models) of performance contribution for a range of HR practices (60 interviewees produced 91 groupings). These groupings indicated three bundles particular to the NHS (professional development, employee contribution and NHS deal). These mental models indicate employee perceptions about how health services are organised and delivered in the NHS and illustrate the extant mental models of health care workers. As health services are rearranged and financial pressures begin to bite, these mental models will affect employee reactions to changes both positively and negatively. The novel method allows for identification of mental models that explain how NHS workers understand service delivery. It also delineates the complex and varied relationships between HR practices and individual performance.

  13. Strategy Guideline. High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  14. The monogroove high performance heat pipe

    Science.gov (United States)

    Alario, J.; Haslett, R.; Kosson, R.

    1981-06-01

    The development of the monogroove heat pipe, a fundamentally new high-performance device suitable for multi-kilowatt space radiator heat-rejection systems, is reported. The design separates heat transport and transfer functions, so that each can be separately optimized to yield heat transport capacities on the order of 25 kW/m. Test versions of the device have proven the concept of heat transport capacity control by pore dimensions and the permeability of the circumferential wall wick structure, which together render it insensitive to tilt. All cases tested were for localized, top-side heat input and cooling and produced results close to theoretical predictions.

  15. Playa: High-Performance Programmable Linear Algebra

    Directory of Open Access Journals (Sweden)

    Victoria E. Howle

    2012-01-01

    This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.
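
    Playa's overloaded operators are implemented as C++ expression templates; since the sketches in this document use Python, the analogue below illustrates only the underlying idea: operator overloading builds a deferred expression tree that is then evaluated in a single fused element-wise loop, with no intermediate vector temporaries. Class and function names are illustrative, not Playa's API.

        # Python analogue of the expression-template idea behind Playa's
        # overloaded operators: "+" and "*" build a deferred tree, and
        # evaluate() runs one fused loop. Names are illustrative, not Playa's API.

        class Expr:
            def __add__(self, other):
                return Op(self, other, lambda a, b: a + b)

            def __mul__(self, other):
                return Op(self, other, lambda a, b: a * b)

        class Vec(Expr):
            def __init__(self, data):
                self.data = list(data)

            def __getitem__(self, i):
                return self.data[i]

            def __len__(self):
                return len(self.data)

        class Op(Expr):
            def __init__(self, lhs, rhs, fn):
                self.lhs, self.rhs, self.fn = lhs, rhs, fn

            def __getitem__(self, i):
                return self.fn(self.lhs[i], self.rhs[i])

            def __len__(self):
                return len(self.lhs)

        def evaluate(expr):
            """Materialize the expression tree in a single fused loop."""
            return Vec(expr[i] for i in range(len(expr)))

        x, y, z = Vec([1, 2, 3]), Vec([4, 5, 6]), Vec([7, 8, 9])
        print(evaluate(x + y * z).data)   # [29, 42, 57]; no temporaries built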

  16. High performance channel injection sealant invention abstract

    Science.gov (United States)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    High performance channel sealant is based on NASA-patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax, to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosive effect on aluminum or titanium was observed.

  17. Portability Support for High Performance Computing

    Science.gov (United States)

    Cheng, Doreen Y.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    While a large number of tools have been developed to support application portability, high performance application developers often prefer to use vendor-provided, non-portable programming interfaces. This phenomenon indicates the mismatch between user priorities and tool capabilities. This paper summarizes the results of a user survey and a developer survey. The user survey revealed the users' priorities and resulted in three criteria for evaluating tool support for portability. The developer survey resulted in an evaluation of portability support and indicated the possibilities and difficulties of improvements.

  18. The entropy core in galaxy clusters: numerical and physical effects in cosmological grid simulations

    OpenAIRE

    Vazza, F.

    2010-01-01

    We investigated the numerical and physical reasons leading to a flat distribution of low gas entropy in the core region of galaxy clusters, as commonly found in grid cosmological simulations. To this end, we run a set of 30 high resolution re-simulations of a 3 x 10^14 M_sol/h cluster of galaxies with the AMR code ENZO, exploring and investigating the details involved in the production of entropy in simulated galaxy clusters. The occurrence of the flat entropy core is found to be mainly due t...

  19. High-Performance Tiled WMS and KML Web Server

    Science.gov (United States)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
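
    The central lookup in such a tiled map server is mapping a requested bounding box onto precomputed tile indices rather than rendering imagery on demand. A minimal sketch follows, assuming an invented grid layout (global extent, 90-degree level-0 tiles halving per level); the real module's request-grid conventions may differ.

        # Sketch of the central lookup in a tiled WMS server: map a requested
        # bounding box onto precomputed tile indices instead of rendering on the
        # fly. The grid layout here is an invented assumption, not the module's
        # actual scheme.

        LEVEL0_TILE_DEG = 90.0

        def tiles_for_bbox(min_lon, min_lat, max_lon, max_lat, level):
            """Return (col, row) indices of every tile touching the bbox."""
            size = LEVEL0_TILE_DEG / (2 ** level)    # tile width at this level
            c0, c1 = int((min_lon + 180) // size), int((max_lon + 180) // size)
            r0, r1 = int((min_lat + 90) // size), int((max_lat + 90) // size)
            return [(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)]

        # A viewer zoomed on the western US at level 3 touches only four tiles:
        print(tiles_for_bbox(-125.0, 32.0, -114.0, 42.0, level=3))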

  20. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  1. Quantum simulations with photons and polaritons merging quantum optics with condensed matter physics

    CERN Document Server

    2017-01-01

    This book reviews progress towards quantum simulators based on photonic and hybrid light-matter systems, covering theoretical proposals and recent experimental work. Quantum simulators are specially designed quantum computers. Their main aim is to simulate and understand complex and inaccessible quantum many-body phenomena found or predicted in condensed matter physics, materials science and exotic quantum field theories. Applications will include the engineering of smart materials, robust optical or electronic circuits, deciphering quantum chemistry and even the design of drugs. Technological developments in the fields of interfacing light and matter, especially in many-body quantum optics, have motivated recent proposals for quantum simulators based on strongly correlated photons and polaritons generated in hybrid light-matter systems. The latter have complementary strengths to cold atom and ion based simulators and they can probe for example out of equilibrium phenomena in a natural driven-dissipative sett...

  2. ION BEAM HEATED TARGET SIMULATIONS FOR WARM DENSE MATTER PHYSICS AND INERTIAL FUSION ENERGY

    Energy Technology Data Exchange (ETDEWEB)

    Barnard, J.J.; Armijo, J.; Bailey, D.S.; Friedman, A.; Bieniosek, F.M.; Henestroza, E.; Kaganovich, I.; Leung, P.T.; Logan, B.G.; Marinak, M.M.; More, R.M.; Ng, S.F.; Penn, G.E.; Perkins, L.J.; Veitzer, S.; Wurtele, J.S.; Yu, S.S.; Zylstra, A.B.

    2008-08-01

    Hydrodynamic simulations have been carried out using the multi-physics radiation hydrodynamics code HYDRA and the simplified one-dimensional hydrodynamics code DISH. We simulate possible targets for a near-term experiment at LBNL (the Neutralized Drift Compression Experiment, NDCX) and possible later experiments on a proposed facility (NDCX-II) for studies of warm dense matter and inertial fusion energy related beam-target coupling. Simulations of various target materials (including solids and foams) are presented. Experimental configurations include single pulse planar metallic solid and foam foils. Concepts for double-pulsed and ramped-energy pulses on cryogenic targets and foams have been simulated for exploring direct drive beam target coupling, and concepts and simulations for collapsing cylindrical and spherical bubbles to enhance temperature and pressure for warm dense matter studies are described.

  3. Ion Beam Heated Target Simulations for Warm Dense Matter Physics and Inertial Fusion Energy

    Energy Technology Data Exchange (ETDEWEB)

    Barnard, J J; Armijo, J; Bailey, D S; Friedman, A; Bieniosek, F M; Henestroza, E; Kaganovich, I; Leung, P T; Logan, B G; Marinak, M M; More, R M; Ng, S F; Penn, G E; Perkins, L J; Veitzer, S; Wurtele, J S; Yu, S S; Zylstra, A B

    2008-08-12

    Hydrodynamic simulations have been carried out using the multi-physics radiation hydrodynamics code HYDRA and the simplified one-dimensional hydrodynamics code DISH. We simulate possible targets for a near-term experiment at LBNL (the Neutralized Drift Compression Experiment, NDCX) and possible later experiments on a proposed facility (NDCX-II) for studies of warm dense matter and inertial fusion energy related beam-target coupling. Simulations of various target materials (including solids and foams) are presented. Experimental configurations include single pulse planar metallic solid and foam foils. Concepts for double-pulsed and ramped-energy pulses on cryogenic targets and foams have been simulated for exploring direct drive beam target coupling, and concepts and simulations for collapsing cylindrical and spherical bubbles to enhance temperature and pressure for warm dense matter studies are described.

  4. II - Detector simulation for the LHC and beyond : how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing-intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (the FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelling tools for geometry and response. Events are busy and characterised by an unprecedented energy scale, with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings also have to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be attempted, taking the calorimeter simulation as an example.

  5. I - Detector Simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing-intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (the FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelling tools for geometry and response. Events are busy and characterised by an unprecedented energy scale, with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings also have to be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be attempted, taking the calorimeter simulation as an example.

  6. Developing Digital Simulations and its Impact on Physical Education of Pre-Service Teachers

    Directory of Open Access Journals (Sweden)

    Esther Zaretsky

    2006-08-01

    The creation of digital simulations through the use of computers improved the physical education of pre-service teachers. The method, which was based on up-to-date studies, focuses on the visualization of the body's movements in space. The main program of the research concentrated on building a curriculum for teaching physical education through computerized presentations. The pre-service teachers reported progress in a variety of physical skills, and their motivation in both kinds of learning was enhanced.

  7. Design of High Performance Permanent-Magnet Synchronous Wind Generators

    Directory of Open Access Journals (Sweden)

    Chun-Yu Hsiao

    2014-11-01

    This paper is devoted to the analysis and design of high performance permanent-magnet synchronous wind generators (PMSGs). A systematic and sequential methodology for the design of PMSGs is proposed, with a high performance wind generator as a design model. Aiming at high induced voltage, low harmonic distortion and high generator efficiency, optimal generator parameters such as the pole-arc to pole-pitch ratio and stator slot-shoe dimensions are determined with the proposed technique using Maxwell 2-D, Matlab software and the Taguchi method. The proposed double three-phase and six-phase winding configurations, which consist of six windings in the stator, can provide evenly distributed current for versatile applications regarding the voltage and current demands of practical use. Specifically, windings are connected in series to increase the output voltage at low wind speed, and in parallel during high wind speed to generate electricity even when one winding fails, thereby also enhancing reliability. A PMSG is designed and implemented based on the proposed method. When the simulation is performed with a 6 Ω load, the output power for the double three-phase winding and six-phase winding is 10.64 and 11.13 kW, respectively. In addition, 24 Ω load experiments show that the efficiencies of the double three-phase winding and six-phase winding are 96.56% and 98.54%, respectively, verifying the proposed high performance operation.
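
    The series/parallel reconfiguration described above is simple circuit arithmetic: series connection sums the winding EMFs, while parallel connection sums current capacity at constant voltage. A back-of-the-envelope sketch with invented numbers (not the paper's measured values):

        # Back-of-the-envelope arithmetic for the winding reconfiguration above:
        # series connection sums winding EMFs (raising voltage at low wind speed),
        # parallel connection keeps voltage and sums current capacity (and
        # tolerates a failed winding). All numbers are invented for illustration.

        def combine(emf_v, current_a, n_windings, mode):
            if mode == "series":
                return emf_v * n_windings, current_a
            if mode == "parallel":
                return emf_v, current_a * n_windings
            raise ValueError(mode)

        for mode in ("series", "parallel"):
            v, i = combine(emf_v=120.0, current_a=15.0, n_windings=2, mode=mode)
            print(f"{mode}: {v:.0f} V, {i:.0f} A, {v * i / 1e3:.1f} kVA")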

  8. Strategy Guideline: Partnering for High Performance Homes

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants, and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  9. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  10. Building Trust in High-Performing Teams

    Directory of Open Access Journals (Sweden)

    Aki Soudunsaari

    2012-06-01

    Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  11. A Linux Workstation for High Performance Graphics

    Science.gov (United States)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  12. Hybrid ventilation systems and high performance buildings

    Energy Technology Data Exchange (ETDEWEB)

    Utzinger, D.M. [Wisconsin Univ., Milwaukee, WI (United States). School of Architecture and Urban Planning

    2009-07-01

    This paper described hybrid ventilation design strategies and their impact on 3 high performance buildings located in southern Wisconsin. The hybrid ventilation systems combined occupant-controlled natural ventilation with mechanical ventilation systems. Natural ventilation was shown to provide adequate ventilation when appropriately designed. Proper integration of natural ventilation controls into hybrid systems was shown to reduce energy consumption in high performance buildings. This paper also described the lessons learned from the 3 buildings. The author served as energy consultant on all three projects and had the responsibility of designing and integrating the natural ventilation systems into the HVAC control strategy. A post-occupancy evaluation of building energy performance has provided learning material for architecture students. The 3 buildings included the Schlitz Audubon Nature Center completed in 2003; the Urban Ecology Center completed in 2004; and the Aldo Leopold Legacy Center completed in 2007. This paper included the size, measured energy utilization intensity and percentage of energy supplied by renewable solar power and bio-fuels on site for each building. 6 refs., 2 tabs., 6 figs.

  13. Management issues for high performance storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  14. PHYSICS

    CERN Multimedia

    P. Sphicas

    The CPT project came to an end in December 2006 and its original scope is now shared among three new areas, namely Computing, Offline and Physics. In the physics area the basic change with respect to the previous system (where the PRS groups were charged with detector and physics object reconstruction and physics analysis) was the split of the detector PRS groups (the old ECAL-egamma, HCAL-jetMET, Tracker-btau and Muons) into two groups each: a Detector Performance Group (DPG) and a Physics Object Group. The DPGs are now led by the Commissioning and Run Coordinator deputy (Darin Acosta) and will appear in the corresponding column in CMS bulletins. On the physics side, the physics object groups are charged with the reconstruction of physics objects, the tuning of the simulation (in collaboration with the DPGs) to reproduce the data, the provision of code for the High-Level Trigger, the optimization of the algorithms involved for the different physics analyses (in collaboration with the analysis gr...

  15. Effects of a Haptic Augmented Simulation on K-12 Students' Achievement and Their Attitudes Towards Physics

    Science.gov (United States)

    Civelek, Turhan; Ucar, Erdem; Ustunel, Hakan; Aydin, Mehmet Kemal

    2014-01-01

    The current research aims to explore the effects of a haptic augmented simulation on students' achievement and their attitudes towards Physics in an immersive virtual reality environment (VRE). A quasi-experimental post-test design was employed utilizing experiment and control groups. The participants were 215 students from a K-12 school in…

  16. Physical modelling and numerical simulation of the round-to-square forward extrusion

    DEFF Research Database (Denmark)

    Gouveia, B.P.P.A.; Rodrigues, J.M.C.; Martins, P.A.F.

    2001-01-01

    In this paper, three-dimensional forward extrusion of a square section from a round billet through a straight converging die is analysed using both physical modelling and numerical simulation (finite element and upper bound analysis). Theoretical fundamentals for each method are reviewed, and com...

  17. Conflicting audio-haptic feedback in physically based simulation of walking sounds

    DEFF Research Database (Denmark)

    Turchet, Luca; Serafin, Stefania; Dimitrov, Smilen

    2010-01-01

    We describe an audio-haptic experiment conducted using a system which simulates in real-time the auditory and haptic sensation of walking on different surfaces. The system is based on physical models, that drive both the haptic and audio synthesizers, and a pair of shoes enhanced with sensors...

  18. Effects of Physical Models and Simulations to Understand Daily Life Applications of Electromagnetic Induction

    Science.gov (United States)

    Tural, Güner; Tarakçi, Demet

    2017-01-01

    Background: One of the topics students have difficulties in understanding is electromagnetic induction. Active learning methods instead of traditional learning method may be able to help facilitate students' understanding such topics more effectively. Purpose: The study investigated the effectiveness of physical models and simulations on students'…

  19. Physical modeling and numerical simulation of V-die forging ingot with central void

    DEFF Research Database (Denmark)

    Christiansen, Peter; Hattel, Jesper Henri; Bay, Niels

    2014-01-01

    Numerical simulation and physical modeling performed on small-scale ingots made from pure lead, having a hole drilled through their centerline to mimic porosity, are utilized to characterize the deformation mechanics of a single open die forging compression stage and to identify the influence...

  20. PhET + Hypercam2 = Simulation Videos for Distance Learning Physics Courses for Elementary Classroom Teachers

    Science.gov (United States)

    Callaway, Thomas

    2010-03-01

    The Physics Education Technology (PhET) simulations offer a fantastic set of tools to present simulations of science phenomena in the classroom. The problem with asynchronous distance learning instruction is that you do not have an opportunity to provide live instruction on the controls for each simulation. For those familiar with physics phenomena, the nature of the controls is usually obvious, but for pre-service elementary school teachers this is not the case. The on-line course that we offer presents physics lectures on DVD. By recording the computer screen and audio from the computer microphone (I use the free Hypercam2), it is possible to create avi files that can be incorporated into lecture content and show how to conduct PhET simulations. The avi files can be offered as stand-alone presentations, but I incorporate these into lectures using Adobe Premiere video editing software. This presentation gives a description of some options on the use of video produced using PhET simulations and screen recording.

  1. Simulated Patients in Physical Therapy Education: Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Pritchard, Shane A; Blackstock, Felicity C; Nestel, Debra; Keating, Jenny L

    2016-09-01

    Traditional models of physical therapy clinical education are experiencing unprecedented pressures. Simulation-based education with simulated (standardized) patients (SPs) is one alternative that has significant potential value, and implementation is increasing globally. However, no review evaluating the effects of SPs on professional (entry-level) physical therapy education is available. The purpose of this study was to synthesize and critically appraise the findings of empirical studies evaluating the contribution of SPs to entry-level physical therapy education, compared with no SP interaction or an alternative education strategy, on any outcome relevant to learning. A systematic search was conducted of Ovid MEDLINE, PubMed, AMED, ERIC, and CINAHL Plus databases and reference lists of included articles, relevant reviews, and gray literature up to May 2015. Articles reporting quantitative or qualitative data evaluating the contribution of SPs to entry-level physical therapy education were included. Two reviewers independently extracted study characteristics, intervention details, and quantitative and qualitative evaluation data from the 14 articles that met the eligibility criteria. Pooled random-effects meta-analysis indicated that replacing up to 25% of authentic patient-based physical therapist practice with SP-based education results in comparable competency (mean difference=1.55/100; 95% confidence interval=-1.08, 4.18; P=.25). Thematic analysis of qualitative data indicated that students value learning with SPs. Assumptions were made to enable pooling of data, and the search strategy was limited to English. Simulated patients appear to have an effect comparable to that of alternative educational strategies on development of physical therapy clinical practice competencies and serve a valuable role in entry-level physical therapy education. However, available research lacks the rigor required for confidence in findings. Given the potential advantages for

  2. Physics-based statistical model and simulation method of RF propagation in urban environments

    Science.gov (United States)

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
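
    As a rough illustration of the workflow this record describes, the sketch below Monte Carlo samples the attenuation of urban-canyon waveguide modes under randomized wall-impedance boundary conditions and then condenses the statistics into a two-parameter (lognormal) model. The waveguide physics, the impedance distribution, and the lognormal form are all illustrative assumptions, not the patented model itself.

```python
# A minimal sketch (not the patented model): Monte Carlo sampling of modal
# attenuation in an urban-canyon waveguide, followed by a parametric fit.
import numpy as np

rng = np.random.default_rng(0)

def modal_attenuation(width, wavelength, wall_impedance, n_modes=5):
    """Toy per-metre attenuation (dB) of the lowest waveguide modes."""
    m = np.arange(1, n_modes + 1)
    grazing = (m * wavelength / (2.0 * width)) ** 2         # mode steepness
    return 8.686 * grazing * np.real(1.0 / wall_impedance)  # leakage into walls

# Monte Carlo over statistical impedance boundary conditions (building walls).
samples = []
for _ in range(2000):
    z_wall = rng.lognormal(mean=1.0, sigma=0.4) + 1j * rng.normal(0.0, 0.3)
    alpha = modal_attenuation(width=20.0, wavelength=0.15, wall_impedance=z_wall)
    samples.append(alpha.min())          # dominant (least attenuated) mode

samples = np.array(samples)
# Closed-form parametric summary: later link-budget predictions need only
# these two numbers instead of rerunning the full simulation.
mu, sigma = np.log(samples).mean(), np.log(samples).std()
print(f"lognormal fit of dominant-mode attenuation: mu={mu:.3f}, sigma={sigma:.3f}")
```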

  3. On the Dependence of Cloud Feedbacks on Physical Parameterizations in WRF Aquaplanet Simulations

    Science.gov (United States)

    Cesana, Grégory; Suselj, Kay; Brient, Florent

    2017-10-01

    We investigate the effects of physical parameterizations on cloud feedback uncertainty in response to climate change. For this purpose, we construct an ensemble of eight aquaplanet simulations using the Weather Research and Forecasting (WRF) model. In each WRF-derived simulation, we replace only one parameterization at a time while all other parameters remain identical. By doing so, we aim to (i) reproduce cloud feedback uncertainty from state-of-the-art climate models and (ii) understand how parameterizations impact cloud feedbacks. Our results demonstrate that this ensemble of WRF simulations, which differ only in physical parameterizations, replicates the range of cloud feedback uncertainty found in state-of-the-art climate models. We show that microphysics and convective parameterizations govern the magnitude and sign of cloud feedbacks, mostly due to tropical low-level clouds in subsidence regimes. Finally, this study highlights the advantages of using WRF to analyze cloud feedback mechanisms owing to its plug-and-play parameterization capability.
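
    A minimal sketch of the one-at-a-time ensemble design described above: start from a control configuration and swap a single physics option per member. The dictionary keys are standard WRF namelist physics options, but the specific scheme numbers and the control choice are illustrative assumptions rather than the authors' configuration.

```python
# Build a one-at-a-time perturbation ensemble of WRF-style physics configs.
control = {"mp_physics": 8, "cu_physics": 1, "bl_pbl_physics": 1, "ra_lw_physics": 4}

# candidate alternatives for each parameterization family (assumed values)
alternatives = {
    "mp_physics": [6],        # a different microphysics scheme
    "cu_physics": [2, 3],     # different convection schemes
    "bl_pbl_physics": [2],    # a different boundary-layer scheme
}

ensemble = [dict(control, _name="control")]
for key, options in alternatives.items():
    for value in options:
        member = dict(control, _name=f"{key}={value}")
        member[key] = value            # swap exactly one parameterization
        ensemble.append(member)

for m in ensemble:
    print(m["_name"], {k: v for k, v in m.items() if not k.startswith("_")})
```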

  4. Assessment of robotic patient simulators for training in manual physical therapy examination techniques.

    Science.gov (United States)

    Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji

    2015-01-01

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard.
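
    The two measures could be prototyped roughly as follows; since the record does not give the exact statistics, a rank correlation stands in for the severity-consistency index and a simple fraction-correct for the symptom-type agreement ratio, with synthetic clinician responses throughout.

```python
# A minimal sketch of the two assessment measures with synthetic data and
# stand-in statistics (assumptions, not the paper's exact formulas).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
presented_severity = np.repeat([1, 2, 3], 5)                # 3 severities x 5 trials

# Measure 1: do clinicians rate the simulated severity consistently?
consistencies = []
for _ in range(6):                                          # 6 clinicians
    rating = presented_severity + rng.normal(0, 0.5, presented_severity.size)
    rho, _ = spearmanr(presented_severity, rating)
    consistencies.append(rho)
print("severity-consistency index:", np.mean(consistencies))

# Measure 2: how often is the simulated symptom type classified correctly?
symptom_types = rng.integers(0, 3, 30)                      # presented types
answers = np.where(rng.random(30) < 0.8, symptom_types,     # mostly correct answers
                   rng.integers(0, 3, 30))
print("type-agreement ratio:", np.mean(answers == symptom_types))
```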

  5. How to create high-performing teams.

    Science.gov (United States)

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects of how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element of any superior culture should be. Thieme Medical Publishers.

  6. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    Energy Technology Data Exchange (ETDEWEB)

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
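
    A minimal two-level sketch of the multigrid idea behind AMG: smooth high-frequency error on the fine grid, solve the remaining smooth error on a coarse grid, and correct. Real AMG constructs the coarse level and transfer operators algebraically from the matrix; here a geometric 1-D Poisson problem with weighted Jacobi smoothing stands in so the example is self-contained.

```python
import numpy as np

n = 127
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)      # 1-D Poisson matrix
b = np.ones(n)

def jacobi(A, x, b, sweeps, omega=2/3):
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d                  # weighted Jacobi smoother
    return x

# linear interpolation P (coarse -> fine) and restriction R = P^T / 2
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    P[2*j, j], P[2*j+1, j], P[2*j+2, j] = 0.5, 1.0, 0.5
R = 0.5 * P.T
Ac = R @ A @ P                                           # Galerkin coarse operator

x = np.zeros(n)
for cycle in range(10):                                  # two-grid cycles
    x = jacobi(A, x, b, sweeps=2)                        # pre-smoothing
    ec = np.linalg.solve(Ac, R @ (b - A @ x))            # coarse-grid correction
    x = jacobi(A, x + P @ ec, b, sweeps=2)               # correct, post-smooth
    print(cycle, np.linalg.norm(b - A @ x))              # residual drops per cycle
```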

  7. A high performance microfabricated surface ion trap

    Science.gov (United States)

    Lobser, Daniel; Blain, Matthew; Haltli, Raymond; Hollowell, Andrew; Revelle, Melissa; Stick, Daniel; Yale, Christopher; Maunz, Peter

    2017-04-01

    Microfabricated surface ion traps present a natural solution to the problem of scalability in trapped ion quantum computing architectures. We address some of the chief concerns about surface ion traps by demonstrating low heating rates, long trapping times, as well as other high-performance features of Sandia's high optical access (HOA-2) trap. For example, due to the HOA's specific electrode layout, we are able to rotate the principal axes of the trapping potential from 0 to 2π without any change in the secular trap frequencies. We have also achieved the first single-qubit gates with a diamond norm below a rigorous fault tolerance threshold, and a two-qubit Mølmer-Sørensen gate with a process fidelity of 99.58(6). Here we present specific details of trap capabilities, such as shuttling and ion reordering, as well as details of our high fidelity single- and two-qubit gates.
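
    The axis-rotation property quoted above can be illustrated in a few lines: rotating the curvature (Hessian) of the trapping potential while holding its eigenvalues fixed leaves the secular frequencies unchanged, since they depend only on those eigenvalues. The frequencies below are illustrative, not HOA-2 parameters.

```python
import numpy as np

wx2, wy2 = (2*np.pi*2.0e6)**2, (2*np.pi*4.0e6)**2    # assumed secular freqs^2

for theta in np.linspace(0, 2*np.pi, 5):
    c, s = np.cos(theta), np.sin(theta)
    Rm = np.array([[c, -s], [s, c]])
    H = Rm @ np.diag([wx2, wy2]) @ Rm.T               # rotated trap curvature
    freqs = np.sqrt(np.linalg.eigvalsh(H)) / (2*np.pi*1e6)
    print(f"theta={theta:.2f}  secular freqs = {freqs.round(3)} MHz")
```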

  8. High-Performance, Low Environmental Impact Refrigerants

    Science.gov (United States)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon® refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  9. High performance nano-composite technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of these merits, nano-composite studies remain confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish a systematic plan for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  10. High Performance Database Management for Earth Sciences

    Science.gov (United States)

    Rishe, Naphtali; Barton, David; Urban, Frank; Chekmasov, Maxim; Martinez, Maria; Alvarez, Elms; Gutierrez, Martha; Pardo, Philippe

    1998-01-01

    The High Performance Database Research Center at Florida International University is completing the development of a highly parallel database system based on the semantic/object-oriented approach. This system provides exceptional usability and flexibility. It allows shorter application design and programming cycles and gives the user control via an intuitive information structure. It empowers the end-user to pose complex ad hoc decision support queries. Superior efficiency is provided through a high level of optimization, which is transparent to the user. Manifold reduction in storage size is allowed for many applications. This system allows for operability via internet browsers. The system will be used for the NASA Applications Center program to store remote sensing data, as well as for Earth Science applications.

  11. High performance stepper motors for space mechanisms

    Science.gov (United States)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.
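
    The permeance-to-torque relation mentioned above can be sketched numerically: given an inductance (permeance) profile over one tooth pitch, the reluctance torque follows from its angular derivative, T = ½ i² dL/dθ, and an FFT exposes the harmonic content. The profile and numbers are synthetic stand-ins for a finite-element result, and a hybrid motor would add a magnet-flux term.

```python
import numpy as np

n_teeth, i_coil = 50, 1.5                         # assumed tooth count, current (A)
theta = np.linspace(0, 2*np.pi/n_teeth, 512, endpoint=False)

# permeance-derived inductance profile (H): fundamental + small 3rd harmonic
L = 1e-3 * (1.0 + 0.3*np.cos(n_teeth*theta) + 0.02*np.cos(3*n_teeth*theta))

torque = 0.5 * i_coil**2 * np.gradient(L, theta)  # generic reluctance torque (N*m)

# harmonic content of the torque over one tooth pitch
spec = np.abs(np.fft.rfft(torque))
print("peak torque  :", torque.max())
print("3rd harmonic :", spec[3] / spec[1] * 100, "% of fundamental")
```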

  12. High Performance OLED Panel and Luminaire

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, Jeffrey [OLEDWorks LLC, Rochester, NY (United States)

    2017-02-20

    In this project, OLEDWorks developed and demonstrated the technology required to produce OLED lighting panels with high energy efficiency and excellent light quality. OLED panels developed in this program produce high quality warm white light with CRI greater than 85 and efficacy up to 80 lumens per watt (LPW). An OLED luminaire employing 24 of the high performance panels produces practical levels of illumination for general lighting, with a flux of over 2200 lumens at 60 LPW. This is a significant advance in the state of the art for OLED solid-state lighting (SSL), which is expected to be a complementary light source to the more advanced LED SSL technology that is rapidly replacing all other traditional forms of lighting.

  13. High performance APCS conceptual design and evaluation scoping study

    Energy Technology Data Exchange (ETDEWEB)

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies in existing designs, or performing facility design and permitting activities.

  14. Training Knowledge Bots for Physics-Based Simulations Using Artificial Neural Networks

    Science.gov (United States)

    Samareh, Jamshid A.; Wong, Jay Ming

    2014-01-01

    Millions of complex physics-based simulations are required for design of an aerospace vehicle. These simulations are usually performed by highly trained and skilled analysts, who execute, monitor, and steer each simulation. Analysts rely heavily on their broad experience that may have taken 20-30 years to accumulate. In addition, the simulation software is complex in nature, requiring significant computational resources. Simulations of systems of systems become even more complex and are beyond human capacity to effectively learn their behavior. IBM has developed machines that can learn and compete successfully with a chess grandmaster and the most successful Jeopardy contestants. These machines are capable of learning some complex problems much faster than humans can. In this paper, we propose using artificial neural networks to train knowledge bots to identify the idiosyncrasies of simulation software and recognize patterns that can lead to successful simulations. We examine the use of knowledge bots for applications of computational fluid dynamics (CFD), trajectory analysis, commercial finite-element analysis software, and slosh propellant dynamics. We show that machine learning algorithms can be used to learn the idiosyncrasies of computational simulations and identify regions of instability without including any additional information about their mathematical form or applied discretization approaches.
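
    A minimal sketch of the knowledge-bot idea (not NASA's actual implementation): train a small neural network on past runs labelled stable/unstable so it can flag risky input combinations before an expensive simulation is launched. The features and the instability rule are synthetic assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# synthetic "simulation inputs": time step, mesh spacing, Mach number
X = rng.uniform([1e-4, 0.01, 0.2], [1e-2, 0.10, 2.0], size=(4000, 3))
# toy instability rule: a large CFL-like ratio at high Mach tends to diverge
y = ((X[:, 0] / X[:, 1]) * X[:, 2] > 0.15).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bot = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
bot.fit(X_tr, y_tr)                     # "knowledge bot" learns past outcomes
print("held-out accuracy:", bot.score(X_te, y_te))
print("risk of new run  :", bot.predict_proba([[5e-3, 0.02, 1.5]])[0, 1])
```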

  15. High performance anode for advanced Li batteries

    Energy Technology Data Exchange (ETDEWEB)

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, the capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that the production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  16. Wearable Accelerometers in High Performance Jet Aircraft.

    Science.gov (United States)

    Rice, G Merrill; VanBrunt, Thomas B; Snider, Dallas H; Hoyt, Robert E

    2016-02-01

    Wearable accelerometers have become ubiquitous in the fields of exercise physiology and ambulatory hospital settings. However, these devices have yet to be validated in extreme operational environments. The objective of this study was to correlate the gravitational forces (G forces) detected by wearable accelerometers with the G forces detected by high performance aircraft. We compared the in-flight G forces detected by two commercially available portable accelerometers to the F/A-18 Carrier Aircraft Inertial Navigation System (CAINS-2) during 20 flights performed by the Navy's Flight Demonstration Squadron (Blue Angels). Postflight questionnaires were also used to assess the perception of distractibility during flight. Of the 20 flights analyzed, 10 complete in-flight comparisons were made, accounting for 25,700 s of correlation between the CAINS-2 and the two tested accelerometers. Both accelerometers had strong correlations with that of the F/A-18 Gz axis, averaging r = 0.92 and r = 0.93, respectively, over 10 flights. Comparison of the two portable accelerometers' average vector magnitudes to each other yielded an average correlation of r = 0.93. Both accelerometers were found to be minimally distracting. These results suggest the use of wearable accelerometers is a valid means of detecting G forces during high performance aircraft flight. Future studies using this surrogate method of detecting accelerative forces combined with physiological information may yield valuable in-flight normative data that heretofore has been technically difficult to obtain and hence holds the promise of opening the door for a new golden age of aeromedical research.
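
    The validation method amounts to correlating two G-force time series over the same flight segment, which might look like the following sketch; the manoeuvre profile and noise level are synthetic, not Blue Angels data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
t = np.linspace(0, 600, 6000)                       # 10 min sampled at 10 Hz

# aircraft Gz reference: baseline 1 g plus two high-G manoeuvres
g_aircraft = 1 + 3*np.exp(-((t-200)/20)**2) + 4*np.exp(-((t-420)/15)**2)
g_wearable = g_aircraft + rng.normal(0, 0.15, t.size)   # noisy wearable channel

r, _ = pearsonr(g_aircraft, g_wearable)
print(f"correlation with aircraft INS: r = {r:.2f}")
```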

  17. Development of 2D implicit particle simulation code for ohmic breakdown physics in a tokamak

    Science.gov (United States)

    Yoo, Min-Gu; Lee, Jeongwon; Kim, Young-Gi; Na, Yong-Su

    2017-12-01

    The physical mechanism of the ohmic breakdown in a tokamak has not been clearly understood due to its complexity in physics and geometry, especially regarding the role of space charge in the plasma. We have developed a 2D implicit particle simulation code, BREAK, to study the ohmic breakdown physics in realistic, complicated situations, treating the space charge and kinetic effects consistently. The ohmic breakdown phenomena span a broad range of spatio-temporal scales, from the picosecond order of the electron gyromotion to the millisecond order of the plasma transport. It is impossible to employ a typical explicit particle simulation method to see the slow plasma transport phenomena of interest, because in the explicit scheme the time step size is restricted to be smaller than the period of the electron gyromotion. Hence, we adopt several physical and numerical models, such as a toroidally symmetric model and a direct-implicit method, to relax or remove these spatio-temporal restrictions. In addition, coalescence strategies are introduced to keep the number of numerical super-particles within acceptable ranges while handling the exponentially growing plasma density during the ohmic breakdown. The performance of BREAK is verified with several test cases, so BREAK is expected to be applicable to the investigation of ohmic breakdown physics in tokamaks by treating the 2-dimensional plasma physics in the RZ plane self-consistently.
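
    The motivation for the direct-implicit method can be demonstrated in a few lines: an implicit-midpoint push of the electron gyromotion remains stable at time steps far beyond the gyroperiod, where an explicit push diverges. Field values and step sizes below are illustrative only.

```python
import numpy as np

q_m = -1.76e11                       # electron charge-to-mass ratio (C/kg)
Bz = 1e-2                            # magnetic field along z (T)
omega_c = abs(q_m) * Bz              # gyrofrequency, ~1.8e9 rad/s

G = np.array([[0.0, 1.0], [-1.0, 0.0]])      # matrix form of v x z_hat
dt = 1000 * (2*np.pi / omega_c)              # 1000 gyroperiods per step

a = 0.5 * dt * q_m * Bz
M_impl = np.linalg.solve(np.eye(2) - a*G, np.eye(2) + a*G)   # implicit midpoint
M_expl = np.eye(2) + 2*a*G                                   # explicit Euler

v_i = v_e = np.array([1e5, 0.0])             # initial perpendicular velocity (m/s)
for step in range(1, 6):
    v_i, v_e = M_impl @ v_i, M_expl @ v_e
    print(f"step {step}: |v| implicit={np.linalg.norm(v_i):.3e}"
          f"  explicit={np.linalg.norm(v_e):.3e}")
# the implicit push keeps |v| constant; the explicit push diverges rapidly
```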

  18. Sleep restriction during simulated wildfire suppression: effect on physical task performance.

    Directory of Open Access Journals (Sweden)

    Grace Vincent

    Full Text Available OBJECTIVES: To examine the effects of sleep restriction on firefighters' physical task performance during simulated wildfire suppression. METHODS: Thirty-five firefighters were matched and randomly allocated to either a control condition (8-hour sleep opportunity, n = 18) or a sleep-restricted condition (4-hour sleep opportunity, n = 17). Performance on physical work tasks was evaluated across three days. In addition, heart rate, core temperature, and worker activity were measured continuously. Ratings of perceived exertion and effort sensation were evaluated during the physical work periods. RESULTS: There were no differences between the sleep-restricted and control groups in firefighters' task performance, heart rate, core temperature, or perceptual responses during self-paced simulated firefighting work tasks. However, the sleep-restricted group were less active during periods of non-physical work compared to the control group. CONCLUSIONS: Under self-paced work conditions, 4 h of sleep restriction did not adversely affect firefighters' performance on physical work tasks. However, the sleep-restricted group were less physically active throughout the simulation. This may indicate that sleep-restricted participants adapted their behaviour to conserve effort during rest periods, to subsequently ensure they were able to maintain performance during the firefighter work tasks. This work contributes new knowledge to inform fire agencies of firefighters' operational capabilities when their sleep is restricted during multi-day wildfire events. The work also highlights the need for further research to explore how sleep restriction affects physical performance during tasks of varying duration, intensity, and complexity.

  19. Negotiated meanings of disability simulations in an adapted physical activity course: learning from student reflections.

    Science.gov (United States)

    Leo, Jennifer; Goodwin, Donna

    2014-04-01

    Disability simulations have been used as a pedagogical tool to simulate the functional and cultural experiences of disability. Despite their widespread application, disagreement about their ethical use, value, and efficacy persists. The purpose of this study was to understand how postsecondary kinesiology students experienced participation in disability simulations. An interpretative phenomenological approach guided the study's collection of journal entries and clarifying one-on-one interviews with four female undergraduate students enrolled in a required adapted physical activity course. The data were analyzed thematically and interpreted using the conceptual framework of situated learning. Three themes transpired: unnerving visibility, negotiating environments differently, and tomorrow I'll be fine. The students described emotional responses to the use of wheelchairs as disability artifacts, developed awareness of environmental barriers to culturally and socially normative activities, and moderated their discomfort with the knowledge they could end the simulation at any time.

  20. A compact physical model for the simulation of pNML-based architectures

    Directory of Open Access Journals (Sweden)

    G. Turvani

    2017-05-01

    Full Text Available Among emerging technologies, perpendicular Nanomagnetic Logic (pNML) seems to be very promising because of its capability of combining logic and memory in the same device, its scalability, 3D integration, and low power consumption. Recently, Full Adder (FA) structures clocked by a global magnetic field have been experimentally demonstrated, and detailed characterizations of the switching process governing the domain wall (DW) nucleation probability Pnuc and time tnuc have been performed. However, the design of pNML architectures represents a crucial point in the study of this technology; it can have a remarkable impact on the reliability of pNML structures. Here, we present a compact model developed in VHDL which enables the simulation of complex pNML architectures while taking critical physical parameters into account. These parameters have been extracted from the experiments, fitted by the corresponding physical equations, and encapsulated into the proposed model. Within it, magnetic structures are decomposed into a few basic elements (nucleation centers, nanowires, inverters, etc.) represented by the corresponding physical description. To validate the model, we redesigned a FA and compared our simulation results to the experiment. With this compact model of pNML devices we have envisioned a new methodology which makes it possible to simulate and test the physical behavior of complex architectures at very low computational cost.

  1. A compact physical model for the simulation of pNML-based architectures

    Science.gov (United States)

    Turvani, G.; Riente, F.; Plozner, E.; Schmitt-Landsiedel, D.; Breitkreutz-v. Gamm, S.

    2017-05-01

    Among emerging technologies, perpendicular Nanomagnetic Logic (pNML) seems to be very promising because of its capability of combining logic and memory in the same device, its scalability, 3D integration, and low power consumption. Recently, Full Adder (FA) structures clocked by a global magnetic field have been experimentally demonstrated, and detailed characterizations of the switching process governing the domain wall (DW) nucleation probability Pnuc and time tnuc have been performed. However, the design of pNML architectures represents a crucial point in the study of this technology; it can have a remarkable impact on the reliability of pNML structures. Here, we present a compact model developed in VHDL which enables the simulation of complex pNML architectures while taking critical physical parameters into account. These parameters have been extracted from the experiments, fitted by the corresponding physical equations, and encapsulated into the proposed model. Within it, magnetic structures are decomposed into a few basic elements (nucleation centers, nanowires, inverters, etc.) represented by the corresponding physical description. To validate the model, we redesigned a FA and compared our simulation results to the experiment. With this compact model of pNML devices we have envisioned a new methodology which makes it possible to simulate and test the physical behavior of complex architectures at very low computational cost.
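
    As a rough illustration of the kind of physics such a compact model encapsulates, the sketch below uses a generic thermally activated (Sharrock-type) nucleation law in which the nucleation probability within a clock pulse depends on the applied field. The functional form and all parameter values are assumptions for illustration, not the fitted equations of the paper.

```python
import numpy as np

f0 = 1e9            # attempt frequency (Hz), assumed
E0_kT = 60.0        # zero-field energy barrier in units of kT, assumed
Hc = 100.0          # intrinsic switching field (mT), assumed

def p_nuc(H, t_pulse):
    """Probability that a nucleation centre switches during one clock pulse."""
    barrier = E0_kT * max(0.0, 1.0 - H / Hc) ** 2    # field-lowered barrier
    tau = np.exp(barrier) / f0                       # mean nucleation time
    return 1.0 - np.exp(-t_pulse / tau)

for H in (50, 60, 70, 80):                           # applied field (mT)
    print(f"H = {H:3d} mT  ->  Pnuc = {p_nuc(H, t_pulse=1e-6):.4f}")
```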

  2. Optimizing targeted vaccination across cyber-physical networks: an empirically based mathematical simulation study

    DEFF Research Database (Denmark)

    Mones, Enys; Stopczynski, Arkadiusz; Pentland, Alex 'Sandy'

    2018-01-01

    ...If interruption of disease transmission is the goal, targeting requires knowledge of underlying person-to-person contact networks. Digital communication networks may reflect not only virtual but also physical interactions that could result in disease transmission, but the precise overlap between these cyber and physical networks has never been empirically explored in real-life settings. Here, we study the digital communication activity of more than 500 individuals along with their person-to-person contacts at a 5-min temporal resolution. We then simulate different disease transmission scenarios on the person-to-person physical contact network to determine whether cyber communication networks can be harnessed to advance the goal of targeted vaccination for a disease spreading on the network of physical proximity. We show that individuals selected on the basis of their closeness centrality within cyber networks (what we...
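
    A minimal sketch of the simulation idea: rank individuals by closeness centrality in a communication graph, immunize the top fraction, and run a simple SIR-style outbreak on the physical contact graph. Both graphs here are random surrogates; the study used empirical contact and communication data.

```python
import random
import networkx as nx

random.seed(0)
contact = nx.watts_strogatz_graph(500, 8, 0.1)       # physical proximity graph
cyber = contact.copy()                                # proxy: overlapping networks
cyber.remove_edges_from(random.sample(list(cyber.edges), 800))

rank = sorted(nx.closeness_centrality(cyber).items(), key=lambda kv: -kv[1])
vaccinated = {node for node, _ in rank[:50]}          # top 10% by cyber centrality

def outbreak(G, immune, beta=0.05, steps=100):
    infected = {random.choice([n for n in G if n not in immune])}
    recovered = set()
    for _ in range(steps):
        new = {v for u in infected for v in G[u]
               if v not in immune | infected | recovered and random.random() < beta}
        recovered |= infected
        infected = new
    return len(recovered)

sizes = [outbreak(contact, vaccinated) for _ in range(20)]
print("mean outbreak size with cyber-targeted vaccination:", sum(sizes)/len(sizes))
```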

  3. SISYPHUS: A high performance seismic inversion factory

    Science.gov (United States)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In the recent years the massively parallel high performance computers became the standard instruments for solving the forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) became mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards the maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for the modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  4. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called, the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access to this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as its future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  5. Inquiry-Based Whole-Class Teaching with Computer Simulations in Physics

    Science.gov (United States)

    Rutten, Nico; van der Veen, Jan T.; van Joolingen, Wouter R.

    2015-05-01

    In this study we investigated the pedagogical context of whole-class teaching with computer simulations. We examined relations between the attitudes and learning goals of teachers and their students regarding the use of simulations in whole-class teaching, and how teachers implement these simulations in their teaching practices. We observed lessons presented by 24 physics teachers in which they used computer simulations. Students completed questionnaires about the lesson, and each teacher was interviewed afterwards. These three data sources captured implementation by the teacher, and the learning goals and attitudes of students and their teachers regarding teaching with computer simulations. For each teacher, we calculated an Inquiry-Cycle-Score (ICS) based on the occurrence and order of the inquiry activities of predicting, observing and explaining during teaching, and a Student-Response-Rate (SRR) reflecting the level of active student participation. Statistical analyses revealed positive correlations between the inquiry-based character of the teaching approach and students' attitudes regarding its contribution to their motivation and insight, a negative correlation between the SRR and the ICS, and a positive correlation between teachers' attitudes about inquiry-based teaching with computer simulations and learning goal congruence between the teacher and his/her students. This means that active student participation is likely to be lower when the instruction more closely resembles the inquiry cycle, and that teachers with a positive attitude about inquiry-based teaching with computer simulations realize the importance of learning goal congruence.

  6. ALEGRA-HEDP Multi-Dimensional Simulations of Z-pinch Related Physics

    Science.gov (United States)

    Garasi, Christopher J.

    2003-10-01

    The marriage of experimental diagnostics and computer simulations continues to enhance our understanding of the physics and dynamics associated with current-driven wire arrays. Early models that assumed the formation of an unstable, cylindrical shell of plasma due to wire merger have been replaced with a more complex picture involving wire material ablating non-uniformly along the wires, creating plasma pre-fill interior to the array before the bulk of the array collapses due to magnetic forces. Non-uniform wire ablation leads to wire breakup, which provides a mechanism for some wire material to be left behind as the bulk of the array stagnates onto the pre-fill. Once the bulk of the material has stagnated, electrical current can then shift back to the material left behind and cause it to stagnate onto the already collapsed bulk array mass. These complex effects impact the total radiation output from the wire array which is very important to application of that radiation for inertial confinement fusion. A detailed understanding of the formation and evolution of wire array perturbations is needed, especially for those which are three-dimensional in nature. Sandia National Laboratories has developed a multi-physics research code tailored to simulate high energy density physics (HEDP) environments. ALEGRA-HEDP has begun to simulate the evolution of wire arrays and has produced the highest fidelity, two-dimensional simulations of wire-array precursor ablation to date. Our three-dimensional code capability now provides us with the ability to solve for the magnetic field and current density distribution associated with the wire array and the evolution of three-dimensional effects seen experimentally. The insight obtained from these multi-dimensional simulations of wire arrays will be presented and specific simulations will be compared to experimental data.

  7. Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations

    Science.gov (United States)

    Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.

    2014-07-01

    Deep absorption lines with the extremely high velocity of ˜0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to solve this is to constrain physical parameters as a function of distance from the source. In order to study the spatial dependence of parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiation transfer in arbitrary geometry. We have developed a new simulation code for X-ray radiation reprocessed in AGN outflows. Our code implements radiative transfer in a 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework called MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the FeXXV and FeXXVI absorption features seen in the spectra. Also, broad Fe emission lines, which reflect the geometry and viewing angle, are successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on physical parameters. We discuss launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.
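
    The Monte Carlo radiative-transfer principle underlying such simulations can be sketched in its simplest form: photons random-walk through a scattering medium until they escape. The uniform-sphere geometry and optical depth below are illustrative; the actual work treats a biconical disk wind with the MONACO framework.

```python
import numpy as np

rng = np.random.default_rng(3)
tau_R = 3.0                 # radial optical depth of the cloud (assumed)

def propagate(n_photons=20000):
    scatters = []
    for _ in range(n_photons):
        pos, n = np.zeros(3), 0
        while True:
            direction = rng.normal(size=3)
            direction /= np.linalg.norm(direction)       # isotropic scattering
            pos = pos + rng.exponential(1.0) / tau_R * direction
            if np.linalg.norm(pos) >= 1.0:               # escaped the unit sphere
                break
            n += 1
        scatters.append(n)
    return np.array(scatters)

n_scat = propagate()
print("mean scatterings before escape:", n_scat.mean())  # grows ~tau^2 at large tau
```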

  8. Physical properties of simulated galaxy populations at z = 2 - II. Effects of cosmology, reionization and ISM physics

    Science.gov (United States)

    Haas, Marcel R.; Schaye, Joop; Booth, C. M.; Dalla Vecchia, Claudio; Springel, Volker; Theuns, Tom; Wiersma, Robert P. C.

    2013-11-01

    We use hydrodynamical simulations from the OverWhelmingly Large Simulations project to investigate the dependence of the physical properties of galaxy populations at redshift 2 on the assumed star formation law, the equation of state imposed on the unresolved interstellar medium, the stellar initial mass function, the reionization history and the assumed cosmology. This work complements that of Paper I, where we studied the effects of varying models for galactic winds driven by star formation and active galactic nucleus. The normalization of the matter power spectrum strongly affects the galaxy mass function, but has a relatively small effect on the physical properties of galaxies residing in haloes of a fixed mass. Reionization suppresses the stellar masses and gas fractions of low-mass galaxies, but by z = 2 the results are insensitive to the timing of reionization. The stellar initial mass function mainly determines the physical properties of galaxies through its effect on the efficiency of the feedback, while changes in the recycled mass and metal fractions play a smaller role. If we use a recipe for star formation that reproduces the observed star formation law independently of the assumed equation of state of the unresolved interstellar medium, then the latter is unimportant. The star formation law, i.e. the gas consumption time-scale as a function of surface density, determines the mass of dense, star-forming gas in galaxies, but affects neither the star formation rate nor the stellar mass. This can be understood in terms of self-regulation: the gas fraction adjusts until the outflow rate balances the inflow rate.

  9. Implementations of multiphysics simulation for MEMS by coupling single-physics solvers

    Directory of Open Access Journals (Sweden)

    Jian Guo

    2007-09-01

    Full Text Available Due to growing demands from industry, multiphysics simulation plays an increasingly important role in the design of MEMS devices. This paper presents a fast-convergence scheme which implements multiphysics simulation by coupling phenomena-specific single-physics solvers. The proposed scheme is based on the traditional staggered/relaxation approach but employs Steffensen's acceleration technique to speed up the convergence procedure. The performance of the proposed scheme is compared with three traditional techniques: the staggered/relaxation, the multilevel Newton, and the quasi-Newton methods, through several examples. The results show that this scheme is promising.
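
    A minimal sketch of the proposed scheme under simplifying assumptions: a staggered fixed-point iteration between an "electrostatic solver" and a "structural solver" for a one-degree-of-freedom electrostatic actuator, accelerated with Steffensen/Aitken delta-squared extrapolation. The single-DOF model stands in for the phenomena-specific solvers.

```python
def electrostatic_force(x, V=5.0, g0=2e-6, c=8.85e-12 * 1e-8 / 2):
    return c * V**2 / (g0 - x)**2           # parallel-plate force; c = eps0*A/2

def structural_deflection(F, k=1.0):
    return F / k                            # linear-spring "structural solver"

def G(x):                                   # one staggered coupling pass
    return structural_deflection(electrostatic_force(x))

x = 0.0
for it in range(20):                        # Steffensen/Aitken-accelerated loop
    x0, x1, x2 = x, G(x), G(G(x))
    denom = x2 - 2*x1 + x0
    x_new = x2 if abs(denom) < 1e-30 else x0 - (x1 - x0)**2 / denom
    if abs(x_new - x) < 1e-15:
        break
    x = x_new
print(f"converged deflection: {x:.4e} m after {it+1} accelerated iterations")
```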

  10. Physical Properties and Hydrogen-Bonding Network of Water-Ethanol Mixtures from Molecular Dynamics Simulations.

    Science.gov (United States)

    Ghoufi, A; Artzner, F; Malfreyt, P

    2016-02-04

    While many numerical and experimental works have focused on water-ethanol mixtures at low ethanol concentration, this work reports predictions of several physical properties (thermodynamic, interfacial, dynamical, and dielectric) of water-ethanol mixtures at high alcohol concentrations by means of molecular dynamics simulations. Using a standard force field, good agreement was found between experiment and molecular simulation. This allowed us to explore the dynamics, structure, and interplay between the hydrogen-bonding networks of water and ethanol.
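
    Hydrogen-bonding networks in such simulations are commonly analyzed with a geometric criterion (donor-acceptor distance below ~3.5 Å and a small angle between the donor-H bond and the donor-acceptor axis); the record does not state the exact criterion used, so the sketch below with random stand-in coordinates is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
O_donor = rng.uniform(0, 20, (n, 3))                 # donor oxygens (Angstrom)
H = O_donor + rng.normal(0, 0.3, (n, 3))             # bonded hydrogens
O_acc = rng.uniform(0, 20, (n, 3))                   # acceptor oxygens

def hbond_count(O_d, H, O_a, r_max=3.5, ang_max=30.0):
    count = 0
    for i in range(len(O_d)):
        d = np.linalg.norm(O_a - O_d[i], axis=1)
        for j in np.where((d < r_max) & (d > 0.1))[0]:
            v1 = H[i] - O_d[i]                       # donor O-H bond vector
            v2 = O_a[j] - O_d[i]                     # donor-acceptor axis
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            if np.degrees(np.arccos(np.clip(cosang, -1, 1))) < ang_max:
                count += 1
    return count

print("hydrogen bonds in frame:", hbond_count(O_donor, H, O_acc))
```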

  11. Tackling some of the most intricate geophysical challenges via high-performance computing

    Science.gov (United States)

    Khosronejad, A.

    2016-12-01

    Recently, the world has been witnessing significant enhancements in the computing power of supercomputers. Computer clusters, in conjunction with advanced mathematical algorithms, have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water surface waves in the coastal area. This understanding is especially important because turbulent flows in real-world environments are often bounded by geometrically complex boundaries, which dynamically deform and give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g. air/water/sediment phases). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present the simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Parts of the simulations I present are performed to gain scientific insights into processes such as sand wave formation (A. Khosronejad and F. Sotiropoulos (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others are carried out to predict the effects of climate change and large flood events on societal infrastructure (A. Khosronejad et al. (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).

  12. Materials for high performance light water reactors

    Science.gov (United States)

    Ehrlich, K.; Konys, J.; Heikinheimo, L.

    2004-05-01

    A state-of-the-art study was performed to investigate the operational conditions for in-core and out-of-core materials in a high performance light water reactor (HPLWR) and to evaluate the potential of existing structural materials for application in fuel elements, core structures and out-of-core components. In the conventional parts of a HPLWR plant the approved materials of supercritical fossil power plants (SCFPP) can be used for the given temperatures (⩽600 °C) and pressures (≈250 bar). These are either commercial ferritic/martensitic or austenitic stainless steels. Taking the conditions of existing light water reactors (LWR) into account, an assessment of potential cladding materials was made, based on existing creep-rupture data, an extensive analysis of the corrosion in conventional steam power plants and available information on material behaviour under irradiation. As a major result it is shown that for an assumed maximum temperature of 650 °C not only Ni-alloys, but also austenitic stainless steels can be used as cladding materials.

  13. Optimizing High Performance Self Compacting Concrete

    Directory of Open Access Journals (Sweden)

    Raymond A Yonathan

    2017-01-01

    This paper's objective is to determine the effect of glass powder, silica fume, polycarboxylate ether, and gravel on high performance SCC, and to optimize the proportion of each factor. The Taguchi method is proposed as the best way to reduce the number of specimens, which would otherwise exceed 80 variations. Taguchi data analysis is applied to obtain the composition, the optimum, and the contribution of each material for nine specimen variations. The concrete's workability was analyzed using the slump flow, V-funnel, and L-box tests; compressive strength and porosity tests were performed in the hardened state. Cylindrical specimens of 100×200 mm were cast for compressive testing at ages of 3, 7, 14, 21, and 28 days, and the porosity test was conducted at 28 days. It is revealed that silica fume contributes most strongly to slump flow and porosity, while coarse aggregate is the greatest contributing factor for the L-box and compressive tests. However, no factor showed a clear effect on the V-funnel test.
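
    To make the Taguchi analysis step concrete, the sketch below computes the larger-is-better signal-to-noise ratio commonly used to rank factor settings in this kind of study. The replicate strength values are invented placeholders, not the paper's data.

    ```python
    # Hedged sketch: larger-is-better S/N ratio from Taguchi analysis,
    # applied to hypothetical compressive-strength replicates per run.
    import numpy as np

    def sn_larger_is_better(y):
        """S/N = -10*log10(mean(1/y^2)); higher is better."""
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(1.0 / y**2))

    runs = {"run1": [52.1, 49.8, 51.3],   # MPa, invented replicates
            "run2": [47.5, 48.9, 46.2]}
    for name, y in runs.items():
        print(name, round(sn_larger_is_better(y), 2), "dB")
    ```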

  14. Automatic Energy Schemes for High Performance Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmarks as well as for realistic parallel electronic structure calculations performed with the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
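
    The energy trade-off exploited here can be sketched with a crude model: dynamic power scales roughly as f·V² (approximated below as f³, with voltage proportional to frequency), while a communication-bound phase takes about the same wall time at a lower frequency. Every number below (power, frequencies, timings, switching overhead) is an illustrative assumption, not a measurement from the thesis.

    ```python
    # Back-of-the-envelope sketch of DVFS savings during a communication phase.
    def phase_energy(p_max_w, f_ratio, t_comm_s, switch_overhead_s=50e-6):
        power = p_max_w * f_ratio**3        # crude dynamic-power model (assumed)
        return power * (t_comm_s + switch_overhead_s)

    t_comm = 0.010                           # 10 ms all-to-all phase (assumed)
    e_full = phase_energy(95.0, 1.0, t_comm) # stay at nominal frequency
    e_low  = phase_energy(95.0, 0.6, t_comm) # scale to 60% for the phase
    print(f"saved {(1 - e_low / e_full) * 100:.0f}% of phase energy")
    ```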

  15. Development of a High Performance Spacer Grid

    Energy Technology Data Exchange (ETDEWEB)

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in an LWR fuel assembly is a key structural component that supports the fuel rods and enhances the heat transfer from the fuel rod to the coolant. In this research, the main items were the development of inherent, high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. Eighteen different spacer grid candidates were invented and filed for domestic and US patents; among the candidates, 16 have been granted patents. 2. Two kinds of spacer grid were finally selected for the advanced LWR fuel after detailed performance tests on the candidates and on commercial spacer grids from a mechanical/structural point of view; according to the test results, the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facility were set up and the relevant test technologies established. 4. Mechanical/structural analysis models and technology for spacer grid performance were developed, and the analysis results were compared with the test results to enhance the reliability of the models.

  16. Initial rheological description of high performance concretes

    Directory of Open Access Journals (Sweden)

    Alessandra Lorenzetti de Castro

    2006-12-01

    Concrete is defined as a composite material and, in rheological terms, can be understood as a concentrated suspension of solid particles (aggregates) in a viscous liquid (cement paste). On a macroscopic scale, concrete flows as a liquid. It is known that the rheological behavior of concrete is close to that of a Bingham fluid, and two rheological parameters are needed to describe it: yield stress and plastic viscosity. The aim of this paper is to present an initial rheological description of high performance concretes using the modified slump test. According to the results, yield stress increased over time, while plastic viscosity varied only slightly. The incorporation of silica fume changed the rheological properties of the fresh concrete, and the behavior of these materials also varied with the mixing procedure employed in their production. The addition of superplasticizer greatly reduced the mixture's yield stress, while plastic viscosity remained practically constant.
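
    For reference, the Bingham model named above relates shear stress to shear rate through exactly the two parameters measured here; this is the standard textbook relation, not a formula quoted from the paper:

    ```latex
    % Bingham model: flow occurs only once the yield stress tau_0 is exceeded;
    % mu_p is the plastic viscosity and dot(gamma) the shear rate.
    \begin{equation}
      \tau = \tau_0 + \mu_p \, \dot{\gamma}, \qquad \tau > \tau_0
    \end{equation}
    ```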

  17. High Performance Graphene Oxide Based Rubber Composites

    Science.gov (United States)

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-08-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in preventing the aggregation of GO sheets but also acts as an interface bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO are comparable to those of an SBR composite reinforced with 13.1 vol.% carbon black (CB), with a lower mass density and good gas-barrier ability as well. The present work also showed that a GO-silica/SBR composite exhibits outstanding wear resistance and low rolling resistance, which make GO-silica/SBR very competitive for green tire applications, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  19. Design of high performance CMC brake discs

    Energy Technology Data Exchange (ETDEWEB)

    Krenkel, W.; Henke, T. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Stuttgart (Germany)

    1999-03-01

    Ceramic matrix composite (CMC) materials based on 2D carbon fibre preforms show high heat-absorption capacities and good tribological as well as thermomechanical properties. To exploit the full lightweight potential of these new materials in high performance automotive brake discs, the thermal conductivity transverse to the friction surface has to be high in order to reduce the surface temperature. Experimental tests showed that lower surface temperatures prevent overheating of the brake's periphery and stabilize the friction behaviour. In this study, different design approaches with improved transverse heat conductivity have been investigated by finite element analysis. C/C-SiC bolts, SiC coatings and combinations of the two have been investigated and compared with an orthotropic brake disc, showing a temperature reduction of up to 50%. Full-size C/C-SiC brake discs have been manufactured and tested under real conditions, verifying the calculations. Using only low-cost CMC materials and avoiding any additional processing steps, the potential of C/C-SiC brake discs is very attractive from both a tribological and an economical point of view. (orig.) 4 refs.

  20. Simulations and Measurements of Physics Debris Losses at the 4 Tev LHC

    CERN Document Server

    Marsili, A; Cerutti, F; Redaelli, S

    2013-01-01

    At the Large Hadron Collider (LHC), dedicated physics debris collimators protect the machine from the collision products of the high-luminosity experiments. These collimators reduce the risk of quenches by stopping physics debris losses. Several measurements have been performed at 4 TeV, with peak luminosity values up to 4×10^33 cm^-2·s^-1, to assess the need for these devices and to optimize their settings. In this paper, the measurement results are presented and compared with SixTrack simulations of beam losses in IR1 and IR5 for the same conditions.

  1. LLNL Final Design for PDV Measurements of Godiva for Validation of Multi-Physics Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Heinrichs, David [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Nuclear Criticality Safety Division; Scorby, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Nuclear Criticality Safety Division; Bandong, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Chemical Sciences Division; Beller, Tim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Burch, Jennifer [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Nuclear Criticality Safety Division; Goda, Joetta [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Halvorson, Craig [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Global Security N Program and National Security Engineering Division; Hickman, David [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Dosimetry Lab.; May, Mark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). B-Division; Sinibaldi, Jose [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). B-Division; Whitworth, Tony [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Defense Technologies Engineering Division; Klingensmith, Amanda [National Security Technologies, LLC. (NSTec), Mercury, NV (United States). National Center for Nuclear Security (NCNS)

    2014-03-18

    This document is a Final Design (CED-2) Report for IER-268, “PDV Measurements of Godiva for Validation of Multi-Physics Simulation”. The experiments will measure surface velocities at several locations, the initial α, and the α-t curve as a function of fission yield using existing LLNL assets, including: the portable Photonic Doppler Velocimeter (PDV) detector; the “alpha box”; and aluminum-encapsulated 235U fission foils. These experiments will be simulated using multi-physics methods funded separately under LLNL-AM2. A supercritical benchmark specification will be developed and, if funding permits, an ICSBEP (or classified equivalent) evaluation will be published.

  2. Green Schools as High Performance Learning Facilities

    Science.gov (United States)

    Gordon, Douglas E.

    2010-01-01

    In practice, a green school is the physical result of a consensus process of planning, design, and construction that takes into account a building's performance over its entire 50- to 60-year life cycle. The main focus of the process is to reinforce optimal learning, a goal very much in keeping with the parallel goals of resource efficiency and…

  3. 24 CFR 902.71 - Incentives for high performers.

    Science.gov (United States)

    2010-04-01

    Title 24 (Housing and Urban Development), Public Housing Assessment System (PHAS), Incentives and Remedies, § 902.71 Incentives for high performers: (a) Incentives for high performer PHAs. A PHA that is designated a high performer will be...

  4. An Improved Coupling of Numerical and Physical Models for Simulating Wave Propagation

    DEFF Research Database (Denmark)

    Yang, Zhiwen; Liu, Shu-xue; Li, Jin-xuan

    2014-01-01

    An improved coupling of numerical and physical models for simulating 2D wave propagation is developed in this paper. In the proposed model, an unstructured finite element model (FEM) based on the Boussinesq equations is applied for the numerical wave simulation, and a 2D piston-type wavemaker is used for the physical wave generation. An innovative scheme combining fourth-order Lagrange interpolation and a Runge-Kutta scheme is described for solving the coupling equation. A transfer function modulation method is presented to minimize the errors induced by the hydrodynamic invalidity of the coupling model and/or the mechanical limitations of the wavemaker in areas where nonlinearities or dispersion predominate. The overall performance and applicability of the coupling model have been experimentally validated for both regular and irregular waves and varying bathymetry. Experimental results show...

  5. THE COMPARISON BETWEEN COMPUTER SIMULATION AND PHYSICAL MODEL IN CALCULATING ILLUMINANCE LEVEL OF ATRIUM BUILDING

    Directory of Open Access Journals (Sweden)

    Sushardjanti Felasari

    2003-01-01

    This research examines the accuracy of computer programmes that simulate the illuminance level in atrium buildings, compared with measurements in physical models. The case study was an atrium building with four roof types: a pitched roof, a barrel vault roof, a monitor roof (both monitor pitched and monitor barrel vault), and a north-light roof (in both north and south orientations). The results show both agreement and disagreement between the methods: they show the same pattern of daylight distribution, but in terms of daylight factors the computer simulation tends to underestimate relative to the physical model measurements, while for average and minimum illuminance it tends to overestimate.

  6. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    being developed to protect many types of surfaces that are at risk from microbial contamination: kitchen countertops, protective apparel, and ship... These simulations will assist in understanding the mechanisms by which antimicrobial peptides contact, penetrate, and puncture bacterial cell... platforms. One such problem is the determination of optimal wing shapes and motions. Work in progress involves coupling the PDE-solver AERO-F and

  7. Physical Model and Mesoscale Simulation of Mortar and Concrete Deformations under Freeze–Thaw Cycles

    OpenAIRE

    Gong, Fuyuan; Sicat, Evdon; Wang, Yi; Ueda, Tamon; Zhang, Dawei

    2014-01-01

    The degradation of concrete under multiple freeze–thaw cycles is an important issue for structures in cold and wet regions. This paper proposes a physical and mechanical model to explain the deformation behavior observed in previous experiments, from internal pressure calculation to mesoscale simulation, for both closed and open freeze–thaw tests. Three kinds of internal pressure are considered in this study: hydraulic pressure due to ice volume expansion, crystallization pressure...

  8. Simulation system for radiology education integration of physical and virtual realities: Overview and software considerations

    OpenAIRE

    Ali A Alghamdi

    2015-01-01

    Introduction: The aim of the proposed system is to give students a flexible, realistic, and interactive learning environment in which to study the physical limits of different postures and various imaging procedures. The suggested system will also familiarise students with various imaging modalities, the anatomical structures that are observable under different X-ray tube settings, and the quality of the resulting image. Current teaching practice for the radiological sciences asks students to simulate t...

  9. A mathematical and Physical Model Improves Accuracy in Simulating Solid Material Relaxation Modulus and Viscoelastic Responses

    OpenAIRE

    Xu, Qinwu; Engquist, Bjorn

    2014-01-01

    We propose a new viscoelastic material model and mathematical solution to simulate the relaxation modulus and viscoelastic response. The model formula for the relaxation modulus is extended from a sigmoidal function, accounting for nonlinear strain hardening and softening. Its physical mechanism can be interpreted by a spring-network viscous-medium model with only five parameters, in a simpler format than the molecular-chain-based polymer models, to represent general materials. We also developed a three-dimen...

  10. Computer simulations in teaching physics: Development and implementation of a hypermedia system for high school teachers

    Science.gov (United States)

    da Silva, A. M. R.; de Macêdo, J. A.

    2016-06-01

    Motivated by advancing technology and the difficulty students have in learning physics, this article describes the design and implementation of a hypermedia system for high school teachers involving computer simulations for teaching basic concepts of electromagnetism, built with free tools. With the completion and publication of the project, there will be a new possibility for students and teachers to interact with technology in the classroom and in labs.

  11. Audio-haptic physically-based simulation of walking on different grounds

    DEFF Research Database (Denmark)

    Turchet, Luca; Nordahl, Rolf; Serafin, Stefania

    2010-01-01

    We describe a system which simulates in real time the auditory and haptic sensations of walking on different surfaces. The system is based on a pair of sandals enhanced with pressure sensors and actuators. The pressure sensors detect the interaction force during walking and control several physically based synthesis algorithms, which drive both the auditory and haptic feedback. The different hardware and software components of the system are described, together with possible uses and possibilities for improvement in future design iterations.

  12. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12 MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  13. High-performance laboratories and cleanrooms

    Energy Technology Data Exchange (ETDEWEB)

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts; it also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations, primarily safety driven, that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California, including semiconductor manufacturing, semiconductor suppliers, pharmaceuticals, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  14. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    ... media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree software keeps track of all files in the system, automatically migrates less frequently used files to archive media, and stages the files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).

  15. Development of a multi-physics simulation framework for semiconductor materials and devices

    Science.gov (United States)

    Almeida, Nuno Sucena

    Modern semiconductor technology devices face the ever increasing issue of accounting for quantum mechanical effects in their modeling and performance assessment. The objective of this work is to create a user-friendly, extensible and powerful multi-physics simulation black box for nano-scale semiconductor devices. By using a graphical device modeller, this work provides a friendly environment where a user without deep knowledge of device physics can create a device, simulate it and extract the optical and electrical characteristics of interest to his engineering occupation. Built on advanced template-based object-oriented C++ design from the start, this work implements algorithms to simulate 1D, 2D and 3D devices, which, along with scripting in the well known Python language, enables the user to create batch simulations to better optimize device performance. Higher-dimensional semiconductor structures, like wires and dots, carry a huge computational cost; MPI parallel libraries enable the software to tackle complex geometries which would otherwise be unfeasible on a small single-CPU computer. Quantum mechanical phenomena are described by Schrödinger's equation, which must be solved self-consistently with Poisson's equation for the electrostatic charge and, if required, augmented with piezoelectric charge terms from elasticity constraints. Since the software implements a generic n-dimensional FEM engine, virtually any kind of partial differential equation can be solved, and in the future other solvers besides the ones already implemented will be included for ease of use. In particular, for the semiconductor device physics, we solve the effective-mass conduction-valence band k·p approximation to the Schrödinger-Poisson system, in any crystal growth orientation (C, polar M, A and semi-polar planes, or any user-defined angle), and also include piezoelectric effects caused by strain in lattice-mismatched layers, where the implemented software
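
    The self-consistent Schrödinger-Poisson loop at the heart of such a solver can be sketched in one dimension as below. This is a hedged illustration in scaled units (ħ = m* = e = ε = 1) with hard-wall boundaries, finite differences, and simple linear mixing; it is not the thesis code, and all parameters are arbitrary.

    ```python
    # Hedged sketch: 1D self-consistent Schrodinger-Poisson iteration.
    import numpy as np

    n, L = 200, 1.0                         # grid points, well width (scaled units)
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    v_ext = np.zeros(n)                     # external potential: infinite square well
    n_el = 1.0                              # electron number in the ground state

    def ground_state(v):
        """Lowest eigenpair of H = -(1/2) d2/dx2 + V on the interior grid."""
        main = 1.0 / h**2 + v[1:-1]
        off = -0.5 / h**2 * np.ones(n - 3)
        H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        E, psi = np.linalg.eigh(H)
        phi = np.zeros(n)
        phi[1:-1] = psi[:, 0]
        phi /= np.sqrt(np.sum(phi**2) * h)  # normalize on the grid
        return E[0], phi

    def hartree(rho):
        """Solve V_H'' = -rho with V_H = 0 at both walls (scaled Poisson)."""
        A = (np.diag(-2.0 * np.ones(n - 2)) + np.diag(np.ones(n - 3), 1)
             + np.diag(np.ones(n - 3), -1)) / h**2
        vh = np.zeros(n)
        vh[1:-1] = np.linalg.solve(A, -rho[1:-1])
        return vh

    v = v_ext.copy()
    for it in range(200):                   # self-consistency loop, linear mixing
        e0, phi = ground_state(v)
        v_new = v_ext + hartree(n_el * phi**2)   # repulsive Hartree term
        if np.max(np.abs(v_new - v)) < 1e-6:
            break
        v = 0.9 * v + 0.1 * v_new
    print(f"after {it + 1} iterations: ground-state energy {e0:.4f} (scaled)")
    ```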

  16. High Performance Computing for Medical Image Interpretation

    Science.gov (United States)

    1993-10-01

    patterns from which the diagnoses can be made. A general problem arising from this modality is the detection of small ... From the physics point of view ... applied in, e.g., chest radiography and orthodontics (Scott and Symons (1982)). Computed Tomography (CT) applies to all techniques by which ... density in the z-direction towards its equilibrium value. T2 is the transverse or spin-spin relaxation time which governs the evolution of the

  17. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    Science.gov (United States)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes capable of modeling complex coupled physical and chemical processes have been developed for predicting the fate of CO2 in reservoirs, as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainty further increase the modeling challenges: two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with more than thousands of processors have become available to the scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. However, to make them a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing the implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million-grid models, including a simulation of the dissolution-diffusion-convection process, which requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  18. Using discrete-event simulation in strategic capacity planning for an outpatient physical therapy service.

    Science.gov (United States)

    Rau, Chi-Lun; Tsai, Pei-Fang Jennifer; Liang, Sheau-Farn Max; Tan, Jhih-Cian; Syu, Hong-Cheng; Jheng, Yue-Ling; Ciou, Ting-Syuan; Jaw, Fu-Shan

    2013-12-01

    This study uses a simulation model as a tool for strategic capacity planning for an outpatient physical therapy clinic in Taipei, Taiwan. The clinic provides a wide range of physical treatments, with 6 full-time therapists in each session. We constructed a discrete-event simulation model to study the dynamics of patient mixes with realistic treatment plans, and to estimate the practical capacity of the physical therapy room. The changes in time-related and space-related performance measurements were used to evaluate the impact of various strategies on the capacity of the clinic. The simulation results confirmed that the clinic is extremely patient-oriented, with a bottleneck occurring at the traction units for Intermittent Pelvic Traction (IPT), with usage at 58.9%. Sensitivity analysis showed that attending to more patients would significantly increase the number of patients staying for overtime sessions. We found that pooling the therapists produced beneficial results: the average waiting time per patient could be reduced by 45% when we pooled 2 therapists. We also found that treating up to 12 new patients per session had no significant negative impact on returning patients, and that the average waiting time for new patients decreased if they were given priority over returning patients when called by the therapists.
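
    The modeling idea generalizes readily; the sketch below uses the open-source SimPy discrete-event library to mimic a session in which patients queue for a pool of therapists while waiting times are collected. It is not the authors' model, and the arrival rate, treatment time, and session length are invented placeholders.

    ```python
    # Hedged sketch: patients arrive, queue for a therapist pool, get treated.
    import random
    import simpy

    N_THERAPISTS, SESSION_MIN = 6, 180
    waits = []

    def patient(env, therapists):
        arrived = env.now
        with therapists.request() as req:
            yield req                                      # queue for a therapist
            waits.append(env.now - arrived)
            yield env.timeout(random.expovariate(1 / 25))  # ~25 min treatment

    def arrivals(env, therapists):
        while True:
            yield env.timeout(random.expovariate(1 / 8))   # ~8 min between arrivals
            env.process(patient(env, therapists))

    random.seed(1)
    env = simpy.Environment()
    pool = simpy.Resource(env, capacity=N_THERAPISTS)
    env.process(arrivals(env, pool))
    env.run(until=SESSION_MIN)
    print(f"{len(waits)} patients served, mean wait {sum(waits)/len(waits):.1f} min")
    ```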

  19. SmartSIM - a virtual reality simulator for laparoscopy training using a generic physics engine.

    Science.gov (United States)

    Khan, Zohaib Amjad; Kamal, Nabeel; Hameed, Asad; Mahmood, Amama; Zainab, Rida; Sadia, Bushra; Mansoor, Shamyl Bin; Hasan, Osman

    2017-09-01

    Virtual reality (VR) training simulators have started playing a vital role in enhancing surgical skills, such as hand-eye coordination in laparoscopy, and in practicing surgical scenarios that cannot be easily created using physical models. We describe a new VR simulator for basic training in laparoscopy, SmartSIM, which has been developed using a generic open-source physics engine called the Simulation Open Framework Architecture (SOFA). This paper describes the systems perspective of SmartSIM, including design details of both hardware and software components, while highlighting the critical design decisions. Some of the distinguishing features of SmartSIM include: (i) an easy-to-fabricate custom-built hardware interface; (ii) use of a generic physics engine to facilitate wider accessibility of our work and flexibility in terms of using various graphical modelling algorithms and their implementations; and (iii) an intelligent and smart evaluation mechanism that facilitates unsupervised and independent learning. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Evaluation of spacer grid spring characteristics by means of physical tests and numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Schettino, Carlos Frederico Mattos, E-mail: carlosschettino@inb.gov.br [Industrias Nucleares do Brasil (INB), Resende, RJ (Brazil)

    2017-11-01

    Among all of a fuel assembly's components, the spacer grids play an important structural role during energy generation, mainly due to their primary functional requirement: to provide fuel rod support. The present work aims to evaluate the spring characteristics of a specific spacer grid design used in a 16×16 PWR fuel assembly. These spring characteristics comprise the load-versus-deflection capability and the spring rate, which are very important, and indeed mandatory, to establish correctly in order to preclude fretting between the spacer grid spring and the fuel rod cladding during operation, as well as to prevent excessive fuel rod buckling. The study includes physical tests and numerical simulation. The tests were performed on an adapted load-cell mechanical device, using a single strap of the spacer grid as the specimen. Three numerical models were prepared using the finite element method with the support of the commercial code ANSYS: one was built to validate the simulation against the physical test, another included a temperature gradient (beginning-of-life hot condition), and the third evaluated the spacer grid spring characteristics in the end-of-life condition. The results from the physical test and the numerical model showed good agreement, validating the simulation. The numerical models provide information relevant to the spacer grid design purpose, such as the behavior of the fuel rod cladding support during operation. These evaluations could therefore be useful in improving the spacer grid design. (author)
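
    A minimal sketch of the load-versus-deflection reduction step: the spring rate is the slope of a linear fit over the elastic range of the measured curve. The data points below are invented for illustration only.

    ```python
    # Hedged sketch: spring rate from a measured load-deflection curve.
    import numpy as np

    deflection_mm = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # invented test data
    load_n = np.array([0.0, 4.1, 8.3, 12.2, 16.5])

    k, f0 = np.polyfit(deflection_mm, load_n, 1)          # slope = spring rate
    print(f"spring rate ~ {k:.1f} N/mm (intercept {f0:.2f} N)")
    ```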

  1. Understanding and Improving High-Performance I/O Subsystems

    Science.gov (United States)

    El-Ghazawi, Tarek A.; Frieder, Gideon; Clark, A. James

    1996-01-01

    This research program has been conducted in the framework of the NASA Earth and Space Science (ESS) evaluations led by Dr. Thomas Sterling. In addition to many research findings important to NASA and the resulting publications, the program helped orient the doctoral research of two students towards parallel input/output in high-performance computing. Further, the experimental results in the case of the MasPar were very useful to MasPar, with whose technical management the P.I. has had many interactions. The contributions of this program are drawn from three experimental studies conducted on different high-performance computing testbeds/platforms, and are therefore presented in three segments as follows: 1. Evaluating the parallel input/output subsystem of NASA high-performance computing testbeds, namely the MasPar MP-1 and MP-2; 2. Characterizing the physical input/output request patterns for NASA ESS applications, which used the Beowulf platform; and 3. Dynamic scheduling techniques for hiding I/O latency in parallel applications such as sparse matrix computations. This last study was conducted on the Intel Paragon and also provided an experimental evaluation of the Parallel File System (PFS) and parallel input/output on the Paragon. This report is organized as follows: the summary of findings discusses the results of each of the aforementioned three studies, and three appendices, each containing a key scholarly research paper, detail the work of the individual studies.

  2. MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Carlone, M; Harnett, N [Princess Margaret Hospital, Toronto, ON (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario (Canada); Harris, W [Duke University Medical Physics Graduate Program, Durham NC (United States); Norrlinger, B [Princess Margaret Hospital, Toronto, ON (Canada); MacPherson, M [The Ottawa Hospital, Ottawa, Ontario (Canada); Lamey, M [Trillium Health Partners, Mississauga, Ontario (Canada); Oldham, M [Duke University Medical Medical Center, Durham NC (United States); Duke University Medical Physics Graduate Program, Durham NC (United States); Anderson, R

    2016-06-15

    Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linacs). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed behind a graphical user interface to produce an environment similar to that of linac service mode. The software, “SIMAC”, has been used as a learning aid in a professional development course three times (2014-2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading, the effect of steering coil currents on beam symmetry, and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists, regulators, and service engineers; 1-week course) as well as 20 graduate students (1-month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results showed that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e. energy changes), the majority of students were able to perform these beam adjustments correctly. Conclusion: Software simulation (SIMAC) can be used to effectively teach linac physics. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times with conventional training methods.

  3. Impact of the choice of physics list on GEANT4 simulations of hadronic showers in tungsten

    CERN Document Server

    Speckmayer, P

    2010-01-01

    The development of pion-induced showers in a large block of matter (tungsten, lead, iron) is simulated for pions from 1 to 50 GeV. Two GEANT4 physics lists (QGSP_BERT and QGSP_BERT_HP) are compared. The deposited energy at each step of the simulation is classified as visible, invisible or escaped. It is shown that for tungsten, in most of the hadronic showers, more than 90% of the energy is deposited visibly if QGSP_BERT is used. This fraction drops to only 60% for QGSP_BERT_HP; the latter fraction is similar to lead, even when QGSP_BERT is used for the simulation. The impact of this behaviour on the energy resolution of a sampling calorimeter with scintillator as the active material is shown. Although more energy is deposited visibly for QGSP_BERT than for QGSP_BERT_HP, the reconstructed energy resolution is about 5 to 10% better for the latter.

  4. Circuit simulation and physical implementation for a memristor-based colpitts oscillator

    Directory of Open Access Journals (Sweden)

    Hongmin Deng

    2017-03-01

    This paper implements two kinds of memristor-based Colpitts oscillators, namely circuits where the memristor is added into the feedback network of the oscillator in parallel and in series, respectively. First, a MULTISIM simulation circuit for the memristive Colpitts oscillator is built, in which an emulator constructed from off-the-shelf components is used to replace the memristor. Then the physical system is implemented according to the MULTISIM simulation circuit. Circuit simulation and experimental study show that this memristive Colpitts oscillator can exhibit periodic, quasi-periodic, and chaotic behaviors as certain parameters vary. Moreover, the circuit is, in a sense, robust to variations in circuit parameters and device types.

  5. A physical-based gas-surface interaction model for rarefied gas flow simulation

    Science.gov (United States)

    Liang, Tengfei; Li, Qi; Ye, Wenjing

    2018-01-01

    Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as boundary conditions in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics (MD) simulations can accurately resolve the gas-surface interaction process at the atomic scale, and hence can predict macroscopic behavior accurately, but they are too computationally expensive to be applied to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations of a boundary condition, is developed within the framework of the washboard model. By virtue of its physical basis, the new model is capable of capturing some important relations and trends that the classic empirical models fail to model correctly. As such, the new model is much more accurate than the classic models while remaining far more efficient than MD simulations, and it can therefore serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
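
    For contrast with the proposed model, the classic Maxwell boundary condition mentioned above can be sketched as follows: with probability equal to the accommodation coefficient, an incident molecule is re-emitted diffusely with velocities drawn from a wall-temperature Maxwellian (flux-weighted in the wall-normal direction); otherwise it reflects specularly. The gas species, wall temperature, and coefficient below are illustrative assumptions.

    ```python
    # Hedged sketch of the Maxwell gas-surface boundary condition.
    import numpy as np

    rng = np.random.default_rng(0)
    kB, m = 1.380649e-23, 6.63e-26        # Boltzmann constant (J/K), argon mass (kg)

    def reflect(v, alpha=0.9, t_wall=300.0):
        """v: incident velocity (vx, vy, vz) with vz < 0 toward the wall at z=0."""
        if rng.random() < alpha:          # diffuse: sample the wall Maxwellian
            s = np.sqrt(kB * t_wall / m)
            vx, vy = rng.normal(0.0, s, 2)
            vz = s * np.sqrt(-2.0 * np.log(rng.random()))  # flux-weighted normal speed
            return np.array([vx, vy, vz])
        v_out = v.copy()                  # specular: flip the normal component
        v_out[2] = -v_out[2]
        return v_out

    print(reflect(np.array([150.0, -30.0, -400.0])))
    ```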

  6. A high performance SAR ADC for WLAN analog front end

    Science.gov (United States)

    Lian, Pengfei; Yi, Bo; Wu, Bin; Wang, Han; Pu, Yilin

    2017-08-01

    A 10-bit 100-MS/s successive approximation register (SAR) analog-to-digital converter (ADC) for a WLAN analog front end is presented. To ensure high performance and low power, we present a method based on the figure of merit (FOM) to obtain the optimal unit capacitance of the digital-to-analog converter (DAC) capacitor network. With this method, we obtain a minimum FOM of 17.92 fJ/conv.-step with an optimal unit capacitance of 1.59 fF. Moreover, to ensure high performance of the dynamic comparator, a comparator clock logic is presented. Post-simulation results in 55 nm CMOS technology show that this 10-bit 100-MS/s ADC achieves a signal-to-noise-and-distortion ratio (SNDR) of 61.7 dB, a signal-to-noise ratio (SNR) of 63.7 dB, and a spurious-free dynamic range (SFDR) of 72.5 dB with a 1.3 V supply. The ADC consumes 1.67 mW, and the active area is only 0.0162 mm2.
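
    The quoted FOM follows the standard Walden definition, FOM = P / (2^ENOB × fs) with ENOB = (SNDR − 1.76)/6.02, which can be checked directly from the abstract's numbers; the small gap to the quoted 17.92 fJ/conv.-step presumably reflects the design-phase optimization rather than these post-simulation values.

    ```python
    # Sanity-check sketch of the Walden figure of merit, using the abstract's numbers.
    sndr_db, fs_hz, power_w = 61.7, 100e6, 1.67e-3

    enob = (sndr_db - 1.76) / 6.02            # effective number of bits
    fom = power_w / (2**enob * fs_hz)         # J per conversion step
    print(f"ENOB = {enob:.2f} bits, FOM = {fom * 1e15:.1f} fJ/conv-step")
    ```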

  7. High performance parallel computing of flows in complex geometries: II. Applications

    Energy Technology Data Exchange (ETDEWEB)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F [Computational Fluid Dynamics Team, CERFACS, Toulouse, 31057 (France); Poinsot, T [Institut de Mecanique des Fluides de Toulouse, Toulouse, 31400 (France)], E-mail: Nicolas.gourdain@cerfacs.fr

    2009-01-01

    Present regulations in terms of pollutant emissions, noise and economic constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system, not only isolated components. However, these aspects are still not well taken into account by numerical approaches, nor well understood, whatever the design stage considered. The main challenge is essentially the computational requirements such complex systems impose when they are to be simulated on supercomputers. This paper shows how these new challenges can be addressed by using parallel computing platforms for distinct elements of more complex systems, as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation, and deal with turbulent unsteady flows such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented for these complex industrial applications.

  8. MPPhys—A many-particle simulation package for computational physics education

    Science.gov (United States)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a principal idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual exploration. Catalogue identifier: AERR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 111327. No. of bytes in distributed program, including test data, etc.: 608411. Distribution format: tar.gz. Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: source code 4.5 MB, complete package 242 MB. Classification: 14, 16.9. External routines: OpenGL, OpenCL. Nature of problem: integrate N-body simulations, mass-spring models. Solution method: numerical integration of N-body simulations, 3D rendering via OpenGL. Running time: problem dependent.
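
    The "missing link" the abstract describes, solving the equations of motion numerically with an Euler-type scheme, fits in a few lines. The sketch below integrates a toy gravitational three-body system with the semi-implicit Euler method (G = 1); the masses, initial conditions, and softening length are arbitrary illustrations, not part of MPPhys.

    ```python
    # Hedged sketch: semi-implicit Euler integration of a toy N-body system.
    import numpy as np

    pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])
    vel = np.array([[0.0, 0.0], [0.0, 1.0], [-0.8, 0.0]])
    mass = np.array([1.0, 0.01, 0.01])
    dt, steps, eps = 1e-3, 5000, 1e-3        # eps softens close encounters

    def accelerations(pos):
        acc = np.zeros_like(pos)
        for i in range(len(pos)):
            r = pos - pos[i]                             # vectors to other bodies
            d3 = (np.sum(r**2, axis=1) + eps**2) ** 1.5
            d3[i] = np.inf                               # no self-force
            acc[i] = np.sum(mass[:, None] * r / d3[:, None], axis=0)
        return acc

    for _ in range(steps):
        vel += dt * accelerations(pos)   # semi-implicit Euler: update v first,
        pos += dt * vel                  # then x with the new v (better stability)
    print(pos)
    ```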

  9. Designing Open Source Computer Models for Physics by Inquiry using Easy Java Simulation

    CERN Document Server

    Wee, Loo Kang

    2012-01-01

    The Open Source Physics community has created hundreds of physics computer models (Wolfgang Christian, Esquembre, & Barbato, 2011; F. K. Hwang & Esquembre, 2003), which are mathematical computational representations of real-life physics phenomena. Since the source codes are available and can be modified for redistribution under the Creative Commons Attribution licence or other compatible copyrights like the GNU General Public License (GPL), educators can customize (Wee & Mak, 2009) these models for more targeted productive (Wee, 2012) activities for their classroom teaching and redistribute them to benefit all humankind. In this interactive event, we will share the basics of using the free authoring toolkit called Easy Java Simulation (W. Christian, Esquembre, & Mason, 2010; Esquembre, 2010) so that participants can modify the open source computer models for their own learning and teaching needs. These computer models have the potential to provide the experience and context, essential for deepening students c...

  10. Simulations of oscillatory systems with award-winning software, physics of oscillations

    CERN Document Server

    Butikov, Eugene I

    2015-01-01

    Deepen your students' understanding of oscillations through interactive experiments. Simulations of Oscillatory Systems: with Award-Winning Software, Physics of Oscillations provides a hands-on way of visualizing and understanding the fundamental concepts of the physics of oscillations. Both the textbook and software are designed as exploration-oriented supplements for courses in general physics and the theory of oscillations. The book is conveniently structured according to mathematical complexity. Each chapter in Part I contains activities, questions, exercises, and problems of varying levels of difficulty, from straightforward to quite challenging. Part II presents more sophisticated, highly mathematical material that delves into the serious theoretical background for the computer-aided study of oscillations. The software package allows students to observe the motion of linear and nonlinear mechanical oscillatory systems and to obtain plots of the variables that describe the systems along with phase diagram...
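
    In the same spirit as the software's experiments, a phase trajectory of a damped, sinusoidally driven pendulum can be traced numerically; the sketch below uses SciPy's ODE integrator, with damping, drive amplitude, and drive frequency chosen as arbitrary illustrations unrelated to the book's examples.

    ```python
    # Hedged sketch: phase-space trajectory (theta, omega) of a driven pendulum.
    import numpy as np
    from scipy.integrate import solve_ivp

    b, f0, w_drive = 0.25, 1.2, 2.0 / 3.0   # damping, drive amplitude, drive freq

    def pendulum(t, y):
        theta, omega = y
        return [omega, -np.sin(theta) - b * omega + f0 * np.cos(w_drive * t)]

    sol = solve_ivp(pendulum, (0.0, 200.0), [0.1, 0.0], max_step=0.05)
    theta, omega = sol.y                     # plot omega vs. theta for the phase diagram
    print(f"{sol.t.size} samples; theta range [{theta.min():.2f}, {theta.max():.2f}]")
    ```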

  11. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for the development of future high performance computing capability by DOE ASCR.

  12. Integrating advanced facades into high performance buildings

    Energy Technology Data Exchange (ETDEWEB)

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material, but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed through daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; reduced operating costs by minimizing lighting, cooling and heating energy use through optimized daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; and improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  13. High Performance Commercial Fenestration Framing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of the building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability, and aluminum, being an easily recyclable material, also offers sustainable features. From an energy efficiency point of view, however, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above; in addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in this market. Hence it is imperative to improve the performance of aluminum framing systems in order to improve the energy performance of commercial fenestration systems, and in turn reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  14. New High Performance Deterministic Interleavers for Turbo Codes

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2010-05-01

    Turbo codes offer extraordinary performance, especially at low signal-to-noise ratios, due to a low multiplicity of low-weight code words. The interleaver design is critical in order to realize the apparent randomness of the code, further enhancing its performance, especially for short block frames. This paper presents four new deterministic interleaver design methods that lead to high-performing turbo coding systems, namely the block-spread and block-backtracking interleavers and their variations, the linearly-spread and linearly-backtracking interleavers. The design methods are explained in depth and the results are compared against some of the most widespread turbo code interleavers. Furthermore, the selection method for the generator polynomials used in the simulations is explained.
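
    As background for the designs above, the sketch below implements the baseline row-column block interleaver: symbols are written into a matrix row by row and read out column by column. The paper's block-spread and block-backtracking constructions are more elaborate; this shows only the generic idea they build on.

    ```python
    # Hedged sketch: generic row-column block interleaver.
    import numpy as np

    def block_interleave(bits, rows, cols):
        """Write row-by-row into a rows x cols array, read column-by-column."""
        assert len(bits) == rows * cols
        return np.asarray(bits).reshape(rows, cols).T.flatten()

    data = np.arange(12)                      # stand-in for one code block
    perm = block_interleave(data, rows=3, cols=4)
    print(perm)                               # [0 4 8 1 5 9 2 6 10 3 7 11]
    ```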

  15. Probabilistic performance-based design for high performance control systems

    Science.gov (United States)

    Micheli, Laura; Cao, Liang; Gong, Yongqiang; Cancelli, Alessandro; Laflamme, Simon; Alipour, Alice

    2017-04-01

    High performance control systems (HPCS) are advanced damping systems capable of high damping performance over a wide frequency bandwidth, ideal for the mitigation of multi-hazards. They include active, semi-active, and hybrid damping systems. However, HPCS are more expensive than typical passive mitigation systems, rely on power and hardware (e.g., sensors, actuators) to operate, and require maintenance. In this paper, a life cycle cost analysis (LCA) approach is proposed to estimate the economic benefit of these systems over the entire life of the structure. The novelty resides in embedding the life cycle cost analysis within performance-based design (PBD) tailored to multi-level wind hazards. This yields a probabilistic performance-based design approach for HPCS. Numerical simulations are conducted on a building located in Boston, MA. LCAs are conducted for passive control systems and HPCS, and the concept of controller robustness is demonstrated. Results highlight the promise of the proposed performance-based design procedure.
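    The record describes the methodology without equations; a minimal discounted life-cycle cost comparison of the kind the LCA implies could be sketched as below, where every number (costs, losses, discount rate) is hypothetical.

        # Hypothetical discounted life-cycle cost: initial cost plus the
        # present value of operation/maintenance and expected annual hazard
        # losses. All figures are invented for illustration.

        def life_cycle_cost(initial, annual_om, expected_annual_loss,
                            years=50, discount=0.03):
            pv = initial
            for t in range(1, years + 1):
                pv += (annual_om + expected_annual_loss) / (1 + discount) ** t
            return pv

        passive = life_cycle_cost(initial=1.0e6, annual_om=5.0e3,
                                  expected_annual_loss=8.0e4)
        hpcs = life_cycle_cost(initial=2.5e6, annual_om=3.0e4,
                               expected_annual_loss=2.0e4)
        print(f"passive: ${passive/1e6:.2f}M, HPCS: ${hpcs/1e6:.2f}M")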

  16. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY, performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
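    For readers unfamiliar with the two scaling notions used in the study, the sketch below shows how speed-up and parallel efficiency are typically computed from wall-clock times; the timings are invented placeholders, not DL_POLY measurements.

        # Strong scaling: fixed problem size, growing core count.
        # Weak scaling: problem size grows with core count (ideal: flat time).
        # All timings below are hypothetical.

        def strong_scaling(t_serial, times_by_cores):
            """Return {cores: (speedup, efficiency)}."""
            return {p: (t_serial / t, t_serial / (t * p))
                    for p, t in times_by_cores.items()}

        def weak_scaling_efficiency(t_base, times_by_cores):
            return {p: t_base / t for p, t in times_by_cores.items()}

        timings = {2: 520.0, 4: 270.0, 8: 150.0}  # seconds, hypothetical
        for p, (s, e) in strong_scaling(1000.0, timings).items():
            print(f"{p:2d} cores: speedup {s:.2f}, efficiency {e:.1%}")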

  17. NASA High Performance Computing and Communications program

    Science.gov (United States)

    Holcomb, Lee; Smith, Paul; Hunter, Paul

    1994-01-01

    The National Aeronautics and Space Administration's HPCC program is part of a new Presidential initiative aimed at producing a 1000-fold increase in supercomputing speed and a 100-fold improvement in available communications capability by 1997. As more advanced technologies are developed under the HPCC program, they will be used to solve NASA's 'Grand Challenge' problems, which include improving the design and simulation of advanced aerospace vehicles, allowing people at remote locations to communicate more effectively and share information, increasing scientists' abilities to model the Earth's climate and forecast global environmental trends, and improving the development of advanced spacecraft. NASA's HPCC program is organized into three projects which are unique to the agency's mission: the Computational Aerosciences (CAS) project, the Earth and Space Sciences (ESS) project, and the Remote Exploration and Experimentation (REE) project. An additional project, the Basic Research and Human Resources (BRHR) project, exists to promote long-term research in computer science and engineering and to increase the pool of trained personnel in a variety of scientific disciplines. This document presents an overview of the objectives and organization of these projects, as well as summaries of early accomplishments and the significance, status, and plans for individual research and development programs within each project. Areas of emphasis include benchmarking, testbeds, software and simulation methods.

  18. High Performance Embedded System for Real-Time Pattern Matching

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    We present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP) and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The design uses the flexibility of Field Programmable Gate Arrays (FPGAs) and a powerful Associative Memory chip (a custom ASIC) to achieve real-time performance. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain.

  19. Assessment of the high performance light water reactor concept

    Energy Technology Data Exchange (ETDEWEB)

    Starflinger, J. [Univ. of Stuttgart, IKE, (Germany); Schulenberg, T. [Karlsruhe Inst. of Tech., Karlsruhe (Germany); Bittermann, D. [AREVA NP GmbH, Erlangen (Germany); Andreani, M. [Paul Scherrer Inst., Villigen (Switzerland); Maraczy, C. [AEKI-KFKI, Budapest (Hungary)

    2011-07-01

    From 2006-2010, the High Performance Light Water Reactor (HPLWR) was investigated within a European funded project called HPLWR Phase 2. Operated at 25 MPa with a heat-up in the core from 280 °C to 500 °C, this reactor concept poses technological challenges in the fields of design, neutronics, thermal-hydraulics and heat transfer, materials, and safety. The assessment of the concept against the goals of the technology roadmap for Generation IV nuclear reactors of the Generation IV International Forum shows that the HPLWR has the potential to fulfil the goals of economics, safety, and proliferation resistance and physical protection. In terms of sustainability, the HPLWR with a thermal neutron spectrum, as investigated within this project, does not differ from existing Light Water Reactors in terms of fuel usage and waste production. (author)

  20. High Performance Embedded System for Real-Time Pattern Matching

    CERN Document Server

    Sotiropoulou, Calliope Louisa; The ATLAS collaboration; Gkaitatzis, Stamatios; Citraro, Saverio; Giannetti, Paola; Dell'Orso, Mauro

    2016-01-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics (HEP) and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturised version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory (AM) chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering...

  1. High performance coronagraphy for direct imaging of exoplanets

    Directory of Open Access Journals (Sweden)

    Guyon O.

    2011-07-01

    Full Text Available Coronagraphy has recently been an extremely active field of research, with several high performance concepts proposed and several new coronagraphs tested in laboratories and at telescopes. Coronagraph concepts can be grouped into a few broad categories: Lyot-type coronagraphs, pupil apodization and nulling interferometers. Among existing coronagraph concepts, several approach the fundamental performance limit imposed by the physical nature of light. To achieve their full potential, coronagraphs require exquisite wavefront control and calibration. This has been, and still is, the main bottleneck for the scientifically productive use of coronagraphs on ground-based telescopes. New and promising wavefront sensing techniques suitable for high contrast imaging have however been developed in the last few years and are starting to be realized in laboratories. I will review some of these enabling technologies, and show that coronagraphs are now ready for “prime time” on existing and future telescopes.

  2. Do Danes enjoy a high performing chronic care system?

    DEFF Research Database (Denmark)

    Juul, Annegrete; Olejaz, Maria; Rudkjøbing, Andreas

    2012-01-01

    The trends in population health in Denmark are similar to those in most Western European countries. Major health issues include, among others, the high prevalence of chronic illnesses and lifestyle-related risk factors such as obesity, tobacco, physical inactivity and alcohol. This has pressed the health system towards a model of provision of care based on the management of chronic conditions. While the Chronic Care Model was introduced in 2005, the Danish health system does not fulfil the ten key preconditions that would characterise a high-performing chronic care system. As revealed in a recent report, the fragmented structure of the Danish health system poses challenges in providing effectively coordinated care to patients with chronic diseases.

  3. Vertical structure and physical processes of the Madden-Julian oscillation: Exploring key model physics in climate simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Xianan [Joint Institute for Regional Earth System Science and Engineering, University of California, Los Angeles California USA; Jet Propulsion Laboratory, California Institute of Technology, Pasadena California USA]; Waliser, Duane E. [Joint Institute for Regional Earth System Science and Engineering, University of California, Los Angeles California USA; Jet Propulsion Laboratory, California Institute of Technology, Pasadena California USA]; Xavier, Prince K. [UK Met Office, Exeter UK]; Petch, Jon [UK Met Office, Exeter UK]; Klingaman, Nicholas P. [National Centre for Atmospheric Science and Department of Meteorology, University of Reading, Reading UK]; Woolnough, Steven J. [National Centre for Atmospheric Science and Department of Meteorology, University of Reading, Reading UK]; Guan, Bin [Joint Institute for Regional Earth System Science and Engineering, University of California, Los Angeles California USA; Jet Propulsion Laboratory, California Institute of Technology, Pasadena California USA]; Bellon, Gilles [CNRM-GAME, Centre National de la Recherche Scientifique/Météo-France, Toulouse France]; Crueger, Traute [Max Planck Institute for Meteorology, Hamburg Germany]; DeMott, Charlotte [Department of Atmospheric Science, Colorado State University, Fort Collins Colorado USA]; Hannay, Cecile [National Center for Atmospheric Research, Boulder Colorado USA]; Lin, Hai [Environment Canada, Dorval Quebec Canada]; Hu, Wenting [State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing China]; Kim, Daehyun [Lamont-Doherty Earth Observatory, Columbia University, New York New York USA]; Lappen, Cara-Lyn [Department of Atmospheric Science, Texas A&M University, College Station Texas USA]; Lu, Mong-Ming [Central Weather Bureau, Taipei Taiwan]; Ma, Hsi-Yen [Lawrence Livermore National Laboratory, Livermore California USA]; Miyakawa, Tomoki [Department of Coupled Ocean-Atmosphere-Land Processes Research, Japan Agency for Marine-Earth Science and Technology, Yokosuka Japan]; Ridout, James A. [Naval Research Laboratory, Monterey California USA]; Schubert, Siegfried D. [Global Modeling and Assimilation Office, NASA GSFC, Greenbelt Maryland USA]; Scinocca, John [Canadian Centre for Climate Modelling and Analysis, Environment Canada, Victoria British Columbia Canada]; Seo, Kyong-Hwan [Department of Atmospheric Sciences, Pusan National University, Pusan South Korea]; Shindo, Eiki [Climate Research Department, Meteorological Research Institute, Tsukuba Japan]; Song, Xiaoliang [Scripps Institution of Oceanography, La Jolla California USA]; Stan, Cristiana [Department of Atmospheric, Oceanic and Earth Sciences, George Mason University, Fairfax Virginia USA]; Tseng, Wan-Ling [University Research Center for Environmental Changes, Academia Sinica, Taipei Taiwan]; Wang, Wanqiu [Climate Prediction Center, National Centers for Environmental Prediction/NOAA, Camp Springs Maryland USA]; Wu, Tongwen [Beijing Climate Center, China Meteorological Administration, Beijing China]; Wu, Xiaoqing [Department of Geological and Atmospheric Sciences, Iowa State University, Ames Iowa USA]; Wyser, Klaus [Rossby Centre, Swedish Meteorological and Hydrological Institute, Norrkoping Sweden]; Zhang, Guang J. [Scripps Institution of Oceanography, La Jolla California USA]; Zhu, Hongyan [Centre for Australian Weather and Climate Research, Bureau of Meteorology, Melbourne Victoria Australia]

    2015-05-26

    Aimed at reducing deficiencies in representing the Madden-Julian oscillation (MJO) in general circulation models (GCMs), a global model evaluation project on vertical structure and physical processes of the MJO was coordinated. In this paper, results from the climate simulation component of this project are reported. It is shown that the MJO remains a great challenge in these latest generation GCMs. The systematic eastward propagation of the MJO is only well simulated in about one fourth of the total participating models. The observed vertical westward tilt with altitude of the MJO is well simulated in good MJO models but not in the poor ones. Damped Kelvin wave responses to the east of convection in the lower troposphere could be responsible for the missing MJO preconditioning process in these poor MJO models. Several process-oriented diagnostics were conducted to discriminate key processes for realistic MJO simulations. While large-scale rainfall partition and low-level mean zonal winds over the Indo-Pacific in a model are not found to be closely associated with its MJO skill, two metrics, including the low-level relative humidity difference between high- and low-rain events and seasonal mean gross moist stability, exhibit statistically significant correlations with the MJO performance. It is further indicated that increased cloud-radiative feedback tends to be associated with reduced amplitude of intraseasonal variability, which is incompatible with the radiative instability theory previously proposed for the MJO. Results in this study confirm that inclusion of air-sea interaction can lead to significant improvement in simulating the MJO.

  4. High performance embedded system for real-time pattern matching

    Energy Technology Data Exchange (ETDEWEB)

    Sotiropoulou, C.-L., E-mail: c.sotiropoulou@cern.ch [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Luciano, P. [University of Cassino and Southern Lazio, Gaetano di Biasio 43, Cassino 03043 (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Gkaitatzis, S. [Aristotle University of Thessaloniki, 54124 Thessaloniki (Greece); Citraro, S. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Giannetti, P. [INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Dell' Orso, M. [University of Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN-Pisa Section, Largo B. Pontecorvo 3, 56127 Pisa (Italy)

    2017-02-11

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for the tracking of particles produced by proton–proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post-processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed in 2D or 3D space, on black-and-white or grayscale images, depending on the application, thus exponentially increasing the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithms, and performance results on a latest-generation Xilinx Kintex UltraScale FPGA device. - Highlights: • A high performance embedded system for real-time pattern matching is proposed. • It is based on a system developed for High Energy Physics experiment triggers. • It mimics the operation of the human brain (cognitive image processing). • The process can be executed on 2D and 3D, black and white or grayscale images. • The implementation uses FPGAs and custom designed associative memory (AM) chips.
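    The associative memory compares every incoming word against all stored reference patterns simultaneously; a serial software analogue (our sketch with hypothetical 8-bit patterns and "don't care" masks, not the chip's firmware) conveys the matching rule.

        # Software analogue of associative-memory matching: each stored
        # pattern is a (value, mask) pair; mask bits of 0 act as "don't
        # care". Hardware checks all patterns in parallel; this is serial.

        def matches(word, pattern, mask):
            return (word & mask) == (pattern & mask)

        bank = [                               # hypothetical reference bank
            (0b10100110, 0b11111111),          # exact match required
            (0b11000000, 0b11110000),          # low nibble is "don't care"
        ]

        def matching_patterns(word, bank):
            return [i for i, (p, m) in enumerate(bank) if matches(word, p, m)]

        print(matching_patterns(0b11001011, bank))  # -> [1]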

  5. Intelligent Facades for High Performance Green Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

    Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies. Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes, such as the provision of daylighting, access to exterior views, and satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken the development of a high-performance building-integrated combined heat and power concentrating photovoltaic system with high-temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution we offer, the Integrated Concentrating Solar Façade (ICSF), is conceived to improve daylighting quality for improved occupant health and to mitigate solar heat gain while maximally capturing and transferring on-site solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads), thereby transforming a previously problematic source of energy into a high-quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possible further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge of transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control systems strategies within an intelligent and responsively dynamic building ...

  6. High performance FDTD algorithm for GPGPU supercomputers

    Science.gov (United States)

    Zakirov, Andrey; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2016-10-01

    An implementation of the FDTD method for the solution of optical and other electrodynamic problems of high computational cost is described. The implementation is based on the LRnLA algorithm DiamondTorre, which is developed specifically for GPGPU hardware. The specifics of the DiamondTorre algorithm for the staggered grid (Yee cell) and many-GPU devices are shown. The algorithm is implemented in software for real physics calculations. The software performance is estimated through algorithm parameters and a computer model. The real performance is tested on one GPU device, as well as on a many-GPU cluster. A performance of up to 0.65×10¹² cell updates per second is achieved for a 3D domain with 0.3×10¹² Yee cells in total.
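    For orientation, the staggered-grid (Yee cell) updates the record refers to have the leapfrog structure below in one dimension. This is a plain NumPy sketch of the standard scheme only; the DiamondTorre algorithm reorders these same stencil updates for GPU locality and is not shown here.

        import numpy as np

        # Standard 1D Yee/FDTD leapfrog in normalized units (c = 1),
        # with Courant number S = c*dt/dx. E and H are staggered in
        # space and time; a soft Gaussian source excites the grid.

        n, steps, S = 400, 600, 0.5
        ez = np.zeros(n)         # E at integer grid points
        hy = np.zeros(n - 1)     # H at half-integer points

        for t in range(steps):
            hy += S * (ez[1:] - ez[:-1])                   # H half-step update
            ez[1:-1] += S * (hy[1:] - hy[:-1])             # E full-step update
            ez[n // 4] += np.exp(-((t - 30) / 10.0) ** 2)  # soft source

        print(f"field energy ~ {np.sum(ez**2) + np.sum(hy**2):.3f}")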

  7. High performance hydrophobic solvent, carbon dioxide capture

    Science.gov (United States)

    Nulwala, Hunaid; Luebke, David

    2017-05-09

    Methods and compositions useful, for example, for physical solvent carbon capture. A method comprising: contacting at least one first composition comprising carbon dioxide with at least one second composition to at least partially dissolve the carbon dioxide of the first composition in the second composition, wherein the second composition comprises at least one siloxane compound which is covalently modified with at least one non-siloxane group comprising at least one heteroatom. Polydimethylsiloxane (PDMS) materials and ethylene-glycol based materials have high carbon dioxide solubility but suffer from various problems. PDMS is hydrophobic but suffers from low selectivity. Ethylene-glycol based systems have good solubility and selectivity, but suffer from high affinity to water. Solvents were developed that retain the desired combination of properties and result in a simplified overall process for carbon dioxide removal from a mixed gas stream.

  8. Idle waves in high-performance computing.

    Science.gov (United States)

    Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre

    2015-01-01

    The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through the processes of scientific applications with local information exchange between processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study describes the large number of processes in parallel scientific applications as a continuous medium. This work is also a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
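    A toy model (ours, not the authors' code) makes the mechanism concrete: with nearest-neighbour data dependencies, a one-off delay injected at one rank propagates outward at one rank per compute step, i.e., at a speed inversely proportional to the busy time.

        # Toy idle-wave model: rank i can start step t only after itself
        # and its neighbours finish step t-1. A one-off 5 s delay at rank 0
        # then propagates one rank per step of length `busy`.

        ranks, steps, busy = 12, 6, 1.0
        finish = [[0.0] * ranks for _ in range(steps + 1)]
        finish[1] = [busy + (5.0 if i == 0 else 0.0) for i in range(ranks)]

        for t in range(2, steps + 1):
            for i in range(ranks):
                deps = [finish[t - 1][j] for j in (i - 1, i, i + 1)
                        if 0 <= j < ranks]
                finish[t][i] = max(deps) + busy

        for t in range(1, steps + 1):
            row = "".join("*" if finish[t][i] > t * busy + 1e-9 else "."
                          for i in range(ranks))
            print(f"step {t}: {row}")   # '*' marks ranks reached by the wave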

  9. PHYSICS

    CERN Multimedia

    L. Demortier

    Physics-wise, the CMS week in December was dominated by discussions of the analyses that will be carried out in the “next six months”, i.e. while waiting for the first LHC collisions.  As presented in December, analysis approvals based on Monte Carlo simulation were re-opened, with the caveat that for this work to be helpful to the goals of CMS, it should be carried out using the new software (CMSSW_2_X) and associated samples.  By the end of the week, the goal for the physics groups was set to be the porting of our physics commissioning methods and plans, as well as the early analyses (based on an integrated luminosity in the range 10–100 pb⁻¹), into this new software. Since December, the large data samples from CMSSW_2_1 were completed. A big effort by the production group gave a significant number of events over the end-of-year break – but also gave out the first samples with the fast simulation. Meanwhile, as mentioned in December, the arrival of 2_2 meant that ...

  10. Physically based modelling of sediment generation and transport under a large rainfall simulator

    Science.gov (United States)

    Adams, Russell; Elliott, Sandy

    2006-07-01

    A series of large rainfall simulator experiments was conducted in 2002 and 2003 on a small plot located in an experimental catchment in the North Island of New Zealand. These experiments measured both runoff and sediment transport under carefully controlled conditions. A physically based hydrological modelling system (SHETRAN) was then applied to reproduce the observed hydrographs and sedigraphs. SHETRAN uses physically based equations to represent flow and sediment transport, and two erodibility coefficients to model detachment of soil particles by raindrop erosion and overland flow erosion. The rate of raindrop erosion also depended on the amount of bare ground under the simulator; this was estimated before each experiment. These erodibility coefficients were calibrated systematically for summer and winter experiments separately, and lower values were obtained for the summer experiments. Earlier studies using small rainfall simulators in the vicinity of the plot also found the soil to be less erodible in summer and autumn. Limited validation of model parameters was carried out using results from a series of autumn experiments. The modelled suspended sediment load was also sensitive to parameters controlling the generation of runoff from the rainfall simulator plot; therefore, we found that accurate runoff predictions were important for the sediment predictions, especially from the experiments where the pasture cover was good and overland flow erosion was the dominant mechanism. The rainfall simulator experiments showed that the mass of suspended sediment increased post-grazing, and according to the model this was due to raindrop detachment. The results indicated that grazing cattle or sheep on steeply sloping hill-country paddocks should be carefully managed, especially in winter, to limit the transport of suspended sediment into watercourses.
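    The record does not reproduce SHETRAN's equations, so the following is only a generic two-term detachment sketch of the kind described: a raindrop-impact term scaled by the bare-ground fraction and an overland-flow term driven by excess shear stress. The coefficients mirror the two calibrated erodibility coefficients mentioned above, but the functional forms and numbers are illustrative assumptions, not SHETRAN's formulation.

        # Generic two-source soil detachment sketch (NOT SHETRAN's exact
        # equations): raindrop term ~ k_r * bare fraction * rain momentum;
        # flow term ~ k_f * excess shear. Units depend on the coefficients.

        def detachment_rate(k_r, k_f, rain_momentum, bare_fraction,
                            shear, critical_shear):
            raindrop = k_r * bare_fraction * rain_momentum
            flow = k_f * max(shear - critical_shear, 0.0)
            return raindrop + flow

        # Hypothetical summer calibration (lower erodibility than winter):
        print(detachment_rate(k_r=0.05, k_f=0.002, rain_momentum=1.2,
                              bare_fraction=0.3, shear=4.0, critical_shear=2.5))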

  11. Psychological effects of acute physical inactivity during microgravity simulated by bed rest

    Directory of Open Access Journals (Sweden)

    Petra Dolenc

    2009-05-01

    Full Text Available Long-duration weightlessness simulated by bed rest represents an important model to study the consequences of physical inactivity and sedentarism on the human body. This study evaluated changes in mood status, psychological well-being, coping strategies and physical self in ten healthy young male subjects during a 35-day horizontal bed rest. Participants were asked to complete psychometric inventories before and after the bed rest experiment. The perceived satisfaction with life and the physical self-concept did not change during the bed rest period, and mood states were relatively stable during the experiment according to the Emotional States Questionnaire. The level of neuroticism was elevated during the bed rest period according to the Slovenian version of the General Health Questionnaire. However, even after the period of physical immobilization, the expression of these symptoms remained relatively low and did not represent a risk to the mental health of the subjects. The results from the Coping Resources Inventory indicated a tendency toward an increase in emotion-focused coping and a decrease in problem-focused coping strategies. The importance of this research was to provide evidence that the provision of favourable habitability countermeasures can prevent deterioration in the psychological state under conditions of physical immobilization. Our findings have applied value in the field of health prevention and rehabilitation.

  12. Correlations between the simulated military tasks performance and physical fitness tests at high altitude

    Directory of Open Access Journals (Sweden)

    Eduardo Borba Neves

    2017-11-01

    Full Text Available The aim of this study was to investigate the correlations between simulated military task performance and physical fitness tests at high altitude. This research is part of a project to modernize the physical fitness test of the Colombian Army. Data collection was performed at the 13th Battalion of Instruction and Training, located 30 km south of Bogota D.C. at 3100 m above sea level, with a temperature range from 1 °C to 23 °C during the study period. The sample was composed of 60 volunteers from three different platoons. The volunteers started the data collection protocol after 2 weeks of acclimation at this altitude. The main results were the identification of a high positive correlation between the 3 assault walls in succession and simulated military task performance (r = 0.764, p < 0.001), and a moderate negative correlation between pull-ups and simulated military task performance (r = -0.535, p < 0.001). The use of 20 consecutive overtakings of the 3 assault walls in succession can be recommended as a good way to estimate performance in operational tasks at high altitude involving assault walls, wire networks, military climbing nets and the Tarzan jump, among others.

  13. Experimental quantum simulations of many-body physics with trapped ions.

    Science.gov (United States)

    Schneider, Ch; Porras, Diego; Schaetz, Tobias

    2012-02-01

    Direct experimental access to some of the most intriguing quantum phenomena is not granted due to the lack of precise control of the relevant parameters in their naturally intricate environment. Their simulation on conventional computers is impossible, since quantum behaviour arising with superposition states or entanglement is not efficiently translatable into the classical language. However, one could gain deeper insight into complex quantum dynamics by experimentally simulating the quantum behaviour of interest in another quantum system, where the relevant parameters and interactions can be controlled and robust effects detected sufficiently well. Systems of trapped ions provide unique control of both the internal (electronic) and external (motional) degrees of freedom. The mutual Coulomb interaction between the ions allows for large interaction strengths at comparatively large mutual ion distances, enabling individual control and readout. Systems of trapped ions therefore constitute a prominent platform in several physical disciplines, for example quantum information processing and metrology. Here, we give an overview of different trapping techniques for ions as well as implementations for coherent manipulation of their quantum states, and discuss the related theoretical basics. We then report on the experimental and theoretical progress in simulating quantum many-body physics with trapped ions and present current approaches for scaling up to more ions and higher-dimensional systems.

  14. Wall modeled large eddy simulation of supersonic flow physics over compression-expansion ramp

    Science.gov (United States)

    Goshtasbi Rad, Ebrahim; Mousavi, Seyed Mahmood

    2015-12-01

    In the present work, wall modeled large-eddy simulation (WMLES) in the Fluent software is used to investigate the flow physics of a three-dimensional shock-turbulent boundary layer interaction, an important phenomenon in aerospace science, on a compression-expansion ramp with an angle of 25°. Fine flow structures are obtained via the Laplacian of density, the so-called numerical shadowgraph, in which shock wave structures are distinctly visible. The results are compared with the experimental data of Zheltovodov et al., 1990 [33], under the same conditions of geometry, boundary conditions, etc. as those used by them. Results show good agreement with experimental trends for wall pressure, friction coefficient distribution and mean velocity profiles, as well as with the results presented by Grilli et al., 2013 [24]; the LES approach used in this study yields more accurate results at lower computational cost. Afterwards, we investigated the influence of a discontinuity in wall temperature and of varying stagnation pressure and Reynolds number on the flow physics, in order to control the shock behavior. Our simulations show that a discontinuity in wall temperature and variations of the free-stream stagnation pressure and Reynolds number (the free-stream Mach number remaining essentially constant) influence the shock onset location, shock strength, separation length and the collision angle of the separation and reattachment shock waves.
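    The shadowgraph mentioned above is simply the Laplacian of the density field; a minimal NumPy rendering of that post-processing step (our sketch on a synthetic density jump, not the authors' tooling) looks as follows.

        import numpy as np

        # Numerical shadowgraph: finite-difference Laplacian of rho(x, y).
        # Shocks appear as sharp extrema because density jumps across them.

        def shadowgraph(rho, dx=1.0, dy=1.0):
            lap = np.zeros_like(rho)
            lap[1:-1, 1:-1] = (
                (rho[1:-1, 2:] - 2 * rho[1:-1, 1:-1] + rho[1:-1, :-2]) / dx**2
                + (rho[2:, 1:-1] - 2 * rho[1:-1, 1:-1] + rho[:-2, 1:-1]) / dy**2
            )
            return lap

        x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
        rho = 1.0 + 0.8 * np.tanh((x - 0.5) / 0.01)  # synthetic "shock" jump
        print(np.abs(shadowgraph(rho)).max())        # peaks at the jump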

  15. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the "Todi" HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  16. Simulating heavy fermion physics in optical lattice: Periodic Anderson model with harmonic trapping potential

    Science.gov (United States)

    Zhong, Yin; Liu, Yu; Luo, Hong-Gang

    2017-10-01

    The periodic Anderson model (PAM), where local electron orbitals interplay with itinerant electronic carriers, plays an essential role in our understanding of heavy fermion materials. Motivated by recent proposals for simulating the Kondo lattice model (KLM) in terms of alkaline-earth metal atoms, we take another step toward the simulation of the PAM, which includes the crucial charge/valence fluctuations of local f-electrons beyond the purely low-energy spin fluctuations of the KLM. To realize the PAM, a laser-induced transition between the electronic ground and excited states of alkaline-earth metal atoms (¹S₀ ⇌ ³P₀) is introduced. This leads to an effective hybridization between local electrons and conduction electrons in the PAM. Generally, the SU(N) version of the PAM can be realized by our proposal, which gives a unique opportunity to detect large-N physics without the complexity of realistic materials. In the present work, high-temperature physical features of the standard [SU(2)] PAM with a harmonic trapping potential are analyzed by quantum Monte Carlo and dynamical mean-field theory, where the Mott/orbital-selective Mott state was found to coexist with metallic states. Indications for near-future experiments are provided. We expect our theoretical proposal and (hopefully) forthcoming experiments will deepen our understanding of heavy fermion systems. At the same time, we hope these will trigger further studies on related Mott physics, quantum criticality, and non-trivial topology in both the inhomogeneous and nonequilibrium realms.
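    For reference, the standard SU(2) PAM that the record builds on can be written in the textbook form below, with the harmonic trap entering as a site-dependent potential; the paper's notation and the exact form of the trap term may differ.

        H = \sum_{\mathbf{k},\sigma} \varepsilon_{\mathbf{k}}\,
              c^{\dagger}_{\mathbf{k}\sigma} c_{\mathbf{k}\sigma}
          + E_f \sum_{i,\sigma} f^{\dagger}_{i\sigma} f_{i\sigma}
          + V \sum_{i,\sigma} \left( c^{\dagger}_{i\sigma} f_{i\sigma}
              + \mathrm{h.c.} \right)
          + U \sum_{i} n^{f}_{i\uparrow} n^{f}_{i\downarrow}
          + \sum_{i,\sigma} V_{\mathrm{trap}}(r_i)
              \left( n^{c}_{i\sigma} + n^{f}_{i\sigma} \right)

    Here V is the conduction-f hybridization (realized in the proposal by the laser-driven ¹S₀ ⇌ ³P₀ coupling), U is the on-site repulsion of the f orbital, and V_trap(r_i) is the harmonic confinement.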

  17. Physical compatibility of cisatracurium with selected drugs during simulated Y-site administration.

    Science.gov (United States)

    Foushee, Jaime A; Fox, Laura M; Gormley, Lyndsay R; Lineberger, Megan S

    2015-03-15

    The physical compatibility of cisatracurium with selected drugs during simulated Y-site administration was studied. Study drugs were selected based on the lack of physical compatibility data with cisatracurium and their use in intensive care units. Test admixtures were prepared by mixing 2.5-mL samples of varying concentrations of calcium gluconate, diltiazem, esomeprazole, regular insulin, nicardipine, pantoprazole, and vasopressin with either 2.5 mL of normal saline 0.9% (control) or 2.5 mL of cisatracurium (experimental) to simulate a 1:1 Y-site ratio. Drug infusions were prepared at the maximum concentrations used clinically. Physical compatibility of the admixtures was determined by visual and turbidimetric assessments performed in triplicate immediately after mixing and at 15, 30, and 60 minutes. Visual incompatibility was defined as a change in color, the formation of haze or precipitate, the presence of particles, or the formation of gas in the experimental groups compared with the controls. Disturbances invisible to the naked eye were determined by assessing changes in turbidity of experimental admixtures compared with the controls. None of the admixtures exhibited visual changes when mixed with cisatracurium. Six of the seven admixtures exhibited turbidimetric compatibility with cisatracurium. Pantoprazole admixtures demonstrated a significant difference in turbidimetric assessment between the control and experimental groups when mixed with cisatracurium.

  18. High-performance commercial building systems

    Energy Technology Data Exchange (ETDEWEB)

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three-year PIER-funded R&D program, "High Performance Commercial Building Systems" (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to ...

  19. High Performance Home Building Guide for Habitat for Humanity Affiliates

    Energy Technology Data Exchange (ETDEWEB)

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving it, resources to help improve building practices and materials, and affiliate profiles and recommendations.

  20. Leveraging on Easy Java Simulation tool and open source computer simulation library to create interactive digital media for mass customization of high school physics curriculum

    CERN Document Server

    Wee, Loo Kang

    2012-01-01

    This paper highlights the diverse possibilities in the rich community of educators from the Conceptual Learning of Science (CoLoS) and Open Source Physics (OSP) movements to engage, enable and empower educators and students to create interactive digital media through computer modeling. This concept revolves around a paradigmatic shift towards participatory learning through immersive computer modeling, as opposed to using technology for information transmission. We aim to engage high school educators in professional development by creating and customizing simulations, made possible through Easy Java Simulation (Ejs) and its learning community. Ejs allows educators to be designers of learning environments by modifying the source code of a simulation. Educators can conduct lessons with students using these interactive digital simulations and rapidly enhance a simulation by changing its source code themselves. Ejs toolkit, its library of simulations and growing community contributed simulation cod...