WorldWideScience

Sample records for high-performance scientific simulation

  1. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  2. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
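    The record above describes automatic creation of virtual clusters on EC2. As a rough illustration of what such automation involves (not the authors' SCC toolset, which is not shown in the abstract), the sketch below uses the boto3 library to launch a few instances and collect their addresses for an MPI hostfile; the AMI ID, key pair, security group and instance type are placeholders.

      # Minimal sketch (not the authors' SCC toolset): provisioning a small virtual
      # cluster on Amazon EC2 with boto3. The AMI ID, key pair, and security group
      # names below are placeholders.
      import boto3

      ec2 = boto3.resource("ec2", region_name="us-east-1")

      # Launch a head node plus worker nodes from a (hypothetical) scientific VM image.
      instances = ec2.create_instances(
          ImageId="ami-0123456789abcdef0",   # placeholder scientific VM image
          InstanceType="c5.xlarge",
          MinCount=4,                        # 1 head node + 3 workers
          MaxCount=4,
          KeyName="scc-keypair",             # placeholder key pair
          SecurityGroups=["scc-cluster"],    # placeholder group allowing MPI traffic
      )

      # Wait until the nodes are running, then collect their private IPs,
      # the kind of bookkeeping an SCC toolset would automate for the user.
      for inst in instances:
          inst.wait_until_running()
          inst.reload()
      hosts = [inst.private_ip_address for inst in instances]
      print("\n".join(hosts))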

  3. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  4. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing, held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference was organized by the Hanoi Institute of Mathematics, the Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program "Complex Processes: Modeling, Simulation and Optimization", and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  5. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  6. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  7. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  8. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam, on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  9. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  10. Scientific computer simulation review

    International Nuclear Information System (INIS)

    Kaizer, Joshua S.; Heller, A. Kevin; Oberkampf, William L.

    2015-01-01

    Before the results of a scientific computer simulation are used for any purpose, it should be determined if those results can be trusted. Answering that question of trust is the domain of scientific computer simulation review. There is limited literature that focuses on simulation review, and most is specific to the review of a particular type of simulation. This work is intended to provide a foundation for a common understanding of simulation review. This is accomplished through three contributions. First, scientific computer simulation review is formally defined. This definition identifies the scope of simulation review and provides the boundaries of the review process. Second, maturity assessment theory is developed. This development clarifies the concepts of maturity criteria, maturity assessment sets, and maturity assessment frameworks, which are essential for performing simulation review. Finally, simulation review is described as the application of a maturity assessment framework. This is illustrated through evaluating a simulation review performed by the U.S. Nuclear Regulatory Commission. In making these contributions, this work provides a means for a more objective assessment of a simulation’s trustworthiness and takes the next step in establishing scientific computer simulation review as its own field. - Highlights: • We define scientific computer simulation review. • We develop maturity assessment theory. • We formally define a maturity assessment framework. • We describe simulation review as the application of a maturity framework. • We provide an example of a simulation review using a maturity framework

  11. Cray XT4: An Early Evaluation for Petascale Scientific Simulation

    International Nuclear Information System (INIS)

    Alam, Sadaf R.; Barrett, Richard F.; Fahey, Mark R.; Kuehn, Jeffery A.; Sankaran, Ramanan; Worley, Patrick H.; Larkin, Jeffrey M.

    2007-01-01

    The scientific simulation capabilities of next generation high-end computing technology will depend on striking a balance among memory, processor, I/O, and local and global network performance across the breadth of the scientific simulation space. The Cray XT4 combines commodity AMD dual core Opteron processor technology with the second generation of Cray's custom communication accelerator in a system design whose balance is claimed to be driven by the demands of scientific simulation. This paper presents an evaluation of the Cray XT4 using microbenchmarks to develop a controlled understanding of individual system components, providing the context for analyzing and comprehending the performance of several petascale-ready applications. Results gathered from several strategic application domains are compared with observations on the previous generation Cray XT3 and other high-end computing systems, demonstrating performance improvements across a wide variety of application benchmark problems.

  12. Numerical research on the thermal performance of high altitude scientific balloons

    International Nuclear Information System (INIS)

    Dai, Qiumin; Xing, Daoming; Fang, Xiande; Zhao, Yingjie

    2017-01-01

    Highlights: • A model is presented to evaluate the IR radiation between translucent surfaces. • Comprehensive ascent and thermal models of balloons are established. • The effect of IR transmissivity on the film temperature distribution is not negligible. • Atmospheric IR radiation is the primary thermal factor of balloons at night. • Solar radiation is the primary thermal factor of balloons during the day. - Abstract: Internal infrared (IR) radiation is an important factor that affects the thermal performance of high altitude balloons. The internal IR radiation is commonly neglected or treated as the IR radiation between opaque gray bodies. In this paper, a mathematical model which considers the IR transmissivity of the film is proposed to estimate the internal IR radiation. Comprehensive ascent and thermal models for high altitude scientific balloons are established. Based on the models, the thermal characteristics of a NASA super pressure balloon are simulated. The effects of the film's IR properties on the thermal behavior of the balloon are discussed in detail. The results are helpful for the design and operation of high altitude scientific balloons.
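    The abstract does not reproduce the model equations. As a generic, illustrative sketch only (the symbols and grouping of terms are not the paper's notation), a lumped energy balance for a film element and the property relation that distinguishes a translucent film from an opaque gray body might be written as:

      % Generic lumped energy balance for a balloon-film element (illustrative only,
      % not the paper's exact model): absorbed solar, atmospheric IR and internal IR,
      % minus emitted IR and convective losses.
      m_{f}\, c_{f}\, \frac{dT_{f}}{dt}
        = Q_{\mathrm{solar}} + Q_{\mathrm{IR,atm}} + Q_{\mathrm{IR,int}}
          - Q_{\mathrm{IR,emit}} - Q_{\mathrm{conv,ext}} - Q_{\mathrm{conv,int}},
      \qquad
      \alpha_{\mathrm{IR}} + \rho_{\mathrm{IR}} + \tau_{\mathrm{IR}} = 1,

    where a nonzero IR transmissivity τ_IR in the second relation is what distinguishes the translucent-film treatment from the opaque gray-body approximation mentioned in the abstract.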

  13. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also for the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for formalized education of students in the application of HPSC technologies in the future.

  14. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases, it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilize current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  15. High performance electromagnetic simulation tools

    Science.gov (United States)

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm and a parallel planar generalized Yee algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled research on the full-wave analysis of complex multicomponent MMIC devices and on the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.

  16. Language interoperability for high-performance parallel scientific components

    International Nuclear Information System (INIS)

    Elliot, N; Kohn, S; Smolinski, B

    1999-01-01

    With the increasing complexity and interdisciplinary nature of scientific applications, code reuse is becoming increasingly important in scientific computing. One method for facilitating code reuse is the use of component technologies, which have been used widely in industry. However, components have only recently worked their way into scientific computing. Language interoperability is an important underlying technology for these component architectures. In this paper, we present an approach to language interoperability for a high-performance, parallel component architecture being developed by the Common Component Architecture (CCA) group. Our approach is based on Interface Definition Language (IDL) techniques. We have developed a Scientific Interface Definition Language (SIDL), as well as bindings to C and Fortran. We have also developed a SIDL compiler and run-time library support for reference counting, reflection, object management, and exception handling (Babel). Results from using Babel to call a standard numerical solver library (written in C) from C and Fortran show that the cost of using Babel is minimal, whereas the savings in development time and the benefits of object-oriented development support for C and Fortran far outweigh the costs.

  17. HPCToolkit: performance tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M [Department of Computer Science, Rice University, Houston, TX 77005 (United States)

    2008-07-15

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei.

  18. HPCToolkit: performance tools for scientific computing

    International Nuclear Information System (INIS)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M

    2008-01-01

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei

  19. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes of this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  20. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  1. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  2. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  3. High-Performance Beam Simulator for the LANSCE Linac

    International Nuclear Information System (INIS)

    Pang, Xiaoying; Rybarcyk, Lawrence J.; Baily, Scott A.

    2012-01-01

    A high performance multiparticle tracking simulator is currently under development at Los Alamos. The heart of the simulator is based upon the beam dynamics simulation algorithms of the PARMILA code, but implemented in C++ on Graphics Processing Unit (GPU) hardware using NVIDIA's CUDA platform. Linac operating set points are provided to the simulator via the EPICS control system so that changes of the real time linac parameters are tracked and the simulation results updated automatically. This simulator will provide valuable insight into the beam dynamics along a linac in pseudo real-time, especially where direct measurements of the beam properties do not exist. Details regarding the approach, benefits and performance are presented.

  4. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  5. High Performance Data Distribution for Scientific Community

    Science.gov (United States)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA and JAXA need solutions for distributing data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that addresses this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on the data servers and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform a single file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the above features. HIDDRA was cited by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain) and shows high scalability and performance, opening up a wide spectrum of opportunities. Some preliminary results have been published in Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009.
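    One idea described above is that several sources can serve a single file download. A minimal sketch of that idea, using only the Python standard library (this is not HIDDRA's actual API, and the mirror URLs are placeholders), could look like this:

      # Minimal sketch of one idea behind HIDDRA (not its actual API): fetch a single
      # product from whichever of several mirrors/protocols succeeds first.
      # The URLs below are placeholders.
      import urllib.request

      MIRRORS = [
          "https://archive-a.example.org/mission/product_001.fits",
          "http://archive-b.example.org/mission/product_001.fits",
          "ftp://archive-c.example.org/mission/product_001.fits",
      ]

      def fetch_first_available(urls, dest):
          """Try each source in turn; any one of them satisfies the download."""
          for url in urls:
              try:
                  with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
                      out.write(resp.read())
                  return url                  # report which mirror served the file
              except OSError:
                  continue                    # fall back to the next source
          raise RuntimeError("no mirror could serve the file")

      if __name__ == "__main__":
          served_by = fetch_first_available(MIRRORS, "product_001.fits")
          print("downloaded from", served_by)

    A production engine such as HIDDRA would additionally parallelize transfers and subscribe to newly published products, which this sketch omits.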

  6. MUMAX: A new high-performance micromagnetic simulation tool

    International Nuclear Information System (INIS)

    Vansteenkiste, A.; Van de Wiele, B.

    2011-01-01

    We present MUMAX, a general-purpose micromagnetic simulation tool running on graphical processing units (GPUs). MUMAX is designed for high-performance computations and specifically targets large simulations. In that case, speedups of over a factor of 100 can be obtained compared to the CPU-based OOMMF program developed at NIST. MUMAX aims to be general and broadly applicable. It solves the classical Landau-Lifshitz equation, taking into account the magnetostatic, exchange and anisotropy interactions, thermal effects and spin-transfer torque. Periodic boundary conditions can optionally be imposed. A spatial discretization using finite differences in two or three dimensions can be employed. MUMAX is publicly available as open-source software. It can thus be freely used and extended by the community. Due to its high computational performance, MUMAX should open up the possibility of running extensive simulations that would be nearly inaccessible with typical CPU-based simulators. - Highlights: → Novel, open-source micromagnetic simulator on GPU hardware. → Speedup of ~100× compared to other widely used tools. → Extensively validated against standard problems. → Makes previously infeasible simulations accessible.
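    For reference, the equation named in the abstract, written here in the Landau-Lifshitz-Gilbert form commonly used by micromagnetic codes (generic notation; not necessarily MUMAX's internal formulation), is:

      % Landau-Lifshitz-Gilbert equation in the form commonly integrated by
      % micromagnetic codes (generic notation, not necessarily MUMAX's own):
      \frac{\partial \mathbf{m}}{\partial t}
        = -\gamma \, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
          + \alpha \, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t},
      \qquad \mathbf{m} = \mathbf{M}/M_{s},

    where H_eff collects the magnetostatic, exchange, anisotropy, thermal and spin-transfer-torque contributions listed in the abstract.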

  7. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  8. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  9. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Teng [Auburn University, Auburn, Alabama]; Oral, H Sarp [ORNL]; Wang, Yandong [Auburn University, Auburn, Alabama]; Settlemyer, Bradley W [ORNL]; Atchley, Scott [ORNL]; Yu, Weikuan [Auburn University, Auburn, Alabama]

    2014-01-01

    The growth of computing power on large-scale systems requires a commensurately high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a novel system is imperative to temporarily buffer the bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5× on leadership computer systems.
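    The pattern described above, absorbing bursty writes into a fast tier and draining them to the parallel file system in the background, can be sketched in a few lines of Python. This illustrates the general pattern only, not BurstMem's implementation; the paths are placeholders.

      # Minimal sketch of the burst-buffer pattern (not BurstMem's code): absorb
      # bursty writes into fast local storage, drain them to the parallel file
      # system in the background. The paths below are placeholders.
      import queue
      import shutil
      import threading

      FAST_LOCAL = "/local/nvme/burst"       # placeholder burst-buffer location
      PARALLEL_FS = "/lustre/project/run42"  # placeholder long-term parallel FS

      _pending = queue.Queue()

      def absorb(filename, data: bytes):
          """Application-facing write: lands in the fast tier and returns quickly."""
          path = f"{FAST_LOCAL}/{filename}"
          with open(path, "wb") as f:
              f.write(data)
          _pending.put(path)

      def _drainer():
          """Background thread: gradually flush buffered files to the parallel FS."""
          while True:
              path = _pending.get()
              if path is None:
                  break
              shutil.copy(path, PARALLEL_FS)
              _pending.task_done()

      threading.Thread(target=_drainer, daemon=True).start()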

  10. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while the less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
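    The record above splits an application between compiled Fortran kernels and a high-level driver glued together with JNI. As an analogous sketch in Python (using ctypes instead of the paper's Java/JNI approach; the library name and routine signature are hypothetical), the same division of labor looks like this:

      # Analogous mixed-language sketch using Python's ctypes rather than the paper's
      # Java/Fortran JNI setup: the hot particle-push kernel lives in a compiled
      # shared library, while the driver stays in the high-level language.
      # "libpic.so" and "push_particles" are hypothetical names.
      import ctypes
      import numpy as np

      lib = ctypes.CDLL("./libpic.so")         # compiled Fortran/C kernel (hypothetical)
      lib.push_particles.argtypes = [
          ctypes.POINTER(ctypes.c_double),     # particle positions
          ctypes.POINTER(ctypes.c_double),     # particle velocities
          ctypes.c_int,                        # number of particles
          ctypes.c_double,                     # timestep
      ]

      def push(x: np.ndarray, v: np.ndarray, dt: float) -> None:
          """High-level wrapper: hand the arrays to the compiled kernel."""
          lib.push_particles(
              x.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
              v.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
              ctypes.c_int(x.size),
              ctypes.c_double(dt),
          )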

  11. Software Engineering for Scientific Computer Simulations

    Science.gov (United States)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  12. RSYST: From nuclear reactor calculations towards a highly sophisticated scientific software integration environment

    International Nuclear Information System (INIS)

    Noack, M.; Seybold, J.; Ruehle, R.

    1996-01-01

    The software environment RSYST was originally used to solve problems of reactor physics. The consideration of advanced scientific simulation requirements and the strict application of modern software design principles led to a system which is well suited to solving problems in various complex scientific problem domains. Starting with a review of the early days of RSYST, we describe its evolution, driven by the need for a software environment which combines the advantages of a high-performance database system with the capability to integrate sophisticated scientific and technical applications. The RSYST architecture is presented and the data modelling capabilities are described. To demonstrate the powerful possibilities and flexibility of the RSYST environment, we describe a wide range of RSYST applications, e.g., mechanical simulations of multibody systems, which are used in biomechanical research, civil engineering and robotics. In addition, a hypermedia system which is used for scientific and technical training and documentation is presented.

  13. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
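    A toy sketch of the central idea, keeping the logical array view separate from the physical layout so that sub-array writes can be logged first and reassembled later, is given below. This illustrates the principle only and is not the proposed Scientific Data Services interface.

      # Toy illustration (not the proposed system) of separating the logical array
      # view from the physical layout: sub-array writes are appended to a log and
      # only reassembled into a contiguous layout when convenient.
      import numpy as np

      class LoggedArray:
          def __init__(self, shape):
              self.shape = shape
              self.log = []                  # list of (slice_tuple, block) records

          def write(self, slices, block):
              """Logical write: just record the subarray; no physical placement yet."""
              self.log.append((slices, np.asarray(block)))

          def materialize(self):
              """Later (e.g. as resources permit), reassemble a contiguous layout."""
              out = np.zeros(self.shape)
              for slices, block in self.log:
                  out[slices] = block
              return out

      arr = LoggedArray((4, 4))
      arr.write((slice(0, 2), slice(0, 2)), np.ones((2, 2)))
      arr.write((slice(2, 4), slice(2, 4)), 2 * np.ones((2, 2)))
      print(arr.materialize())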

  14. High performance ultrasonic field simulation on complex geometries

    Science.gov (United States)

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations in the 0.1 s range. In this paper, we present recent works that aim at similar performances on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing a fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivisions to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray-tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations including mono-element and multiple-element transducers, homogeneous, meshed 3D CAD specimens, isotropic and anisotropic materials and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1s range.
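    The abstract mentions interpolating times of flight to avoid expensive computations during impulse-response reconstruction. A purely illustrative sketch of that idea follows; the geometry, grid sizes and stand-in ray-tracing function are invented for the example.

      # Illustrative-only sketch of the time-of-flight interpolation idea: compute
      # exact times of flight on a coarse grid of sensor sample points, then
      # interpolate the rest instead of ray-tracing every point.
      import numpy as np

      def expensive_ray_trace(x):
          """Stand-in for the costly pencil/ray-tracing step (here a smooth model)."""
          return np.sqrt(0.05**2 + (x - 0.01)**2) / 1500.0   # straight path / sound speed

      # Coarse samples across a (1-D, for simplicity) sensor aperture.
      coarse_positions = np.linspace(0.0, 0.02, 9)           # metres (illustrative)
      coarse_tof = expensive_ray_trace(coarse_positions)

      # Dense evaluation points used when reconstructing the impulse response.
      dense_positions = np.linspace(0.0, 0.02, 513)
      dense_tof = np.interp(dense_positions, coarse_positions, coarse_tof)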

  15. Progress on H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations

    International Nuclear Information System (INIS)

    Adelmann, Andreas; Gsell, Achim; Oswald, Benedikt; Schietinger, Thomas; Bethel, Wes; Shalf, John; Siegerist, Cristina; Stockinger, Kurt

    2007-01-01

    Significant problems facing all experimental and computational sciences arise from growing data size and complexity. Common to all these problems is the need to perform efficient data I/O on diverse computer architectures. In our scientific application, the largest parallel particle simulations generate vast quantities of six-dimensional data, with an aggregate data size of up to several TB per run. Motivated by the need to address data I/O and access challenges, we have implemented H5Part, an open-source data I/O API that simplifies the use of the Hierarchical Data Format v5 library (HDF5). HDF5 is an industry standard for high performance, cross-platform data storage and retrieval that runs on all contemporary architectures, from large parallel supercomputers to laptops. H5Part, which is oriented to the needs of the particle physics and cosmology communities, provides support for parallel storage and retrieval of particles, structured meshes and, in the future, unstructured meshes. In this paper, we describe recent work focusing on I/O support for particles and structured meshes, and provide data showing performance on modern supercomputer architectures like the IBM POWER5.
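    H5Part, as described above, stores the per-particle arrays of each timestep in HDF5. The sketch below shows that general pattern using h5py rather than the H5Part C API itself; the group and dataset names are illustrative.

      # Sketch of the storage pattern H5Part layers on top of HDF5: one group per
      # timestep holding per-particle arrays. Written here with h5py rather than
      # the H5Part API itself; group and dataset names are illustrative.
      import h5py
      import numpy as np

      n_particles = 1_000
      with h5py.File("particles.h5", "w") as f:
          for step in range(3):
              grp = f.create_group(f"Step#{step}")
              # six-dimensional phase-space coordinates per particle
              for name in ("x", "y", "z", "px", "py", "pz"):
                  grp.create_dataset(name, data=np.random.rand(n_particles))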

  16. Scientific computing and algorithms in industrial simulations projects and products of Fraunhofer SCAI

    CERN Document Server

    Schüller, Anton; Schweitzer, Marc

    2017-01-01

    The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for our partners. SCAI develops numerical techniques, parallel algorithms and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art metho...

  17. High performance real-time flight simulation at NASA Langley

    Science.gov (United States)

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements of real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  18. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for...

  19. Simulations of KSTAR high performance steady state operation scenarios

    International Nuclear Information System (INIS)

    Na, Yong-Su; Kessel, C.E.; Park, J.M.; Yi, Sumin; Kim, J.Y.; Becoulet, A.; Sips, A.C.C.

    2009-01-01

    We report the results of predictive modelling of high performance steady state operation scenarios in KSTAR. Firstly, the capabilities of steady state operation are investigated with time-dependent simulations using a free-boundary plasma equilibrium evolution code coupled with transport calculations. Secondly, the reproducibility of high performance steady state operation scenarios developed in the DIII-D tokamak, of similar size to that of KSTAR, is investigated using the experimental data taken from DIII-D. Finally, the capability of ITER-relevant steady state operation is investigated in KSTAR. It is found that KSTAR is able to establish high performance steady state operation scenarios: β_N above 3, H_98(y,2) up to 2.0, f_BS up to 0.76 and f_NI equal to 1.0. In this work, a realistic density profile is newly introduced for predictive simulations by employing the scaling law of a density peaking factor. The influence of the current ramp-up scenario and the transport model is discussed with respect to the fusion performance and non-inductive current drive fraction in the transport simulations. As observed in the experiments, both the heating and the plasma current waveforms in the current ramp-up phase produce a strong effect on the q-profile, the fusion performance and also on the non-inductive current drive fraction in the current flattop phase. A criterion in terms of q_min is found to establish ITER-relevant steady state operation scenarios. This will provide a guideline for designing the current ramp-up phase in KSTAR. It is observed that the transport model also affects the predicted values of fusion performance as well as the non-inductive current drive fraction. The Weiland transport model predicts the highest fusion performance as well as non-inductive current drive fraction in KSTAR. In contrast, the GLF23 model exhibits the lowest ones. ITER-relevant advanced scenarios cannot be obtained with the GLF23 model under the conditions given in this work.

  20. Computational Simulations and the Scientific Method

    Science.gov (United States)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  1. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources

    Science.gov (United States)

    Boswell, Paul G.; Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.

    2013-01-01

    High-performance liquid chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet many of the existing HPLC simulators are either too expensive, outdated, or lack many important features necessary to make them widely useful for educational purposes. Here, a free, open-source HPLC simulator is…

  2. Simulating experiments using a Comsol application for teaching scientific research methods

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2015-01-01

    For universities it is important to teach the principles of scientific methods as early as possible. However, when it comes to performing experiments, students need some knowledge and skills before they start taking measurements. In this case, Comsol can be helpful by simulating the experiments before...

  3. DoSSiER: Database of Scientific Simulation and Experimental Results

    CERN Document Server

    Wenzel, Hans; Genser, Krzysztof; Elvira, Daniel; Pokorski, Witold; Carminati, Federico; Konstantinov, Dmitri; Ribon, Alberto; Folger, Gunter; Dotti, Andrea

    2017-01-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in json or xml exchange formats. In this article, we describe the functionality and the current status of various components of DoSSiER as well as the technology choices we made.

  4. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable across different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment for access to this data: through the NCI supercomputer; through a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), and agile enough to incorporate new technological advances and

  5. Crystal and molecular simulation of high-performance polymers.

    Science.gov (United States)

    Colquhoun, H M; Williams, D J

    2000-03-01

    Single-crystal X-ray analyses of oligomeric models for high-performance aromatic polymers, interfaced to computer-based molecular modeling and diffraction simulation, have enabled the determination of a range of previously unknown polymer crystal structures from X-ray powder data. Materials which have been successfully analyzed using this approach include aromatic polyesters, polyetherketones, polythioetherketones, polyphenylenes, and polycarboranes. Pure macrocyclic homologues of noncrystalline polyethersulfones afford high-quality single crystals-even at very large ring sizes-and have provided the first examples of a "protein crystallographic" approach to the structures of conventionally amorphous synthetic polymers.

  6. High performance MRI simulations of motion on multi-GPU systems.

    Science.gov (United States)

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a large 3D computational load on a single computer multi-GPU configuration. The incorporation
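
    The per-timestep isochromat displacement computed in the GPU kernel can be sketched with a vectorized update; the sinusoidal motion model, array sizes and parameter values below are illustrative assumptions, and the code is a NumPy stand-in, not MRISIMUL itself.

      # Illustrative stand-in for the per-isochromat motion update: apply a periodic
      # rigid displacement along z at every timestep, vectorized over all isochromats
      # in the way a GPU kernel would parallelize them.
      import numpy as np

      def displace_isochromats(positions, t, amplitude=0.01, period=4.0):
          """Sinusoidal 'respiratory' motion along z; positions is an (N, 3) array in metres."""
          displaced = positions.copy()
          displaced[:, 2] += amplitude * np.sin(2.0 * np.pi * t / period)
          return displaced

      rng = np.random.default_rng(0)
      positions = rng.uniform(-0.1, 0.1, size=(100_000, 3))   # toy anatomical model
      dt, n_steps = 1e-3, 4000                                 # 4 s of simulated sequence time
      for step in range(n_steps):
          current = displace_isochromats(positions, step * dt)
          # ...the Bloch-equation update of each isochromat at `current` would follow here...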

  7. Age and Scientific Performance.

    Science.gov (United States)

    Cole, Stephen

    1979-01-01

    The long-standing belief that age is negatively associated with scientific productivity and creativity is shown to be based upon incorrect analysis of data. Studies reported in this article suggest that the relationship between age and scientific performance is influenced by the operation of the reward system. (Author)

  8. Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2008-01-01

    Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset
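
    The multi-dimensional thresholding and subsetting operations described above can be sketched as follows; plain NumPy boolean masks and histograms stand in for the FastBit bitmap-index machinery, and the variable names, cut values and synthetic data are invented for illustration.

      # Illustrative multi-dimensional threshold query plus histogram subsetting of
      # particle data; FastBit accelerates exactly this kind of query with bitmap
      # indexes, while here ordinary NumPy masks play that role on synthetic data.
      import numpy as np

      rng = np.random.default_rng(1)
      px = rng.normal(0.0, 1.0, 1_000_000)    # longitudinal momentum (arbitrary units)
      x = rng.normal(0.0, 1e-3, 1_000_000)    # transverse position (arbitrary units)

      # Select the "beam": high-momentum particles inside a narrow transverse window.
      mask = (px > 3.0) & (np.abs(x) < 5e-4)
      beam_px, beam_x = px[mask], x[mask]

      # 2D histogram of the subset, e.g. as input to histogram-based parallel coordinates.
      hist, px_edges, x_edges = np.histogram2d(beam_px, beam_x, bins=(64, 64))
      print(mask.sum(), "particles selected; histogram shape", hist.shape)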

  9. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  10. High performance simulation for the Silva project using the tera computer

    International Nuclear Information System (INIS)

    Bergeaud, V.; La Hargue, J.P.; Mougery, F.; Boulet, M.; Scheurer, B.; Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A.

    2003-01-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant-scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues in optimizing the parallelization of the PRODIGE code on TERA and discuss the advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it has proved fruitful to improve the code in three areas: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI-IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments and report performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of the laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)

  11. High performance simulation for the Silva project using the tera computer

    Energy Technology Data Exchange (ETDEWEB)

    Bergeaud, V.; La Hargue, J.P.; Mougery, F. [CS Communication and Systemes, 92 - Clamart (France); Boulet, M.; Scheurer, B. [CEA Bruyeres-le-Chatel, 91 - Bruyeres-le-Chatel (France); Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A. [CEA Saclay, 91 - Gif sur Yvette (France)

    2003-07-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant-scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues in optimizing the parallelization of the PRODIGE code on TERA and discuss the advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it has proved fruitful to improve the code in three areas: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI-IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments and report performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of the laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)

  12. High performance cellular level agent-based simulation with FLAME for the GPU.

    Science.gov (United States)

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and the ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template-driven framework for agent-based modelling (ABM) on parallel architectures, ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has shown massive performance improvements over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
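
    The data-parallel agent updates that make such frameworks GPU-friendly can be sketched generically, as below; the repulsion rule, parameters and array layout are invented for illustration and do not follow the FLAME or FLAME GPU specification format.

      # Generic cellular agent-based update (not the FLAME/FLAME GPU API): every
      # "cell" agent is pushed away from nearby neighbours, with the whole update
      # vectorized over agents in the spirit of a data-parallel GPU kernel.
      import numpy as np

      def step(positions, repulsion=0.01, cutoff=0.1):
          """One synchronous update of all cell agents; positions is an (N, 2) array."""
          diff = positions[:, None, :] - positions[None, :, :]   # pairwise displacement vectors
          dist = np.linalg.norm(diff, axis=-1)
          np.fill_diagonal(dist, np.inf)                         # no self-interaction
          dist = np.maximum(dist, 1e-2)                          # soften near-coincident pairs
          push = np.where(dist[..., None] < cutoff, diff / dist[..., None] ** 2, 0.0)
          return positions + repulsion * push.sum(axis=1)

      rng = np.random.default_rng(2)
      cells = rng.uniform(0.0, 1.0, size=(500, 2))
      for _ in range(100):
          cells = step(cells)                                    # cells relax apart over time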

  13. High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I; TOPICAL

    International Nuclear Information System (INIS)

    SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.

    2001-01-01

    This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases

  14. COMSOL-PHREEQC: a tool for high performance numerical simulation of reactive transport phenomena

    International Nuclear Information System (INIS)

    Nardi, Albert; Vries, Luis Manuel de; Trinchero, Paolo; Idiart, Andres; Molinero, Jorge

    2012-01-01

    Document available in extended abstract form only. Comsol Multiphysics (COMSOL, from now on) is a powerful Finite Element software environment for the modelling and simulation of a large number of physics-based systems. The user can apply variables, expressions or numbers directly to solid and fluid domains, boundaries, edges and points, independently of the computational mesh. COMSOL then internally compiles a set of equations representing the entire model. The availability of extremely powerful pre and post processors makes COMSOL a numerical platform well known and extensively used in many branches of sciences and engineering. On the other hand, PHREEQC is a freely available computer program for simulating chemical reactions and transport processes in aqueous systems. It is perhaps the most widely used geochemical code in the scientific community and is openly distributed. The program is based on equilibrium chemistry of aqueous solutions interacting with minerals, gases, solid solutions, exchangers, and sorption surfaces, but also includes the capability to model kinetic reactions with rate equations that are user-specified in a very flexible way by means of Basic statements directly written in the input file. Here we present COMSOL-PHREEQC, a software interface able to communicate and couple these two powerful simulators by means of a Java interface. The methodology is based on Sequential Non Iterative Approach (SNIA), where PHREEQC is compiled as a dynamic subroutine (iPhreeqc) that is called by the interface to solve the geochemical system at every element of the finite element mesh of COMSOL. The numerical tool has been extensively verified by comparison with computed results of 1D, 2D and 3D benchmark examples solved with other reactive transport simulators. COMSOL-PHREEQC is parallelized so that CPU time can be highly optimized in multi-core processors or clusters. Then, fully 3D detailed reactive transport problems can be readily simulated by means of
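
    The Sequential Non Iterative Approach outlined above alternates a transport step with a chemistry step at every node of the mesh. The sketch below shows only that structure, using a 1D explicit transport scheme and a placeholder chemistry routine; the routine's interface, the species and all parameter values are assumptions, whereas the real coupling calls iPhreeqc at every finite element.

      # Schematic Sequential Non-Iterative Approach (SNIA): each timestep first
      # transports the aqueous component, then calls a geochemical solver at every
      # node.  `solve_chemistry_at_node` is a placeholder standing in for the
      # iPhreeqc call of the real COMSOL-PHREEQC coupling.
      import numpy as np

      def advect_diffuse(c, dt, dx, velocity=1e-5, diffusivity=1e-9):
          """Explicit 1D advection-diffusion update of one concentration field."""
          adv = -velocity * np.gradient(c, dx)
          dif = diffusivity * np.gradient(np.gradient(c, dx), dx)
          return c + dt * (adv + dif)

      def solve_chemistry_at_node(concentration):
          """Placeholder equilibrium step; a real coupling solves the full system here."""
          return max(concentration, 0.0)

      nx, dt, dx = 200, 100.0, 0.01
      calcium = np.zeros(nx)                            # toy aqueous component, mol/L
      for _ in range(1000):
          calcium[0] = 1.0                              # fixed inflow boundary
          calcium = advect_diffuse(calcium, dt, dx)     # 1) transport step
          for i in range(nx):                           # 2) chemistry node by node
              calcium[i] = solve_chemistry_at_node(calcium[i])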

  15. High Performance Wideband CMOS CCI and its Application in Inductance Simulator Design

    Directory of Open Access Journals (Sweden)

    ARSLAN, E.

    2012-08-01

    In this paper, a new, differential-pair-based, low-voltage, high performance and wideband CMOS first generation current conveyor (CCI) is proposed. The proposed CCI has high voltage swings on ports X and Y and very low equivalent impedance on port X due to its super source follower configuration. It also has high voltage swings (close to supply voltages) on input and output ports and wideband current and voltage transfer ratios. Furthermore, two novel grounded inductance simulator circuits are proposed as application examples. Using HSpice, it is shown that the simulation results of the proposed CCI and also of the presented inductance simulators are in very good agreement with the expected ones.

  16. Assessing Scientific Performance.

    Science.gov (United States)

    Weiner, John M.; And Others

    1984-01-01

    A method for assessing scientific performance based on relationships displayed numerically in published documents is proposed and illustrated using published documents in pediatric oncology for the period 1979-1982. Contributions of a major clinical investigations group, the Childrens Cancer Study Group, are analyzed. Twenty-nine references are…

  17. Simulation model of a twin-tail, high performance airplane

    Science.gov (United States)

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner loop augmentation in the up and away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
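
    The wind-tunnel table look-up with linear interpolation mentioned above can be sketched as follows; the grid resolution, the lift-coefficient values and the variable names are invented placeholders, not the actual F/A-18 aerodynamic database.

      # Table look-up with bilinear interpolation over angle of attack and sideslip,
      # mirroring the structure of a wind-tunnel-derived aerodynamic database.  The
      # coefficient values below are invented, not F/A-18 data.
      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      alpha_grid = np.linspace(-10.0, 90.0, 21)      # angle of attack, degrees
      beta_grid = np.linspace(-20.0, 20.0, 9)        # sideslip, degrees
      cl_table = 0.08 * alpha_grid[:, None] * np.cos(np.radians(beta_grid))[None, :]

      cl_lookup = RegularGridInterpolator((alpha_grid, beta_grid), cl_table,
                                          bounds_error=False, fill_value=None)

      print("CL at alpha=12.5 deg, beta=-3.0 deg:", cl_lookup([[12.5, -3.0]])[0])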

  18. Simulating Effects of High Angle of Attack on Turbofan Engine Performance

    Science.gov (United States)

    Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei

    2013-01-01

    A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.

  19. Computer simulation, rhetoric, and the scientific imagination how virtual evidence shapes science in the making and in the news

    CERN Document Server

    Roundtree, Aimee Kendall

    2013-01-01

    Computer simulations help advance climatology, astrophysics, and other scientific disciplines. They are also at the crux of several high-profile cases of science in the news. How do simulation scientists, with little or no direct observations, make decisions about what to represent? What is the nature of simulated evidence, and how do we evaluate its strength? Aimee Kendall Roundtree suggests answers in Computer Simulation, Rhetoric, and the Scientific Imagination. She interprets simulations in the sciences by uncovering the argumentative strategies that underpin the production and disseminati

  20. Blaze-DEMGPU: Modular high performance DEM framework for the GPU architecture

    Directory of Open Access Journals (Sweden)

    Nicolin Govender

    2016-01-01

    Blaze-DEMGPU is a modular GPU-based discrete element method (DEM) framework that supports polyhedral-shaped particles. The high performance is attributed to the lightweight design and the Single Instruction, Multiple Data (SIMD) execution model that the GPU architecture offers. Blaze-DEMGPU offers suitable algorithms to conduct DEM simulations on the GPU and these algorithms can be extended and modified. Since a large number of scientific simulations are particle-based, many of the algorithms and strategies for GPU implementation present in Blaze-DEMGPU can be applied to other fields. Blaze-DEMGPU will make it easier for new researchers to use high performance GPU computing as well as stimulate wider GPU research efforts by the DEM community.
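
    The per-contact work a DEM kernel performs can be illustrated with a minimal spring-dashpot force between two spheres, sketched below; the parameter values are arbitrary, and Blaze-DEMGPU itself resolves contacts between polyhedral particles on the GPU rather than spheres.

      # Minimal linear spring-dashpot contact force between two spherical particles,
      # shown only to illustrate the per-contact arithmetic a DEM kernel evaluates.
      import numpy as np

      def contact_force(x1, x2, v1, v2, radius=0.005, k=1e4, damping=5.0):
          """Return the force on particle 1 from particle 2 (zero if not in contact)."""
          normal = x1 - x2
          dist = np.linalg.norm(normal)
          overlap = 2.0 * radius - dist
          if overlap <= 0.0:
              return np.zeros(3)
          n_hat = normal / dist
          rel_vel = np.dot(v1 - v2, n_hat)
          return (k * overlap - damping * rel_vel) * n_hat

      f = contact_force(np.array([0.0, 0.0, 0.0]), np.array([0.009, 0.0, 0.0]),
                        np.zeros(3), np.zeros(3))
      print("contact force on particle 1:", f)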

  1. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    KAUST Repository

    Bao, Kai; Yan, Mi; Allen, Rebecca; Salama, Amgad; Lu, Ligang; Jordan, Kirk E.; Sun, Shuyu; Keyes, David E.

    2015-01-01

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems

  2. Autonomy vs. dependency of scientific collaboration in scientific performance

    Energy Technology Data Exchange (ETDEWEB)

    Chinchilla-Rodriguez, Z.; Miguel, S.; Perianes-Rodriguez, A.; Ovalle-Perandones, M.A.; Olmeda-Gomez, C.

    2016-07-01

    This article explores the capacity of Latin America in the generation of scientific knowledge and its visibility at the global level. The novelty of the contribution lies in the decomposition of leadership and its combination with the results of performance indicators. We compare the normalized citation of all output against the leading output, as well as scientific excellence (Chinchilla et al. 2016a; 2016b), technological impact and the trends in collaboration types and normalized citation. The main goal is to determine to what extent the main Latin American producers of scientific output depend on collaboration to heighten research performance in terms of citation or, on the contrary, whether there is enough autonomy and capacity to leverage their competitiveness through the design of research and development agendas. To the best of our knowledge, this is the first study adopting this approach at the country level within the field of N&N. (Author)

  3. Visualization and Analysis of Climate Simulation Performance Data

    Science.gov (United States)

    Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg

    2015-04-01

    Visualization is the key process of transforming abstract (scientific) data into a graphical representation, to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal thereby is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings, cache misses etc. have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load imbalance issues. High resolution climate simulations are carried out on tens to hundreds of thousands of cores, thus yielding a vast amount of profiling data, which cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service and in partnership with DKRZ. The visualization and analysis of the model's performance data allows us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, as well as present new ideas and

  4. Aging analysis of high performance FinFET flip-flop under Dynamic NBTI simulation configuration

    Science.gov (United States)

    Zainudin, M. F.; Hussin, H.; Halim, A. K.; Karim, J.

    2018-03-01

    A mechanism known as Negative-Bias Temperature Instability (NBTI) degrades the main electrical parameters of a circuit, especially its performance. So far, circuit designs have focused only on high performance without considering circuit reliability and robustness. In this paper, the main performance metrics of a high performance FinFET flip-flop, such as delay time and power, were studied in the presence of NBTI degradation. The aging analysis was verified using a 16nm High Performance Predictive Technology Model (PTM) and different commands available in Synopsys HSPICE. The results show that the circuit under longer dynamic NBTI simulation exhibits the largest increase in gate delay and reduction in average power, from a fresh simulation up to the aged stress time under nominal conditions. In addition, the circuit performance under varied stress conditions, such as temperature and negative gate stress bias, was also studied.

  5. Software quality and process improvement in scientific simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, J.; Webster, R. [Los Alamos National Lab., NM (United States)

    1997-11-01

    This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.

  6. LIAR -- A computer program for the modeling and simulation of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Amongst others, it addresses the needs of state-of-the-art linear colliders where low emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm
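
    The kind of beam-dynamics bookkeeping such a code performs can be sketched generically: particle (x, x') coordinates are tracked through drift and thin-quadrupole transfer matrices and the rms emittance is monitored. The lattice, beam parameters and function names below are invented for illustration; this is not LIAR itself, which is a FORTRAN code.

      # Generic linear transport sketch: track a bunch through a toy lattice of
      # drifts and thin quadrupoles and report the rms emittance afterwards.
      import numpy as np

      def drift(length):
          return np.array([[1.0, length], [0.0, 1.0]])

      def thin_quad(focal_length):
          return np.array([[1.0, 0.0], [-1.0 / focal_length, 1.0]])

      def rms_emittance(coords):
          x, xp = coords
          return np.sqrt(np.linalg.det(np.cov(x, xp)))

      rng = np.random.default_rng(3)
      coords = rng.multivariate_normal([0.0, 0.0], [[1e-6, 0.0], [0.0, 1e-8]], 10_000).T

      lattice = [drift(1.0), thin_quad(2.0), drift(1.0), thin_quad(-2.0)] * 20
      for element in lattice:
          coords = element @ coords                    # apply each transfer matrix
      print("rms emittance after transport:", rms_emittance(coords))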

  7. Correlations between the simulated military tasks performance and physical fitness tests at high altitude

    Directory of Open Access Journals (Sweden)

    Eduardo Borba Neves

    2017-11-01

    The aim of this study was to investigate the correlations between simulated military task performance and physical fitness tests at high altitude. This research is part of a project to modernize the physical fitness test of the Colombian Army. Data collection was performed at the 13th Battalion of Instruction and Training, located 30km south of Bogota D.C., with a temperature range from 1ºC to 23ºC during the study period, and at 3100m above sea level. The sample was composed of 60 volunteers from three different platoons. The volunteers started the data collection protocol after 2 weeks of acclimation at this altitude. The main results were the identification of a high positive correlation between the 3 assault walls in succession test and simulated military task performance (r = 0.764, p<0.001), and a moderate negative correlation between pull-ups and simulated military task performance (r = -0.535, p<0.001). The 20 consecutive repetitions of the 3 assault walls in succession can be recommended as a good way to estimate performance in operational tasks which involve assault walls, networks of wires, military climbing nets and the Tarzan jump, among others, at high altitude.

  8. High correlation between performance on a virtual-reality simulator and real-life cataract surgery

    DEFF Research Database (Denmark)

    Thomsen, Ann Sofia Skou; Smith, Phillip; Subhi, Yousif

    2017-01-01

    PURPOSE: To investigate the correlation in performance of cataract surgery between a virtual-reality simulator and real-life surgery using two objective assessment tools with evidence of validity. METHODS: Cataract surgeons with varying levels of experience were included in the study. All... antitremor training, forceps training, bimanual training, capsulorhexis and phaco divide and conquer. RESULTS: Eleven surgeons were enrolled. After a designated warm-up period, the proficiency-based test on the EyeSi simulator was strongly correlated to real-life performance measured by motion-tracking software of cataract surgical videos, with a Pearson correlation coefficient of -0.70 (p = 0.017). CONCLUSION: Performance on the EyeSi simulator is significantly and highly correlated to real-life surgical performance. However, it is recommended that performance assessments are made using multiple data...

  9. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    International Nuclear Information System (INIS)

    Tang, William M.

    2011-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  10. Scientific and computational challenges of the fusion simulation program (FSP)

    International Nuclear Information System (INIS)

    Tang, William M.

    2011-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) - a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  11. Cognitive load, emotion, and performance in high-fidelity simulation among beginning nursing students: a pilot study.

    Science.gov (United States)

    Schlairet, Maura C; Schlairet, Timothy James; Sauls, Denise H; Bellflowers, Lois

    2015-03-01

    Establishing the impact of the high-fidelity simulation environment on student performance, as well as identifying factors that could predict learning, would refine simulation outcome expectations among educators. The purpose of this quasi-experimental pilot study was to explore the impact of simulation on emotion and cognitive load among beginning nursing students. Forty baccalaureate nursing students participated in teaching simulations, rated their emotional state and cognitive load, and completed evaluation simulations. Two principal components of emotion were identified representing the pleasant activation and pleasant deactivation components of affect. Mean rating of cognitive load following simulation was high. Linear regression identified slight but statistically nonsignificant positive associations between principal components of emotion and cognitive load. Logistic regression identified a negative but statistically nonsignificant effect of cognitive load on assessment performance. Among lower-ability students, a more pronounced effect of cognitive load on assessment performance was observed; this also was statistically nonsignificant. Copyright 2015, SLACK Incorporated.

  12. The Potential of the Cell Processor for Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  13. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2015-01-15

    High performance computing (HPC) platforms are evolving to more heterogeneous configurations to support the workloads of various applications. The current hardware landscape is composed of traditional multicore CPUs equipped with hardware accelerators that can handle high levels of parallelism. Graphical Processing Units (GPUs) are popular high performance hardware accelerators in modern supercomputers. GPU programming has a different model than that for CPUs, which means that many numerical kernels have to be redesigned and optimized specifically for this architecture. GPUs usually outperform multicore CPUs in some compute intensive and massively parallel applications that have regular processing patterns. However, most scientific applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved. This dissertation presents a uniform design strategy for optimizing critical memory-bound kernels on GPUs. Based on hierarchical register blocking, double buffering and latency hiding techniques, this strategy leverages the performance of a wide range of standard numerical kernels found in dense and sparse linear algebra libraries. The work presented here focuses on matrix-vector multiplication kernels (MVM) as representative and most important memory-bound operations in this context. Each kernel inherits the benefits of the proposed strategies. By exposing a proper set of tuning parameters, the strategy is flexible enough to suit different types of matrices, ranging from large dense matrices, to sparse matrices with dense block structures, while high performance is maintained. Furthermore, the tuning parameters are used to maintain the relative performance across different GPU architectures. Multi-GPU acceleration is proposed to scale the performance on several devices. The
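
    The block-structured sparse matrix-vector product that such kernels target can be sketched in a few lines; the plain Python loop over dense blocks below only stands in for the register-blocked, double-buffered GPU implementation, and the block layout is invented.

      # Sketch of a block-sparse matrix-vector product: accumulate the contribution
      # of every stored dense block into the output vector.
      import numpy as np

      def block_sparse_mv(blocks, x, block_size, n_block_rows):
          """blocks maps (block_row, block_col) -> dense (b, b) array."""
          y = np.zeros(n_block_rows * block_size)
          for (i, j), block in blocks.items():
              y[i * block_size:(i + 1) * block_size] += \
                  block @ x[j * block_size:(j + 1) * block_size]
          return y

      b, n_rows = 4, 3
      blocks = {(0, 0): np.eye(b), (1, 2): 2.0 * np.eye(b), (2, 1): np.ones((b, b))}
      x = np.arange(3 * b, dtype=float)
      print(block_sparse_mv(blocks, x, b, n_rows))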

  14. Numerical simulation of turbulent combustion: Scientific challenges

    Science.gov (United States)

    Ren, ZhuYin; Lu, Zhen; Hou, LingYun; Lu, LiuYan

    2014-08-01

    Predictive simulation of engine combustion is key to understanding the underlying complicated physicochemical processes, improving engine performance, and reducing pollutant emissions. Critical issues such as turbulence modeling, turbulence-chemistry interaction, and accommodation of detailed chemical kinetics in complex flows remain challenging and essential for high-fidelity combustion simulation. This paper reviews the current status of the state-of-the-art large eddy simulation (LES)/probability density function (PDF)/detailed chemistry approach that can address the three challenging modelling issues. PDF as a subgrid model for LES is formulated and the hybrid mesh-particle method for LES/PDF simulations is described. The need for further development of micro-mixing models for PDF simulations of turbulent premixed combustion is then identified. Finally, the different acceleration methods for detailed chemistry are reviewed and a combined strategy is proposed for further development.
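
    One widely used micro-mixing closure in particle PDF methods is the IEM (Interaction by Exchange with the Mean) model, sketched below for notional particles in a single cell. It is included here as a standard textbook illustration, not as one of the specific models developed in the review, and the mixing frequency and other parameter values are arbitrary.

      # Illustrative IEM micro-mixing update: each PDF particle's composition is
      # relaxed toward the cell mean at a rate set by the turbulence frequency omega.
      import numpy as np

      def iem_update(phi, omega, dt, c_phi=2.0):
          """d(phi)/dt = -0.5 * C_phi * omega * (phi - <phi>), integrated explicitly."""
          return phi - 0.5 * c_phi * omega * (phi - phi.mean()) * dt

      rng = np.random.default_rng(4)
      phi = rng.uniform(0.0, 1.0, 1000)     # mixture fraction carried by PDF particles
      for _ in range(100):
          phi = iem_update(phi, omega=50.0, dt=1e-4)
      print("scalar variance after mixing:", phi.var())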

  15. High-Fidelity Simulation in Occupational Therapy Curriculum: Impact on Level II Fieldwork Performance

    Directory of Open Access Journals (Sweden)

    Rebecca Ozelie

    2016-10-01

    Simulation experiences provide experiential learning opportunities during artificially produced real-life medical situations in a safe environment. Evidence supports using simulation in health care education, yet limited quantitative evidence exists in occupational therapy. This study aimed to evaluate the differences in scores on the AOTA Fieldwork Performance Evaluation for the Occupational Therapy Student of Level II occupational therapy students who received high-fidelity simulation training and students who did not. A retrospective analysis of 180 students from a private university was used. Independent samples nonparametric t tests examined mean differences between Fieldwork Performance Evaluation scores of those who did and did not receive simulation experiences in the curriculum. Mean ranks were also analyzed for subsection scores and practice settings. Results of this study found no significant difference in overall Fieldwork Performance Evaluation scores between the two groups. The students who completed simulation and had fieldwork in inpatient rehabilitation had the greatest increase in mean rank scores and increases in several subsections. The outcome measure used in this study was found to have limited discriminatory capability and may have affected the results; however, this study finds that using simulation may be a beneficial supplement to didactic coursework in occupational therapy curricula.

  16. Driving Simulator Development and Performance Study

    OpenAIRE

    Juto, Erik

    2010-01-01

    The driving simulator is a vital tool for much of the research performed at the Swedish National Road and Transport Institute (VTI). Currently, VTI possesses three driving simulators: two high-fidelity simulators developed and constructed by VTI, and a medium-fidelity simulator from the German company Dr.-Ing. Reiner Foerst GmbH. The two high-fidelity simulators run the same simulation software, developed at VTI. The medium-fidelity simulator runs proprietary simulation software. At VTI there is...

  17. Basic research in the East and West: a comparison of the scientific performance of high-energy physics accelerators

    International Nuclear Information System (INIS)

    Irvine, J.; Martin, B.R.

    1985-01-01

    This paper presents the results of a study comparing the past scientific performance of high-energy physics accelerators in the Eastern bloc with that of their main Western counterparts. Output-evaluation indicators are used. After carefully examining the extent to which the output indicators used may be biased against science in the Eastern bloc, various conclusions are drawn about the relative contributions to science made by these accelerators. Where significant differences in performance are apparent, an attempt is made to identify the main factors responsible. (author)

  18. Simulating and stimulating performance: Introducing distributed simulation to enhance musical learning and performance

    Directory of Open Access Journals (Sweden)

    Aaron eWilliamon

    2014-02-01

    Musicians typically rehearse far away from their audiences and in practice rooms that differ significantly from the concert venues in which they aspire to perform. Due to the high costs and inaccessibility of such venues, much current international music training lacks repeated exposure to realistic performance situations, with students learning all too late (or not at all) how to manage performance stress and the demands of their audiences. Virtual environments have been shown to be an effective training tool in the fields of medicine and sport, offering practitioners access to real-life performance scenarios but with lower risk of negative evaluation and outcomes. The aim of this research was to design and test the efficacy of simulated performance environments in which conditions of real performance could be recreated. Advanced violin students (n=11) were recruited to perform in two simulations: a solo recital with a small virtual audience and an audition situation with three expert virtual judges. Each simulation contained back-stage and on-stage areas, life-sized interactive virtual observers, and pre- and post-performance protocols designed to match those found at leading international performance venues. Participants completed a questionnaire on their experiences of using the simulations. Results show that both simulated environments offered realistic experience of performance contexts and were rated particularly useful for developing performance skills. For a subset of 7 violinists, state anxiety and electrocardiographic data were collected during the simulated audition and an actual audition with real judges. Results display comparable levels of reported state anxiety and patterns of heart rate variability in both situations, suggesting that responses to the simulated audition closely approximate those of a real audition. The findings are discussed in relation to their implications, both generalizable and individual-specific, for

  19. Simulating and stimulating performance: introducing distributed simulation to enhance musical learning and performance.

    Science.gov (United States)

    Williamon, Aaron; Aufegger, Lisa; Eiholzer, Hubert

    2014-01-01

    Musicians typically rehearse far away from their audiences and in practice rooms that differ significantly from the concert venues in which they aspire to perform. Due to the high costs and inaccessibility of such venues, much current international music training lacks repeated exposure to realistic performance situations, with students learning all too late (or not at all) how to manage performance stress and the demands of their audiences. Virtual environments have been shown to be an effective training tool in the fields of medicine and sport, offering practitioners access to real-life performance scenarios but with lower risk of negative evaluation and outcomes. The aim of this research was to design and test the efficacy of simulated performance environments in which conditions of "real" performance could be recreated. Advanced violin students (n = 11) were recruited to perform in two simulations: a solo recital with a small virtual audience and an audition situation with three "expert" virtual judges. Each simulation contained back-stage and on-stage areas, life-sized interactive virtual observers, and pre- and post-performance protocols designed to match those found at leading international performance venues. Participants completed a questionnaire on their experiences of using the simulations. Results show that both simulated environments offered realistic experience of performance contexts and were rated particularly useful for developing performance skills. For a subset of 7 violinists, state anxiety and electrocardiographic data were collected during the simulated audition and an actual audition with real judges. Results display comparable levels of reported state anxiety and patterns of heart rate variability in both situations, suggesting that responses to the simulated audition closely approximate those of a real audition. The findings are discussed in relation to their implications, both generalizable and individual-specific, for performance training.

  20. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of natural sciences along with theory and experimentation. High performance computing in particular is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015, the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1. Thereby, the compute capabilities were more than doubled. This book covers the time frame from June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects that are covered in this book, with each one of these projects using at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics simulations, chemistry and material sciences, astrophysics, and life sciences.

  1. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed

  2. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  3. Research initiatives for plug-and-play scientific computing

    International Nuclear Information System (INIS)

    McInnes, Lois Curfman; Dahlgren, Tamara; Nieplocha, Jarek; Bernholdt, David; Allan, Ben; Armstrong, Rob; Chavarria, Daniel; Elwasif, Wael; Gorton, Ian; Kenny, Joe; Krishan, Manoj; Malony, Allen; Norris, Boyana; Ray, Jaideep; Shende, Sameer

    2007-01-01

    This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance

  4. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Kostadin, Damevski [Virginia State Univ., Petersburg, VA (United States)

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  5. Scientific Performance of a Nano-satellite MeV Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Lucchetta, Giulio; Berlato, Francesco; Rando, Riccardo; Bastieri, Denis; Urso, Giorgio, E-mail: giulio.lucchetta@desy.de, E-mail: fberlato@mpe.mpg.de [Dipartimento di Fisica and Astronomia “G. Galilei,” Università di Padova, I-35131 Padova (Italy)

    2017-05-01

    Over the past two decades, both X-ray and gamma-ray astronomy have experienced great progress. However, the region of the electromagnetic spectrum around ∼1 MeV is not so thoroughly explored. Future medium-sized gamma-ray telescopes will fill this gap in observations. As the timescale for the development and launch of a medium-class mission is ∼10 years, with substantial costs, we propose a different approach for the immediate future. In this paper, we evaluate the viability of a much smaller and cheaper detector: a nano-satellite Compton telescope, based on the CubeSat architecture. The scientific performance of this telescope would be well below that of the instrument expected for the future larger missions; however, via simulations, we estimate that such a compact telescope will achieve a performance similar to that of COMPTEL.

  6. Scientific and computational challenges of the fusion simulation project (FSP)

    International Nuclear Information System (INIS)

    Tang, W M

    2008-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Project (FSP). The primary objective is to develop advanced software designed to use leadership-class computers for carrying out multiscale physics simulations to provide information vital to delivering a realistic integrated fusion simulation model with unprecedented physics fidelity. This multiphysics capability will be unprecedented in that in the current FES applications domain, the largest-scale codes are used to carry out first-principles simulations of mostly individual phenomena in realistic 3D geometry while the integrated models are much smaller-scale, lower-dimensionality codes with significant empirical elements used for modeling and designing experiments. The FSP is expected to be the most up-to-date embodiment of the theoretical and experimental understanding of magnetically confined thermonuclear plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing a reliable ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices on all relevant time and space scales. From a computational perspective, the fusion energy science application goal to produce high-fidelity, whole-device modeling capabilities will demand computing resources in the petascale range and beyond, together with the associated multicore algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative device involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics

  7. Assessing Technical Performance and Determining the Learning Curve in Cleft Palate Surgery Using a High-Fidelity Cleft Palate Simulator.

    Science.gov (United States)

    Podolsky, Dale J; Fisher, David M; Wong Riff, Karen W; Szasz, Peter; Looi, Thomas; Drake, James M; Forrest, Christopher R

    2018-06-01

    This study assessed technical performance in cleft palate repair using a newly developed assessment tool and high-fidelity cleft palate simulator through a longitudinal simulation training exercise. Three residents performed five and one resident performed nine consecutive endoscopically recorded cleft palate repairs using a cleft palate simulator. Two fellows in pediatric plastic surgery and two expert cleft surgeons also performed recorded simulated repairs. The Cleft Palate Objective Structured Assessment of Technical Skill (CLOSATS) and end-product scales were developed to assess performance. Two blinded cleft surgeons assessed the recordings and the final repairs using the CLOSATS, end-product scale, and a previously developed global rating scale. The average procedure-specific (CLOSATS), global rating, and end-product scores increased logarithmically after each successive simulation session for the residents. Reliability of the CLOSATS (average item intraclass correlation coefficient (ICC), 0.85 ± 0.093) and global ratings (average item ICC, 0.91 ± 0.02) among the raters was high. Reliability of the end-product assessments was lower (average item ICC, 0.66 ± 0.15). Standard setting linear regression using an overall cutoff score of 7 of 10 corresponded to a pass score for the CLOSATS and the global score of 44 (maximum, 60) and 23 (maximum, 30), respectively. Using logarithmic best-fit curves, 6.3 simulation sessions are required to reach the minimum standard. A high-fidelity cleft palate simulator has been developed that improves technical performance in cleft palate repair. The simulator and technical assessment scores can be used to determine performance before operating on patients.

  8. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    Cordes Ben

    2009-01-01

    Full Text Available High-performance reconfigurable computing (HPRC is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.
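
    To illustrate the backprojection principle this record describes (and not the paper's FPGA implementation), the following NumPy sketch forms a SAR image by summing each pulse's range-compressed return at the range computed for every pixel. The array names, the linear-interpolation lookup, and the omission of the matched phase term exp(+j4πr/λ) are simplifying assumptions.

        import numpy as np

        def backproject(range_profiles, platform_pos, range_bins, grid_x, grid_y):
            """Naive time-domain SAR backprojection (illustrative sketch only).

            range_profiles : (n_pulses, n_bins) complex range-compressed data
            platform_pos   : (n_pulses, 3) antenna position for each pulse
            range_bins     : (n_bins,) range in meters of each range bin
            grid_x, grid_y : 1-D arrays defining the output image grid (z = 0 plane)
            """
            image = np.zeros((grid_y.size, grid_x.size), dtype=complex)
            xx, yy = np.meshgrid(grid_x, grid_y)
            for p in range(range_profiles.shape[0]):
                # Distance from this pulse's antenna position to every image pixel.
                dx = xx - platform_pos[p, 0]
                dy = yy - platform_pos[p, 1]
                dz = -platform_pos[p, 2]
                r = np.sqrt(dx * dx + dy * dy + dz * dz)
                # Look up the echo at that range (linear interpolation) and accumulate.
                prof = range_profiles[p]
                re = np.interp(r.ravel(), range_bins, prof.real).reshape(r.shape)
                im = np.interp(r.ravel(), range_bins, prof.imag).reshape(r.shape)
                image += re + 1j * im
            return image

    Because every pixel/pulse pair is independent, the inner loops parallelize naturally, which is what makes the algorithm attractive for FPGA and clustered implementations.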

  9. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available High-performance reconfigurable computing (HPRC is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  10. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation.

    Science.gov (United States)

    Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S

    2013-01-08

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
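
    The layered design described above is visible in OpenMM's application layer. Below is a minimal sketch of a short simulation using the modern OpenMM Python API (module names have changed since the OpenMM 4 era, which used the simtk.openmm packages); the input file name is a placeholder.

        import openmm as mm
        from openmm import app, unit

        pdb = app.PDBFile('input.pdb')                      # placeholder structure file
        forcefield = app.ForceField('amber14-all.xml', 'amber14/tip3p.xml')
        system = forcefield.createSystem(pdb.topology,
                                         nonbondedMethod=app.PME,
                                         nonbondedCutoff=1.0 * unit.nanometer,
                                         constraints=app.HBonds)
        # The integrator and Platform selection are where hardware-specific details are hidden.
        integrator = mm.LangevinIntegrator(300 * unit.kelvin,
                                           1.0 / unit.picosecond,
                                           0.002 * unit.picoseconds)
        simulation = app.Simulation(pdb.topology, system, integrator)
        simulation.context.setPositions(pdb.positions)
        simulation.minimizeEnergy()
        simulation.reporters.append(app.StateDataReporter('log.csv', 100,
                                                          step=True, potentialEnergy=True))
        simulation.step(1000)                               # run 2 ps of dynamics

    The same script runs unchanged on CPU, CUDA, or OpenCL platforms, which is the hardware independence the abstract emphasizes.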

  11. Undergraduate medical academic performance is improved by scientific training.

    Science.gov (United States)

    Zhang, Lili; Zhang, Wei; Wu, Chong; Liu, Zhongming; Cai, Yunfei; Cao, Xingguo; He, Yushan; Liu, Guoxiang; Miao, Hongming

    2017-09-01

    The effect of scientific training on course learning in undergraduates is still controversial. In this study, we investigated the academic performance of undergraduate students with and without scientific training. The results show that scientific training improves students' test scores in general medical courses, such as biochemistry and molecular biology, cell biology, physiology, and even English. We classified scientific training into four levels. We found that literature reading could significantly improve students' test scores in general courses. Students whose scientific training included carrying out experiments and publishing articles performed better than their untrained counterparts in biochemistry and molecular biology examinations. The questionnaire survey demonstrated that the trained students were more confident in their course learning and displayed more interest, motivation and capability in course learning. In summary, undergraduate academic performance is improved by scientific training. Our findings shed light on novel strategies for the management of undergraduate education in medical school. © 2017 by The International Union of Biochemistry and Molecular Biology, 45(5):379-384, 2017.

  12. 20th Joint Workshop on Sustained Simulation Performance

    CERN Document Server

    Bez, Wolfgang; Focht, Erich; Patel, Nisarg; Kobayashi, Hiroaki

    2016-01-01

    The book presents the state of the art in high-performance computing and simulation on modern supercomputer architectures. It explores general trends in hardware and software development, and then focuses specifically on the future of high-performance systems and heterogeneous architectures. It also covers applications such as computational fluid dynamics, material science, medical applications and climate research and discusses innovative fields like coupled multi-physics or multi-scale simulations. The papers included were selected from the presentations given at the 20th Workshop on Sustained Simulation Performance at the HLRS, University of Stuttgart, Germany in December 2015, and the subsequent Workshop on Sustained Simulation Performance at Tohoku University in February 2016.

  13. Verification of Scientific Simulations via Hypothesis-Driven Comparative and Quantitative Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [ORNL; Heitmann, Katrin [ORNL; Petersen, Mark R [ORNL; Woodring, Jonathan [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Ahrens, Christine [Los Alamos National Laboratory (LANL); Hsu, Chung-Hsing [ORNL; Geveci, Berk [ORNL

    2010-11-01

    This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.
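
    A minimal sketch of the quantitative-comparison step of such a verification process (illustrative only, not the authors' tooling): load two simulation outputs, compute a relative-difference field, and summarize it both numerically and visually. The file names are placeholders and the fields are assumed to be 2-D arrays.

        import numpy as np
        import matplotlib.pyplot as plt

        ref = np.load('reference_run.npy')      # placeholder: baseline simulation field (2-D)
        new = np.load('candidate_run.npy')      # placeholder: field from the code under test

        # Relative difference, guarding against division by zero.
        denom = np.maximum(np.abs(ref), 1e-12)
        rel_diff = (new - ref) / denom

        print('max |rel diff| =', np.abs(rel_diff).max())
        print('L2 norm of diff =', np.linalg.norm(new - ref))

        # Side-by-side comparative view plus a histogram of the differences.
        fig, axes = plt.subplots(1, 3, figsize=(12, 4))
        axes[0].imshow(ref)
        axes[0].set_title('reference')
        axes[1].imshow(new)
        axes[1].set_title('candidate')
        axes[2].hist(rel_diff.ravel(), bins=100)
        axes[2].set_title('relative difference')
        plt.tight_layout()
        plt.show()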

  14. Performance of high-rate TRD prototypes for the CBM experiment in test beam and simulation

    Energy Technology Data Exchange (ETDEWEB)

    Klein-Boesing, Melanie [Institut fuer Kernphysik, Muenster (Germany)

    2008-07-01

    The goal of the future Compressed Baryonic Matter (CBM) experiment is to explore the QCD phase diagram in the region of high baryon densities not covered by other experiments. Among other detectors, it will employ a Transition Radiation Detector (TRD) for tracking of charged particles and electron identification. To meet the demands for tracking and for electron identification at large particle densities and very high interaction rates, high efficiency TRD prototypes have been developed. These prototypes with double-sided pad plane electrodes based on Multiwire Proportional Chambers (MWPC) have been tested at GSI and implemented in the simulation framework of CBM. Results of the performance in a test beam and in simulations are shown. In addition, we present a study of the performance of CBM for electron identification and dilepton reconstruction with this new detector layout.

  15. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    KAUST Repository

    Bao, Kai

    2015-10-26

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example for coupling reservoir simulation and MD, although the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well-demonstrated with several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of
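
    The coupling pattern described above can be sketched as a simple driver loop in which the flow solver requests fluid properties from an MD-based property service at the current conditions. Everything below (function names, property formulas, numbers) is a hypothetical illustration of the architecture, not the RST or MD interfaces of the actual framework.

        # Hypothetical sketch of a reservoir-simulation / molecular-dynamics coupling loop.

        def md_fluid_properties(pressure, temperature, composition):
            """Stand-in for an MD calculation returning density and viscosity (values are made up)."""
            density = 700.0 + 0.1 * pressure - 0.5 * (temperature - 300.0)   # kg/m^3, illustrative
            viscosity = 5e-4 * (1.0 + 0.002 * pressure)                      # Pa*s, illustrative
            return {'density': density, 'viscosity': viscosity}

        def advance_flow_step(state, props, dt):
            """Stand-in for one time step of the parallel reservoir simulator."""
            state['pressure'] += dt * 0.01 * props['density'] / props['viscosity'] * 1e-6
            return state

        state = {'pressure': 20.0, 'temperature': 350.0, 'composition': {'CO2': 1.0}}
        dt = 1.0
        for step in range(10):
            props = md_fluid_properties(state['pressure'], state['temperature'],
                                        state['composition'])
            state = advance_flow_step(state, props, dt)
            print(step, round(state['pressure'], 4), round(props['density'], 2))

    In the real framework both sides run as large parallel jobs; the sketch only shows the direction of data flow between them.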

  16. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chips in high-performance workstations. In terms of high-performance computation capability, GPUs deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the introduction of the Compute Unified Device Architecture (CUDA) and of CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in general-purpose GPU a...
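
    To give a flavor of the CUDA programming model from Python, here is an illustrative kernel written with the Numba compiler (Numba is not mentioned in the record; it is simply a convenient way to express a CUDA kernel in Python). One thread handles one array element; running it requires a CUDA-capable GPU and the numba package.

        import numpy as np
        from numba import cuda

        @cuda.jit
        def saxpy(a, x, y, out):
            # Each thread handles one element, indexed by its global thread id.
            i = cuda.grid(1)
            if i < x.size:
                out[i] = a * x[i] + y[i]

        n = 1_000_000
        x = np.random.rand(n).astype(np.float32)
        y = np.random.rand(n).astype(np.float32)
        d_x, d_y = cuda.to_device(x), cuda.to_device(y)
        d_out = cuda.device_array_like(x)

        threads_per_block = 256
        blocks = (n + threads_per_block - 1) // threads_per_block
        saxpy[blocks, threads_per_block](np.float32(2.0), d_x, d_y, d_out)

        result = d_out.copy_to_host()
        print(np.allclose(result, 2.0 * x + y))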

  17. Computer simulations and the changing face of scientific experimentation

    CERN Document Server

    Duran, Juan M

    2013-01-01

    Computer simulations have become a central tool for scientific practice. In many cases, their use has replaced standard experimental procedures, not to mention cases where the target system is empirical but no techniques for direct manipulation of the system exist, as in astronomical observation. In such cases, computer simulations have proved to be of central importance. The question of their use and implementation, therefore, is not only a technical one but also represents a challenge for the humanities. In this volume, scientists, historians, and philosophers join

  18. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    Science.gov (United States)

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

    NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and, upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphics Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction has been carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other through contact, friction, and cohesive forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time. The figure
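
    For readers unfamiliar with the contact formulation mentioned above, the sketch below shows a projected Gauss-Seidel iteration for a small linear complementarity problem (find z >= 0 with w = Mz + q >= 0 and z.w = 0). This is a generic textbook-style illustration, not the DVI solver used in Chrono::Engine.

        import numpy as np

        def projected_gauss_seidel(M, q, iterations=200):
            """Solve the LCP  w = M z + q,  z >= 0,  w >= 0,  z.w = 0  (illustrative)."""
            z = np.zeros_like(q)
            for _ in range(iterations):
                for i in range(len(q)):
                    # Gauss-Seidel update followed by projection onto the non-negative cone.
                    residual = q[i] + M[i] @ z - M[i, i] * z[i]
                    z[i] = max(0.0, -residual / M[i, i])
            return z

        # Tiny example with a symmetric positive definite M (which guarantees a solution).
        M = np.array([[4.0, 1.0], [1.0, 3.0]])
        q = np.array([-1.0, -2.0])
        z = projected_gauss_seidel(M, q)
        print('z =', z, ' w =', M @ z + q)

    In a contact solver, each unknown z_i corresponds to a contact impulse and the projection enforces non-penetration; per-contact independence is what makes such sweeps amenable to GPU parallelization.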

  19. GNES-R: Global nuclear energy simulator for reactors task 1: High-fidelity neutron transport

    International Nuclear Information System (INIS)

    Clarno, K.; De Almeida, V.; D'Azevedo, E.; De Oliveira, C.; Hamilton, S.

    2006-01-01

    A multi-laboratory, multi-university collaboration has formed to advance the state-of-the-art in high-fidelity, coupled-physics simulation of nuclear energy systems. We are embarking on the first-phase in the development of a new suite of simulation tools dedicated to the advancement of nuclear science and engineering technologies. We seek to develop and demonstrate a new generation of multi-physics simulation tools that will explore the scientific phenomena of tightly coupled physics parameters within nuclear systems, support the design and licensing of advanced nuclear reactors, and provide benchmark quality solutions for code validation. In this paper, we have presented the general scope of the collaborative project and discuss the specific challenges of high-fidelity neutronics for nuclear reactor simulation and the inroads we have made along this path. The high-performance computing neutronics code system utilizes the latest version of SCALE to generate accurate, problem-dependent cross sections, which are used in NEWTRNX - a new 3-D, general-geometry, discrete-ordinates solver based on the Slice-Balance Approach. The Global Nuclear Energy Simulator for Reactors (GNES-R) team is embarking on a long-term simulation development project that encompasses multiple laboratories and universities for the expansion of high-fidelity coupled-physics simulation of nuclear energy systems. (authors)

  20. Undergraduate Medical Academic Performance is Improved by Scientific Training

    Science.gov (United States)

    Zhang, Lili; Zhang, Wei; Wu, Chong; Liu, Zhongming; Cai, Yunfei; Cao, Xingguo; He, Yushan; Liu, Guoxiang; Miao, Hongming

    2017-01-01

    The effect of scientific training on course learning in undergraduates is still controversial. In this study, we investigated the academic performance of undergraduate students with and without scientific training. The results show that scientific training improves students' test scores in general medical courses, such as biochemistry and…

  1. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.

    2018-04-01

    Nuclear reactor two-phase flows remain a great engineering challenge, where the high-resolution two-phase flow database which can inform practical model development is still sparse due to the extreme reactor operation conditions and measurement difficulties. Owing to the rapid growth of computing power, direct numerical simulation (DNS) is enjoying renewed interest in investigating the related flow problems. A combination of DNS and an interface tracking method can provide a unique opportunity to study two-phase flows based on first-principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has been focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as the advanced post-processing toolkit for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of the DNS database in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, single- and two-phase analysis results are presented for turbulent flows within pressurized water reactor (PWR) core geometries. The related simulations can be carried out only on the world's leading HPC platforms. These simulations are allowing more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.
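
    As a simplified illustration of the spectral analysis mentioned above (not the authors' post-processing toolkit), the snippet below estimates a one-dimensional energy spectrum of a velocity signal with NumPy and fits the slope of an assumed inertial-range band. The synthetic signal and the fit range are placeholder assumptions.

        import numpy as np

        # Synthetic 1-D "velocity" signal standing in for a line of DNS data.
        n = 4096
        rng = np.random.default_rng(0)
        u = rng.standard_normal(n)

        # Energy spectrum E(k) ~ |u_hat(k)|^2 from a real FFT of the fluctuating part.
        u_hat = np.fft.rfft(u - u.mean())
        E = np.abs(u_hat) ** 2 / n
        k = np.fft.rfftfreq(n, d=1.0)            # wavenumbers (d = grid spacing)

        # Fit a power-law slope over an assumed "inertial" band of wavenumbers.
        band = (k > 0.01) & (k < 0.1)
        slope, intercept = np.polyfit(np.log(k[band]), np.log(E[band]), 1)
        print('fitted spectral slope:', slope)   # compare against -5/3 or bubble-modified slopes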

  2. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    International Nuclear Information System (INIS)

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.
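
    For orientation, one slice of the multislice algorithm alternates transmission through a projected-potential phase grating with Fresnel propagation to the next slice. The NumPy sketch below shows that single step under the small-angle approximation; the grids, wavelength, and potential are placeholders, and real STEM codes such as STEMsalabim add probe scanning, frozen-phonon (frozen lattice) averaging, and detector integration on top of this kernel.

        import numpy as np

        def multislice_step(psi, projected_potential, wavelength, dz, dx, sigma):
            """Propagate a wave function through one slice (illustrative sketch only).

            psi                 : (ny, nx) complex wave function entering the slice
            projected_potential : (ny, nx) projected potential of the slice
            wavelength, dz, dx  : electron wavelength, slice thickness, pixel size
            sigma               : interaction constant
            """
            ny, nx = psi.shape
            # 1) Transmission: the slice acts as a thin phase grating.
            psi = psi * np.exp(1j * sigma * projected_potential)
            # 2) Fresnel propagation to the next slice, applied in reciprocal space.
            kx = np.fft.fftfreq(nx, d=dx)
            ky = np.fft.fftfreq(ny, d=dx)
            k2 = kx[None, :] ** 2 + ky[:, None] ** 2
            propagator = np.exp(-1j * np.pi * wavelength * dz * k2)
            return np.fft.ifft2(np.fft.fft2(psi) * propagator)

    Because each probe position and each frozen-lattice configuration is independent, the outer loops parallelize well, which is what the distributed/shared-memory design of the code exploits.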

  3. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Energy Technology Data Exchange (ETDEWEB)

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  4. Impact of measuring electron tracks in high-resolution scientific charge-coupled devices within Compton imaging systems

    International Nuclear Information System (INIS)

    Chivers, D.H.; Coffer, A.; Plimley, B.; Vetter, K.

    2011-01-01

    We have implemented benchmarked models to determine the gain in sensitivity of electron-tracking based Compton imaging relative to conventional Compton imaging by the use of high-resolution scientific charge-coupled devices (CCD). These models are based on the recently demonstrated ability of electron-tracking based Compton imaging by using fully depleted scientific CCDs. Here we evaluate the gain in sensitivity by employing Monte Carlo simulations in combination with advanced charge transport models to calculate two-dimensional charge distributions corresponding to experimentally obtained tracks. In order to reconstruct the angle of the incident γ-ray, a trajectory determination algorithm was used on each track and integrated into a back-projection routine utilizing a geodesic-vertex ray tracing technique. Analysis was performed for incident γ-ray energies of 662 keV and results show an increase in sensitivity consistent with tracking of the Compton electron to approximately ±30°.
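
    For reference, the cone half-angle used in the back-projection follows from standard Compton kinematics (a general relation, not specific to this work): for an incident photon of energy E1 + E2 that deposits E1 in the recoil electron and leaves a scattered photon of energy E2, the scattering angle θ satisfies

        \cos\theta \;=\; 1 \;-\; m_e c^2 \left( \frac{1}{E_2} \;-\; \frac{1}{E_1 + E_2} \right)

    Measuring the recoil-electron track direction, as the fully depleted CCDs allow, restricts the reconstructed origin from the full cone to an arc, which is the source of the sensitivity gain quantified above.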

  5. Improving the trust in results of numerical simulations and scientific data analytics

    Energy Technology Data Exchange (ETDEWEB)

    Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil [Argonne National Lab. (ANL), Argonne, IL (United States); Hovland, Paul [Argonne National Lab. (ANL), Argonne, IL (United States); Peterka, Tom [Argonne National Lab. (ANL), Argonne, IL (United States); Phillips, Carolyn [Argonne National Lab. (ANL), Argonne, IL (United States); Snir, Marc [Argonne National Lab. (ANL), Argonne, IL (United States); Wild, Stefan [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-04-30

    This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results' integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general

  6. Comparison of turbulence measurements from DIII-D low-mode and high-performance plasmas to turbulence simulations and models

    International Nuclear Information System (INIS)

    Rhodes, T.L.; Leboeuf, J.-N.; Sydora, R.D.; Groebner, R.J.; Doyle, E.J.; McKee, G.R.; Peebles, W.A.; Rettig, C.L.; Zeng, L.; Wang, G.

    2002-01-01

    Measured turbulence characteristics (correlation lengths, spectra, etc.) in low-confinement (L-mode) and high-performance plasmas in the DIII-D tokamak [Luxon et al., Proceedings Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] show many similarities with the characteristics determined from turbulence simulations. Radial correlation lengths Δr of density fluctuations from L-mode discharges are found to be numerically similar to the ion poloidal gyroradius ρ_θ,s, or 5-10 times the ion gyroradius ρ_s, over the radial region from r/a ≈ 0.2 outward. To determine whether Δr scales as ρ_θ,s or as 5-10 times ρ_s, an experiment was performed which modified ρ_θ,s while keeping other plasma parameters approximately fixed. It was found that the experimental Δr did not scale as ρ_θ,s, which was similar to low-resolution UCAN simulations. Finally, both experimental measurements and gyrokinetic simulations indicate a significant reduction in the radial correlation length in high-performance quiescent double barrier discharges, as compared to normal L-mode, consistent with reduced transport in these high-performance plasmas.

  7. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  8. GPU-based high performance Monte Carlo simulation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
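
    As a schematic of the kind of Monte Carlo transport kernel that maps well onto both CPUs and GPUs (an illustrative toy model, not the authors' code), the snippet below estimates the uncollided transmission of neutrons through a homogeneous, purely absorbing slab by sampling exponential path lengths.

        import numpy as np

        def slab_transmission(n_particles, total_xs, thickness, seed=0):
            """Fraction of neutrons crossing a purely absorbing slab (toy model).

            total_xs  : macroscopic total cross section (1/cm)
            thickness : slab thickness (cm)
            """
            rng = np.random.default_rng(seed)
            # Distance to first collision is exponentially distributed: s = -ln(xi) / Sigma_t.
            path_lengths = -np.log(1.0 - rng.random(n_particles)) / total_xs
            transmitted = np.count_nonzero(path_lengths > thickness)
            return transmitted / n_particles

        estimate = slab_transmission(10_000_000, total_xs=0.5, thickness=2.0)
        print('Monte Carlo:', estimate, ' analytic:', np.exp(-0.5 * 2.0))

    Because each particle history is independent, this workload is embarrassingly parallel, which is exactly why GPU implementations such as the one in this work can achieve order-of-magnitude speedups.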

  9. GPU-based high performance Monte Carlo simulation in neutron transport

    International Nuclear Information System (INIS)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  10. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    Science.gov (United States)

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  11. Performance Engineering Technology for Scientific Component Software

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D.

    2007-05-08

    Large-scale, complex scientific applications are beginning to benefit from the use of component software design methodology and technology for software development. Integral to the success of component-based applications is the ability to achieve high-performing code solutions through the use of performance engineering tools for both intra-component and inter-component analysis and optimization. Our work on this project aimed to develop performance engineering technology for scientific component software in association with the DOE CCTTSS SciDAC project (active during the contract period) and the broader Common Component Architecture (CCA) community. Our specific implementation objectives were to extend the TAU performance system and Program Database Toolkit (PDT) to support performance instrumentation, measurement, and analysis of CCA components and frameworks, and to develop performance measurement and monitoring infrastructure that could be integrated in CCA applications. These objectives have been met in the completion of all project milestones and in the transfer of the technology into the continuing CCA activities as part of the DOE TASCS SciDAC2 effort. In addition to these achievements, over the past three years, we have been an active member of the CCA Forum, attending all meetings and serving in several working groups, such as the CCA Toolkit working group, the CQoS working group, and the Tutorial working group. We have contributed significantly to CCA tutorials since SC'04, hosted two CCA meetings, participated in the annual ACTS workshops, and were co-authors on the recent CCA journal paper [24]. There are four main areas where our project has delivered results: component performance instrumentation and measurement, component performance modeling and optimization, performance database and data mining, and online performance monitoring. This final report outlines the achievements in these areas for the entire project period. The submitted progress

  12. Accelerating the scientific exploration process with scientific workflows

    International Nuclear Information System (INIS)

    Altintas, Ilkay; Barney, Oscar; Cheng, Zhengang; Critchlow, Terence; Ludaescher, Bertram; Parker, Steve; Shoshani, Arie; Vouk, Mladen

    2006-01-01

    Although an increasing amount of middleware has emerged in the last few years to achieve remote data access, distributed job execution, and data management, orchestrating these technologies with minimal overhead still remains a difficult task for scientists. Scientific workflow systems improve this situation by creating interfaces to a variety of technologies and automating the execution and monitoring of the workflows. Workflow systems provide domain-independent customizable interfaces and tools that combine different tools and technologies along with efficient methods for using them. As simulations and experiments move into the petascale regime, the orchestration of long running data and compute intensive tasks is becoming a major requirement for the successful steering and completion of scientific investigations. A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions of a scientific problem. Kepler is a cross-project collaboration, co-founded by the SciDAC Scientific Data Management (SDM) Center, whose purpose is to develop a domain-independent scientific workflow system. It provides a workflow environment in which scientists design and execute scientific workflows by specifying the desired sequence of computational actions and the appropriate data flow, including required data transformations, between these steps. Currently deployed workflows range from local analytical pipelines to distributed, high-performance and high-throughput applications, which can be both data- and compute-intensive. The scientific workflow approach offers a number of advantages over traditional scripting-based approaches, including ease of configuration, improved reusability and maintenance of workflows and components (called actors), automated provenance management, 'smart' re-running of different versions of workflow instances, on-the-fly updateable parameters, monitoring

  13. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  14. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  15. Scientific Assistant Virtual Laboratory (SAVL)

    Science.gov (United States)

    Alaghband, Gita; Fardi, Hamid; Gnabasik, David

    2007-03-01

    The Scientific Assistant Virtual Laboratory (SAVL) is a scientific discovery environment, an interactive simulated virtual laboratory, for learning physics and mathematics. The purpose of this computer-assisted intervention is to improve middle and high school student interest, insight and scores in physics and mathematics. SAVL develops scientific and mathematical imagination in a visual, symbolic, and experimental simulation environment. It directly addresses the issues of scientific and technological competency by providing critical thinking training through integrated modules. This on-going research provides a virtual laboratory environment in which the student directs the building of the experiment rather than observing a packaged simulation. SAVL: * Engages the persistent interest of young minds in physics and math by visually linking simulation objects and events with mathematical relations. * Teaches integrated concepts by the hands-on exploration and focused visualization of classic physics experiments within software. * Systematically and uniformly assesses and scores students by their ability to answer their own questions within the context of a Master Question Network. We will demonstrate how the Master Question Network uses polymorphic interfaces and C# lambda expressions to manage simulation objects.

  16. The computer program LIAR for the simulation and modeling of high performance linacs

    International Nuclear Information System (INIS)

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-07-01

    High performance linear accelerators are the central components of the proposed next generation of linear colliders. They must provide acceleration of up to 750 GeV per beam while maintaining small normalized emittances. Standard simulation programs, mainly developed for storage rings, did not meet the specific requirements for high performance linacs with high bunch charges and strong wakefields. The authors present the program LIAR (LInear Accelerator Research code), which includes single- and multi-bunch wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. LIAR has been applied to and checked against the existing Stanford Linear Collider (SLC), the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS) at SLAC. Its modular structure allows easy extension for different purposes. The program is available for UNIX workstations and Windows PCs.

  17. Exploring HPCS languages in scientific computing

    International Nuclear Information System (INIS)

    Barrett, R F; Alam, S R; Almeida, V F d; Bernholdt, D E; Elwasif, W R; Kuehn, J A; Poole, S W; Shet, A G

    2008-01-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and exhibit increased heterogeneity, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else

  18. Exploring HPCS languages in scientific computing

    Science.gov (United States)

    Barrett, R. F.; Alam, S. R.; Almeida, V. F. d.; Bernholdt, D. E.; Elwasif, W. R.; Kuehn, J. A.; Poole, S. W.; Shet, A. G.

    2008-07-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and exhibit increased heterogeneity, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  19. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) with increasing number of threads per node is very similar no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  20. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node, and as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) with increasing number of threads per node is very similar no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  1. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    International Nuclear Information System (INIS)

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of parallelization techniques and of a hybrid simulation model in the δf Monte Carlo transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, the development of the transport code using HPF is reported. Optimization techniques to achieve both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  2. Scientific Programming with High Performance Fortran: A Case Study Using the xHPF Compiler

    Directory of Open Access Journals (Sweden)

    Eric De Sturler

    1997-01-01

    Full Text Available Recently, the first commercial High Performance Fortran (HPF) subset compilers have appeared. This article reports on our experiences with the xHPF compiler of Applied Parallel Research, version 1.2, for the Intel Paragon. At this stage, we do not expect very high performance from our HPF programs, even though performance will eventually be of paramount importance for the acceptance of HPF. Instead, our primary objective is to study how to convert large Fortran 77 (F77) programs to HPF such that the compiler generates reasonably efficient parallel code. We report on a case study that identifies several problems when parallelizing code with HPF; most of these problems affect current HPF compiler technology in general, although some are specific to the xHPF compiler. We discuss our solutions from the perspective of the scientific programmer, and present timing results on the Intel Paragon. The case study comprises three programs of different complexity with respect to parallelization. We use the dense matrix-matrix product to show that the distribution of arrays and the order of nested loops significantly influence the performance of the parallel program. We use Gaussian elimination with partial pivoting to study the parallelization strategy of the compiler. There are various ways to structure this algorithm for a particular data distribution. This example shows how much effort may be demanded from the programmer to support the compiler in generating an efficient parallel implementation. Finally, we use a small application to show that the more complicated structure of a larger program may introduce problems for the parallelization, even though all subroutines of the application are easy to parallelize by themselves. The application consists of a finite volume discretization on a structured grid and a nested iterative solver. Our case study shows that it is possible to obtain reasonably efficient parallel programs with xHPF, although the compiler

  3. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  4. Scientific Literacy of High School Students.

    Science.gov (United States)

    Lucas, Keith B.; Tulip, David F.

    This investigation was undertaken in order to establish the status of scientific literacy among three groups of secondary school students in four Brisbane, Australia high schools, and to reduce the apparent reticence of science teachers to evaluate students' achievement in the various dimensions of scientific literacy by demonstrating appropriate…

  5. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany – which is a forum to discuss the latest advancements in the parallel tools.

  6. Improving the Performance of the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2014-01-01

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation-based toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation management overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement, reducing the simulation overhead for running the NAS Parallel Benchmark suite inside the simulator from 1,020% to 238% for the conjugate gradient (CG) benchmark and from 102% to 0% for the embarrassingly parallel (EP) benchmark, as well as from 37,511% to 13,808% for CG and from 3,332% to 204% for EP with accurate process failure simulation.
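
    To make the message-matching cost concrete, the sketch below shows the usual MPI-style matching rule, where a posted receive matches the earliest queued message with the same (source, tag), with wildcards allowed. This is a generic illustration of the semantics being simulated, not xSim's new algorithm.

        from collections import deque

        ANY_SOURCE = -1
        ANY_TAG = -1

        class UnexpectedQueue:
            """FIFO queue of arrived-but-unmatched messages (illustrative)."""
            def __init__(self):
                self.messages = deque()          # items: (source, tag, payload)

            def push(self, source, tag, payload):
                self.messages.append((source, tag, payload))

            def match(self, want_source, want_tag):
                # MPI matching: first queued message whose (source, tag) fits the request.
                for i, (src, tag, payload) in enumerate(self.messages):
                    src_ok = want_source in (ANY_SOURCE, src)
                    tag_ok = want_tag in (ANY_TAG, tag)
                    if src_ok and tag_ok:
                        del self.messages[i]
                        return payload
                return None                      # caller would then post a pending receive

        q = UnexpectedQueue()
        q.push(0, 7, 'hello')
        q.push(1, 7, 'world')
        print(q.match(ANY_SOURCE, 7))            # -> 'hello' (earliest match wins)

    With millions of simulated ranks, a linear scan like this becomes a dominant cost, which is why a more efficient matching data structure pays off.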

  7. 24th & 25th Joint Workshop on Sustained Simulation Performance

    CERN Document Server

    Bez, Wolfgang; Focht, Erich; Gienger, Michael; Kobayashi, Hiroaki

    2017-01-01

    This book presents the state of the art in High Performance Computing on modern supercomputer architectures. It addresses trends in hardware and software development in general, as well as the future of High Performance Computing systems and heterogeneous architectures. The contributions cover a broad range of topics, from improved system management to Computational Fluid Dynamics, High Performance Data Analytics, and novel mathematical approaches for large-scale systems. In addition, they explore innovative fields like coupled multi-physics and multi-scale simulations. All contributions are based on selected papers presented at the 24th Workshop on Sustained Simulation Performance, held at the University of Stuttgart’s High Performance Computing Center in Stuttgart, Germany in December 2016 and the subsequent Workshop on Sustained Simulation Performance, held at the Cyberscience Center, Tohoku University, Japan in March 2017.

  8. Optimized Parallel Discrete Event Simulation (PDES) for High Performance Computing (HPC) Clusters

    National Research Council Canada - National Science Library

    Abu-Ghazaleh, Nael

    2005-01-01

    The aim of this project was to study the communication subsystem performance of state of the art optimistic simulator Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES...

  9. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) to provide 3-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for computationally intensive problems. There are several ways to exploit this extreme processing performance, and programming these devices has never been as easy as it is now. The CUDA (Compute Unified Device Architecture) introduced by the nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS, about ten times faster than the fastest dual-core CPU [Fig.1]. Our improved MD simulation software uses this new technology, which speeds it up so that the code runs 10 times faster in the critical calculation segment. Although the GPU is a very powerful tool, it has a strongly parallel structure. This means that we have to create an algorithm that works on several processors without deadlock. Our code currently uses 256 threads and shared and constant on-chip memory instead of global memory, which is about 100 times slower. It is possible to implement the entire algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs the same instructions
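
    The abstract above describes a C++/CUDA code that is not reproduced here. Purely as an illustration of the kind of data-parallel, all-pairs computation that maps naturally onto many GPU threads, the following NumPy sketch evaluates pairwise atomic distances under periodic boundary conditions (all sizes and values are made up):

```python
import numpy as np

def pairwise_min_image_distances(positions, box):
    """All-pairs distances with periodic (minimum-image) boundaries.

    positions: (N, 3) array of atom coordinates
    box:       (3,) array of periodic box lengths
    Returns an (N, N) matrix of distances.
    """
    diff = positions[:, None, :] - positions[None, :, :]  # (N, N, 3) displacements
    diff -= box * np.round(diff / box)                     # minimum-image convention
    return np.sqrt((diff ** 2).sum(axis=-1))

rng = np.random.default_rng(0)
box = np.array([10.0, 10.0, 10.0])
pos = rng.uniform(0.0, 10.0, size=(256, 3))                # 256 illustrative "guest atoms"
d = pairwise_min_image_distances(pos, box)
print(d.shape, float(d[d > 0].min()))                      # nearest-neighbour separation
```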

  10. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies (ICT), and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for the particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. Recently a dedicated optical network with the initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of KEDR detector experiment which is being carried out at BINP, and foreseen to be applied to the use cases of other HEP experiments in the upcoming future.

  11. Scientific Applications Performance Evaluation on Burst Buffer

    KAUST Repository

    Markomanolis, George S.

    2017-10-19

    Parallel I/O is an integral component of modern high performance computing, especially in storing and processing very large datasets, as in seismic imaging, CFD, combustion and weather modeling. The storage hierarchy nowadays includes additional layers, the latest being SSD-based storage used as a Burst Buffer for I/O acceleration. We present an in-depth analysis of how to use the Burst Buffer for specific cases and how the internal MPI I/O aggregators operate according to the options that the user provides at job submission. We analyze the performance of a range of I/O-intensive scientific applications, at various scales, on a large installation of the Lustre parallel file system compared to an SSD-based Burst Buffer. Our results show a performance improvement over Lustre when using the Burst Buffer. Moreover, we show results from a data hierarchy library which indicate that the standard I/O approaches are not enough to get the expected performance from this technology. The performance gain on the total execution time of the studied applications is between 1.16 and 3 times compared to Lustre. One of the test cases achieved an impressive I/O throughput of 900 GB/s on the Burst Buffer.
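
    The MPI I/O aggregators mentioned above come into play for collective file operations. The following mpi4py sketch shows a collective contiguous write of the kind such aggregators service; the file name and buffer sizes are illustrative, and it assumes an MPI launcher and an mpi4py installation:

```python
# Run with e.g.: mpiexec -n 4 python write_at_all_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_local = 1 << 20                                  # 1 Mi doubles per rank (8 MiB)
data = np.full(n_local, rank, dtype=np.float64)

amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
fh = MPI.File.Open(comm, "demo_output.bin", amode)

offset = rank * data.nbytes                        # contiguous, rank-ordered layout
t0 = MPI.Wtime()
fh.Write_at_all(offset, data)                      # collective write -> I/O aggregators
fh.Close()
t1 = MPI.Wtime()

if rank == 0:
    total_gb = comm.Get_size() * data.nbytes / 1e9
    print(f"wrote {total_gb:.2f} GB in {t1 - t0:.3f} s")
```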

  12. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power in the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
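
    As a concrete illustration of the FLOPS metric discussed above (and of why achieved rather than peak performance is what gets measured), a short Python timing of a dense matrix multiply might look like the following; the problem size is arbitrary:

```python
import time
import numpy as np

def measured_gflops(n=1024, repeats=3):
    """Time an n x n matrix multiply and report achieved GFLOP/s."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    flops = 2.0 * n ** 3                    # multiply-add count for dense GEMM
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b                               # the timed kernel
        best = min(best, time.perf_counter() - t0)
    return flops / best / 1e9

print(f"achieved ~{measured_gflops():.1f} GFLOP/s")
```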

  13. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    Directory of Open Access Journals (Sweden)

    Xing Cai

    2005-01-01

    Full Text Available This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as for the serial case. Once the array-related operations are efficiently implemented, probably using a mixed-language implementation, good serial and parallel performance become achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
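
    In the spirit of the article's point about efficient array operations, a minimal comparison of a plain-Python loop against the equivalent vectorized NumPy expression (array size chosen arbitrarily) could look like this:

```python
import time
import numpy as np

def axpy_loop(a, x, y):
    """Element-wise y <- a*x + y written as a plain Python loop."""
    for i in range(len(x)):
        y[i] = a * x[i] + y[i]
    return y

def axpy_vectorized(a, x, y):
    """The same update expressed as a single NumPy array operation."""
    return a * x + y

n = 1_000_000
x = np.random.rand(n)
y = np.random.rand(n)

t0 = time.perf_counter()
axpy_loop(2.0, x, y.copy())
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
axpy_vectorized(2.0, x, y.copy())
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f} s, vectorized: {t_vec:.4f} s, speedup ~{t_loop / t_vec:.0f}x")
```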

  14. Simulation-Driven Development and Optimization of a High-Performance Six-Dimensional Wrist Force/Torque Sensor

    Directory of Open Access Journals (Sweden)

    Qiaokang LIANG

    2010-05-01

    Full Text Available This paper describes the Simulation-Driven Development and Optimization (SDDO) of a high-performance six-dimensional force/torque sensor. Through SDDO, the developed sensor simultaneously achieves high sensitivity, linearity, stiffness and repeatability, which is hard to attain with traditional force/torque sensors. The integrated approach provided by the ANSYS software was used to streamline and speed up the process chain and thereby deliver results significantly faster than traditional approaches. The calibration experiment shows impressive characteristics, so the developed force/torque sensor can be usefully applied in industry, and the design methods can also be used to develop industrial products.

  15. Expert opinions and scientific evidence for colonoscopy key performance indicators.

    Science.gov (United States)

    Rees, Colin J; Bevan, Roisin; Zimmermann-Fraedrich, Katharina; Rutter, Matthew D; Rex, Douglas; Dekker, Evelien; Ponchon, Thierry; Bretthauer, Michael; Regula, Jaroslaw; Saunders, Brian; Hassan, Cesare; Bourke, Michael J; Rösch, Thomas

    2016-12-01

    Colonoscopy is a widely performed procedure with procedural volumes increasing annually throughout the world. Many procedures are now performed as part of colorectal cancer screening programmes. Colonoscopy should be of high quality and measures of this quality should be evidence based. New UK key performance indicators and quality assurance standards have been developed by a working group with consensus agreement on each standard reached. This paper reviews the scientific basis for each of the quality measures published in the UK standards.

  16. Statistical physics of fracture: scientific discovery through high-performance computing

    International Nuclear Information System (INIS)

    Kumar, Phani; Nukala, V V; Simunovic, Srdan; Mills, Richard T

    2006-01-01

    The paper presents state-of-the-art algorithmic developments for simulating the fracture of disordered quasi-brittle materials using discrete lattice systems. Large scale simulations are often required to obtain accurate scaling laws; however, due to computational complexity, simulations using the traditional algorithms were limited to small system sizes. We have developed two algorithms: a multiple sparse Cholesky downdating scheme for simulating 2D random fuse model systems, and a block-circulant preconditioner for simulating 3D random fuse model systems. Using these algorithms, we were able to simulate fracture of the largest ever lattice system sizes (L = 1024 in 2D, and L = 64 in 3D) with extensive statistical sampling. Our recent simulations on 1024 processors of Cray-XT3 and IBM Blue-Gene/L have further enabled us to explore fracture of 3D lattice systems of size L = 200, which is a significant computational achievement. These largest ever numerical simulations have enhanced our understanding of the physics of fracture; in particular, we analyze damage localization and its deviation from percolation behavior, scaling laws for damage density, universality of the fracture strength distribution, the size effect on the mean fracture strength, and finally the scaling of crack surface roughness.

  17. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    International Nuclear Information System (INIS)

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) The Comprehensive Test Ban Treaty and (2) The Stockpile Life Extension Program which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve the confidence in the computational results by demonstration and documentation of the predictive capability of electrical circuit codes and the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan

  18. Effects of reflex-based self-defence training on police performance in simulated high-pressure arrest situations

    NARCIS (Netherlands)

    Renden, Peter G.; Savelsbergh, Geert J. P.; Oudejans, Raoul R. D.

    2017-01-01

    We investigated the effects of reflex-based self-defence training on police performance in simulated high-pressure arrest situations. Police officers received this training as well as a regular police arrest and self-defence skills training (control training) in a crossover design. Officers’

  19. High Fidelity BWR Fuel Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Su Jong [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High fidelity computational fluid dynamics (CFD) simulation for a Boiling Water Reactor (BWR) was conducted to investigate the applicability and robustness of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry from the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark has been implemented. The performance of the multiphase segregated solver with baseline boiling closures has been evaluated. Although the mean values of void fraction and exit quality in the CFD result for BFBT case 4101-61 agreed with experimental data, the local void distribution was not predicted accurately. Mesh quality was one of the critical factors in obtaining a converged result. The stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modeling of the fully-detailed spacer grid geometry with mixing vanes is necessary to improve the accuracy of the CFD simulation.

  20. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  1. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O EcosystemParallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem.The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  2. [Performance analysis of scientific researchers in biomedicine].

    Science.gov (United States)

    Gamba, Gerardo

    2013-01-01

    There are no data about the performance of scientific researchers in biomedicine in our environment that individual researchers can use to compare their output with that of their peers. Using the Scopus browser, the following data were obtained for 115 scientific researchers in biomedicine: current institution, number of articles published, position within the author list of each article as first, last or sole author, total number of citations, percentage of citations due to the most cited paper, and h-index. Results were analyzed with descriptive statistics and simple linear regressions. Most of the scientific researchers in the sample are from the National Institutes of the Health Ministry or from research institutes or faculties of the Universidad Nacional Autónoma de México. The data provide a reference for scientific researchers in biomedicine in Mexico City, which can be used to compare the productivity of individual researchers with that of their peers.

  3. Simulation of the High Performance Time to Digital Converter for the ATLAS Muon Spectrometer trigger upgrade

    International Nuclear Information System (INIS)

    Meng, X.T.; Levin, D.S.; Chapman, J.W.; Zhou, B.

    2016-01-01

    The ATLAS Muon Spectrometer endcap thin-Resistive Plate Chamber trigger project complements the New Small Wheel endcap Phase-1 upgrade for higher luminosity LHC operation. These new trigger chambers, located in a high rate region of ATLAS, will improve overall trigger acceptance and reduce the fake muon trigger incidence. These chambers must generate a low level muon trigger to be delivered to a remote high level processor within a stringent latency requirement of 43 bunch crossings (1075 ns). To help meet this requirement the High Performance Time to Digital Converter (HPTDC), a multi-channel ASIC designed by the CERN Microelectronics group, has been proposed for the digitization of the fast front end detector signals. This paper investigates the HPTDC performance in the context of the overall muon trigger latency, employing detailed behavioral Verilog simulations in which the latency in triggerless mode is measured for a range of configurations and under realistic hit rate conditions. The simulation results show that various HPTDC operational configurations, including leading edge and pair measurement modes, can provide high efficiency (>98%) in capturing and digitizing hits within a time interval satisfying the Phase-1 latency tolerance.

  4. Simulator experiments: effects of NPP operator experience on performance

    International Nuclear Information System (INIS)

    Beare, A.N.; Gray, L.H.

    1984-01-01

    During the FY83 research, a simulator experiment was conducted at the control room simulator for a GE Boiling Water Reactor (BWR) NPP. The research subjects were licensed operators undergoing requalification training and shift technical advisors (STAs). This experiment was designed to investigate the effects of senior reactor operator (SRO) experience, operating crew augmentation with an STA and practice, as a crew, upon crew and individual operator performance, in response to anticipated plant transients. Sixteen two-man crews of licensed operators were employed in a 2 x 2 factorial design. The SROs leading the crews were split into high and low experience groups on the basis of their years of experience as an SRO. One half of the high- and low-SRO experience groups were assisted by an STA. The crews responded to four simulated plant casualties. A five-variable set of content-referenced performance measures was derived from task analyses of the procedurally correct responses to the four casualties. System parameters and control manipulations were recorded by the computer controlling the simulator. Data on communications and procedure use were obtained from analysis of videotapes of the exercises. Questionnaires were used to collect subject biographical information and data on subjective workload during each simulated casualty. For four of the five performance measures, no significant differences were found between groups led by high (25 to 114 months) and low (1 to 17 months as an SRO) experience SROs. However, crews led by low experience SROs tended to have significantly shorter task performance times than crews led by high experience SROs. The presence of the STA had no significant effect on overall team performance in responding to the four simulated casualties. The FY84 experiments are a partial replication and extension of the FY83 experiment, but with PWR operators and simulator

  5. Training Elementary Teachers to Prepare Students for High School Authentic Scientific Research

    Science.gov (United States)

    Danch, J. M.

    2017-12-01

    The Woodbridge Township New Jersey School District has a 4-year high school Science Research program that depends on the enrollment of students with the prerequisite skills to conduct authentic scientific research at the high school level. A multifaceted approach to training elementary teachers in the methods of scientific investigation, data collection and analysis, and communication of results was undertaken in 2017. Teachers of predominantly grades 4 and 5 participated in hands-on workshops at a Summer Tech Academy, an EdCamp, a District Inservice Day and a series of in-class workshops for teachers and students together. Aspects of the instruction for each of these activities were facilitated by high school students currently enrolled in the High School Science Research Program. Much of the training centered on a "Learning With Students" model in which teachers and their students simultaneously learn to perform inquiry activities and conduct scientific research, fostering inquiry as it is meant to be: participants produce original data and are not merely working to obtain previously determined results.

  6. High performance thermal stress analysis on the earth simulator

    International Nuclear Information System (INIS)

    Noriyuki, Kushida; Hiroshi, Okuda; Genki, Yagawa

    2003-01-01

    In this study, a thermal stress finite element analysis code optimized for the Earth Simulator was developed. A processor node of the Earth Simulator is an 8-way vector processor, and nodes communicate using the Message Passing Interface. Thus, there are two ways to parallelize the finite element method on the Earth Simulator. The first method is to assign one processor to one sub-domain; the second is to assign one node (= 8 processors) to one sub-domain, exploiting shared-memory parallelization within the node. Considering that the preconditioned conjugate gradient (PCG) method, one of the suitable linear equation solvers for large-scale parallel finite element methods, shows better convergence behavior when the number of domains is smaller, we decided to employ PCG with hybrid parallelization, combining shared- and distributed-memory parallelism. It is generally difficult to obtain good parallel or vector performance with the finite element method because it is based on unstructured grids; in such a situation, reordering is essential to improve computational performance [2]. In this study, we used three reordering methods: Reverse Cuthill-McKee (RCM), cyclic multicolor (CM) and diagonal jagged descending storage (DJDS) [3]. RCM provides good convergence of the incomplete lower-upper (ILU) preconditioned CG but causes load imbalance. On the other hand, CM provides good load balance but worsens the convergence of ILU PCG if the vector length is long. Therefore, we used a combined RCM and CM method. DJDS is a storage scheme for sparse matrices that yields longer vector lengths. For efficient inter-node parallelization, partitioning methods such as recursive coordinate bisection (RCB) or MeTIS have been used. Computational performance on practical large-scale engineering problems will be shown at the meeting. (author)
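
    To illustrate the reordering idea described above, the SciPy sketch below applies a Reverse Cuthill-McKee permutation to a sparse symmetric system before a conjugate gradient solve. The matrix is a stand-in 2D Laplacian, not the paper's thermal stress system, and no ILU preconditioner or multicolor ordering is included:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
from scipy.sparse.linalg import cg

# Stand-in sparse SPD system: 2D Laplacian on a 100 x 100 grid
n = 100
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
eye = sp.identity(n, format="csr")
A = (sp.kron(lap1d, eye) + sp.kron(eye, lap1d)).tocsr()
b = np.ones(A.shape[0])

# Bandwidth-reducing RCM permutation, analogous to the reordering step in the paper
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm][:, perm]
b_rcm = b[perm]

x_rcm, info = cg(A_rcm, b_rcm)      # solve in the reordered numbering
x = np.empty_like(x_rcm)
x[perm] = x_rcm                     # undo the permutation to recover the original ordering
print(info, np.linalg.norm(A @ x - b))
```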

  7. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows the reader to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  8. A lattice-particle approach for the simulation of fracture processes in fiber-reinforced high-performance concrete

    NARCIS (Netherlands)

    Montero-Chacón, F.; Schlangen, H.E.J.G.; Medina, F.

    2013-01-01

    The use of fiber-reinforced high-performance concrete (FRHPC) is becoming more extended; therefore it is necessary to develop tools to simulate and better understand its behavior. In this work, a discrete model for the analysis of fracture mechanics in FRHPC is presented. The plain concrete matrix,

  9. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [Esnet, Berkeley, CA (United States); Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hack, James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Monga, Inder [Esnet, Berkeley, CA (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Riley, Katherine [Argonne National Lab. (ANL), Argonne, IL (United States); Rotman, Lauren [Esnet, Berkeley, CA (United States); Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Tim [Argonne National Lab. (ANL), Argonne, IL (United States); Almgren, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Amundson, J. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Bailey, Stephen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bard, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bloom, Ken [Univ. of Nebraska, Lincoln, NE (United States); Bockelman, Brian [Univ. of Nebraska, Lincoln, NE (United States); Borgland, Anders [SLAC National Accelerator Lab., Menlo Park, CA (United States); Borrill, Julian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Boughezal, Radja [Argonne National Lab. (ANL), Argonne, IL (United States); Brower, Richard [Boston Univ., MA (United States); Cowan, Benjamin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Frontiere, Nicholas [Argonne National Lab. (ANL), Argonne, IL (United States); Fuess, Stuart [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Ge, Lixin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Gnedin, Nick [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gottlieb, Steven [Indiana Univ., Bloomington, IN (United States); Gutsche, Oliver [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Han, T. [Indiana Univ., Bloomington, IN (United States); Heitmann, Katrin [Argonne National Lab. (ANL), Argonne, IL (United States); Hoeche, Stefan [SLAC National Accelerator Lab., Menlo Park, CA (United States); Ko, Kwok [SLAC National Accelerator Lab., Menlo Park, CA (United States); Kononenko, Oleksiy [SLAC National Accelerator Lab., Menlo Park, CA (United States); LeCompte, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); Li, Zheng [SLAC National Accelerator Lab., Menlo Park, CA (United States); Lukic, Zarija [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mori, Warren [Univ. of California, Los Angeles, CA (United States); Ng, Cho-Kuen [SLAC National Accelerator Lab., Menlo Park, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oleynik, Gene [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); O’Shea, Brian [Michigan State Univ., East Lansing, MI (United States); Padmanabhan, Nikhil [Yale Univ., New Haven, CT (United States); Petravick, Donald [Univ. of Illinois, Urbana, IL (United States). 
National Center for Supercomputing Applications; Petriello, Frank J. [Argonne National Lab. (ANL), Argonne, IL (United States); Pope, Adrian [Argonne National Lab. (ANL), Argonne, IL (United States); Power, John [Argonne National Lab. (ANL), Argonne, IL (United States); Qiang, Ji [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Reina, Laura [Florida State Univ., Tallahassee, FL (United States); Rizzo, Thomas Gerard [SLAC National Accelerator Lab., Menlo Park, CA (United States); Ryne, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schram, Malachi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Spentzouris, P. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Toussaint, Doug [Univ. of Arizona, Tucson, AZ (United States); Vay, Jean Luc [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Viren, B. [Brookhaven National Lab. (BNL), Upton, NY (United States); Wuerthwein, Frank [Univ. of California, San Diego, CA (United States); Xiao, Liling [SLAC National Accelerator Lab., Menlo Park, CA (United States); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-11-29

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR

  10. Using Just-in-Time Information to Support Scientific Discovery Learning in a Computer-Based Simulation

    Science.gov (United States)

    Hulshof, Casper D.; de Jong, Ton

    2006-01-01

    Students encounter many obstacles during scientific discovery learning with computer-based simulations. It is hypothesized that an effective type of support, that does not interfere with the scientific discovery learning process, should be delivered on a "just-in-time" base. This study explores the effect of facilitating access to…

  11. 20th and 21st Joint Workshop on Sustained Simulation Performance

    CERN Document Server

    Bez, Wolfgang; Focht, Erich; Kobayashi, Hiroaki; Qi, Jiaxing; Roller, Sabine

    2015-01-01

    The book presents the state of the art in high-performance computing and simulation on modern supercomputer architectures. It covers trends in hardware and software development in general, and the future of high-performance systems and heterogeneous architectures specifically. The application contributions cover computational fluid dynamics, material science, medical applications and climate research. Innovative fields like coupled multi-physics or multi-scale simulations are also discussed. All papers were chosen from presentations given at the 20th Workshop on Sustained Simulation Performance in December 2014 at the HLRS, University of Stuttgart, Germany, and the subsequent Workshop on Sustained Simulation Performance at Tohoku University in February 2015.

  12. Critical thinking skills in nursing students: comparison of simulation-based performance with metrics

    Science.gov (United States)

    Fero, Laura J.; O’Donnell, John M.; Zullo, Thomas G.; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T.; Hoffman, Leslie A.

    2018-01-01

    Aim This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Background Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. Methods In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation- based performance was rated as ‘meeting’ or ‘not meeting’ overall expectations. Test scores were categorized as strong, average, or weak. Results Most (75·0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0·277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0·001) using high-fidelity human simulation. The relationship between video-taped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer’s V = 0·444, P = 0·029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer’s V = 0·413, P = 0·047). Conclusion Students’ performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills

  13. Critical thinking skills in nursing students: comparison of simulation-based performance with metrics.

    Science.gov (United States)

    Fero, Laura J; O'Donnell, John M; Zullo, Thomas G; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T; Hoffman, Leslie A

    2010-10-01

    This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation-based performance was rated as 'meeting' or 'not meeting' overall expectations. Test scores were categorized as strong, average, or weak. Most (75.0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0.277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0.001) using high-fidelity human simulation. The relationship between videotaped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer's V = 0.444, P = 0.029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer's V = 0.413, P = 0.047). Students' performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills in the clinical setting.

  14. Improving Performances in the Public Sector: The Scientific ...

    African Journals Online (AJOL)

    Improving Performances in the Public Sector: The Scientific Management Theory ... adopts the principles for enhanced productivity, efficiency and the attainment of ... of the public sector, as observed and reported by several scholars over time.

  15. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  16. High Performance Object-Oriented Scientific Programming in Fortran 90

    Science.gov (United States)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts by the example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  17. Virtual reality simulation training of mastoidectomy - studies on novice performance.

    Science.gov (United States)

    Andersen, Steven Arild Wuyts

    2016-08-01

    Virtual reality (VR) simulation-based training is increasingly used in surgical technical skills training including in temporal bone surgery. The potential of VR simulation in enabling high-quality surgical training is great and VR simulation allows high-stakes and complex procedures such as mastoidectomy to be trained repeatedly, independent of patients and surgical tutors, outside traditional learning environments such as the OR or the temporal bone lab, and with fewer of the constraints of traditional training. This thesis aims to increase the evidence-base of VR simulation training of mastoidectomy and, by studying the final-product performances of novices, investigates the transfer of skills to the current gold-standard training modality of cadaveric dissection, the effect of different practice conditions and simulator-integrated tutoring on performance and retention of skills, and the role of directed, self-regulated learning. Technical skills in mastoidectomy were transferable from the VR simulation environment to cadaveric dissection with significant improvement in performance after directed, self-regulated training in the VR temporal bone simulator. Distributed practice led to a better learning outcome and more consolidated skills than massed practice and also resulted in a more consistent performance after three months of non-practice. Simulator-integrated tutoring accelerated the initial learning curve but also caused over-reliance on tutoring, which resulted in a drop in performance when the simulator-integrated tutor-function was discontinued. The learning curves were highly individual but often plateaued early and at an inadequate level, which related to issues concerning both the procedure and the VR simulator, over-reliance on the tutor function and poor self-assessment skills. Future simulator-integrated automated assessment could potentially resolve some of these issues and provide trainees with both feedback during the procedure and immediate

  18. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Maynard, Robert [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-27

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from the predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.

  19. 18th and 19th Workshop on Sustained Simulation Performance

    CERN Document Server

    Bez, Wolfgang; Focht, Erich; Kobayashi, Hiroaki; Patel, Nisarg

    2015-01-01

    This book presents the state of the art in high-performance computing and simulation on modern supercomputer architectures. It covers trends in hardware and software development in general and the future of high-performance systems and heterogeneous architectures in particular. The application-related contributions cover computational fluid dynamics, material science, medical applications and climate research; innovative fields such as coupled multi-physics and multi-scale simulations are highlighted. All papers were chosen from presentations given at the 18th Workshop on Sustained Simulation Performance held at the HLRS, University of Stuttgart, Germany in October 2013 and subsequent Workshop of the same name held at Tohoku University in March 2014.  

  20. Scientific Approach for Optimising Performance, Health and Safety in High-Altitude Observatories

    Science.gov (United States)

    Böcker, Michael; Vogy, Joachim; Nolle-Gösser, Tanja

    2008-09-01

    The ESO coordinated study “Optimising Performance, Health and Safety in High-Altitude Observatories” is based on a psychological approach using a questionnaire for data collection and assessment of high-altitude effects. During 2007 and 2008, data from 28 staff and visitors involved in APEX and ALMA were collected and analysed, and the first results of the study are summarised. While there is a lot of information about biomedical changes at high altitude, relatively few studies have focussed on psychological changes, for example with respect to performance of mental tasks, safety consciousness and emotions. Both biomedical and psychological changes are relevant factors in occupational safety and health. The results of the questionnaire on safety, health and performance issues demonstrate that the working conditions at high altitude are less detrimental than expected.

  1. Application of Nuclear Power Plant Simulator for High School Student Training

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Chi Dong; Choi, Soo Young; Park, Min Young; Lee, Duck Jung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2014-10-15

    In this context, two lectures on a nuclear power plant simulator, together with practical training, were provided to high school students in 2014. The education content consisted of two parts: the micro-physics simulator and the macro-physics simulator. The micro-physics simulator treats only in-core phenomena, whereas the macro-physics simulator describes the whole system of a nuclear power plant but treats the reactor core as a point. The high school students showed strong interest because they operated the simulation by themselves. This abstract reports the training details and an evaluation of the effectiveness of the training. Lectures on the nuclear power plant simulator and practical exercises were carried out at Ulsan Energy High School and Ulsan Meister High School. Two simulators were used: the macro- and micro-physics simulators. Using the macro-physics simulator, the following five simulations were performed: reactor power increase/decrease, reactor trip, single reactor coolant pump trip, large break loss of coolant accident, and station black-out with D.C. power loss. Using the micro-physics simulator, the following three analyses were performed: transient analysis, fuel rod performance analysis, and thermal-hydraulics analysis. The students at both high schools showed interest in and strong support for the simulator-based training. After the training, the students responded enthusiastically that the education had helped them become interested in nuclear power plants.

  2. Application of Nuclear Power Plant Simulator for High School Student Training

    International Nuclear Information System (INIS)

    Kong, Chi Dong; Choi, Soo Young; Park, Min Young; Lee, Duck Jung

    2014-01-01

    In this context, two lectures on a nuclear power plant simulator, together with practical training, were provided to high school students in 2014. The education content consisted of two parts: the micro-physics simulator and the macro-physics simulator. The micro-physics simulator treats only in-core phenomena, whereas the macro-physics simulator describes the whole system of a nuclear power plant but treats the reactor core as a point. The high school students showed strong interest because they operated the simulation by themselves. This abstract reports the training details and an evaluation of the effectiveness of the training. Lectures on the nuclear power plant simulator and practical exercises were carried out at Ulsan Energy High School and Ulsan Meister High School. Two simulators were used: the macro- and micro-physics simulators. Using the macro-physics simulator, the following five simulations were performed: reactor power increase/decrease, reactor trip, single reactor coolant pump trip, large break loss of coolant accident, and station black-out with D.C. power loss. Using the micro-physics simulator, the following three analyses were performed: transient analysis, fuel rod performance analysis, and thermal-hydraulics analysis. The students at both high schools showed interest in and strong support for the simulator-based training. After the training, the students responded enthusiastically that the education had helped them become interested in nuclear power plants.

  3. High-performance parallel processors based on star-coupled wavelength division multiplexing optical interconnects

    Science.gov (United States)

    Deri, Robert J.; DeGroot, Anthony J.; Haigh, Ronald E.

    2002-01-01

    As the performance of individual elements within parallel processing systems increases, increased communication capability between distributed processor and memory elements is required. There is great interest in using fiber optics to improve interconnect communication beyond that attainable using electronic technology. Several groups have considered WDM, star-coupled optical interconnects. The invention uses a fiber optic transceiver to provide low latency, high bandwidth channels for such interconnects using a robust multimode fiber technology. Instruction-level simulation is used to quantify the bandwidth, latency, and concurrency required for such interconnects to scale to 256 nodes, each operating at 1 GFLOPS performance. Performance has been shown to scale to approximately 100 GFLOPS for scientific application kernels using a small number of wavelengths (8 to 32), only one wavelength received per node, and achievable optoelectronic bandwidth and latency.
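
    A first-order way to reason about such interconnects is the classic latency-plus-bandwidth message cost model; the numbers below are purely illustrative and are not taken from the record:

```python
def message_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    """First-order cost model for one message over an interconnect link."""
    return latency_s + size_bytes / bandwidth_bytes_per_s

# Illustrative (made-up) numbers: 1 microsecond latency, 2 GB/s per-wavelength channel
latency = 1e-6
bandwidth = 2e9
for size in (1_024, 65_536, 1_048_576):
    t = message_time(size, latency, bandwidth)
    print(f"{size:>9d} B  ->  {t * 1e6:7.2f} us  ({size / t / 1e9:.2f} GB/s effective)")
```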

  4. Physical modeling and high-performance GPU computing for characterization, interception, and disruption of hazardous near-Earth objects

    Science.gov (United States)

    Kaplinger, Brian Douglas

    For the past few decades, both the scientific community and the general public have been becoming more aware that the Earth lives in a shooting gallery of small objects. We classify all of these asteroids and comets, known or unknown, that cross Earth's orbit as near-Earth objects (NEOs). A look at our geologic history tells us that NEOs have collided with Earth in the past, and we expect that they will continue to do so. With thousands of known NEOs crossing the orbit of Earth, there has been significant scientific interest in developing the capability to deflect an NEO from an impacting trajectory. This thesis applies the ideas of Smoothed Particle Hydrodynamics (SPH) theory to the NEO disruption problem. A simulation package was designed that allows efficacy simulation to be integrated into the mission planning and design process. This is done by applying ideas in high-performance computing (HPC) on the computer graphics processing unit (GPU). Rather than prove a concept through large standalone simulations on a supercomputer, a highly parallel structure allows for flexible, target dependent questions to be resolved. Built around nonclassified data and analysis, this computer package will allow academic institutions to better tackle the issue of NEO mitigation effectiveness.
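
    The core of an SPH code of the kind referred to above is a kernel-weighted summation over neighboring particles. As a generic, brute-force illustration (standard cubic spline kernel, random particles, no relation to the thesis code or its GPU implementation):

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 3D cubic spline SPH kernel W(r, h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Brute-force SPH density summation: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=-1))
    return (masses[None, :] * cubic_spline_w(r, h)).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(500, 3))     # particle positions in a unit cube
m = np.full(500, 1.0 / 500)                    # equal masses, total mass 1
rho = sph_density(pos, m, h=0.1)
print(rho.mean())                              # of order the mean density (~1)
```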

  5. The profile of high school students’ scientific literacy on fluid dynamics

    Science.gov (United States)

    Parno; Yuliati, L.; Munfaridah, N.

    2018-05-01

    This study aims to describe the profile of scientific literacy of high school students on Fluid Dynamics materials. Scientific literacy is one of the ability to solve daily problems in accordance with the context of materials related to science and technology. The study was conducted on 90 high school students in Sumbawa using survey design. Data were collected using an instrument of scientific literacy for high school students on dynamic fluid materials. Data analysis was conducted descriptively to determine the students’ profile of scientific literacy. The results showed that high school students’ scientific literacy on Fluid Dynamics materials was in the low category. The highest average is obtained on indicators of scientific literacy i.e. the ability to interpret data and scientific evidence. The ability of scientific literacy is related to the mastery of concepts and learning experienced by students, therefore it is necessary to use learning that can trace this ability such as Science, Technology, Engineering, and Mathematics (STEM).

  6. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing sizes through the network shows no sign of abating. Hence, we need high-bandwidth, high-speed networks such as ESnet (Energy Sciences Network). Network reservation systems such as ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System) establish guaranteed-bandwidth secure virtual circuits at a certain time, for a certain bandwidth and length of time. OSCARS checks network availability and capacity for the specified period of time, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no further suggestion is returned to the user, and there is no way for the user to make an optimal alternative choice. We report a new algorithm in which the user specifies the total volume that needs to be transferred, a maximum bandwidth that he/she can use, and a desired time period within which the transfer should be done. The algorithm can find alternate allocation possibilities, including the earliest time for completion or the shortest transfer duration, leaving the choice to the user. We present a novel approach for path finding in time-dependent networks, and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high performance data transfers over storage and network resources.
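
    The reported algorithm itself is not reproduced here; as a toy sketch of the "earliest completion time" idea only, the following greedy allocation over hypothetical per-interval bandwidth availability shows how a completion time could be derived from a requested volume and a bandwidth cap:

```python
def earliest_completion(volume_gb, max_rate_gbps, slots):
    """Greedy sketch: find the earliest time the transfer can finish.

    slots: list of (start_s, end_s, available_gbps) intervals, ordered in time.
    In each slot the transfer uses min(available, max_rate) until the volume is moved.
    Returns the completion time in seconds, or None if the window is insufficient.
    """
    remaining = volume_gb * 8.0                      # gigabits left to move
    for start, end, avail in slots:
        rate = min(avail, max_rate_gbps)
        if rate <= 0:
            continue
        capacity = rate * (end - start)
        if capacity >= remaining:
            return start + remaining / rate          # finishes inside this slot
        remaining -= capacity
    return None

# Hypothetical availability windows on a reserved path (seconds, Gb/s)
slots = [(0, 600, 2.0), (600, 1200, 0.5), (1200, 3600, 8.0)]
print(earliest_completion(volume_gb=500, max_rate_gbps=5.0, slots=slots))  # -> 1700.0
```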

  7. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign

    Directory of Open Access Journals (Sweden)

    M. S. Mizielinski

    2014-08-01

    The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project constructed and ran an ensemble of HadGEM3 (Hadley Centre Global Environment Model 3) atmosphere-only global climate simulations over the period 1985–2011, at resolutions of N512 (25 km), N216 (60 km) and N96 (130 km), as used in current global weather forecasting, seasonal prediction and climate modelling respectively. Alongside these present-climate simulations, a parallel ensemble looking at extremes of future climate was run, using a time-slice methodology to consider conditions at the end of this century. These simulations were primarily performed using a 144-million-core-hour, single-year grant of computing time from PRACE (the Partnership for Advanced Computing in Europe) in 2012, with additional resources supplied by the Natural Environment Research Council (NERC) and the Met Office. Almost 400 terabytes of simulation data were generated on the HERMIT supercomputer at the High Performance Computing Center Stuttgart (HLRS), and transferred to the JASMIN super-data cluster provided by the Science and Technology Facilities Council Centre for Environmental Data Archival (STFC CEDA) for analysis and storage. In this paper we describe the implementation of the project, present the technical challenges in terms of optimisation, data output, transfer and storage that such a project involves, and include details of the model configuration and the composition of the UPSCALE data set. This data set is available for scientific analysis to allow assessment of the value of model resolution in both present and potential future climate conditions.

  8. Simulation on following Performance of High-Speed Railway In Situ Testing System

    Directory of Open Access Journals (Sweden)

    Fei-Long Zheng

    2013-01-01

    Subgrade bears both the weight of superstructures and the impacts of running trains. Its stability directly affects line smoothness, but in situ testing methods for it are inadequate. This paper presents a railway roadbed in situ testing device whose key component is an excitation hydraulic servo cylinder that can output static and dynamic pressure simultaneously to simulate the force exerted by trains on the subgrade. The principle of the excitation system is briefly introduced, and the transfer function of the closed-loop force control system is derived and simulated; the results show that, without a control algorithm, the dynamic response is very slow and the following performance is quite poor. Therefore, an improved adaptive model-following control (AMFC) algorithm based on the direct state method is adopted. A control block diagram is then built and simulated with inputs of different waveforms and frequencies. The simulation results show that the system is greatly improved: the output waveform follows the input signal much better, with only slight distortion when the signal varies sharply, and the following performance improves further as the load stiffness increases.
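
    The paper's AMFC design (direct state method on a hydraulic servo model) is not reproduced in the abstract. Purely as a hedged illustration of the model-following idea, the sketch below adapts a feedforward gain with the classical MIT rule so that a first-order plant tracks a first-order reference model; the plant, model, gains and square-wave command are all illustrative assumptions, unrelated to the hydraulic cylinder studied in the paper.

```python
import numpy as np

# Model-reference adaptation of a single feedforward gain (MIT rule).
# Plant and reference model share the same dynamics; only the plant gain is "unknown".
dt, T = 1e-3, 10.0
a, b_plant, b_model = 4.0, 2.0, 4.0        # y' = -a*y + b_plant*u,  ym' = -a*ym + b_model*r
gamma = 2.0                                # adaptation gain
y = ym = theta = 0.0
t = np.arange(0.0, T, dt)
r = np.sign(np.sin(2 * np.pi * 0.5 * t))   # square-wave command (stand-in for a force demand)

for k in range(len(t)):
    u = theta * r[k]                       # adjustable feedforward controller
    y += dt * (-a * y + b_plant * u)
    ym += dt * (-a * ym + b_model * r[k])
    e = y - ym                             # model-following error
    theta += dt * (-gamma * e * ym)        # MIT rule: adjust the gain to shrink the error

print(f"adapted gain {theta:.2f} (ideal {b_model / b_plant:.2f}), final error {e:.4f}")
```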

  9. Investigating the Mobility of Light Autonomous Tracked Vehicles using a High Performance Computing Simulation Capability

    Science.gov (United States)

    Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco

    2012-01-01

    This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles, which weigh less than 100 lb and move on deformable, rough terrain that is feature-rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high-mobility lightweight robot, where objects such as a boulder or a ditch that would be considered small for a truck or tank become major obstacles that can impede the mobility of the light autonomous vehicle and compromise the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine, which relies on parallel execution on Graphics Processing Unit (GPU) cards.

  10. Energy Smart Management of Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow; Rotem, Dron; Tsao, Shih-Chiang

    2009-04-12

    Scientific data centers composed of high-powered computing equipment and large-capacity disk storage systems consume a considerable amount of energy. Dynamic power management (DPM) techniques are commonly used for saving energy in disk systems. These involve powering down disks that exhibit long idle periods and placing them in standby mode. A file request to a disk in standby mode incurs both energy and performance penalties, as it takes energy (and time) to spin up the disk before it can serve a file. For this reason, DPM has to decide when to transition a disk into standby mode such that the energy saved is greater than the energy needed to spin it up again and the performance penalty is tolerable. The length of the idle period after which the DPM powers down a disk is called the idleness threshold. In this paper, we study both analytically and experimentally dynamic power management techniques that save energy subject to performance constraints on file access costs. Based on observed workloads of scientific applications and disk characteristics, we provide a methodology for determining file assignment to disks and computing idleness thresholds that result in significant improvements to the energy saved by existing DPM solutions while meeting response time constraints. We validate our methods with simulations that use traces taken from scientific applications.
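
    The paper's file-assignment and threshold methodology is not given in the abstract. The sketch below shows only the textbook break-even calculation behind a fixed idleness threshold, with illustrative (made-up) disk power figures rather than the characteristics used in the paper.

```python
def break_even_threshold(p_idle_w, p_standby_w, e_spinup_j):
    """Idle time beyond which powering down saves more energy than one spin-up costs:
    (p_idle - p_standby) * T_threshold == e_spinup."""
    return e_spinup_j / (p_idle_w - p_standby_w)

def energy_for_idle_period(T_s, threshold_s, p_idle_w, p_standby_w, e_spinup_j):
    """Energy used during one idle period of length T_s under a fixed-threshold DPM."""
    if T_s <= threshold_s:
        return p_idle_w * T_s                     # disk never powered down
    return (p_idle_w * threshold_s                # wait for the threshold
            + p_standby_w * (T_s - threshold_s)   # sleep for the remainder
            + e_spinup_j)                         # pay to spin back up at the next request

# illustrative disk figures (not from the paper): 8 W idle, 1 W standby, 140 J per spin-up
thr = break_even_threshold(8.0, 1.0, 140.0)
print(f"break-even idleness threshold ~ {thr:.0f} s")
for T in (5, 20, 60, 300):
    print(f"idle {T:4d} s -> {energy_for_idle_period(T, thr, 8.0, 1.0, 140.0):7.1f} J")
```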

  11. Reading, Writing, and Presenting Original Scientific Research: A Nine-Week Course in Scientific Communication for High School Students†

    Science.gov (United States)

    Danka, Elizabeth S.; Malpede, Brian M.

    2015-01-01

    High school students are not often given opportunities to communicate scientific findings to their peers, the general public, and/or people in the scientific community, and therefore they do not develop scientific communication skills. We present a nine-week course that can be used to teach high school students, who may have no previous experience, how to read and write primary scientific articles and how to discuss scientific findings with a broad audience. Various forms of this course have been taught for the past 10 years as part of an intensive summer research program for rising high school seniors that is coordinated by the Young Scientist Program at Washington University in St. Louis. The format presented here includes assessments for efficacy through both rubric-based methods and student self-assessment surveys. PMID:26753027

  12. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    2016-06-01

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convection problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for simulating long transient scenarios in nuclear accidents, despite extraordinary advances in high-performance scientific computing over the past decades. The major issue is the inability to parallelize the computation in time, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose a high-fidelity simulation-driven approach to model sub-grid-scale (SGS) effects in Coarse-Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as to containment mixing and passive cooling. The presented approach demonstrates how to create a correction to the CG-CFD solution by modifying the energy balance equation. A global correction to the temperature equation achieves a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.
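
    The paper's surrogate is not specified in the abstract. As a purely illustrative, hedged sketch of the general idea (learn a mapping from coarse-grid quantities to the discrepancy against a filtered high-fidelity field, then apply it as a correction), the toy below fits a linear regression on a 1D temperature profile; the profiles, features, and regression choice are all assumptions for illustration, not the paper's CG-CFD model.

```python
import numpy as np

# Toy data-driven correction: "high-fidelity" fine-grid profile vs. a biased coarse solution.
x_fine = np.linspace(0.0, 1.0, 256)
T_fine = np.sin(np.pi * x_fine) + 0.1 * np.sin(8 * np.pi * x_fine)   # stand-in for DNS/LES

n_c = 16
T_ref_c = T_fine.reshape(n_c, -1).mean(axis=1)               # filtered high-fidelity target
x_c = (np.arange(n_c) + 0.5) / n_c
T_cg = 0.9 * np.sin(np.pi * x_c)                              # "coarse solver" with model error

# surrogate: predict the needed correction from local coarse-grid features
A = np.column_stack([T_cg, np.gradient(T_cg, x_c), np.ones(n_c)])
coef, *_ = np.linalg.lstsq(A, T_ref_c - T_cg, rcond=None)     # train (in-sample here, for brevity)
T_corrected = T_cg + A @ coef                                 # apply as a correction term

rms = lambda e: float(np.sqrt(np.mean(e**2)))
print("RMS error before correction:", rms(T_cg - T_ref_c))
print("RMS error after  correction:", rms(T_corrected - T_ref_c))
```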

  13. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  14. Approaching Sentient Building Performance Simulation Systems

    DEFF Research Database (Denmark)

    Negendahl, Kristoffer; Perkov, Thomas; Heller, Alfred

    2014-01-01

    Sentient BPS systems can combine one or more high-precision BPS and provide near-instantaneous performance feedback directly in the design tool, thus providing both speed and precision of building performance feedback in the early design stages. Sentient BPS systems essentially combine 1) design tools, 2) parametric tools, 3) BPS tools, 4) dynamic databases, 5) interpolation techniques and 6) prediction techniques into a fast and valid simulation system for the early design stage.

  15. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    International Nuclear Information System (INIS)

    Ruebel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat

    2008-01-01

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system

  16. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.
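
    Neither record includes code. As a hedged illustration of the histogram-based parallel-coordinates idea described above (render and query from bin counts rather than from every record), the sketch below bins adjacent axis pairs of a synthetic particle data set with NumPy; the variable names, bin counts and query are illustrative assumptions, and the state-of-the-art index/query technology and production visualization infrastructure used in the papers are not reproduced here.

```python
import numpy as np

def pairwise_histograms(data, bins=64):
    """For each pair of adjacent parallel-coordinate axes, build a 2D histogram so that
    line density between axes can be drawn from bin counts instead of individual records."""
    hists = []
    for j in range(data.shape[1] - 1):
        counts, xe, ye = np.histogram2d(data[:, j], data[:, j + 1], bins=bins)
        hists.append((counts, xe, ye))
    return hists

# synthetic stand-in for particle data (x, px, y, py, energy)
rng = np.random.default_rng(0)
particles = rng.normal(size=(1_000_000, 5))
particles[:, 4] += 0.5 * particles[:, 1]           # correlate "energy" with "px"

hists = pairwise_histograms(particles)
selected = particles[particles[:, 4] > 2.0]        # a query-driven subset (high-energy tail)
focus = pairwise_histograms(selected)              # re-bin only the hits for focus views
print(f"{len(hists)} axis-pair histograms; {len(selected)} records selected by the query")
```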

  17. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of data acquisition, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running in a high-performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality-control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project, we highlight a stack of tools our team utilizes and has developed to make large-scale simulation and analysis work commonplace, providing operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth System Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases to which they have been applied.
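
    The CASCADE tool stack itself is not shown in the abstract. The sketch below is only a generic example of the task-parallel pattern mentioned (each MPI rank analyses its share of files); mpi4py, the file names, and the per-file statistic are assumptions for illustration and are not the project's actual tools.

```python
from mpi4py import MPI   # run with e.g. `mpirun -n 8 python analyze.py`

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# placeholder inputs: in a real workflow these would be model-output files on disk
all_files = [f"run_{i:04d}.nc" for i in range(240)]
my_files = all_files[rank::size]              # static round-robin task decomposition

def analyze(path):
    """Stand-in for a per-file diagnostic (e.g. an extreme-value statistic)."""
    return len(path)                          # placeholder result

local_results = [analyze(f) for f in my_files]
gathered = comm.gather(local_results, root=0)
if rank == 0:
    flat = [r for chunk in gathered for r in chunk]
    print(f"analyzed {len(flat)} files across {size} ranks")
```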

  18. Science on Stage: Engaging and teaching scientific content through performance art

    Science.gov (United States)

    Posner, Esther

    2016-04-01

    Engaging teaching material through performance art and music can improve the long-term retention of scientific content. Additionally, the development of effective performance skills is a powerful tool for communicating scientific concepts and information to a broader audience, and can have many positive benefits in terms of career development and the delivery of professional presentations. While arts integration has been shown to increase student engagement and achievement, relevant artistic materials are still required for use as supplemental activities in STEM (science, technology, engineering, mathematics) courses. I will present an original performance poem, "Tectonic Petrameter: A Journey Through Earth History," with instructions for its implementation as a play in pre-university and undergraduate geoscience classrooms. "Tectonic Petrameter" uses a dynamic combination of rhythm and rhyme to teach the geological time scale, fundamental concepts in geology and important events in Earth history. I propose that using performance arts, such as "Tectonic Petrameter" and other creative art forms, may be an avenue for breaking down barriers related to teaching students and the broader non-scientific community about Earth's long and complex history.

  19. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  20. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This first issue gives an overview of scientific computational methods and an introduction to continuum simulation methods. The finite element method is also reviewed as one of their applications. (T. Tanaka)

  1. 3D graphene nanomaterials for binder-free supercapacitors: scientific design for enhanced performance

    Science.gov (United States)

    He, Shuijian; Chen, Wei

    2015-04-01

    Because of its excellent intrinsic properties, especially its strong mechanical strength, extraordinarily high surface area and extremely high conductivity, graphene is regarded as a versatile building block for fabricating functional materials for energy production and storage applications. In this article, the recent progress in the assembly of binder-free and self-standing graphene-based materials, as well as their application in supercapacitors, is reviewed, including electrical double-layer capacitors, pseudocapacitors, and asymmetric supercapacitors. Various fabrication strategies and the influence of structure on the capacitance performance of 3D graphene-based materials are discussed. We finally give concluding remarks and an outlook on the scientific design of binder-free and self-standing graphene materials for achieving better capacitance performance.

  2. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

    Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, the System Performance Simulation Implementation Mechanism (SPSIM). Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims at delivering high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator attains a simulation speed that is no more than a factor of 35 slower than hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, which in the past could only be attained by cycle-accurate models.

  3. Designing a High Performance Parallel Personal Cluster

    OpenAIRE

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, competition for resources have been some of the reasons why the scientifi...

  4. Cyberinfrastructure and Scientific Collaboration: Application of a Virtual Team Performance Framework with Potential Relevance to Education. WCER Working Paper No. 2010-12

    Science.gov (United States)

    Kraemer, Sara; Thorn, Christopher A.

    2010-01-01

    The purpose of this exploratory study was to identify and describe some of the dimensions of scientific collaborations using high throughput computing (HTC) through the lens of a virtual team performance framework. A secondary purpose was to assess the viability of using a virtual team performance framework to study scientific collaborations using…

  5. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high-performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate access to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are also addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  6. The Validity and Incremental Validity of Knowledge Tests, Low-Fidelity Simulations, and High-Fidelity Simulations for Predicting Job Performance in Advanced-Level High-Stakes Selection

    Science.gov (United States)

    Lievens, Filip; Patterson, Fiona

    2011-01-01

    In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…

  7. Superconductivity of high Tc: Scientific revolution?

    International Nuclear Information System (INIS)

    Marquina, J.E.; Ridaura, R.; Gomez, R.; Marquina, V.; Alvarez, J.L.

    1997-01-01

    A short history of high-Tc superconductivity, from its discovery by Bednorz and Muller to the development of new materials with high transition temperatures, is presented. Further developments are analyzed in terms of T. S. Kuhn's conceptions as expressed in his book The Structure of Scientific Revolutions. (Author) 4 refs

  8. PetIGA: A framework for high-performance isogeometric analysis

    KAUST Repository

    Dalcin, Lisandro; Collier, N.; Vignal, Philippe; Cortes, Adriano Mauricio; Calo, Victor M.

    2016-01-01

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. We show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.

  9. PetIGA: A framework for high-performance isogeometric analysis

    KAUST Repository

    Dalcin, L.

    2016-05-25

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. We show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.
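
    PetIGA itself (NURBS/B-spline bases on top of PETSc, scaling to thousands of cores) is not reproduced here. Purely as a hedged, minimal illustration of what assembling matrices and vectors from a Galerkin weak form means, the sketch below assembles and solves a 1D Poisson problem with piecewise-linear hat functions in NumPy; the basis, quadrature and model problem are simplifying assumptions, not PetIGA's discretization.

```python
import numpy as np

# Minimal 1D Galerkin assembly for -u'' = f on (0, 1) with u(0) = u(1) = 0,
# using linear "hat" basis functions and a midpoint-rule load.
n_el = 32
h = 1.0 / n_el
n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
F = np.zeros(n_nodes)
f = lambda x: np.pi**2 * np.sin(np.pi * x)               # manufactured right-hand side

ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])    # element stiffness matrix
for e in range(n_el):
    nodes = [e, e + 1]
    x_mid = (e + 0.5) * h
    K[np.ix_(nodes, nodes)] += ke                        # scatter element contributions
    F[nodes] += f(x_mid) * h / 2.0                       # midpoint-rule load vector

# apply homogeneous Dirichlet conditions and solve the interior system
u = np.zeros(n_nodes)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
exact = np.sin(np.pi * np.linspace(0.0, 1.0, n_nodes))
print("max error vs sin(pi x):", np.max(np.abs(u - exact)))
```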

  10. Performance of space charge simulations using High Performance Computing (HPC) cluster

    CERN Document Server

    Bartosik, Hannes; CERN. Geneva. ATS Department

    2017-01-01

    In 2016 a collaboration agreement between CERN and Istituto Nazionale di Fisica Nucleare (INFN) through its Centro Nazionale Analisi Fotogrammi (CNAF, Bologna) was signed [1], which foresaw the purchase and installation of a cluster of 20 nodes with 32 cores each, connected with InfiniBand, at CNAF for the use of CERN members to develop parallelized codes as well as conduct massive simulation campaigns with the already available parallelized tools. As outlined in [1], after the installation and the set up of the first 12 nodes, the green light to proceed with the procurement and installation of the next 8 nodes can be given only after successfully passing an acceptance test based on two specific benchmark runs. This condition is necessary to consider the first batch of the cluster operational and complying with the desired performance specifications. In this brief note, we report the results of the above mentioned acceptance test.

  11. Incorporating Primary Scientific Literature in Middle and High School Education

    Directory of Open Access Journals (Sweden)

    Sarah C. Fankhauser

    2015-11-01

    Primary literature is the most reliable and direct source of scientific information, but most middle school and high school science is taught using secondary and tertiary sources. One reason for this is that primary science articles can be difficult to access and interpret for young students and for their teachers, who may lack exposure to this type of writing. The Journal of Emerging Investigators (JEI) was created to fill this gap and provide primary research articles that can be accessed and read by students and their teachers. JEI is a non-profit, online, open-access, peer-reviewed science journal dedicated to mentoring and publishing the scientific research of middle and high school students. JEI articles provide reliable scientific information that is written by students and therefore at a level that their peers can understand. For student-authors who publish in JEI, the review process and the interaction with scientists provide invaluable insight into the scientific process. Moreover, the resulting repository of free, student-written articles allows teachers to incorporate age-appropriate primary literature into the middle and high school science classroom. JEI articles can be used for teaching specific scientific content or for teaching the process of the scientific method itself. The critical thinking skills that students learn by engaging with the primary literature will be invaluable for the development of a scientifically-literate public.

  12. Incorporating Primary Scientific Literature in Middle and High School Education.

    Science.gov (United States)

    Fankhauser, Sarah C; Lijek, Rebeccah S

    2016-03-01

    Primary literature is the most reliable and direct source of scientific information, but most middle school and high school science is taught using secondary and tertiary sources. One reason for this is that primary science articles can be difficult to access and interpret for young students and for their teachers, who may lack exposure to this type of writing. The Journal of Emerging Investigators (JEI) was created to fill this gap and provide primary research articles that can be accessed and read by students and their teachers. JEI is a non-profit, online, open-access, peer-reviewed science journal dedicated to mentoring and publishing the scientific research of middle and high school students. JEI articles provide reliable scientific information that is written by students and therefore at a level that their peers can understand. For student-authors who publish in JEI, the review process and the interaction with scientists provide invaluable insight into the scientific process. Moreover, the resulting repository of free, student-written articles allows teachers to incorporate age-appropriate primary literature into the middle and high school science classroom. JEI articles can be used for teaching specific scientific content or for teaching the process of the scientific method itself. The critical thinking skills that students learn by engaging with the primary literature will be invaluable for the development of a scientifically-literate public.

  13. On the performance simulation of inter-stage turbine reheat

    International Nuclear Information System (INIS)

    Pellegrini, Alvise; Nikolaidis, Theoklis; Pachidis, Vassilios; Köhler, Stephan

    2017-01-01

    Highlights: • An innovative gas turbine performance simulation methodology is proposed. • It allows DP and OD performance calculations to be performed for complex engine layouts. • It is essential for inter-turbine reheat (ITR) engine performance calculation. • A detailed description is provided for fast and flexible implementation. • The methodology is successfully verified against commercial closed-source software. - Abstract: Several authors have suggested the implementation of reheat in high By-Pass Ratio (BPR) aero engines to improve engine performance. In contrast to military afterburning, civil aero engines would aim at reducing Specific Fuel Consumption (SFC) by introducing ‘Inter-stage Turbine Reheat’ (ITR). To maximise benefits, the second combustor should be placed at an early stage of the expansion process, e.g. between the first and second High-Pressure Turbine (HPT) stages. The aforementioned cycle design requires the accurate simulation of two or more turbine stages on the same shaft. The Design Point (DP) performance can be easily evaluated by defining a Turbine Work Split (TWS) ratio between the turbine stages. However, the performance simulation of Off-Design (OD) operating points requires the calculation of the TWS parameter for every OD step, taking into account the thermodynamic behaviour of each turbine stage, represented by their respective maps. No analytical solution of the aforementioned problem is currently available in the public domain. This paper presents an analytical methodology by which ITR can be simulated at DP and OD. Results show excellent agreement with a commercial, closed-source performance code; discrepancies range from 0% to 3.48%, and are ascribed to the different gas models implemented in the codes.
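
    The paper's analytical off-design solution is not given in the abstract. The snippet below is only an illustrative design-point bookkeeping exercise for the work split between two HPT stages with a reheat combustor in between; the gas properties, temperatures and TWS value are made-up assumptions, and the off-design iteration over stage maps that the paper actually addresses is not shown.

```python
# Illustrative design-point bookkeeping for an inter-stage turbine reheat (ITR) layout:
# the HP turbine work is split between two stages with a second burner in between.
cp_gas = 1150.0          # J/(kg K), combustion products (assumed constant)
T41 = 1700.0             # K, first-stage turbine entry temperature (assumed)
work_required = 4.0e5    # J/kg, total HP turbine specific work demanded by the compressor
tws = 0.55               # Turbine Work Split: fraction taken by stage 1 (assumed)

w1 = tws * work_required
w2 = (1.0 - tws) * work_required

T42 = T41 - w1 / cp_gas                 # temperature after stage 1
T43 = 1700.0                            # reheat combustor restores the entry temperature
T44 = T43 - w2 / cp_gas                 # temperature after stage 2

extra_fuel_heat = cp_gas * (T43 - T42)  # J/kg added by the second combustor
print(f"stage-1 exit {T42:.0f} K, stage-2 exit {T44:.0f} K, "
      f"reheat heat addition {extra_fuel_heat / 1e3:.0f} kJ/kg")
```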

  14. Scientific Discovery through Advanced Computing in Plasma Science

    Science.gov (United States)

    Tang, William

    2005-03-01

    Advanced computing is generally recognized to be an increasingly vital tool for accelerating progress in scientific research during the 21st Century. For example, the Department of Energy's ``Scientific Discovery through Advanced Computing'' (SciDAC) Program was motivated in large measure by the fact that formidable scientific challenges in its research portfolio could best be addressed by utilizing the combination of the rapid advances in super-computing technology together with the emergence of effective new algorithms and computational methodologies. The imperative is to translate such progress into corresponding increases in the performance of the scientific codes used to model complex physical systems such as those encountered in high temperature plasma research. If properly validated against experimental measurements and analytic benchmarks, these codes can provide reliable predictive capability for the behavior of a broad range of complex natural and engineered systems. This talk reviews recent progress and future directions for advanced simulations with some illustrative examples taken from the plasma science applications area. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by the combination of access to powerful new computational resources together with innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning a huge range in time and space scales. In particular, the plasma science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations

  15. FY01 Supplemental Science and Performance Analysis: Volume 1, Scientific Bases and Analyses

    International Nuclear Information System (INIS)

    Bodvarsson, G.S.; Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  16. FY01 Supplemental Science and Performance Analysis: Volume 1,Scientific Bases and Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bodvarsson, G.S.; Dobson, David

    2001-05-30

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S&ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S&ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S&ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  17. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    KAUST Repository

    Bao, Kai

    2013-01-01

    The present work describes a parallel computational framework for CO2 sequestration simulation that couples reservoir simulation and molecular dynamics (MD) on massively parallel HPC systems. In this framework, a parallel reservoir simulator, the Reservoir Simulation Toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, while molecular dynamics simulations are performed to provide the required physical parameters. Numerous technologies from different fields are employed to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted reservoirs and deep saline aquifers, which has been proposed as one of the most attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. To solve such problems effectively, fine grids and accurate prediction of the properties of fluid mixtures are essential. In this work, CO2 sequestration is presented as a first example of coupling reservoir simulation and molecular dynamics, while the framework can be extended naturally to full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed on the massively parallel HPC systems. The performance and capacity of the proposed framework are demonstrated with several experiments with hundreds of millions to a billion cells. To the best of our knowledge, this work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Due to the complexity of the subsurface systems
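
    Only the coupling pattern is sketched below, as a hedged illustration: a flow time loop that queries a stand-in "MD property service" for fluid properties at the current state. Both components are stubs with made-up numbers; neither the RST reservoir simulator nor the MD engine from the paper is reproduced.

```python
def md_property_service(pressure_pa, temperature_k):
    """Stand-in for an MD (or MD-derived surrogate) call returning fluid properties
    at the current pressure and temperature.  The correlation here is fictitious."""
    density = 700.0 + 1.0e-6 * (pressure_pa - 1.0e7)      # kg/m3, made-up trend
    viscosity = 6.0e-5                                     # Pa*s, assumed constant
    return density, viscosity

def flow_step(state, dt_s, props):
    """Stand-in for one reservoir-simulation time step (placeholder physics)."""
    density, _ = props
    state["pressure"] += 10.0 * state["injection_rate"] * dt_s / density   # arbitrary scaling
    return state

state = {"pressure": 1.0e7, "temperature": 330.0, "injection_rate": 50.0}
for step in range(10):
    props = md_property_service(state["pressure"], state["temperature"])   # coupling point
    state = flow_step(state, dt_s=86400.0, props=props)
print(f"pressure after 10 daily steps: {state['pressure'] / 1e6:.2f} MPa")
```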

  18. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    Science.gov (United States)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Building on TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance computing code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structures, implemented data initialization and exchange between the computing nodes, and built the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
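
    THC-MP's distributed data structures and solvers are not shown in the abstract. The sketch below is only a generic illustration of the domain-decomposition ingredient it mentions: a 1D decomposition with one ghost cell per side exchanged between neighbouring MPI ranks via mpi4py. The field, stencil and sizes are illustrative assumptions, not the THC-MP implementation.

```python
import numpy as np
from mpi4py import MPI   # run with e.g. `mpirun -n 4 python halo.py`

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# 1D domain decomposition with one ghost cell on each side
n_local = 100
u = np.zeros(n_local + 2)
u[1:-1] = rank                        # placeholder field: each rank's interior value

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# exchange boundary values into the neighbours' ghost cells
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[:1], source=left)

# each rank can now apply a local stencil (here a 3-point average) to its interior
u_new = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])
print(f"rank {rank}: interior updated, ghost cells = ({u[0]:.0f}, {u[-1]:.0f})")
```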

  19. Prediction of SLF Interruption Performance from the Results of Arc Simulation during High-Current Phase

    Science.gov (United States)

    Lee, Jong-Chul; Lee, Won-Ho; Kim, Woun-Jea

    2015-09-01

    The design and development procedures for SF6 gas circuit breakers are still largely based on trial and error through testing, although development costs rise every year. Computation alone cannot replace testing satisfactorily because not all of the real processes are taken into account. However, knowledge of the arc behavior and prediction of the thermal flow inside interrupters by numerical simulation have become more useful than experiments alone, owing to the difficulty of obtaining physical quantities experimentally and the reduction of computational costs in recent years. In this paper, in order to gain further insight into the interruption process of an SF6 self-blast interrupter, which is based on a combination of thermal expansion and the arc-rotation principle, gas flow simulations with CFD-arc modeling are performed over the whole switching process: the high-current period, the pre-current-zero period, and the current-zero period. Throughout this work, the pressure rise and the pressure ramp inside the chamber before current zero, as well as the post-arc current after current zero, should be good criteria for predicting the short-line fault interruption performance of interrupters.

  20. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Science.gov (United States)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  1. Performance simulation in high altitude platforms (HAPs) communications systems

    Science.gov (United States)

    Ulloa-Vásquez, Fernando; Delgado-Penin, J. A.

    2002-07-01

    This paper considers the analysis, by simulation, of a digital narrowband communication system for a scenario consisting of a High-Altitude aeronautical Platform (HAP) and fixed/mobile terrestrial transceivers. The aeronautical channel is modelled using geometrical (angle of elevation vs. horizontal distance of the terrestrial reflectors) and statistical arguments, and under these circumstances a serially concatenated coded digital transmission is analysed for several hypotheses related to radio-electric coverage areas. The results indicate good feasibility for the proposed communication system.

  2. High-Fidelity Contrast Reaction Simulation Training: Performance Comparison of Faculty, Fellows, and Residents.

    Science.gov (United States)

    Pfeifer, Kyle; Staib, Lawrence; Arango, Jennifer; Kirsch, John; Arici, Mel; Kappus, Liana; Pahade, Jay

    2016-01-01

    Reactions to contrast material are uncommon in diagnostic radiology, and vary in clinical presentation from urticaria to life-threatening anaphylaxis. Prior studies have demonstrated a high error rate in contrast reaction management, with smaller studies using simulation demonstrating variable data on effectiveness. We sought to assess the effectiveness of high-fidelity simulation in teaching contrast reaction management for residents, fellows, and attendings. A 20-question multiple-choice test assessing contrast reaction knowledge, with Likert-scale questions assessing subjective comfort levels of management of contrast reactions, was created. Three simulation scenarios that represented a moderate reaction, a severe reaction, and a contrast reaction mimic were completed in a one-hour period in a simulation laboratory. All participants completed a pretest and a posttest at one month. A six-month delayed posttest was given, but was optional for all participants. A total of 150 radiologists participated (residents = 52; fellows = 24; faculty = 74) in the pretest and posttest; and 105 participants completed the delayed posttest (residents = 31; fellows = 17; faculty = 57). A statistically significant increase was found in the one-month posttest (P < .00001) and the six-month posttest scores (P < .00001) and Likert scores (P < .001) assessing comfort level in managing all contrast reactions, compared with the pretest. Test scores and comfort level for moderate and severe reactions significantly decreased at six months, compared with the one-month posttest (P < .05). High-fidelity simulation is an effective learning tool, allowing practice of "high-acuity" situation management in a nonthreatening environment; the simulation training resulted in significant improvement in test scores, as well as an increase in subjective comfort in management of reactions, across all levels of training. A six-month refresher course is suggested, to maintain knowledge and comfort level in

  3. Scientific visualization of 3-dimensional optimized stellarator configurations

    International Nuclear Information System (INIS)

    Spong, D.A.

    1998-01-01

    The design techniques and physics analysis of modern stellarator configurations for magnetic fusion research rely heavily on high performance computing and simulation. Stellarators, which are fundamentally 3-dimensional in nature, offer significantly more design flexibility than more symmetric devices such as the tokamak. By varying the outer boundary shape of the plasma, a variety of physics features, such as transport, stability, and heating efficiency can be optimized. Scientific visualization techniques are an important adjunct to this effort as they provide a necessary ergonomic link between the numerical results and the intuition of the human researcher. The authors have developed a variety of visualization techniques for stellarators which both facilitate the design optimization process and allow the physics simulations to be more readily understood

  4. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    International Nuclear Information System (INIS)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

    This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, the MySQL DBMS, and the Apache web server on a GNU/Linux platform. SWA was built as part of the Data Intensive Computing for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology, such as ontology-based homology and multiple whole-genome comparisons, which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  5. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  6. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  7. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  8. Measurement and simulation of the performance of high energy physics data grids

    Science.gov (United States)

    Crosby, Paul Andrew

    This thesis describes a study of resource brokering in a computational Grid for high energy physics. Such systems are being devised in order to manage the unprecedented workload of the next generation particle physics experiments such as those at the Large Hadron Collider. A simulation of the European Data Grid has been constructed, and calibrated using logging data from a real Grid testbed. This model is then used to explore the Grid's middleware configuration, and suggest improvements to its scheduling policy. The expansion of the simulation to include data analysis of the type conducted by particle physicists is then described. A variety of job and data management policies are explored, in order to determine how well they meet the needs of physicists, as well as how efficiently they make use of CPU and network resources. Appropriate performance indicators are introduced in order to measure how well jobs and resources are managed from different perspectives. The effects of inefficiencies in Grid middleware are explored, as are methods of compensating for them. It is demonstrated that a scheduling algorithm should alter its weighting on load balancing and data distribution, depending on whether data transfer or CPU requirements dominate, and also on the level of job loading. It is also shown that an economic model for data management and replication can improve the efficiency of network use and job processing.
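
    As a hedged illustration of the thesis' qualitative conclusion (the scheduler should shift its weighting between load balancing and data distribution depending on whether CPU or data transfer dominates), the toy ranking function below scores candidate sites by a weighted sum of estimated queue delay and transfer time. The site parameters and weights are invented for illustration and do not come from the thesis or from the European Data Grid middleware.

```python
def rank_sites(job_input_gb, sites, w_load):
    """Score candidate sites with a weighted sum of estimated queue delay and
    data-transfer time; lower is better.  w_load in [0, 1] shifts the emphasis
    between load balancing and data locality."""
    scored = []
    for s in sites:
        transfer_s = 0.0 if s["has_replica"] else job_input_gb * 8.0 / s["wan_gbps"]
        queue_s = s["queued_jobs"] * s["mean_job_s"] / s["cpus"]
        scored.append((w_load * queue_s + (1.0 - w_load) * transfer_s, s["name"]))
    return sorted(scored)

sites = [
    {"name": "site-A", "has_replica": True,  "wan_gbps": 1.0,  "queued_jobs": 400, "mean_job_s": 3600, "cpus": 200},
    {"name": "site-B", "has_replica": False, "wan_gbps": 10.0, "queued_jobs": 20,  "mean_job_s": 3600, "cpus": 500},
]
# CPU-bound workload: weight load balancing heavily; data-bound workload: weight transfer
print(rank_sites(job_input_gb=50,   sites=sites, w_load=0.8))   # lightly loaded site-B wins
print(rank_sites(job_input_gb=5000, sites=sites, w_load=0.2))   # replica-holding site-A wins
```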

  9. Scientific Challenges for Understanding the Quantum Universe

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-16

    A workshop titled "Scientific Challenges for Understanding the Quantum Universe" was held December 9-11, 2008, at the Kavli Institute for Particle Astrophysics and Cosmology at the Stanford Linear Accelerator Center-National Accelerator Laboratory. The primary purpose of the meeting was to examine how computing at the extreme scale can contribute to meeting forefront scientific challenges in particle physics, particle astrophysics and cosmology. The workshop was organized around five research areas with associated panels. Three of these, "High Energy Theoretical Physics," "Accelerator Simulation," and "Experimental Particle Physics," addressed research of the Office of High Energy Physics’ Energy and Intensity Frontiers, while the "Cosmology and Astrophysics Simulation" and "Astrophysics Data Handling, Archiving, and Mining" panels were associated with the Cosmic Frontier.

  10. Protein Simulation Data in the Relational Model.

    Science.gov (United States)

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large and multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
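
    As a rough illustration of what a dimensional (star-schema-style) layout for trajectory data looks like, the sketch below builds a tiny example in SQLite from Python. The table and column names are hypothetical; the warehouse described in the paper is implemented in SQL Server with a far richer design.

    ```python
    # Minimal sketch of a dimensional layout for MD trajectory data (hypothetical schema).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    # Dimension table: one row per simulation run.
    cur.execute("""CREATE TABLE simulation (
        sim_id INTEGER PRIMARY KEY, protein TEXT, temperature_k REAL)""")
    # Fact table: one row per (simulation, time step, residue) observation.
    cur.execute("""CREATE TABLE residue_fact (
        sim_id INTEGER REFERENCES simulation(sim_id),
        step INTEGER, residue INTEGER, rmsd_a REAL, sasa_a2 REAL)""")

    cur.execute("INSERT INTO simulation VALUES (1, '1UBQ', 298.0)")
    cur.executemany("INSERT INTO residue_fact VALUES (1, ?, ?, ?, ?)",
                    [(0, 1, 0.0, 112.3), (1, 1, 0.4, 110.9), (2, 1, 0.7, 108.2)])

    # Typical analytical query: average per-residue RMSD for each simulation.
    for row in cur.execute("""SELECT s.protein, AVG(f.rmsd_a)
                              FROM residue_fact f JOIN simulation s USING (sim_id)
                              GROUP BY s.protein"""):
        print(row)
    ```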

  11. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  12. Prospective randomized study of contrast reaction management curricula: Computer-based interactive simulation versus high-fidelity hands-on simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Carolyn L., E-mail: wangcl@uw.edu [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States); Schopp, Jennifer G.; Kani, Kimia [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States); Petscavage-Thomas, Jonelle M. [Penn State Hershey Medical Center, Department of Radiology, 500 University Drive, Hershey, PA 17033 (United States); Zaidi, Sadaf; Hippe, Dan S.; Paladin, Angelisa M.; Bush, William H. [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States)

    2013-12-01

    Purpose: We developed a computer-based interactive simulation program for teaching contrast reaction management to radiology trainees and compared its effectiveness to high-fidelity hands-on simulation training. Materials and methods: IRB-approved, HIPAA-compliant prospective study of 44 radiology residents, fellows, and faculty who were randomized into either the high-fidelity hands-on simulation group or the computer-based simulation group. All participants took separate written tests prior to and immediately after their intervention. Four months later participants took a delayed written test and a hands-on high-fidelity severe contrast reaction scenario performance test graded on predefined critical actions. Results: There was no statistically significant difference between the computer and hands-on groups' written pretest, immediate post-test, or delayed post-test scores (p > 0.6 for all). Both groups' scores improved immediately following the intervention (p < 0.001). The delayed test scores 4 months later were still significantly higher than the pre-test scores (p ≤ 0.02). The computer group's performance was similar to the hands-on group's on the severe contrast reaction simulation scenario test (p = 0.7). There were also no significant differences between the computer and hands-on groups in performance on the individual core competencies of contrast reaction management during the contrast reaction scenario. Conclusion: It is feasible to develop a computer-based interactive simulation program to teach contrast reaction management. Trainees who underwent computer-based simulation training scored similarly on written tests and on a hands-on high-fidelity severe contrast reaction scenario performance test to those trained with hands-on high-fidelity simulation.

  13. Prospective randomized study of contrast reaction management curricula: Computer-based interactive simulation versus high-fidelity hands-on simulation

    International Nuclear Information System (INIS)

    Wang, Carolyn L.; Schopp, Jennifer G.; Kani, Kimia; Petscavage-Thomas, Jonelle M.; Zaidi, Sadaf; Hippe, Dan S.; Paladin, Angelisa M.; Bush, William H.

    2013-01-01

    Purpose: We developed a computer-based interactive simulation program for teaching contrast reaction management to radiology trainees and compared its effectiveness to high-fidelity hands-on simulation training. Materials and methods: IRB-approved, HIPAA-compliant prospective study of 44 radiology residents, fellows, and faculty who were randomized into either the high-fidelity hands-on simulation group or the computer-based simulation group. All participants took separate written tests prior to and immediately after their intervention. Four months later participants took a delayed written test and a hands-on high-fidelity severe contrast reaction scenario performance test graded on predefined critical actions. Results: There was no statistically significant difference between the computer and hands-on groups' written pretest, immediate post-test, or delayed post-test scores (p > 0.6 for all). Both groups' scores improved immediately following the intervention (p < 0.001). The delayed test scores 4 months later were still significantly higher than the pre-test scores (p ≤ 0.02). The computer group's performance was similar to the hands-on group's on the severe contrast reaction simulation scenario test (p = 0.7). There were also no significant differences between the computer and hands-on groups in performance on the individual core competencies of contrast reaction management during the contrast reaction scenario. Conclusion: It is feasible to develop a computer-based interactive simulation program to teach contrast reaction management. Trainees who underwent computer-based simulation training scored similarly on written tests and on a hands-on high-fidelity severe contrast reaction scenario performance test to those trained with hands-on high-fidelity simulation.

  14. The Effect of High and Low Antiepileptic Drug Dosage on Simulated Driving Performance in Persons with Seizures: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Alexander M. Crizzle

    2015-10-01

    Background: Prior studies examining driving performance have not examined the effects of antiepileptic drugs (AEDs) or their dosages in persons with epilepsy. AEDs are the primary form of treatment to control seizures, but they have been shown to affect cognition, attention, and vision, all of which may impair driving. The purpose of this study was to describe the effects of high and low AED dosages on simulated driving performance in persons with seizures. Method: Patients (N = 11; mean age 42.1 ± 6.3; 55% female; 100% Caucasian) were recruited from the Epilepsy Monitoring Unit and had their driving assessed on a simulator. Results: No differences emerged in total or specific types of driving errors between high and low AED dosages. However, high AED dosage was significantly associated with errors of lane maintenance (r = .67, p < .05) and gap acceptance (r = .66, p < .05). The findings suggest that higher AED dosages may adversely affect driving performance, irrespective of a diagnosis of epilepsy, conversion disorder, or other medical conditions. Conclusion: Future studies with larger samples are required to examine whether AED dosage or seizure focus alone can impair driving performance in persons with and without seizures.

  15. Reusable Object-Oriented Solutions for Numerical Simulation of PDEs in a High Performance Environment

    Directory of Open Access Journals (Sweden)

    Andrea Lani

    2006-01-01

    Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability in order to ease the integration of new functionalities and algorithms. When designing such frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms, respectively.

  16. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
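
    As a toy stand-in for the kind of log ingestion and feature extraction such a framework automates, the snippet below aggregates per-node I/O wait from a small in-memory log table with pandas. The field names and numbers are invented for illustration and are unrelated to the PTF or NERSC logs analyzed in the paper.

    ```python
    # Toy stand-in for log ingestion and feature extraction (invented field names).
    import pandas as pd

    records = pd.DataFrame({
        "node":      ["n01", "n01", "n02", "n02", "n03"],
        "task":      ["ingest", "photometry", "ingest", "photometry", "photometry"],
        "cpu_s":     [12.0, 340.0, 11.5, 355.0, 362.0],
        "io_wait_s": [4.0, 21.0, 3.8, 95.0, 22.0],
    })

    # Per-node features: total I/O wait and its share of total measured time.
    features = records.groupby("node").agg(total_io_wait=("io_wait_s", "sum"),
                                           total_cpu=("cpu_s", "sum"))
    features["io_fraction"] = features["total_io_wait"] / (features["total_io_wait"] + features["total_cpu"])
    print(features.sort_values("io_fraction", ascending=False))  # n02 stands out as I/O-bound
    ```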

  17. Extended-Term Dynamic Simulations with High Penetrations of Photovoltaic Generation.

    Energy Technology Data Exchange (ETDEWEB)

    Concepcion, Ricky James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Ryan Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Donnelly, Matt [Montana Tech., Butte, MT (United States); Sanchez-Gasca, Juan [GE Energy, Schenectady, NY (United States)

    2016-01-01

    The uncontrolled intermittent availability of renewable energy sources makes integration of such devices into today's grid a challenge. Thus, it is imperative that dynamic simulation tools used to analyze power system performance are able to support systems with high amounts of photovoltaic (PV) generation. Additionally, simulation durations expanding beyond minutes into hours must be supported. This report aims to identify the path forward for dynamic simulation tools to accommodate these needs by characterizing the properties of power systems (with high PV penetration), analyzing how these properties affect dynamic simulation software, and offering solutions for potential problems. We present a study of fixed time step, explicit numerical integration schemes that may be more suitable for these goals, based on identified requirements for simulating high PV penetration systems. We also present the alternative of variable time step integration. To help determine the characteristics of systems with high PV generation, we performed small signal stability studies and time domain simulations of two representative systems. Along with feedback from stakeholders and vendors, we identify the current gaps in power system modeling including fast and slow dynamics and propose a new simulation framework to improve our ability to model and simulate longer-term dynamics.

  18. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  19. The Fuel Accident Condition Simulator (FACS) furnace system for high temperature performance testing of VHTR fuel

    Energy Technology Data Exchange (ETDEWEB)

    Demkowicz, Paul A., E-mail: paul.demkowicz@inl.gov [Idaho National Laboratory, 2525 Fremont Avenue, MS 3860, Idaho Falls, ID 83415-3860 (United States); Laug, David V.; Scates, Dawn M.; Reber, Edward L.; Roybal, Lyle G.; Walter, John B.; Harp, Jason M. [Idaho National Laboratory, 2525 Fremont Avenue, MS 3860, Idaho Falls, ID 83415-3860 (United States); Morris, Robert N. [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States)

    2012-10-15

    Highlights: • A system has been developed for safety testing of irradiated coated particle fuel. • The FACS system is designed to facilitate remote operation in a shielded hot cell. • The system will measure release of fission gases and condensable fission products. • Fuel performance can be evaluated at temperatures as high as 2000 °C in flowing helium. Abstract: The AGR-1 irradiation of TRISO-coated particle fuel specimens was recently completed and represents the most successful such irradiation in US history, reaching peak burnups of greater than 19% FIMA with zero failures out of 300,000 particles. An extensive post-irradiation examination (PIE) campaign will be conducted on the AGR-1 fuel in order to characterize the irradiated fuel properties, assess the in-pile fuel performance in terms of coating integrity and fission metals release, and determine the fission product retention behavior during high temperature safety testing. A new furnace system has been designed, built, and tested to perform high temperature accident tests. The Fuel Accident Condition Simulator furnace system is designed to heat fuel specimens at temperatures up to 2000 °C in helium while monitoring the release of volatile fission metals (e.g. Cs, Ag, Sr, and Eu), iodine, and fission gases (Kr, Xe). Fission gases released from the fuel to the sweep gas are monitored in real time using dual cryogenic traps fitted with high purity germanium detectors. Condensable fission products are collected on a plate attached to a water-cooled cold finger that can be exchanged periodically without interrupting the test. Analysis of fission products on the condensation plates involves dry gamma counting followed by chemical analysis of selected isotopes. This paper will describe design and operational details of the Fuel Accident Condition Simulator furnace system and the associated

  20. Intra-EVA Space-to-Ground Interactions when Conducting Scientific Fieldwork Under Simulated Mars Mission Constraints

    Science.gov (United States)

    Beaton, Kara H.; Chappell, Steven P.; Abercromby, Andrew F. J.; Lim, Darlene S. S.

    2018-01-01

    The Biologic Analog Science Associated with Lava Terrains (BASALT) project is a four-year program dedicated to iteratively designing, implementing, and evaluating concepts of operations (ConOps) and supporting capabilities to enable and enhance scientific exploration for future human Mars missions. The BASALT project has incorporated three field deployments during which real (non-simulated) biological and geochemical field science have been conducted at two high-fidelity Mars analog locations under simulated Mars mission conditions, including communication delays and data transmission limitations. BASALT's primary Science objective has been to extract basaltic samples for the purpose of investigating how microbial communities and habitability correlate with the physical and geochemical characteristics of chemically altered basalt environments. Field sites include the active East Rift Zone on the Big Island of Hawai'i, reminiscent of early Mars when basaltic volcanism and interaction with water were widespread, and the dormant eastern Snake River Plain in Idaho, similar to present-day Mars where basaltic volcanism is rare and most evidence for volcano-driven hydrothermal activity is relict. BASALT's primary Science Operations objective has been to investigate exploration ConOps and capabilities that facilitate scientific return during human-robotic exploration under Mars mission constraints. Each field deployment has consisted of ten extravehicular activities (EVAs) on the volcanic flows in which crews of two extravehicular and two intravehicular crewmembers conducted the field science while communicating across time delay and under bandwidth constraints with an Earth-based Mission Support Center (MSC) comprised of expert scientists and operators. Communication latencies of 5 and 15 min one-way light time and low (0.512 Mb/s uplink, 1.54 Mb/s downlink) and high (5.0 Mb/s uplink, 10.0 Mb/s downlink) bandwidth conditions were evaluated. EVA crewmembers communicated

  1. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,024 Intel Itanium processors) system. The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses in any modern computer room.

  2. Scientific Letter: High-intent suicide and the Beck's Suicide Intent ...

    African Journals Online (AJOL)

    Scientific Letter: High-intent suicide and the Beck's Suicide Intent scale: a case report. African Journal of Psychiatry. Scientific Letter - No Abstract Available.

  3. Studying Scientific Discovery by Computer Simulation.

    Science.gov (United States)

    1983-03-30

    Examples of scientific discovery studied include Mendel's laws of inheritance, the law of Gay-Lussac for gaseous reactions, the law of Dulong and Petit, and the derivation of atomic weights by Avogadro. Keywords: scientific discovery, intrinsic properties, physical laws, extensive terms, intensive terms, data-driven heuristics, theory-driven heuristics, conservation laws.

  4. High-Performance Matrix-Vector Multiplication on the GPU

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...

  5. Equipment and performance upgrade of compact nuclear simulator

    International Nuclear Information System (INIS)

    Park, J. C.; Kwon, K. C.; Lee, D. Y.; Hwang, I. K.; Park, W. M.; Cha, K. H.; Song, S. J.; Lee, J. W.; Kim, B. G.; Kim, H. J.

    1999-01-01

    The simulator at the Nuclear Training Center in KAERI became old and had not been used effectively for nuclear-related training and research due to problems such as aging of the equipment, the difficulty and high cost of obtaining consumables, and the shrinking number of personnel able to handle the old equipment. To solve these problems, this study recovered the functions of the simulator through the technical design and replacement of components with new ones. Tests after the replacement showed the same simulation status as before, and the new graphic displays added to the simulator were effective for training and easy to maintain. This study is meaningful in demonstrating a way of upgrading nuclear training simulators that have lost their function due to obsolescence and the unavailability of components.

  6. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  7. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe's leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. Integrated plasma control for high performance tokamaks

    International Nuclear Information System (INIS)

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)

  9. A Simulation Approach for Performance Validation during Embedded Systems Design

    Science.gov (United States)

    Wang, Zhonglei; Haberl, Wolfgang; Herkersdorf, Andreas; Wechs, Martin

    Due to the time-to-market pressure, it is highly desirable to design hardware and software of embedded systems in parallel. However, hardware and software are developed mostly using very different methods, so that performance evaluation and validation of the whole system is not an easy task. In this paper, we propose a simulation approach to bridge the gap between model-driven software development and simulation based hardware design, by merging hardware and software models into a SystemC based simulation environment. An automated procedure has been established to generate software simulation models from formal models, while the hardware design is originally modeled in SystemC. As the simulation models are annotated with timing information, performance issues are tackled in the same pass as system functionality, rather than in a dedicated approach.

  10. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    Science.gov (United States)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

    In 2015, the National Autonomous University of Mexico (UNAM) joined the family of Universities and Research Centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, as well as business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services that spans a variety of areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome related studies, geosciences, geography, physics and mathematics related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the 3D fully immersive display system Cave, the high resolution parallel visualization system Powerwall, and the high resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance-computing-cluster (HPCC) called ADA in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra large 3.6m wide room with projected images on the front, left and right, as well as floor walls. Specialized crystal eyes LCD-shutter glasses provide a strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization like geophysical, meteorological, climate and ecology data. The HPCC-ADA is a 1000+ computing-core system, which offers parallel computing resources to applications that require

  11. Doctors' stress responses and poor communication performance in simulated bad-news consultations.

    Science.gov (United States)

    Brown, Rhonda; Dunn, Stewart; Byrnes, Karen; Morris, Richard; Heinrich, Paul; Shaw, Joanne

    2009-11-01

    No studies have previously evaluated factors associated with high stress levels and poor communication performance in breaking bad news (BBN) consultations. This study determined factors that were most strongly related to doctors' stress responses and poor communication performance during a simulated BBN task. In 2007, the authors recruited 24 doctors comprising 12 novices (i.e., interns/residents with 1-3 years' experience) and 12 experts (i.e., registrars, medical/radiation oncologists, or cancer surgeons, with more than 4 years' experience). Doctors participated in simulated BBN consultations and a number of control tasks. Five-minute-epoch heart rate (HR), HR variability, and communication performance were assessed in all participants. Subjects also completed a short questionnaire asking about their prior experience BBN, perceived stress, psychological distress (i.e., anxiety, depression), fatigue, and burnout. High stress responses were related to inexperience with BBN, fatigue, and giving bad versus good news. Poor communication performance in the consultation was related to high burnout and fatigue scores. These results suggest that BBN was a stressful experience for doctors even in a simulated encounter, especially for those who were inexperienced and/or fatigued. Poor communication performance was related to burnout and fatigue, but not inexperience with BBN. These results likely indicate that burnout and fatigue contributed to stress and poor work performance in some doctors during the simulated BBN task.

  12. Are Cloud Environments Ready for Scientific Applications?

    Science.gov (United States)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments, as evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to

  13. A Computational Framework for Efficient Low Temperature Plasma Simulations

    Science.gov (United States)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems comprising LTPs. Salient introductory features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested based on numerical results assessing the accuracy and efficiency of benchmarks for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  14. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes, and the MEIKO computing surface; shared-memory, bus architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  15. Scientific Modeling and simulations

    CERN Document Server

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power through simulations, allow scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments.

  16. H5Part: A Portable High Performance Parallel Data Interface for Particle Simulations

    CERN Document Server

    Adelmann, Andreas; Shalf, John M; Siegerist, Cristina

    2005-01-01

    The largest parallel particle simulations, in six-dimensional phase space, generate vast amounts of data. It is also desirable to share data and data analysis tools such as ParViT (Particle Visualization Toolkit) among other groups who are working on particle-based accelerator simulations. We define a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5) as well as an API that simplifies the reading/writing of the data to the HDF5 file format. HDF5 offers a self-describing, machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on a variety of supercomputing systems and works equally well on laptop computers. The API is available for C, C++, and Fortran codes. The file format will enable disparate research groups with very different simulation implementations to share data transparently and share data analysis tools. For instance, the common file format will enable groups that depend on completely different simulation implementations to share c...
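
    The essence of such a schema is one HDF5 group per time step containing one dataset per particle attribute. The sketch below mimics that layout from Python using h5py rather than the H5Part C/C++/Fortran API; the group and dataset names are an approximation for illustration, and the example writes serially rather than through MPI parallel I/O.

    ```python
    # Sketch of an H5Part-like file layout written with h5py (approximate schema, serial I/O).
    import numpy as np
    import h5py

    n_particles, n_steps = 1000, 3
    with h5py.File("particles.h5", "w") as f:
        for step in range(n_steps):
            g = f.create_group(f"Step#{step}")            # one group per time step
            for name in ("x", "y", "z", "px", "py", "pz"):
                g.create_dataset(name, data=np.random.rand(n_particles))

    # Reading one attribute at one step back is a single dataset access.
    with h5py.File("particles.h5", "r") as f:
        x0 = f["Step#0/x"][:]
        print(x0.shape)
    ```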

  17. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    Science.gov (United States)

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. Central to StagBL is the notion of an

  18. Performance-Driven Interface Contract Enforcement for Scientific Components

    Energy Technology Data Exchange (ETDEWEB)

    Dahlgren, Tamara Lynn [Univ. of California, Davis, CA (United States)

    2008-01-01

    Performance-driven interface contract enforcement research aims to improve the quality of programs built from plug-and-play scientific components. Interface contracts make the obligations on the caller and all implementations of the specified methods explicit. Runtime contract enforcement is a well-known technique for enhancing testing and debugging. However, checking all of the associated constraints during deployment is generally considered too costly from a performance standpoint. Previous solutions enforced subsets of constraints without explicit consideration of their performance implications. Hence, this research measures the impacts of different interface contract sampling strategies and compares results with new techniques driven by execution time estimates. Results from three studies indicate automatically adjusting the level of checking based on performance constraints improves the likelihood of detecting contract violations under certain circumstances. Specifically, performance-driven enforcement is better suited to programs exercising constraints whose costs are at most moderately expensive relative to normal program execution.
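
    The general idea of performance-driven enforcement can be sketched as a wrapper that tracks how much time contract checking has cost relative to useful work and skips checks once a user-supplied overhead budget is exceeded. The decorator below is a conceptual Python illustration under that assumption, not the component-framework machinery used in the dissertation; the budget value and the example precondition are invented.

    ```python
    # Conceptual sketch: run precondition checks only while their measured cost
    # stays under a fraction (the "budget") of the time spent in real work.
    import time
    import functools

    def enforced(precondition, overhead_budget=0.05):
        def decorate(func):
            state = {"check_time": 0.0, "work_time": 0.0}

            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                within_budget = state["check_time"] <= overhead_budget * max(state["work_time"], 1e-9)
                if within_budget:
                    t0 = time.perf_counter()
                    if not precondition(*args, **kwargs):
                        raise ValueError(f"contract violation calling {func.__name__}")
                    state["check_time"] += time.perf_counter() - t0
                t0 = time.perf_counter()
                result = func(*args, **kwargs)
                state["work_time"] += time.perf_counter() - t0
                return result
            return wrapper
        return decorate

    @enforced(lambda xs, **kw: all(b >= a for a, b in zip(xs, xs[1:])))  # require sorted input
    def count_below(xs, target=0.5):
        return sum(1 for x in xs if x < target)

    print(count_below([0.1, 0.2, 0.9]))
    ```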

  19. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
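
    For orientation, the 8 TB figure is consistent with 512^3 × 64 × 128 = 2^40 values stored as 8-byte doubles, i.e. 2^43 bytes (8 TiB). The snippet below is a small, dense NumPy illustration of the Tucker idea via truncated HOSVD; it is not the paper's distributed-memory algorithm, and the tensor sizes, ranks, and noise level are chosen arbitrarily so it runs quickly on one node.

    ```python
    # Toy Tucker decomposition via truncated HOSVD (dense, single node; illustrative only).
    import numpy as np

    def unfold(t, mode):
        """Mode-n matricization of tensor t."""
        return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

    def mode_mult(t, m, mode):
        """Multiply tensor t by matrix m along the given mode."""
        folded = np.moveaxis(t, mode, 0)
        out = (m @ folded.reshape(folded.shape[0], -1)).reshape((m.shape[0],) + folded.shape[1:])
        return np.moveaxis(out, 0, mode)

    rng = np.random.default_rng(0)
    dims, ranks = (40, 40, 40, 8, 16), (10, 10, 10, 4, 8)  # stand-in for (grid, grid, grid, vars, time)

    # Build a test tensor with known low multilinear rank plus a little noise.
    X = rng.standard_normal(ranks)
    for n, dim in enumerate(dims):
        X = mode_mult(X, np.linalg.qr(rng.standard_normal((dim, ranks[n])))[0], n)
    X += 0.001 * rng.standard_normal(X.shape)

    # Truncated HOSVD: leading left singular vectors of each unfolding, then project.
    factors = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r] for n, r in enumerate(ranks)]
    core = X
    for n, U in enumerate(factors):
        core = mode_mult(core, U.T, n)

    # Reconstruct to check the loss, and report the compression ratio.
    X_hat = core
    for n, U in enumerate(factors):
        X_hat = mode_mult(X_hat, U, n)
    stored = core.size + sum(U.size for U in factors)
    print("compression ratio:", X.size / stored)
    print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
    ```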

  20. Confidence in Numerical Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to “forecast,” that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists “think.” This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions; then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. “Confidence” derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  1. Performance measurement system for training simulators. Interim report

    International Nuclear Information System (INIS)

    Bockhold, G. Jr.; Roth, D.R.

    1978-05-01

    In the first project phase, the project team has designed, installed, and test run on the Browns Ferry nuclear power plant training simulator a performance measurement system capable of automatic recording of statistical information on operator actions and plant response. Key plant variables and operator actions were monitored and analyzed by the simulator computer for a selected set of four operating and casualty drills. The project has the following objectives: (1) To provide an empirical data base for statistical analysis of operator reliability and for allocation of safety and control functions between operators and automated controls; (2) To develop a method for evaluation of the effectiveness of control room designs and operating procedures; and (3) To develop a system for scoring aspects of operator performance to assist in training evaluations and to support operator selection research. The performance measurement system has shown potential for meeting the research objectives. However, the cost of training simulator time is high; to keep research program costs reasonable, the measurement system is being designed to be an integral part of operator training programs. In the pilot implementation, participating instructors judged the measurement system to be a valuable and objective extension of their abilities to monitor trainee performance

  2. Simulation of lean premixed turbulent combustion

    International Nuclear Information System (INIS)

    Bell, J; Day, M; Almgren, A; Lijewski, M; Rendleman, C; Cheng, R; Shepherd, I

    2006-01-01

    There is considerable technological interest in developing new fuel-flexible combustion systems that can burn fuels such as hydrogen or syngas. Lean premixed systems have the potential to burn these types of fuels with high efficiency and low NOx emissions due to reduced burnt gas temperatures. Although traditional scientific approaches based on theory and laboratory experiment have played essential roles in developing our current understanding of premixed combustion, they are unable to meet the challenges of designing fuel-flexible lean premixed combustion devices. Computation, with its ability to deal with complexity and its unlimited access to data, has the potential for addressing these challenges. Realizing this potential requires the ability to perform high-fidelity simulations of turbulent lean premixed flames under realistic conditions. In this paper, we examine the specialized mathematical structure of these combustion problems and discuss simulation approaches that exploit this structure. Using these ideas we can dramatically reduce computational cost, making it possible to perform high-fidelity simulations of realistic flames. We illustrate this methodology by considering ultra-lean hydrogen flames and discuss how this type of simulation is changing the way researchers study combustion.

  3. Understanding the Impact of an Apprenticeship-Based Scientific Research Program on High School Students' Understanding of Scientific Inquiry

    Science.gov (United States)

    Aydeniz, Mehmet; Baksa, Kristen; Skinner, Jane

    2011-01-01

    The purpose of this study was to understand the impact of an apprenticeship program on high school students' understanding of the nature of scientific inquiry. Data related to seventeen students' understanding of science and scientific inquiry were collected through open-ended questionnaires. Findings suggest that although engagement in authentic…

  4. Applying GIS and high performance agent-based simulation for managing an Old World Screwworm fly invasion of Australia.

    Science.gov (United States)

    Welch, M C; Kwan, P W; Sajeev, A S M

    2014-10-01

    Agent-based modelling has proven to be a promising approach for developing rich simulations for complex phenomena that provide decision support functions across a broad range of areas including the biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries. Copyright © 2014 International Atomic Energy Agency. Published by Elsevier B.V. All rights reserved.
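
    Stripped of the GPU and GIS layers, the core of such a model is a per-agent stochastic update of survival, lifecycle stage, and dispersal. The NumPy sketch below illustrates one such daily step with invented survival probabilities and dispersal distances; it is not the OWS fly parameterization used in the paper, and it runs on the CPU rather than via CUDA.

    ```python
    # Toy individual-based lifecycle/dispersal step (invented parameters; CPU NumPy, not CUDA).
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    x = rng.uniform(0.0, 500.0, n)                    # positions on a 500 x 500 km region
    y = rng.uniform(0.0, 500.0, n)
    stage = rng.integers(0, 4, n)                     # 0=egg, 1=larva, 2=pupa, 3=gravid adult
    survival = np.array([0.80, 0.70, 0.90, 0.85])     # made-up per-stage daily survival

    def daily_step(x, y, stage):
        alive = rng.random(stage.size) < survival[stage]      # stochastic mortality
        x, y, stage = x[alive], y[alive], stage[alive]
        adults = stage == 3
        # Gravid adults disperse with an exponential flight-distance kernel (made-up 10 km scale).
        dist = rng.exponential(10.0, adults.sum())
        angle = rng.uniform(0.0, 2.0 * np.pi, adults.sum())
        x[adults] += dist * np.cos(angle)
        y[adults] += dist * np.sin(angle)
        stage = np.minimum(stage + 1, 3)                      # advance one lifecycle stage
        return x, y, stage

    for day in range(30):
        x, y, stage = daily_step(x, y, stage)
    print("agents alive after 30 days:", x.size)
    ```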

  5. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  6. Effects of Dietary Nitrate Supplementation on Physiological Responses, Cognitive Function, and Exercise Performance at Moderate and Very-High Simulated Altitude

    Directory of Open Access Journals (Sweden)

    Oliver M. Shannon

    2017-06-01

    Purpose: Nitric oxide (NO) bioavailability is reduced during acute altitude exposure, contributing toward the decline in physiological and cognitive function in this environment. This study evaluated the effects of nitrate (NO3−) supplementation on NO bioavailability, physiological and cognitive function, and exercise performance at moderate and very-high simulated altitude. Methods: Ten males (mean (SD) V˙O2max: 60.9 (10.1) ml·kg−1·min−1) rested and performed exercise twice at moderate (~14.0% O2; ~3,000 m) and twice at very-high (~11.7% O2; ~4,300 m) simulated altitude. Participants ingested either 140 ml of concentrated NO3−-rich (BRJ; ~12.5 mmol NO3−) or NO3−-deplete (PLA; 0.01 mmol NO3−) beetroot juice 2 h before each trial. Participants rested for 45 min in normobaric hypoxia prior to completing an exercise task. Exercise comprised a 45 min walk at 30% V˙O2max and a 3 km time-trial (TT), both conducted on a treadmill at a 10% gradient whilst carrying a 10 kg backpack to simulate altitude hiking. Plasma nitrite concentration ([NO2−]), peripheral oxygen saturation (SpO2), pulmonary oxygen uptake (V˙O2), muscle and cerebral oxygenation, and cognitive function were measured throughout. Results: Pre-exercise plasma [NO2−] was significantly elevated in BRJ compared with PLA (p = 0.001). Pulmonary V˙O2 was reduced (p = 0.020), and SpO2 was elevated (p = 0.005), during steady-state exercise in BRJ compared with PLA, with similar effects at both altitudes. BRJ supplementation enhanced 3 km TT performance relative to PLA by 3.8% [1,653.9 (261.3) vs. 1,718.7 (213.0) s] and 4.2% [1,809.8 (262.0) vs. 1,889.1 (203.9) s] at 3,000 and 4,300 m, respectively (p = 0.019). Oxygenation of the gastrocnemius was elevated during the TT consequent to BRJ (p = 0.011). The number of false alarms during the Rapid Visual Information Processing Task tended to be lower with BRJ compared with PLA prior to altitude exposure (p = 0.056). Performance in all other cognitive tasks

  7. Importance of debriefing in high-fidelity simulations

    Directory of Open Access Journals (Sweden)

    Igor Karnjuš

    2014-04-01

    Debriefing has been identified as one of the most important parts of a high-fidelity simulation learning process. During debriefing, the mentor invites learners to critically assess the knowledge and skills used during the execution of a scenario. Regardless of the abundance of studies that have examined simulation-based education, debriefing is still poorly defined. The present article examines the essential features of debriefing, its phases, techniques and methods through a systematic review of recent publications. It emphasizes the mentor's role, since the effectiveness of debriefing largely depends on the mentor's skills in conducting it. Guidelines that allow the mentor to evaluate his or her performance in conducting debriefing are also presented. We underline the importance of debriefing in clinical settings as part of a continuous learning process. Debriefing allows medical teams to assess their performance and develop new strategies to achieve higher competencies. Although debriefing is the cornerstone of the high-fidelity simulation learning process, it also represents an important learning strategy in the clinical setting. Many important aspects of debriefing are still poorly explored and understood; therefore, this part of the learning process should be given greater attention in the future.

  8. A predictive analytic model for high-performance tunneling field-effect transistors approaching non-equilibrium Green's function simulations

    International Nuclear Information System (INIS)

    Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard

    2015-01-01

    A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that provides deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact the TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.

  9. Multi-Bunch Simulations of the ILC for Luminosity Performance Studies

    CERN Document Server

    White, Glen; Walker, Nicholas J

    2005-01-01

    To study the luminosity performance of the International Linear Collider (ILC) with different design parameters, a simulation was constructed that tracks a multi-bunch representation of the beam from the Damping Ring extraction through to the Interaction Point. The simulation code PLACET is used to simulate the LINAC, MatMerlin is used to track through the Beam Delivery System and GUINEA-PIG for the beam-beam interaction. Included in the simulation are ground motion and wakefield effects, intra-train fast feedback and luminosity-based feedback systems. To efficiently study multiple parameters/multiple seeds, the simulation is deployed on the Queen Mary High-Throughput computing cluster at Queen Mary, University of London, where 100 simultaneous simulation seeds can be run.

  10. INEX simulations of the optical performance of the AFEL

    International Nuclear Information System (INIS)

    Goldstein, J.C.; Wang, T.S.F.; Sheffield, R.L.

    1991-01-01

    The AFEL (Advanced Free-Electron Laser) Project at Los Alamos National Laboratory is presently under construction. The project's goal is to produce a very high-brightness electron beam which will be generated by a photocathode injector and a 20 MeV rf-linac. Initial laser experiments will be performed with a 1-cm-period permanent magnet wiggler which will generate intense optical radiation near a wavelength of 3.7 μm. Future experiments will operate with "slotted-tube" electromagnetic wigglers (formerly called "pulsed-wire" wigglers). Experiments at both fundamental and higher-harmonic wavelengths are planned. This paper presents results of INEX (Integrated Numerical EXperiment) simulations of the optical performance of the AFEL. These simulations use the electron micropulse produced by the accelerator/beam transport code PARMELA in the 3-D FEL simulation code FELEX. 9 refs., 4 figs., 6 tabs

  11. A High-Fidelity Batch Simulation Environment for Integrated Batch and Piloted Air Combat Simulation Analysis

    Science.gov (United States)

    Goodrich, Kenneth H.; McManus, John W.; Chappell, Alan R.

    1992-01-01

    A batch air combat simulation environment known as the Tactical Maneuvering Simulator (TMS) is presented. The TMS serves as a tool for developing and evaluating tactical maneuvering logics. The environment can also be used to evaluate the tactical implications of perturbations to aircraft performance or supporting systems. The TMS is capable of simulating air combat between any number of engagement participants, with practical limits imposed by computer memory and processing power. Aircraft are modeled using equations of motion, control laws, aerodynamics and propulsive characteristics equivalent to those used in high-fidelity piloted simulation. Databases representative of a modern high-performance aircraft with and without thrust-vectoring capability are included. To simplify the task of developing and implementing maneuvering logics in the TMS, an outer-loop control system known as the Tactical Autopilot (TA) is implemented in the aircraft simulation model. The TA converts guidance commands issued by computerized maneuvering logics, in the form of desired angle of attack and wind-axis bank angle, into inputs to the inner-loop control augmentation system of the aircraft. This report describes the capabilities and operation of the TMS.
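
    As a loose illustration of the outer-loop idea described above (not the TA control laws themselves), the Python sketch below converts desired angle of attack and wind-axis bank angle into rate commands for a notional inner-loop control augmentation system using simple proportional gains; all gains, limits and names are assumptions:

        import math

        # Illustrative gains and limits (assumed, not taken from the TMS/TA design)
        K_ALPHA, K_PHI = 2.0, 1.5        # rad/s of rate command per rad of error
        Q_LIMIT, P_LIMIT = 0.5, 1.5      # rad/s command limits passed to the inner loop

        def clamp(x, limit):
            return max(-limit, min(limit, x))

        def outer_loop(alpha_cmd, alpha, phi_cmd, phi):
            """Map guidance commands (desired angle of attack and wind-axis bank angle,
            in rad) to pitch-rate and roll-rate commands (rad/s) for the inner loop."""
            q_cmd = clamp(K_ALPHA * (alpha_cmd - alpha), Q_LIMIT)
            p_cmd = clamp(K_PHI * (phi_cmd - phi), P_LIMIT)
            return q_cmd, p_cmd

        # Example: a maneuvering logic requests 12 deg angle of attack and 60 deg of bank.
        print(outer_loop(math.radians(12), math.radians(8),
                         math.radians(60), math.radians(20)))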

  12. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  13. How does the entrepreneurial orientation of scientists affect their scientific performance? Evidence from the Quadrant Model

    OpenAIRE

    Naohiro Shichijo; Silvia Rita Sedita; Yasunori Baba

    2013-01-01

    Using Stokes's (1997) "quadrant model of scientific research", this paper deals with how the entrepreneurial orientation of scientists affects their scientific performance by considering its impact on scientific production (number of publications), scientific prestige (number of forward citations), and breadth of research activities (interdisciplinarity). The results of a quantitative analysis applied to a sample of 1,957 scientific papers published by 66 scientists active in advanced materia...

  14. HIGH PERFORMANCE ADVANCED TOKAMAK REGIMES FOR NEXT-STEP EXPERIMENTS

    International Nuclear Information System (INIS)

    GREENFIELD, C.M.; MURAKAMI, M.; FERRON, J.R.; WADE, M.R.; LUCE, T.C.; PETTY, C.C.; MENARD, J.E; PETRIE, T.W.; ALLEN, S.L.; BURRELL, K.H.; CASPER, T.A; DeBOO, J.C.; DOYLE, E.J.; GAROFALO, A.M; GORELOV, Y.A; GROEBNER, R.J.; HOBIRK, J.; HYATT, A.W; JAYAKUMAR, R.J; KESSEL, C.E; LA HAYE, R.J; JACKSON, G.L; LOHR, J.; MAKOWSKI, M.A.; PINSKER, R.I.; POLITZER, P.A.; PRATER, R.; STRAIT, E.J.; TAYLOR, T.S; WEST, W.P.

    2003-01-01

    OAK-B135 Advanced Tokamak (AT) research in DIII-D seeks to provide a scientific basis for steady-state high performance operation in future devices. These regimes require high toroidal beta to maximize fusion output and poloidal beta to maximize the self-driven bootstrap current. Achieving these conditions requires integrated, simultaneous control of the current and pressure profiles, and active magnetohydrodynamic (MHD) stability control. The building blocks for AT operation are in hand. Resistive wall mode stabilization via plasma rotation and active feedback with non-axisymmetric coils allows routine operation above the no-wall beta limit. Neoclassical tearing modes are stabilized by active feedback control of localized electron cyclotron current drive (ECCD). Plasma shaping and profile control provide further improvements. Under these conditions, bootstrap supplies most of the current. Steady-state operation requires replacing the remaining Ohmic current, mostly located near the half-radius, with noninductive external sources. In DIII-D this current is provided by ECCD, and nearly stationary AT discharges have been sustained with little remaining Ohmic current. Fast wave current drive is being developed to control the central magnetic shear. Density control, with divertor cryopumps, of AT discharges with edge localized moding (ELMing) H-mode edges facilitates high current drive efficiency at reactor relevant collisionalities. A sophisticated plasma control system allows integrated control of these elements. Close coupling between modeling and experiment is key to understanding the separate elements, their complex nonlinear interactions, and their integration into self-consistent high performance scenarios. Progress on this development, and its implications for next-step devices, will be illustrated by results of recent experiment and simulation efforts

  15. Burnout among pilots: psychosocial factors related to happiness and performance at simulator training.

    Science.gov (United States)

    Demerouti, Evangelia; Veldhuis, Wouter; Coombes, Claire; Hunter, Rob

    2018-06-18

    In this study among airline pilots, we aim to uncover the work characteristics (job demands and resources) and the outcomes (job crafting, happiness and simulator training performance) that are related to burnout for this occupational group. Using a large sample of airline pilots, we showed that 40% of the participating pilots experience high burnout. In line with Job Demands-Resources theory, job demands were detrimental for simulator training performance because they made pilots more exhausted and less able to craft their job, whereas job resources had a favourable effect because they reduced feelings of disengagement and increased job crafting. Moreover, burnout was negatively related to pilots' happiness with life. These findings highlight the importance of psychosocial factors and health for valuable outcomes for both pilots and airlines. Practitioner Summary: Using an online survey among the members of a European pilots' professional association, we examined the relationship between psychosocial factors (work characteristics, burnout) and outcomes (simulator training performance, happiness). Forty per cent of the participating pilots experience high burnout. Job demands were detrimental, whereas job resources were favourable for simulator training performance/happiness. Twitter text: 40% of airline pilots experience burnout and psychosocial work factors and burnout relate to performance at pilots' simulator training.

  16. Comparison between the performance of some KEK-klystrons and simulation results

    Energy Technology Data Exchange (ETDEWEB)

    Fukuda, Shigeki [National Lab. for High Energy Physics, Tsukuba, Ibaraki (Japan)

    1997-04-01

    Recent developments in klystron simulation codes have enabled us to design klystrons realistically. This paper presents various simulation results obtained with the FCI code and the performance of tubes manufactured based on this code. Upgrading a 30-MW S-band klystron and developing a 50-MW S-band klystron for the KEKB project are successful examples based on FCI-code predictions. Mass production of these tubes has already started. On the other hand, discrepancies have been found between the FCI simulation results and the performance of real tubes. In some cases the simulations predict high efficiency, while the manufactured tubes show the usual, or a lower, efficiency. One possible cause is a data mismatch between the electron-gun simulation and the input data set of the FCI code for the gun region. This kind of discrepancy has been observed in 30-MW S-band pulsed tubes, sub-booster pulsed tubes and L-band high-duty pulsed klystrons. Sometimes JPNDSK (a one-dimensional disk-model code) gives similar results. Some examples using the FCI code are given in this article. The Arsenal-MSU code was applied to the 50-MW klystron in collaboration with Moscow State University; good agreement has been found between the code's predictions and the tube's performance. (author)

  17. Hypothesis testing of scientific Monte Carlo calculations

    Science.gov (United States)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
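
    The following minimal Python sketch illustrates the kind of test this record advocates: a Monte Carlo estimate of a quantity with a known reference value (here the integral of sin(x) over [0, pi], which equals 2) is checked with a z-test based on the central limit theorem. The sample size, seed and significance level are illustrative choices, not from the paper:

        import numpy as np
        from scipy import stats

        def mc_estimate(n, rng):
            """Plain Monte Carlo estimate of the integral of sin(x) over [0, pi] (exact value: 2)."""
            x = rng.uniform(0.0, np.pi, n)
            samples = np.pi * np.sin(x)              # weight for uniform sampling on [0, pi]
            return samples.mean(), samples.std(ddof=1) / np.sqrt(n)

        def z_test_against_reference(estimate, std_err, reference, alpha=0.01):
            """Two-sided z-test of H0: the estimator is unbiased with respect to the reference."""
            z = (estimate - reference) / std_err
            p_value = 2.0 * stats.norm.sf(abs(z))
            return p_value, p_value >= alpha          # True -> test passed (H0 not rejected)

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            est, err = mc_estimate(100_000, rng)
            p, passed = z_test_against_reference(est, err, reference=2.0)
            print(f"estimate = {est:.5f} +/- {err:.5f}, p = {p:.3f}, passed = {passed}")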

  18. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  19. Automating NEURON Simulation Deployment in Cloud Resources.

    Science.gov (United States)

    Stockton, David B; Santamaria, Fidel

    2017-01-01

    Simulations in neuroscience are performed on local servers or High Performance Computing (HPC) facilities. Recently, cloud computing has emerged as a potential computational platform for neuroscience simulation. In this paper we compare and contrast HPC and cloud resources for scientific computation, then report how we deployed NEURON, a widely used simulator of neuronal activity, in three clouds: Chameleon Cloud, a hybrid private academic cloud for cloud technology research based on the OpenStack software; Rackspace, a public commercial cloud, also based on OpenStack; and Amazon Elastic Compute Cloud, based on Amazon's proprietary software. We describe the manual procedures and how to automate cloud operations. We describe extending our simulation automation software called NeuroManager (Stockton and Santamaria, Frontiers in Neuroinformatics, 2015), so that the user is capable of recruiting private cloud, public cloud, HPC, and local servers simultaneously with a simple common interface. We conclude by performing several studies in which we examine speedup, efficiency, total session time, and cost for sets of simulations of a published NEURON model.
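
    As a minimal illustration of the speedup, efficiency, and cost metrics examined in this record (the formulas below are the standard strong-scaling definitions, not NeuroManager output; the timings and instance price in the example are invented):

        def parallel_metrics(t_serial_s, t_parallel_s, n_workers, price_per_worker_hour=0.0):
            """Standard strong-scaling metrics plus a simple cloud cost estimate."""
            speedup = t_serial_s / t_parallel_s
            efficiency = speedup / n_workers
            cost = n_workers * (t_parallel_s / 3600.0) * price_per_worker_hour
            return {"speedup": speedup, "efficiency": efficiency, "cost_usd": cost}

        # Example: 8 cloud instances, hypothetical timings and hourly price
        print(parallel_metrics(t_serial_s=7200, t_parallel_s=1100, n_workers=8,
                               price_per_worker_hour=0.10))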

  20. Undergraduate nursing students' performance in recognising and responding to sudden patient deterioration in high psychological fidelity simulated environments: an Australian multi-centre study.

    Science.gov (United States)

    Bogossian, Fiona; Cooper, Simon; Cant, Robyn; Beauchamp, Alison; Porter, Joanne; Kain, Victoria; Bucknall, Tracey; Phillips, Nicole M

    2014-05-01

    Early recognition and situation awareness of sudden patient deterioration, a timely appropriate clinical response, and teamwork are critical to patient outcomes. High fidelity simulated environments provide the opportunity for undergraduate nursing students to develop and refine recognition and response skills. This paper reports the quantitative findings of the first phase of a larger program of ongoing research: Feedback Incorporating Review and Simulation Techniques to Act on Clinical Trends (FIRST2ACT™). It specifically aims to identify the characteristics that may predict primary outcome measures of clinical performance, teamwork and situation awareness in the management of deteriorating patients. Mixed-method multi-centre study. High fidelity simulated acute clinical environment in three Australian universities. A convenience sample of 97 final year nursing students enrolled in an undergraduate Bachelor of Nursing or combined Bachelor of Nursing degree was included in the study. In groups of three, participants proceeded through three phases: (i) pre-briefing and completion of a multi-choice question test, (ii) three video-recorded simulated clinical scenarios where actors substituted for real patients with deteriorating conditions, and (iii) post-scenario debriefing. Clinical performance, teamwork and situation awareness were evaluated using a validated standard checklist (OSCE), Team Emergency Assessment Measure (TEAM) score sheet and Situation Awareness Global Assessment Technique (SAGAT). A Modified Angoff technique was used to establish cut points for clinical performance. Student teams engaged in 97 simulation experiences across the three scenarios and achieved a level of clinical performance consistent with the experts' identified pass level point in only 9 (1%) of the simulation experiences. Knowledge was significantly associated with overall teamwork (p=.034), overall situation awareness (p=.05) and clinical performance in two of the three scenarios
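
    The cut points mentioned above were set with a Modified Angoff technique; the sketch below shows the usual arithmetic only (cut score = mean over judges of their summed expected item ratings). The judge ratings are invented purely for illustration and are not the study's data:

        import numpy as np

        # Rows: judges; columns: checklist items. Each entry is the judge's estimate of the
        # probability that a minimally competent student performs the item correctly (assumed data).
        ratings = np.array([
            [0.8, 0.6, 0.7, 0.9, 0.5],
            [0.7, 0.5, 0.8, 0.8, 0.6],
            [0.9, 0.6, 0.6, 0.9, 0.4],
        ])

        expected_scores = ratings.sum(axis=1)      # each judge's expected total checklist score
        cut_score = expected_scores.mean()         # Angoff cut point on the checklist scale
        print(f"cut score = {cut_score:.2f} out of {ratings.shape[1]} items")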

  1. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  2. Accelerating scientific discovery : 2007 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, P.; Dave, P.; Drugan, C.

    2008-11-14

    As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the secret to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance to those using the Blue Gene/L and optimizing user applications. Both the Catalyst and Applications Performance Engineering and Data Analytics (APEDA) teams support the users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed. Our expertise in high-end scientific computing enables us to provide

  3. libRoadRunner: a high performance SBML simulation and analysis library.

    Science.gov (United States)

    Somogyi, Endre T; Bouteiller, Jean-Marie; Glazier, James A; König, Matthias; Medley, J Kyle; Swat, Maciej H; Sauro, Herbert M

    2015-10-15

    This article presents libRoadRunner, an extensible, high-performance, cross-platform, open-source software library for the simulation and analysis of models expressed using Systems Biology Markup Language (SBML). SBML is the most widely used standard for representing dynamic networks, especially biochemical networks. libRoadRunner is fast enough to support large-scale problems such as tissue models, studies that require large numbers of repeated runs and interactive simulations. libRoadRunner is a self-contained library, able to run both as a component inside other tools via its C++ and C bindings, and interactively through its Python interface. Its Python Application Programming Interface (API) is similar to the APIs of MATLAB (www.mathworks.com) and SciPy (http://www.scipy.org/), making it fast and easy to learn. libRoadRunner uses a custom Just-In-Time (JIT) compiler built on the widely used LLVM JIT compiler framework. It compiles SBML-specified models directly into native machine code for a variety of processors, making it appropriate for solving extremely large models or repeated runs. libRoadRunner is flexible, supporting the bulk of the SBML specification (except for delay and non-linear algebraic equations) including several SBML extensions (composition and distributions). It offers multiple deterministic and stochastic integrators, as well as tools for steady-state analysis, stability analysis and structural analysis of the stoichiometric matrix. libRoadRunner binary distributions are available for Mac OS X, Linux and Windows. The library is licensed under Apache License Version 2.0. libRoadRunner is also available for ARM-based computers such as the Raspberry Pi. http://www.libroadrunner.org provides online documentation, full build instructions, binaries and a git source repository. hsauro@u.washington.edu or somogyie@indiana.edu Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2015. This work is written
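
    A minimal usage sketch of the Python interface described above (the model path is a placeholder, and the calls reflect common libRoadRunner releases; argument details may vary between versions):

        import roadrunner   # distributed as the 'libroadrunner' Python package

        # Load an SBML model (placeholder path); the model is JIT-compiled to native code.
        rr = roadrunner.RoadRunner("model.xml")

        # Deterministic time course: start time, end time, number of output points.
        result = rr.simulate(0, 100, 500)

        print(rr.timeCourseSelections)   # column names of the result matrix
        print(result[:5])                # first few rows of the simulated trajectories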

  4. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
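
    As a small sketch of the hybrid message-passing/multi-threading model discussed above (not the article's Møller–Plesset code), the script below distributes block rows of a matrix product with mpi4py while each rank relies on numpy's multi-threaded BLAS within the node; the matrix size and the assumption that it divides evenly among ranks are illustrative:

        # Run with e.g.: mpiexec -n 4 python hybrid_matmul.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N = 2048                      # illustrative; assumed divisible by the number of ranks
        rows_per_rank = N // size

        if rank == 0:
            A = np.random.rand(N, N)
            B = np.random.rand(N, N)
        else:
            A = None
            B = np.empty((N, N))

        # Message passing between processes: scatter block rows of A, broadcast B.
        A_local = np.empty((rows_per_rank, N))
        comm.Scatter(A, A_local, root=0)
        comm.Bcast(B, root=0)

        # Within each process, the dense product uses the multi-threaded BLAS behind numpy.
        C_local = A_local @ B

        C = np.empty((N, N)) if rank == 0 else None
        comm.Gather(C_local, C, root=0)

        if rank == 0:
            print("C shape:", C.shape)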

  5. Quantum Simulations of Materials and Nanostructures (Q-SIMAN). Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Galli, Giulia [Univ. of California, Davis, CA (United States); Bai, Zhaojun [Univ. of California, Davis, CA (United States); Ceperley, David [Univ. of Illinois, Urbana, IL (United States); Cai, Wei [Stanford Univ., CA (United States); Gygi, Francois [Univ. of California, Davis, CA (United States); Marzari, Nicola [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Pickett, Warren [Univ. of California, Davis, CA (United States); Spaldin, Nicola [Univ. of California, Santa Barbara, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schwegler, Eric [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-16

    The focus of this SciDAC SAP (Scientific Application) is the development and use of quantum simulation techniques to understand materials and nanostructures at the microscopic level, predict their physical and chemical properties, and eventually design integrated materials with targeted properties. (Here the word ‘materials’ is used in a broad sense and encompasses different thermodynamic states of matter, including solids, liquids and nanostructures.) Therefore our overarching goal is to enable scientific discoveries in the field of condensed matter and advanced materials through high performance computing.

  6. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  7. Terascale High-Fidelity Simulations of Turbulent Combustion with Detailed Chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Hong G. Im; Arnaud Trouve; Christopher J. Rutland; Jacqueline H. Chen

    2009-02-02

    The TSTC project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of our approach is direct numerical simulation (DNS) featuring highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. The code named S3D, developed and shared with Chen and coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for spray dynamics, combustion, and pollutant formation processes in turbulent combustion. Major accomplishments include improved characteristic boundary conditions, fundamental studies of auto-ignition in turbulent stratified reactant mixtures, flame-wall interaction, and turbulent flame extinction by water spray. The overarching scientific issue in our recent investigations is to characterize criticality phenomena (ignition/extinction) in turbulent combustion, thereby developing unified criteria to identify ignition and extinction conditions. The computational development under TSTC has enabled the recent large-scale 3D turbulent combustion simulations conducted at Sandia National Laboratories.

  8. Terascale High-Fidelity Simulations of Turbulent Combustion with Detailed Chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Im, Hong G [University of Michigan; Trouve, Arnaud [University of Maryland; Rutland, Christopher J [University of Wisconsin; Chen, Jacqueline H [Sandia National Laboratories

    2012-08-13

    The TSTC project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of our approach is direct numerical simulation (DNS) featuring highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. The code named S3D, developed and shared with Chen and coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for spray dynamics, combustion, and pollutant formation processes in turbulent combustion. Major accomplishments include improved characteristic boundary conditions, fundamental studies of auto-ignition in turbulent stratified reactant mixtures, flame-wall interaction, and turbulent flame extinction by water spray. The overarching scientific issue in our recent investigations is to characterize criticality phenomena (ignition/extinction) in turbulent combustion, thereby developing unified criteria to identify ignition and extinction conditions. The computational development under TSTC has enabled the recent large-scale 3D turbulent combustion simulations conducted at Sandia National Laboratories.

  9. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large-scale scientific computing have always been ahead of what the state-of-the-art hardware, in the form of the supercomputers of the day, could supply. For any single-processor system, the limit to increases in computing power was recognized some years ago. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize large-scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance, and concludes the paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  10. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Science.gov (United States)

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    unstructured nature of the mesh topology with the corresponding employed solution, based on space-filling curves, being analyzed and discussed. Intra-node parallelism is achieved through OpenMP for CPUs and CUDA for GPUs, depending on which kind of device the process is running on. Here the main difficulty is associated with the Object-Oriented approach, where the presence of complex data structures can degrade model performance considerably. STAV-2D now supports fully distributed and heterogeneous simulations where multiple different devices can be used to accelerate computation time. The advantages, shortcomings and specific solutions for the employed unified Object-Oriented approach, where the source code for CPU and GPU has the same compilation units (no device-specific branches as seen in available models), are discussed and quantified with a thorough scalability and performance analysis. The assembled parallel model is expected to achieve faster than real-time simulations for high resolutions (from meters to sub-meter) in large-scale problems (from cities to watersheds), effectively bridging the gap between detailed and timely simulation results. Acknowledgements This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 and Doctoral Grant SFRH/BD/97933/2013 granted by the National Foundation for Science and Technology (FCT). References Canelas, R.; Murillo, J. & Ferreira, R.M.L. (2013), Two-dimensional depth-averaged modelling of dam-break flows over mobile beds. Journal of Hydraulic Research, 51(4), 392-407. Conde, D. A. S.; Baptista, M. A. V.; Sousa Oliveira, C. & Ferreira, R. M. L. (2013), A shallow-flow model for the propagation of tsunamis over complex geometries and mobile beds, Nat. Hazards and Earth Syst. Sci., 13, 2533-2542. Conde, D. A. S.; Telhado, M. J.; Viana Baptista, M. A. & Ferreira, R. M. L. (2015) Severity and exposure associated with tsunami actions in
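
    The record mentions space-filling curves as the basis for laying out the unstructured mesh across processes; the generic sketch below (not the STAV-2D implementation) builds a 2D Morton/Z-order key from integer cell coordinates so that the sorted cell list can be cut into contiguous, locality-preserving chunks, one per process:

        def part1by1(n: int) -> int:
            """Spread the lower 16 bits of n so that a zero bit separates each original bit."""
            n &= 0xFFFF
            n = (n | (n << 8)) & 0x00FF00FF
            n = (n | (n << 4)) & 0x0F0F0F0F
            n = (n | (n << 2)) & 0x33333333
            n = (n | (n << 1)) & 0x55555555
            return n

        def morton2d(ix: int, iy: int) -> int:
            """Interleave the bits of two 16-bit cell indices into a Z-order key."""
            return part1by1(ix) | (part1by1(iy) << 1)

        # Order cell indices (here on a toy grid) along the Z-order curve; cutting the
        # ordered list into equal chunks gives one spatially compact block per process.
        cells = [(ix, iy) for ix in range(4) for iy in range(4)]
        ordered = sorted(cells, key=lambda c: morton2d(*c))
        print(ordered)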

  11. Tackling some of the most intricate geophysical challenges via high-performance computing

    Science.gov (United States)

    Khosronejad, A.

    2016-12-01

    Recently, the world has been witnessing significant enhancements in the computing power of supercomputers. Computer clusters, in conjunction with advanced mathematical algorithms, have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water surface waves in the coastal area. This understanding is especially important because turbulent flows in real-world environments are often bounded by geometrically complex boundaries, which dynamically deform and give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g. air/water/sediment phases). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Parts of the simulations I present are performed to gain scientific insights into processes such as sand wave formation (A. Khosronejad and F. Sotiropoulos (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others are carried out to predict the effects of climate change and large flood events on societal infrastructures (A. Khosronejad et al. (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).

  12. Cyber-Enabled Scientific Discovery

    International Nuclear Information System (INIS)

    Chan, Tony; Jameson, Leland

    2007-01-01

    It is often said that numerical simulation is third in the group of three ways to explore modern science: theory, experiment and simulation. Carefully executed modern numerical simulations can, however, be considered at least as relevant as experiment and theory. In comparison to physical experimentation, with numerical simulation one has the numerically simulated values of every field variable at every grid point in space and time. In comparison to theory, with numerical simulation one can explore sets of very complex non-linear equations such as the Einstein equations that are very difficult to investigate theoretically. Cyber-enabled scientific discovery is not just about numerical simulation but about every possible issue related to scientific discovery by utilizing cyberinfrastructure such as the analysis and storage of large data sets, the creation of tools that can be used by broad classes of researchers and, above all, the education and training of a cyber-literate workforce

  13. Applications of industrial computed tomography at Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Kruger, R.P.; Morris, R.A.; Wecksung, G.W.

    1980-01-01

    A research and development program was begun three years ago at the Los Alamos Scientific Laboratory (LASL) to study nonmedical applications of computed tomography. This program had several goals. The first goal was to develop the necessary reconstruction algorithms to accurately reconstruct cross sections of nonmedical industrial objects. The second goal was to be able to perform extensive tomographic simulations to determine the efficacy of tomographic reconstruction with a variety of hardware configurations. The final goal was to construct an inexpensive industrial prototype scanner with a high degree of design flexibility. The implementation of these program goals is described
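
    As a minimal illustration of the kind of tomographic simulation described above (using scikit-image rather than the LASL reconstruction codes; the phantom and the number of views are arbitrary assumptions):

        import numpy as np
        from skimage.transform import radon, iradon

        # Simple synthetic cross-section: a disk with a denser off-centre inclusion.
        N = 128
        y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
        phantom = (x ** 2 + y ** 2 < 0.8 ** 2).astype(float)
        phantom += 0.5 * ((x - 0.3) ** 2 + (y + 0.2) ** 2 < 0.15 ** 2)

        # Forward projection (sinogram) over 180 evenly spaced view angles,
        # then filtered back-projection to reconstruct the cross-section.
        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(phantom, theta=angles)
        reconstruction = iradon(sinogram, theta=angles)

        error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
        print(f"RMS reconstruction error: {error:.4f}")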

  14. Water desalination price from recent performances: Modelling, simulation and analysis

    International Nuclear Information System (INIS)

    Metaiche, M.; Kettab, A.

    2005-01-01

    The subject of the present article is the technical simulation of seawater desalination by a one-stage reverse osmosis system. Its objectives are a recent valuation of the cost price using new membrane and permeator performances, the use of new means of simulation and modelling of the desalination parameters, and the identification of the main parameters influencing the cost price. We have taken as the simulation example the seawater desalting centre of Djannet (Boumerdes, Algeria). The present performances allow water desalting at a price of 0.5 $/m3, an interesting and promising price for a very acceptable product water quality, in the order of 269 ppm. It is important to run reverse osmosis desalting systems under high pressure, which further decreases the desalting cost and produces good-quality water. A poor choice of operating conditions produces high prices and unacceptable quality; however, the price can be decreased by relaxing the requirement on product quality. The seawater temperature has an effect on the cost price and quality. The installation of large desalting centres contributes to the decrease in prices. The calculation involved is long and tedious, and is impossible to conduct without programming and informatics tools. The use of the simulation model has proved very efficient in the design of desalination centres that can operate at much improved prices. (author)

  15. Performance simulation of an absorption heat transformer operating with partially miscible mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, D.; Cachot, T.; Hornut, J.M. [LSGC-CNRS-ENSIC, Nancy (France); Univ. Henri Poincare, Nancy (France). IUT

    2002-07-08

    This paper studies the thermodynamic performance of a new absorption heat-transformer cycle, in which the separation step is obtained by the cooling and settling of a partially miscible mixture at low temperature. This new cycle has been called an absorption-demixing heat transformer (ADHT) cycle. A numerical simulation code has been written and has allowed us to evaluate the temperature lift and thermal yield of two working pairs. High performance has been obtained both qualitatively and quantitatively, demonstrating the feasibility of, and industrial interest in, such a cycle. Moreover, a comparison of the simulation results with the performance actually obtained on an experimental ADHT has confirmed the pertinence of the simulation code. (author)

  16. A web portal for hydrodynamical, cosmological simulations

    Science.gov (United States)

    Ragagnin, A.; Dolag, K.; Biffi, V.; Cadolle Bel, M.; Hammer, N. J.; Krukau, A.; Petkova, M.; Steinborn, D.

    2017-07-01

    This article describes a data centre hosting a web portal for accessing and sharing the output of large cosmological, hydro-dynamical simulations with a broad scientific community. It also allows users to receive related scientific data products by directly processing the raw simulation data on a remote computing cluster. The data centre has a multi-layer structure: a web portal, a job control layer, a computing cluster and an HPC storage system. The outer layer enables users to choose an object from the simulations. Objects can be selected by visually inspecting 2D maps of the simulation data, by performing complex, compound queries, or graphically by plotting arbitrary combinations of properties. The user can then run analysis tools on a chosen object; these services operate directly on the raw simulation data. The job control layer is responsible for handling and performing the analysis jobs, which are executed on a computing cluster. The innermost layer is formed by an HPC storage system which hosts the large, raw simulation data. The following services are available to users: (I) CLUSTERINSPECT visualizes properties of member galaxies of a selected galaxy cluster; (II) SIMCUT returns the raw data of a sub-volume around a selected object from a simulation, containing all the original hydro-dynamical quantities; (III) SMAC creates idealized 2D maps of various physical quantities and observables of a selected object; (IV) PHOX generates virtual X-ray observations with specifications of various current and upcoming instruments.

  17. Propagation Diagnostic Simulations Using High-Resolution Equatorial Plasma Bubble Simulations

    Science.gov (United States)

    Rino, C. L.; Carrano, C. S.; Yokoyama, T.

    2017-12-01

    In a recent paper, under review, equatorial-plasma-bubble (EPB) simulations were used to conduct a comparative analysis of the EPB spectral characteristics with high-resolution in-situ measurements from the C/NOFS satellite. EPB realizations sampled in planes perpendicular to magnetic field lines provided well-defined EPB structure at altitudes penetrating both high- and low-density regions. The average C/NOFS structure in highly disturbed regions showed nearly identical two-component inverse-power-law spectral characteristics to the measured EPB structure. This paper describes the results of PWE simulations using the same two-dimensional cross-field EPB realizations. New Irregularity Parameter Estimation (IPE) diagnostics, which are based on two-dimensional equivalent-phase-screen theory [A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results, by Charles Carrano and Charles Rino, DOI: 10.1002/2015RS005903], have been successfully applied to extract two-component inverse-power-law parameters from measured intensity spectra. The EPB simulations [Low and Midlatitude Ionospheric Plasma Density Irregularities and Their Effects on Geomagnetic Field, by Tatsuhiro Yokoyama and Claudia Stolle, DOI: 10.1007/s11214-016-0295-7] have sufficient resolution to populate the structure scales (tens of km to hundreds of meters) that cause strong scintillation at GPS frequencies. The simulations provide an ideal geometry whereby the ramifications of varying structure along the propagation path can be investigated. It is well known that path-integrated one-dimensional spectra increase the one-dimensional index by one. The relation requires decorrelation along the propagation path. Correlated structure would be interpreted as stochastic total-electron-content (TEC). The simulations are performed with unmodified structure. Because the EPB structure is confined to the central region of the sample planes, edge effects are minimized. Consequently
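
    A generic sketch of the two-component inverse-power-law spectral form referred to above (continuity is enforced at the break wavenumber; the numerical values are placeholders, not fitted C/NOFS or EPB parameters):

        import numpy as np

        def two_component_spectrum(k, c1, p1, p2, k_break):
            """Piecewise power law: C1*k^-p1 below the break, continuous across k_break,
            with a second spectral index p2 above it."""
            c2 = c1 * k_break ** (p2 - p1)          # enforce continuity at k_break
            return np.where(k < k_break, c1 * k ** (-p1), c2 * k ** (-p2))

        k = np.logspace(-1, 2, 200)                  # spatial wavenumber, arbitrary units
        psd = two_component_spectrum(k, c1=1.0, p1=1.5, p2=3.0, k_break=5.0)
        print(psd[:3], psd[-3:])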

  18. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energies, and very low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of performing simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.

  19. Development of student performance assessment based on scientific approach for a basic physics practicum in simple harmonic motion materials

    Science.gov (United States)

    Serevina, V.; Muliyati, D.

    2018-05-01

    This research aims to develop a student performance assessment instrument, based on a scientific approach, that is valid and reliable in assessing the performance of students in a basic physics practicum on Simple Harmonic Motion (SHM). The study uses the ADDIE model, consisting of the stages Analyze, Design, Development, Implementation, and Evaluation. The assessment instrument developed can be used to measure students' skills in observing, asking, conducting experiments, associating and communicating experimental results, which are the ‘5M’ stages of a scientific approach. Each assessment item in the instrument was validated by an instrument expert, with all assessment points declared eligible for use (100% eligibility). The instrument was then tested for the quality of its construction, material, and language by a panel of lecturers, with the following results: construction aspect 85% (very good), material aspect 87.5% (very good), and language aspect 83% (very good). A small-group trial gave an instrument reliability of 0.878 (high category, r-table 0.707), and a large-group trial gave a reliability of 0.889 (high category, r-table 0.320). The instrument was declared valid and reliable at the 5% significance level. Based on these results, it can be concluded that the student performance assessment instrument based on the developed scientific approach is valid and reliable for assessing student skills in SHM experimental activities.
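
    The record does not name the reliability coefficient behind the quoted values (0.878 and 0.889); as an illustration only, the sketch below computes Cronbach's alpha, a common internal-consistency estimate for multi-item performance rubrics, on invented rating data:

        import numpy as np

        def cronbach_alpha(scores):
            """scores: 2D array, rows = students, columns = rubric items."""
            scores = np.asarray(scores, dtype=float)
            n_items = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1).sum()
            total_variance = scores.sum(axis=1).var(ddof=1)
            return (n_items / (n_items - 1.0)) * (1.0 - item_variances / total_variance)

        # Invented ratings (5 students x 5 '5M' skill items, scored 1-4); not the study's data.
        ratings = [[3, 4, 3, 4, 3],
                   [2, 3, 2, 3, 2],
                   [4, 4, 4, 4, 3],
                   [3, 3, 2, 3, 3],
                   [2, 2, 2, 3, 2]]
        print(f"alpha = {cronbach_alpha(ratings):.3f}")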

  20. High performance cloud auditing and applications

    CERN Document Server

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  1. Confidence in Numerical Simulations

    International Nuclear Information System (INIS)

    Hemez, Francois M.

    2015-01-01

    This PowerPoint presentation offers a high-level discussion of uncertainty, confidence and credibility in scientific Modeling and Simulation (M&S). It begins by briefly evoking M&S trends in computational physics and engineering. The first thrust of the discussion is to emphasize that the role of M&S in decision-making is either to support reasoning by similarity or to "forecast," that is, make predictions about the future or extrapolate to settings or environments that cannot be tested experimentally. The second thrust is to explain that M&S-aided decision-making is an exercise in uncertainty management. The three broad classes of uncertainty in computational physics and engineering are variability and randomness, numerical uncertainty and model-form uncertainty. The last part of the discussion addresses how scientists "think." This thought process parallels the scientific method, whereby a hypothesis is formulated, often accompanied by simplifying assumptions, and then physical experiments and numerical simulations are performed to confirm or reject the hypothesis. "Confidence" derives not just from the levels of training and experience of analysts, but also from the rigor with which these assessments are performed, documented and peer-reviewed.

  2. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    International Nuclear Information System (INIS)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  3. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  4. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  5. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    KAUST Repository

    Bao, Kai; Yan, Mi; Lu, Ligang; Allen, Rebecca; Salam, Amgad; Jordan, Kirk E.; Sun, Shuyu

    2013-01-01

    multicomponent compositional flow simulation to handle more complicated physical process in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our

  6. Simulation and design of omni-directional high speed multibeam transmitter system

    Science.gov (United States)

    Tang, Jaw-Luen; Jui, Ping-Chang; Wang, Sun-Chen

    2006-09-01

    For future high-speed indoor wireless communication, diffuse wireless optical communications offer optical links that are more robust against shadowing than line-of-sight links. However, their performance may be degraded by multipath dispersion resulting from surface reflections. We have developed a multipath diffusive propagation model capable of providing channel impulse response data, intended for designing and simulating any multi-beam transmitter under a variety of indoor environments. In this paper, a multi-beam transmitter system with a semi-sphere structure is proposed to combat the adverse effects of multipath distortion, albeit at the cost of increased laser power and system cost. Simulation results of multiple impulse responses showed that this type of multi-beam transmitter can significantly improve the bit error rate (BER) performance, making it suitable for high-bit-rate applications. We present the performance and simulation results for both line-of-sight and diffuse link configurations.

  7. Reduced-order modeling (ROM) for simulation and optimization powerful algorithms as key enablers for scientific computing

    CERN Document Server

    Milde, Anja; Volkwein, Stefan

    2018-01-01

    This edited monograph collects research contributions and addresses the advancement of efficient numerical procedures in the area of model order reduction (MOR) for simulation, optimization and control. The topical scope includes, but is not limited to, new out-of-the-box algorithmic solutions for scientific computing, e.g. reduced basis methods for industrial problems and MOR approaches for electrochemical processes. The target audience comprises research experts and practitioners in the field of simulation, optimization and control, but the book may also be beneficial for graduate students.

  8. Simulant Basis for the Standard High Solids Vessel Design

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suffield, Sarah R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Daniel, Richard C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gauglitz, Phillip A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wells, Beric E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-09-30

    The Waste Treatment and Immobilization Plant (WTP) is working to develop a Standard High Solids Vessel Design (SHSVD) process vessel. To support testing of this new design, WTP engineering staff requested that a Newtonian simulant and a non-Newtonian simulant be developed that would represent the Most Adverse Design Conditions (in development) with respect to mixing performance as specified by WTP. The majority of the simulant requirements are specified in 24590-PTF-RPT-PE-16-001, Rev. 0. The first step in this process is to develop the basis for these simulants. This document describes the basis for the properties of these two simulant types. The simulant recipes that meet this basis will be provided in a subsequent document.

  9. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  10. SciDAC-Center for Plasma Edge Simulation Report

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Steven [Univ. of Utah, Salt Lake City, UT (United States)

    2013-12-24

    The Common Component Architecture (CCA) effort is the embodiment of a long-range program of research and development into the formulation, roles, and use of component technologies in high-performance scientific computing. CCA components can interoperate with other components in a variety of frameworks, including SCIRun2 from the University of Utah. The SCIRun2 framework is also developing the ability to connect components from a variety of different models through a mechanism called meta-components. The meta-component model operates by providing a plugin architecture for component models. Abstract components are manipulated and managed by the SCIRun2 framework, while concrete component models perform the actual work and communicate with each other directly. We will leverage the SCIRun2 framework and the Kepler system to orchestrate components in the Fusion Simulation Project (FSP) and to provide a CCA-based interface with Kepler. The groundwork for this functionality is being performed with the Scientific Data Management center. The SDM center is developing CCA-compliant interfaces for expressing and executing workflows and creating workflow components based on SCIRun and Ptolemy (Kepler) execution engines, including the development of uniform interfaces for selecting, starting, and monitoring scientific workflows. Accomplishments include Introduction to CCA and Simulation Software Systems, Introduction into SCIRun2 and Bridging within SCIRun2, CCALoop: A scalable design for a distributed component framework, and Combining Workflow methodologies with Component Architectures.
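
    The meta-component idea described above (abstract components managed by a framework, concrete components doing the actual work) can be illustrated with a small sketch. The Python below is purely illustrative and does not use the actual CCA or SCIRun2 APIs; all class and method names are invented for the example.

```python
# Illustrative plugin-style component registry (not the actual CCA/SCIRun2 API).
from abc import ABC, abstractmethod


class Component(ABC):
    """Abstract component handled by the framework."""

    @abstractmethod
    def go(self, data):
        """Perform the component's work on the given data."""


class FrameworkRegistry:
    """Minimal framework that manages abstract components by name."""

    def __init__(self):
        self._components = {}

    def register(self, name, component):
        self._components[name] = component

    def connect_and_run(self, producer_name, consumer_name, data):
        # The framework only orchestrates; concrete components do the work.
        intermediate = self._components[producer_name].go(data)
        return self._components[consumer_name].go(intermediate)


class Doubler(Component):
    def go(self, data):
        return [2 * x for x in data]


class Summer(Component):
    def go(self, data):
        return sum(data)


if __name__ == "__main__":
    framework = FrameworkRegistry()
    framework.register("doubler", Doubler())
    framework.register("summer", Summer())
    print(framework.connect_and_run("doubler", "summer", [1, 2, 3]))  # prints 12
```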

  11. Open Knee: Open Source Modeling & Simulation to Enable Scientific Discovery and Clinical Care in Knee Biomechanics

    Science.gov (United States)

    Erdemir, Ahmet

    2016-01-01

    Virtual representations of the knee joint can provide clinicians, scientists, and engineers the tools to explore mechanical function of the knee and its tissue structures in health and disease. Modeling and simulation approaches such as finite element analysis also provide the possibility to understand the influence of surgical procedures and implants on joint stresses and tissue deformations. A large number of knee joint models are described in the biomechanics literature. However, freely accessible, customizable, and easy-to-use models are scarce. Availability of such models can accelerate clinical translation of simulations, where labor-intensive reproduction of model development steps can be avoided. The interested parties can immediately utilize readily available models for scientific discovery and for clinical care. Motivated by this gap, this study aims to describe an open source and freely available finite element representation of the tibiofemoral joint, namely Open Knee, which includes detailed anatomical representation of the joint's major tissue structures, their nonlinear mechanical properties and interactions. Three use cases illustrate customization potential of the model, its predictive capacity, and its scientific and clinical utility: prediction of joint movements during passive flexion, examining the role of meniscectomy on contact mechanics and joint movements, and understanding anterior cruciate ligament mechanics. A summary of scientific and clinically directed studies conducted by other investigators is also provided. The utilization of this open source model by groups other than its developers emphasizes the premise of model sharing as an accelerator of simulation-based medicine. Finally, the imminent need to develop next-generation knee models is noted. These are anticipated to incorporate individualized anatomy and tissue properties supported by specimen-specific joint mechanics data for evaluation, all acquired in vitro from varying age

  12. Simulated Performances of a Very High Energy Tomograph for Non-Destructive Characterization of large objects

    Science.gov (United States)

    Kistler, Marc; Estre, Nicolas; Merle, Elsa

    2018-01-01

    As part of its R&D activities on high-energy X-ray imaging for non-destructive characterization, the Nuclear Measurement Laboratory has started an upgrade of its imaging system currently implemented at the CEA-Cadarache center. The goals are to achieve a sub-millimeter spatial resolution and the ability to perform tomographies on very large objects (more than 100-cm standard concrete or 40-cm steel). This paper presents results on the detection part of the imaging system. The upgrade of the detection part needs a thorough study of the performance of two detectors: a series of CdTe semiconductor sensors and two arrays of segmented CdWO4 scintillators with different pixel sizes. This study consists of a Quantum Accounting Diagram (QAD) analysis coupled with Monte-Carlo simulations. The scintillator arrays are able to detect millimeter details through 140 cm of concrete, but are limited to 120 cm for smaller ones. CdTe sensors have lower but more stable performance, with a 0.5 mm resolution for 90 cm of concrete. The choice of the detector then depends on the preferred characteristic: the spatial resolution or the use on large volumes. The combination of the features of the source and the studies on the detectors gives the expected performance of the whole equipment, in terms of signal-over-noise ratio (SNR), spatial resolution and acquisition time.

  13. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have become available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments.

  14. Using an Agent-Based Modeling Simulation and Game to Teach Socio-Scientific Topics

    Directory of Open Access Journals (Sweden)

    Lori L. Scarlatos

    2014-02-01

    In our modern world, where science, technology and society are tightly interwoven, it is essential that all students be able to evaluate scientific evidence and make informed decisions. Energy Choices, an agent-based simulation with a multiplayer game interface, was developed as a learning tool that models the interdependencies between the energy choices that are made, growth in local economies, and climate change on a global scale. This paper presents the results of pilot testing Energy Choices in two different settings, using two different modes of delivery.

  15. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever tried. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data onto the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  16. Building performance simulation for sustainable buildings

    NARCIS (Netherlands)

    Hensen, J.L.M.

    2010-01-01

    This paper aims to provide a general view of the background and current state of building performance simulation, which has the potential to deliver, directly or indirectly, substantial benefits to building stakeholders and to the environment. However, the building simulation community faces many

  17. Communication Requirements and Interconnect Optimization forHigh-End Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, Shoaib; Oliker, Leonid; Pinar, Ali; Shalf, John

    2007-11-12

    The path towards realizing peta-scale computing is increasingly dependent on building supercomputers with unprecedented numbers of processors. To prevent the interconnect from dominating the overall cost of these ultra-scale systems, there is a critical need for high-performance network solutions whose costs scale linearly with system size. This work makes several unique contributions towards attaining that goal. First, we conduct one of the broadest studies to date of high-end application communication requirements, whose computational methods include: finite-difference, lattice-Boltzmann, particle in cell, sparse linear algebra, particle mesh Ewald, and FFT-based solvers. To efficiently collect this data, we use the IPM (Integrated Performance Monitoring) profiling layer to gather detailed messaging statistics with minimal impact to code performance. Using the derived communication characterizations, we next present fit-tree interconnects, a novel approach for designing network infrastructure at a fraction of the component cost of traditional fat-tree solutions. Finally, we propose the Hybrid Flexibly Assignable Switch Topology (HFAST) infrastructure, which uses both passive (circuit) and active (packet) commodity switch components to dynamically reconfigure interconnects to suit the topological requirements of scientific applications. Overall our exploration leads to promising directions for practically addressing the interconnect requirements of future peta-scale systems.

  18. A generative model for scientific concept hierarchies.

    Science.gov (United States)

    Datta, Srayan; Adar, Eytan

    2018-01-01

    In many scientific disciplines, each new 'product' of research (method, finding, artifact, etc.) is often built upon previous findings, leading to extension and branching of scientific concepts over time. We aim to understand the evolution of scientific concepts by placing them in phylogenetic hierarchies where scientific keyphrases from a large, longitudinal academic corpus are used as a proxy of scientific concepts. These hierarchies exhibit various important properties, including power-law degree distribution, power-law component size distribution, existence of a giant component, and a lower probability of extending an older concept. We present a generative model based on preferential attachment to simulate the graphical and temporal properties of these hierarchies, which helps us understand the underlying process behind scientific concept evolution and may be useful in simulating and predicting scientific evolution.
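
    As a rough illustration of the generative mechanism described above, the sketch below grows a tree by preferential attachment, where new concepts attach to existing ones with probability proportional to their degree. It omits the age effect reported in the paper, and the parameters (n_nodes, alpha, seed) are invented for the example.

```python
# Sketch of a preferential-attachment tree generator; parameters are illustrative,
# not taken from the paper.
import random
from collections import Counter


def grow_concept_hierarchy(n_nodes, alpha=1.0, seed=0):
    """Grow a tree where higher-degree concepts attract new branches.

    Returns a dict mapping each node to its chosen parent (root maps to None).
    """
    rng = random.Random(seed)
    parents = {0: None}     # node 0 is the root concept
    degree = {0: 1}         # start with nonzero weight so the root can be chosen

    for new in range(1, n_nodes):
        nodes = list(degree)
        weights = [degree[v] ** alpha for v in nodes]
        parent = rng.choices(nodes, weights=weights, k=1)[0]
        parents[new] = parent
        degree[parent] += 1
        degree[new] = 1
    return parents


if __name__ == "__main__":
    tree = grow_concept_hierarchy(1000)
    # The child-count distribution should be heavy-tailed, as reported for the hierarchies.
    child_counts = Counter(p for p in tree.values() if p is not None)
    print(child_counts.most_common(5))
```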

  19. A generative model for scientific concept hierarchies

    Science.gov (United States)

    Adar, Eytan

    2018-01-01

    In many scientific disciplines, each new ‘product’ of research (method, finding, artifact, etc.) is often built upon previous findings, leading to extension and branching of scientific concepts over time. We aim to understand the evolution of scientific concepts by placing them in phylogenetic hierarchies where scientific keyphrases from a large, longitudinal academic corpus are used as a proxy of scientific concepts. These hierarchies exhibit various important properties, including power-law degree distribution, power-law component size distribution, existence of a giant component, and a lower probability of extending an older concept. We present a generative model based on preferential attachment to simulate the graphical and temporal properties of these hierarchies, which helps us understand the underlying process behind scientific concept evolution and may be useful in simulating and predicting scientific evolution. PMID:29474409

  20. Validation of High-resolution Climate Simulations over Northern Europe.

    Science.gov (United States)

    Muna, R. A.

    2005-12-01

    Two AMIP2-type (Gates 1992) experiments have been performed with climate versions of the ARPEGE/IFS model for the North Atlantic, Northern Europe, and Norwegian regions to analyze the effect of increasing resolution on the simulated biases. The ECMWF reanalysis ERA-15 has been used to validate the simulations. Each of the simulations is an integration over the period 1979 to 1996. The global simulations used observed monthly mean sea surface temperatures (SST) as the lower boundary condition. All aspects but the horizontal resolution are similar in the two simulations. The first simulation has a uniform horizontal resolution of T63L. The second one has a variable resolution (T106Lc3) with the highest resolution over the Norwegian Sea. Both simulations have 31 vertical layers in the same locations. For each simulation the results were divided into two seasons: winter (DJF) and summer (JJA). The parameters investigated were mean sea level pressure, geopotential, and temperature at 850 hPa and 500 hPa. To find out the causes of the summer temperature bias, latent and sensible heat flux, total cloud cover, and total precipitation were analyzed. The high-resolution simulation exhibits a more or less realistic climate over the Nordic, Arctic, and European regions. The overall performance of the simulations shows improvement of generally all fields investigated with increasing resolution over the target area, both in winter (DJF) and summer (JJA).

  1. Damaris: Addressing performance variability in data management for post-petascale simulations

    International Nuclear Information System (INIS)

    Dorier, Matthieu; Antoniu, Gabriel; Cappello, Franck; Snir, Marc; Sisneros, Robert

    2016-01-01

    With exascale computing on the horizon, reducing performance variability in data management tasks (storage, visualization, analysis, etc.) is becoming a key challenge in sustaining high performance. Here, this variability significantly impacts the overall application performance at scale and its predictability over time. In this article, we present Damaris, a system that leverages dedicated cores in multicore nodes to offload data management tasks, including I/O, data compression, scheduling of data movements, in situ analysis, and visualization. We evaluate Damaris with the CM1 atmospheric simulation and the Nek5000 computational fluid dynamic simulation on four platforms, including NICS’s Kraken and NCSA’s Blue Waters. Our results show that (1) Damaris fully hides the I/O variability as well as all I/O-related costs, thus making simulation performance predictable; (2) it increases the sustained write throughput by a factor of up to 15 compared with standard I/O approaches; (3) it allows almost perfect scalability of the simulation up to over 9,000 cores, as opposed to state-of-the-art approaches that fail to scale; and (4) it enables a seamless connection to the VisIt visualization software to perform in situ analysis and visualization in a way that impacts neither the performance of the simulation nor its variability. In addition, we extended our implementation of Damaris to also support the use of dedicated nodes and conducted a thorough comparison of the two approaches—dedicated cores and dedicated nodes—for I/O tasks with the aforementioned applications.
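
    The dedicated-core pattern described above can be sketched with mpi4py: one core per node is split off to absorb I/O while the remaining cores compute. This is only an illustration of the pattern, not the Damaris API; the node layout (CORES_PER_NODE) and the data being written are assumptions made for the example.

```python
# Minimal sketch of the "dedicated core" idea behind Damaris, using mpi4py
# (an illustration of the pattern, not the Damaris API).
from mpi4py import MPI

CORES_PER_NODE = 4          # assumed layout: last rank on each node handles I/O

world = MPI.COMM_WORLD
rank = world.Get_rank()
node = rank // CORES_PER_NODE
is_io_core = (rank % CORES_PER_NODE) == CORES_PER_NODE - 1

# Compute cores of a node share one communicator; the I/O core stays apart.
compute_comm = world.Split(color=(MPI.UNDEFINED if is_io_core else 1), key=rank)

if is_io_core:
    # Dedicated core: receive data asynchronously and write it, hiding I/O jitter
    # from the simulation ranks.
    for _ in range(CORES_PER_NODE - 1):
        data = world.recv(source=MPI.ANY_SOURCE, tag=node)
        # ... write `data` to disk or hand it to in situ analysis ...
else:
    local_result = [rank] * 1000          # stand-in for simulation output
    io_rank = node * CORES_PER_NODE + CORES_PER_NODE - 1
    world.send(local_result, dest=io_rank, tag=node)
```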

  2. The effectiveness of and satisfaction with high-fidelity simulation to teach cardiac surgical resuscitation skills to nurses.

    Science.gov (United States)

    McRae, Marion E; Chan, Alice; Hulett, Renee; Lee, Ai Jin; Coleman, Bernice

    2017-06-01

    There are few reports of the effectiveness or satisfaction with simulation to learn cardiac surgical resuscitation skills. To test the effect of simulation on the self-confidence of nurses to perform cardiac surgical resuscitation simulation and nurses' satisfaction with the simulation experience. A convenience sample of sixty nurses rated their self-confidence to perform cardiac surgical resuscitation skills before and after two simulations. Simulation performance was assessed. Subjects completed the Satisfaction with Simulation Experience scale and demographics. Self-confidence scores to perform all cardiac surgical skills as measured by paired t-tests were significantly increased after the simulation (d=-0.50 to 1.78). Self-confidence and cardiac surgical work experience were not correlated with time to performance. Total satisfaction scores were high (mean 80.2, SD 1.06) indicating satisfaction with the simulation. There was no correlation of the satisfaction scores with cardiac surgical work experience (τ=-0.05, ns). Self-confidence scores to perform cardiac surgical resuscitation procedures were higher after the simulation. Nurses were highly satisfied with the simulation experience. Copyright © 2016 Elsevier Ltd. All rights reserved.
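
    For readers unfamiliar with the statistics referenced above, a minimal pre/post analysis with a paired t-test and an effect size can be sketched as follows; the scores are fabricated for illustration and are not the study's data.

```python
# Hedged sketch of a pre/post paired t-test with Cohen's d; values are invented.
import numpy as np
from scipy import stats

pre = np.array([2.1, 3.0, 2.5, 3.2, 2.8, 3.5, 2.2, 3.1])    # hypothetical pre-simulation scores
post = np.array([3.4, 3.8, 3.1, 4.2, 3.9, 4.0, 3.0, 4.1])   # hypothetical post-simulation scores

t_stat, p_value = stats.ttest_rel(post, pre)                 # paired t-test

diff = post - pre
cohens_d = diff.mean() / diff.std(ddof=1)                    # effect size for paired data

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```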

  3. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  4. Experimental Investigation and High Resolution Simulation of In-Situ Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen; Tony Kovscek

    2008-04-30

    This final technical report describes work performed for the project 'Experimental Investigation and High Resolution Numerical Simulator of In-Situ Combustion Processes', DE-FC26-03NT15405. In summary, this work improved our understanding of in-situ combustion (ISC) process physics and oil recovery. This understanding was translated into improved conceptual models and a suite of software algorithms that extended predictive capabilities. We pursued experimental, theoretical, and numerical tasks during the performance period. The specific project objectives were (i) identification, experimentally, of chemical additives/injectants that improve combustion performance and delineation of the physics of improved performance, (ii) establishment of a benchmark one-dimensional, experimental data set for verification of in-situ combustion dynamics computed by simulators, (iii) development of improved numerical methods that can be used to describe in-situ combustion more accurately, and (iv) laying the underpinnings of a highly efficient, 3D, in-situ combustion simulator using adaptive mesh refinement techniques and parallelization. We believe that project goals were met and exceeded as discussed.

  5. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  6. Key scientific challenges in geological disposal of high level radioactive waste

    International Nuclear Information System (INIS)

    Wang Ju

    2007-01-01

    The geological disposal of high-level radioactive waste is a challenging task facing the scientific and technical world. This paper introduces the latest progress of high-level radioactive waste disposal programs in the world, and discusses the following key scientific challenges: (1) precise prediction of the evolution of a repository site; (2) characteristics of the deep geological environment; (3) behaviour of deep rock mass, groundwater and engineering material under coupled conditions (intermediate to high temperature, geostress, hydraulic, chemical, biological and radiation processes, etc.); (4) geochemical behaviour of transuranic radionuclides with low concentration and their migration with groundwater; and (5) safety assessment of the disposal system. Several large-scale research projects and several hot topics related to high-level waste disposal are also introduced. (authors)

  7. Impact of High-Fidelity Simulation and Pharmacist-Specific Didactic Lectures in Addition to ACLS Provider Certification on Pharmacy Resident ACLS Performance.

    Science.gov (United States)

    Bartel, Billie J

    2014-08-01

    This pilot study explored the use of multidisciplinary high-fidelity simulation and additional pharmacist-focused training methods in training postgraduate year 1 (PGY1) pharmacy residents to provide Advanced Cardiovascular Life Support (ACLS) care. Pharmacy resident confidence and comfort level were assessed after completing these training requirements. The ACLS training requirements for pharmacy residents were revised to include didactic instruction on ACLS pharmacology and rhythm recognition and participation in multidisciplinary high-fidelity simulation ACLS experiences in addition to ACLS provider certification. Surveys were administered to participating residents to assess the impact of this additional education on resident confidence and comfort level in cardiopulmonary arrest situations. The new ACLS didactic and simulation training requirements resulted in increased resident confidence and comfort level in all assessed functions. Residents felt more confident in all areas except providing recommendations for dosing and administration of medications and rhythm recognition after completing the simulation scenarios than with ACLS certification training and the didactic components alone. All residents felt the addition of lectures and simulation experiences better prepared them to function as a pharmacist in the ACLS team. Additional ACLS training requirements for pharmacy residents increased overall awareness of pharmacist roles and responsibilities and greatly improved resident confidence and comfort level in performing most essential pharmacist functions during ACLS situations. © The Author(s) 2013.

  8. SEAscan 3.5: A simulator performance analyzer

    International Nuclear Information System (INIS)

    Dennis, T.; Eisenmann, S.

    1990-01-01

    SEAscan 3.5 is a personal computer based tool developed to analyze the dynamic performance of nuclear power plant training simulators. The system has integrated features to provide its own human featured performance. In this paper, the program is described as a tool for the analysis of training simulator performance. The structure and operating characteristics of SEAscan 3.5 are described. The hardcopy documents are shown to aid in verification of conformance to ANSI/ANS-3.5-1985

  9. Towards Optimal PDE Simulations

    International Nuclear Information System (INIS)

    Keyes, David

    2009-01-01

    The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.

  10. Terascale Optimal PDE Simulations

    Energy Technology Data Exchange (ETDEWEB)

    David Keyes

    2009-07-28

    The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.
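
    A flavor of the optimal nonlinear PDE solvers described above can be given with a matrix-free Newton-Krylov solve of a small 1D nonlinear boundary-value problem. The sketch uses SciPy's newton_krylov rather than the TOPS/PETSc software, and the model problem is chosen only for illustration.

```python
# Minimal illustration of a matrix-free Newton-Krylov solve (not the TOPS toolkit).
import numpy as np
from scipy.optimize import newton_krylov


def residual(u):
    """Finite-difference residual of -u'' + u**3 = 1 on (0, 1) with u(0) = u(1) = 0."""
    n = u.size
    h = 1.0 / (n + 1)
    upad = np.concatenate(([0.0], u, [0.0]))             # Dirichlet boundaries
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0


u0 = np.zeros(200)                                        # initial guess
u = newton_krylov(residual, u0, f_tol=1e-8)               # Newton outer, Krylov inner
print("max |residual| =", np.abs(residual(u)).max())
```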

  11. The use of physics practicum to train science process skills and its effect on scientific attitude of vocational high school students

    Science.gov (United States)

    Wiwin, E.; Kustijono, R.

    2018-03-01

    The purpose of the study is to describe the use of a physics practicum to train the science process skills and its effect on the scientific attitudes of vocational high school students. The components of science process skills are: observing, classifying, inferring, predicting, and communicating. The established scientific attitudes are: curiosity, honesty, collaboration, responsibility, and open-mindedness. This is experimental research with a one-shot case study design. The subjects are 30 Multimedia Program students of SMK Negeri 12 Surabaya. The data collection techniques used are observation and performance tests. The scores of science process skills and scientific attitudes are taken from observational and performance instruments. The data analyses used are descriptive statistics and correlation. The results show that: 1) the physics practicum can train the science process skills and scientific attitudes in the good category, 2) the relationship between the science process skills and the students' scientific attitude is in the good category, and 3) student responses to the learning process using the practicum are in the good category. The results of the research conclude that the physics practicum can train the science process skills and has a significant effect on the scientific attitude of vocational high school students.

  12. The 2009 Simulated Car Racing Championship

    DEFF Research Database (Denmark)

    Loiacono, Daniele; Lanzi, Pier Luca; Togelius, Julian

    2011-01-01

    In this paper, we overview the 2009 Simulated Car Racing Championship-an event comprising three competitions held in association with the 2009 IEEE Congress on Evolutionary Computation (CEC), the 2009 ACM Genetic and Evolutionary Computation Conference (GECCO), and the 2009 IEEE Symposium....... The organizers provide short summaries of the other competitors. Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of scientific competitions....

  13. Evaluating the performance of coupled snow-soil models in SURFEXv8 to simulate the permafrost thermal regime at a high Arctic site

    Science.gov (United States)

    Barrere, Mathieu; Domine, Florent; Decharme, Bertrand; Morin, Samuel; Vionnet, Vincent; Lafaysse, Matthieu

    2017-09-01

    Climate change projections still suffer from a limited representation of the permafrost-carbon feedback. Predicting the response of permafrost temperature to climate change requires accurate simulations of Arctic snow and soil properties. This study assesses the capacity of the coupled land surface and snow models ISBA-Crocus and ISBA-ES to simulate snow and soil properties at Bylot Island, a high Arctic site. Field measurements complemented with ERA-Interim reanalyses were used to drive the models and to evaluate simulation outputs. Snow height, density, temperature, thermal conductivity and thermal insulance are examined to determine the critical variables involved in the soil and snow thermal regime. Simulated soil properties are compared to measurements of thermal conductivity, temperature and water content. The simulated snow density profiles are unrealistic, which is most likely caused by the lack of representation in snow models of the upward water vapor fluxes generated by the strong temperature gradients within the snowpack. The resulting vertical profiles of thermal conductivity are inverted compared to observations, with high simulated values at the bottom of the snowpack. Still, ISBA-Crocus manages to successfully simulate the soil temperature in winter. Results are satisfactory in summer, but the temperature of the top soil could be better reproduced by adequately representing surface organic layers, i.e., mosses and litter, and in particular their water retention capacity. Transition periods (soil freezing and thawing) are the least well reproduced because the high basal snow thermal conductivity induces an excessively rapid heat transfer between the soil and the snow in simulations. Hence, global climate models should carefully consider Arctic snow thermal properties, and especially the thermal conductivity of the basal snow layer, to perform accurate predictions of the permafrost evolution under climate change.

  14. Numerical simulations on a high-temperature particle moving in coolant

    International Nuclear Information System (INIS)

    Li Xiaoyan; Shang Zhi; Xu Jijun

    2006-01-01

    This study considers the coupling effect between film boiling heat transfer and evaporation drag around a hot particle in cold liquid. Taking the momentum and energy equations of the vapor film into account, a transient single-particle model under FCI conditions has been established. The numerical simulations of a high-temperature particle moving in coolant have been performed using the Gear algorithm. An adaptive dynamic boundary method is adopted during the simulation to match the moving boundary caused by changes of the vapor film. Based on the method presented above, the transient process of high-temperature particles moving in coolant can be simulated. The experimental results prove the validity of the HPMC model. (authors)
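
    The Gear (BDF) time integration mentioned above can be illustrated with SciPy's BDF integrator on a toy stiff system; the right-hand side below is an invented stand-in, not the HPMC model equations.

```python
# Sketch of integrating a stiff particle cooling/motion model with a Gear-type
# (BDF) method; the toy equations are illustrative only.
from scipy.integrate import solve_ivp


def rhs(t, y):
    T, v = y                        # particle temperature (K) and velocity (m/s), toy model
    dTdt = -50.0 * (T - 373.0)      # rapid film-boiling heat loss (stiff term)
    dvdt = 9.81 - 2.0 * v           # gravity minus a crude drag term
    return [dTdt, dvdt]


sol = solve_ivp(rhs, (0.0, 1.0), y0=[2500.0, 0.0], method="BDF", rtol=1e-8)
print("final time:", sol.t[-1], "final state:", sol.y[:, -1])
```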

  15. Surrogate model approach for improving the performance of reactive transport simulations

    Science.gov (United States)

    Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris

    2016-04-01

    Reactive transport models can serve a large number of important geoscientific applications involving underground resources in industry and scientific research. It is common for simulation of reactive transport to consist of at least two coupled simulation models. First is a hydrodynamics simulator that is responsible for simulating the flow of groundwaters and transport of solutes. Hydrodynamics simulators are a well-established technology and can be very efficient. When hydrodynamics simulations are performed without coupled geochemistry, their spatial geometries can span millions of elements even when running on desktop workstations. Second is a geochemical simulation model that is coupled to the hydrodynamics simulator. Geochemical simulation models are much more computationally costly. This is a problem that makes reactive transport simulations spanning millions of spatial elements very difficult to achieve. To address this problem we propose to replace the coupled geochemical simulation model with a surrogate model. A surrogate is a statistical model created to include only the necessary subset of simulator complexity for a particular scenario. To demonstrate the viability of such an approach we tested it on a popular reactive transport benchmark problem that involves 1D Calcite transport. This is a published benchmark problem (Kolditz, 2012) for simulation models and for this reason we use it to test the surrogate model approach. To do this we tried a number of statistical models available through the caret and DiceEval packages for R, to be used as surrogate models. These were trained on a randomly sampled subset of the input-output data from the geochemical simulation model used in the original reactive transport simulation. For validation we use the surrogate model to predict the simulator output using the part of the sampled input data that was not used for training the statistical model. For this scenario we find that the multivariate adaptive regression splines
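
    The surrogate workflow described above (sample the expensive simulator, fit a statistical model, validate on held-out samples) can be sketched as follows. The sketch substitutes a scikit-learn gradient-boosted regressor for the R caret/MARS models used in the abstract, and a toy function stands in for the geochemical solver.

```python
# Sketch of the surrogate idea: learn a cheap statistical stand-in for the
# geochemical solver from sampled input/output pairs.  The "simulator" is a toy
# function and the regressor is an assumption, not the authors' MARS model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split


def expensive_geochemistry(x):
    """Toy stand-in for a costly equilibrium calculation (1 input -> 1 output)."""
    return np.sin(3.0 * x) + 0.1 * x**2


rng = np.random.default_rng(42)
X = rng.uniform(0.0, 4.0, size=(2000, 1))           # sampled simulator inputs
y = expensive_geochemistry(X[:, 0])                  # corresponding outputs

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

surrogate = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", surrogate.score(X_test, y_test))
```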

  16. Manufacturing plant performance evaluation by discrete event simulation

    International Nuclear Information System (INIS)

    Rosli Darmawan; Mohd Rasid Osman; Rosnah Mohd Yusuff; Napsiah Ismail; Zulkiflie Leman

    2002-01-01

    A case study was conducted to evaluate the performance of a manufacturing plant using the discrete event simulation technique. The study was carried out on an animal feed production plant, the Sterifeed plant at the Malaysian Institute for Nuclear Technology Research (MINT), Selangor, Malaysia. The plant was modelled based on the actual manufacturing activities recorded by the operators. The simulation was carried out using a discrete event simulation software package. The model was validated by comparing the simulation results with the actual operational data of the plant. The simulation results show some weaknesses in the current plant design, and proposals were made to improve the plant performance. (Author)
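
    A generic discrete event simulation of a small production line, in the spirit of the study above, might look like the following SimPy sketch; the two-stage layout, processing times and arrival rate are invented and do not describe the Sterifeed plant.

```python
# Generic discrete-event sketch of a two-stage production line in SimPy.
import random
import simpy


def batch(env, name, mixer, bagger):
    with mixer.request() as req:                 # queue for the mixing stage
        yield req
        yield env.timeout(random.uniform(20, 30))
    with bagger.request() as req:                # then queue for bagging
        yield req
        yield env.timeout(random.uniform(10, 15))
    print(f"{name} finished at t={env.now:.1f} min")


def source(env, mixer, bagger, n_batches=10):
    for i in range(n_batches):
        env.process(batch(env, f"batch-{i}", mixer, bagger))
        yield env.timeout(random.expovariate(1 / 25))   # batch arrivals


env = simpy.Environment()
mixer = simpy.Resource(env, capacity=1)
bagger = simpy.Resource(env, capacity=1)
env.process(source(env, mixer, bagger))
env.run(until=8 * 60)                            # one 8-hour shift
```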

  17. EPISTEMOLOGICAL PERCEPTION AND SCIENTIFIC LITERACY IN LEVEL HIGH SCHOOL TEACHERS

    Directory of Open Access Journals (Sweden)

    Ramiro Álvarez-Valenzuela

    2016-07-01

    Research in science education has helped to identify some difficulties that hinder the teaching-learning process. These problems include the conceptual content of school subjects, the influence of the students' prior knowledge, and the fact that teachers have not been epistemologically trained in their university education. This research presents the epistemological conceptions of a sample of 114 high school teachers in the university science area, concerning their ideas about the role of observation in the development of scientific knowledge and the work of scientists in the process of knowledge generation. It also includes the level of scientific literacy drawn from the literature that is used as a source of information on teaching. The results also identify the level of scientific literacy in students and its influence on learning.

  18. 78 FR 13864 - Atlantic Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering...

    Science.gov (United States)

    2013-03-01

    ... Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering Permits; Letters... Permits (EFPs), Scientific Research Permits (SRPs), Display Permits, Letters of Acknowledgment (LOAs), and... scientific research, the acquisition of information and data, the enhancement of safety at sea, the purpose...

  19. 77 FR 69593 - Atlantic Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering...

    Science.gov (United States)

    2012-11-20

    ... Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering Permits; Letters... intent to issue Exempted Fishing Permits (EFPs), Scientific Research Permits (SRPs), Display Permits... public display and scientific research that is exempt from regulations (e.g., fishing seasons, prohibited...

  20. 75 FR 75458 - Atlantic Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering...

    Science.gov (United States)

    2010-12-03

    ... Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering Permits; Letters... intent to issue Exempted Fishing Permits (EFPs), Scientific Research Permits (SRPs), Display Permits... of HMS for public display and scientific research that is exempt from regulations (e.g., seasons...

  1. SLC positron source: Simulation and performance

    International Nuclear Information System (INIS)

    Pitthan, R.; Braun, H.; Clendenin, J.E.; Ecklund, S.D.; Helm, R.H.; Kulikov, A.V.; Odian, A.C.; Pei, G.X.; Ross, M.C.; Woodley, M.D.

    1991-06-01

    Performance of the source was found to be in good general agreement with computer simulations with S-band acceleration, and where it was not, the simulations led to the identification of problems, in particular the underestimated impact of linac misalignments due to the 1989 Loma Prieta Earthquake. 13 refs., 7 figs

  2. Simulating Performance Risk for Lighting Retrofit Decisions

    Directory of Open Access Journals (Sweden)

    Jia Hu

    2015-05-01

    In building retrofit projects, dynamic simulations are performed to assess building performance. Uncertainty may negatively affect model calibration and predicted lighting energy savings, which increases the chance of default on performance-based contracts. Therefore, the aim of this paper is to develop a simulation-based method that can analyze lighting performance risk in lighting retrofit decisions. The method uses a surrogate model, which is constructed by adaptively selecting sample points and generating approximation surfaces with fast computing time. The surrogate model is a replacement for the computation-intensive process. A statistical method is developed to generate extreme weather profiles based on 20 years of historical weather data. A stochastic occupancy model was created using actual occupancy data to generate realistic occupancy patterns. Energy usage of lighting, and heating, ventilation, and air conditioning (HVAC) is simulated using EnergyPlus. The method can evaluate the influence of different risk factors (e.g., variation of luminaire input wattage, varying weather conditions) on lighting and HVAC energy consumption and lighting electricity demand. Probability distributions are generated to quantify the risk values. A case study was conducted to demonstrate and validate the methods. The surrogate model is a good solution for quantifying the risk factors and the probability distribution of the building performance.
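
    The risk-quantification idea above, propagating uncertain inputs into a probability distribution of an energy outcome, can be sketched with a plain Monte Carlo loop. All distributions, fixture counts and the savings guarantee below are invented, and a simple product model replaces the EnergyPlus/surrogate evaluation.

```python
# Hedged Monte Carlo sketch: uncertainty in luminaire wattage and burn hours
# propagated into a distribution of annual lighting energy savings.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000

n_fixtures = 400
wattage = rng.normal(loc=32.0, scale=1.5, size=n_samples)        # W per fixture (assumed)
hours = rng.triangular(2400, 2800, 3200, size=n_samples)         # annual burn hours (assumed)

annual_kwh = n_fixtures * wattage * hours / 1000.0

baseline_kwh = 400 * 50.0 * 2800 / 1000.0                        # assumed pre-retrofit usage
savings = baseline_kwh - annual_kwh

p5, p50, p95 = np.percentile(savings, [5, 50, 95])
print(f"savings kWh/yr: P5={p5:,.0f}  median={p50:,.0f}  P95={p95:,.0f}")
print("probability of missing an 18,000 kWh guarantee:", np.mean(savings < 18000))
```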

  3. Hand ultrasound: a high-fidelity simulation of lung sliding.

    Science.gov (United States)

    Shokoohi, Hamid; Boniface, Keith

    2012-09-01

    Simulation training has been effectively used to integrate didactic knowledge and technical skills in emergency and critical care medicine. In this article, we introduce a novel model of simulating lung ultrasound and the features of lung sliding and pneumothorax by performing a hand ultrasound. The simulation model involves scanning the palmar aspect of the hand to create normal lung sliding in varying modes of scanning and to mimic ultrasound features of pneumothorax, including "stratosphere/barcode sign" and "lung point." The simple, reproducible, and readily available simulation model we describe demonstrates a high-fidelity simulation surrogate that can be used to rapidly illustrate the signs of normal and abnormal lung sliding at the bedside. © 2012 by the Society for Academic Emergency Medicine.

  4. An exploration of the relationship between knowledge and performance-related variables in high-fidelity simulation: designing instruction that promotes expertise in practice.

    Science.gov (United States)

    Hauber, Roxanne P; Cormier, Eileen; Whyte, James

    2010-01-01

    Increasingly, high-fidelity patient simulation (HFPS) is becoming essential to nursing education. Much remains unknown about how classroom learning is connected to student decision-making in simulation scenarios and the degree to which transference takes place between the classroom setting and actual practice. The present study was part of a larger pilot study aimed at determining the relationship between nursing students' clinical ability to prioritize their actions and the associated cognitions and physiologic outcomes of care using HFPS. In an effort to better explain the knowledge base being used by nursing students in HFPS, the investigators explored the relationship between common measures of knowledge and performance-related variables. Findings are discussed within the context of the expert performance approach and concepts from cognitive psychology, such as cognitive architecture, cognitive load, memory, and transference.

  5. Aircraft Performance for Open Air Traffic Simulations

    NARCIS (Netherlands)

    Metz, I.C.; Hoekstra, J.M.; Ellerbroek, J.; Kugler, D.

    2016-01-01

    The BlueSky Open Air Traffic Simulator developed by the Control & Simulation section of TU Delft aims at supporting research for analysing Air Traffic Management concepts by providing an open source simulation platform. The goal of this study was to complement BlueSky with aircraft performance

  6. An empirical investigation of operator performance in cognitively demanding simulated emergencies

    International Nuclear Information System (INIS)

    Roth, E.M.; Mumaw, R.J.; Lewis, P.M.

    1994-07-01

    This report documents the results of an empirical study of nuclear power plant operator performance in cognitively demanding simulated emergencies. During emergencies operators follow highly prescriptive written procedures. The objectives of the study were to understand and document what role higher-level cognitive activities such as diagnosis, or more generally 'situation assessment', play in guiding operator performance, given that operators utilize procedures in responding to the events. The study examined crew performance in variants of two emergencies: (1) an Interfacing System Loss of Coolant Accident and (2) a Loss of Heat Sink scenario. Data on operator performance were collected using training simulators at two plant sites. Up to 11 crews from each plant participated in each of two simulated emergencies for a total of 38 cases. Crew performance was videotaped and partial transcripts were produced and analyzed. The results revealed a number of instances where higher-level cognitive activities such as situation assessment and response planning enabled crews to handle aspects of the situation that were not fully addressed by the procedures. This report documents these cases and discusses their implications for the development and evaluation of training and control room aids, as well as for human reliability analyses

  7. Simulation of plasma loading of high-pressure RF cavities

    Energy Technology Data Exchange (ETDEWEB)

    Yu, K. [Brookhaven National Lab. (BNL), Upton, NY (United States). Computational Science Initiative; Samulyak, R. [Brookhaven National Lab. (BNL), Upton, NY (United States). Computational Science Initiative; Stony Brook Univ., NY (United States). Dept. of Applied Mathematics and Statistics; Yonehara, K. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Freemire, B. [Northern Illinois Univ., DeKalb, IL (United States)

    2018-01-11

    Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high pressure hydrogen gas with 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have also been performed over the range of parameters typical of practical muon cooling channels.

  8. Simulation of plasma loading of high-pressure RF cavities

    Science.gov (United States)

    Yu, K.; Samulyak, R.; Yonehara, K.; Freemire, B.

    2018-01-01

    Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high pressure hydrogen gas with 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have been performed over the range of parameters typical of practical muon cooling channels.

  9. MDT Performance in a High Rate Background Environment

    CERN Document Server

    Aleksa, Martin; Hessey, N P; Riegler, W

    1998-01-01

    A Cs137 gamma source with different lead filters in the SPS beam-line X5 has been used to simulate the ATLAS background radiation. This note shows the impact of high background rates on the MDT efficiency and resolution for three kinds of pulse shaping and compares the results with GARFIELD simulations. Furthermore it explains how the performance can be improved by time slewing corrections and double track separation.

  10. A Grid-Based Cyber Infrastructure for High Performance Chemical Dynamics Simulations

    Directory of Open Access Journals (Sweden)

    Khadka Prashant

    2008-10-01

    Chemical dynamics simulation is an effective means to study atomic level motions of molecules, collections of molecules, liquids, surfaces, interfaces of materials, and chemical reactions. To make chemical dynamics simulations globally accessible to a broad range of users, a cyber infrastructure was recently developed that provides an online portal to VENUS, a popular chemical dynamics simulation program package, allowing people to submit simulation jobs to be executed on the web server machine. In this paper, we report new developments of the cyber infrastructure for the improvement of its quality of service: dispatching the submitted simulation jobs from the web server machine onto a cluster of workstations for execution, and adding an animation tool, which is optimized for animating the simulation results. The separation of the server machine from the simulation-running machine improves the service quality by increasing the capacity to serve more requests simultaneously with even reduced web response time, and allows the execution of large-scale, time-consuming simulation jobs on the powerful workstation cluster. With the addition of an animation tool, the cyber infrastructure automatically converts, upon the selection of the user, some simulation results into an animation file that can be viewed in usual web browsers without requiring installation of any special software on the user's computer. Since animation is essential for understanding the results of chemical dynamics simulations, this animation capacity provides a better way of understanding the simulation details of the chemical dynamics. By combining computing resources at locations under different administrative controls, this cyber infrastructure constitutes a grid environment providing physically and administratively distributed functionalities through a single easy-to-use online portal.

  11. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern High Energy Physics. To perform precision measurements of the Higgs boson properties, the use of fast and efficient instruments of Monte Carlo event simulation is required. Due to the increasing amount of data and to the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One of the possibilities to address this shortfall of computing resources is the usage of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of the Higgs boson physics, the Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and Kurchatov Institute Data Processing Center, including Tier...

  12. Performance simulation of a MRPC-based PET imaging system

    Science.gov (United States)

    Roy, A.; Banerjee, A.; Biswas, S.; Chattopadhyay, S.; Das, G.; Saha, S.

    2014-10-01

    The less expensive and high-resolution Multi-gap Resistive Plate Chamber (MRPC) opens up a new possibility of finding an efficient alternative detector for Time of Flight (TOF) based Positron Emission Tomography (PET), where the sensitivity of the system depends largely on the time resolution of the detector. In a layered structure, suitable converters can be used to increase the photon detection efficiency. In this work, we perform a detailed GEANT4 simulation to optimize the converter thickness towards improving the efficiency of photon conversion. A Monte Carlo based procedure has been developed to simulate the time resolution of the MRPC-based system, making it possible to simulate its response for PET imaging applications. The results of the test of a six-gap MRPC, operating in avalanche mode, with a 22Na source are also discussed.
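
    To show how detector time resolution drives TOF-PET performance, the toy Monte Carlo below smears two photon arrival times and reconstructs the annihilation position along the line of response. The 100 ps resolution, geometry and event count are assumptions for illustration, not MRPC measurements.

```python
# Toy Monte Carlo of how time resolution enters TOF-PET position reconstruction.
import numpy as np

C = 299.792458            # mm/ns, speed of light
SIGMA_T = 0.100           # ns, assumed single-detector time resolution

rng = np.random.default_rng(7)
n_events = 100_000

true_offset = 50.0        # mm, annihilation point offset from the center
d = 800.0                 # mm, detector separation

t1 = (d / 2 - true_offset) / C + rng.normal(0, SIGMA_T, n_events)
t2 = (d / 2 + true_offset) / C + rng.normal(0, SIGMA_T, n_events)

reconstructed = 0.5 * C * (t2 - t1)          # TOF position estimate along the LOR
print("mean position (mm):", reconstructed.mean())
print("position resolution (std, mm):", reconstructed.std())
```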

  13. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the third issue, presenting the introduction of continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  14. Return on Scientific Investment - RoSI: a PMO dynamical index proposal for scientific projects performance evaluation and management.

    Science.gov (United States)

    Caous, Cristofer André; Machado, Birajara; Hors, Cora; Zeh, Andrea Kaufmann; Dias, Cleber Gustavo; Amaro Junior, Edson

    2012-01-01

    To propose a measure (index) of expected risks to evaluate and follow up the performance analysis of research projects, involving financial parameters and the structure adequate for their development. A ranking of acceptable results regarding research projects with complex variables was used as an index to gauge project performance. In order to implement this method, the ulcer index was applied as the basic model to accommodate the following variables: costs, high-impact publications, fund raising, and patent registry. The proposed structured analysis, named here RoSI (Return on Scientific Investment), comprises a pipeline of analyses to characterize risk, based on a modeling tool that combines multiple variables interacting in semi-quantitative environments. This method was tested with data from three different projects in our Institution (projects A, B and C). Different curves reflected the ulcer indices, identifying the project that may have minor risk (project C) related to development and expected results according to initial or full investment. The results showed that this model contributes significantly to the analysis of risk and planning, as well as to the definition of the necessary investments, considering contingency actions with benefits to the different stakeholders: the investor or donor, the project manager and the researchers.
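
    For readers unfamiliar with the ulcer index that RoSI builds on, the sketch below computes the standard definition (root-mean-square percentage drawdown from the running maximum) for a generic cumulative series. The example numbers are hypothetical, and the paper's specific adaptation to costs, publications, fund raising and patents is not reproduced here.

    ```python
    # Standard ulcer index, assuming a time series of a cumulative performance variable.
    import numpy as np

    def ulcer_index(series: np.ndarray) -> float:
        """Root-mean-square percentage drawdown from the running maximum."""
        running_max = np.maximum.accumulate(series)
        drawdown_pct = 100.0 * (series - running_max) / running_max
        return float(np.sqrt(np.mean(drawdown_pct ** 2)))

    # Example: cumulative project "value" over review periods (made-up numbers).
    print(ulcer_index(np.array([1.0, 1.2, 0.9, 1.1, 1.5, 1.3])))
    ```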

  15. Research on high-performance mass storage system

    International Nuclear Information System (INIS)

    Cheng Yaodong; Wang Lu; Huang Qiulan; Zheng Wei

    2010-01-01

    With the growing scale of scientific experiments, more and more data is produced, which poses a great challenge to storage systems. Large storage capacity and high data access performance are both important for a mass storage system. This paper first reviews several popular storage systems, including network storage systems, SAN-based sharing systems, WAN file systems, object-based parallel file systems, hierarchical storage systems and cloud storage systems. Then some key technologies are presented. Finally, this paper takes the BES storage system as an example and introduces its requirements, architecture and operation results. (authors)

  16. Simulations of depleted CMOS sensors for high-radiation environments

    CERN Document Server

    Liu, J.; Bhat, S.; Breugnon, P.; Caicedo, I.; Chen, Z.; Degerli, Y.; Godiot-Basolo, S.; Guilloux, F.; Hemperek, T.; Hirono, T.; Hügging, F.; Krüger, H.; Moustakas, K.; Pangaud, P.; Rozanov, A.; Rymaszewski, P.; Schwemling, P.; Wang, M.; Wang, T.; Wermes, N.; Zhang, L.

    2017-01-01

    After the Phase II upgrade of the Large Hadron Collider (LHC), the increased luminosity requires a new upgraded Inner Tracker (ITk) for the ATLAS experiment. As a possible option for the ATLAS ITk, a new pixel detector based on High Voltage/High Resistivity CMOS (HV/HR CMOS) technology is under study. Meanwhile, a new CMOS pixel sensor is also under development for the tracker of the Circular Electron Positron Collider (CEPC). In order to explore the sensor electric properties, such as the breakdown voltage and charge collection efficiency, 2D/3D Technology Computer Aided Design (TCAD) simulations have been performed carefully for both of the above-mentioned prototypes. In this paper, the guard-ring simulation for a HV/HR CMOS sensor developed for the ATLAS ITk and the charge collection efficiency simulation for a CMOS sensor explored for the CEPC tracker are discussed in detail. Some comparisons between the simulations and the latest measurements are also addressed.

  17. Simulator of Cryogenic process and Refrigeration, and its Control in scientific-nuclear facilities with EcosimPro

    International Nuclear Information System (INIS)

    Veleiro Blanco, A. M.

    2011-01-01

    Cryogenic plants and their control in scientific-nuclear facilities are complicated by the large number of variables and their wide range of variation during operation. Initially, the design and control of these systems at CERN was based on steady-state calculations, which did not yield the expected results. Due to their complexity, dynamic simulation is the only way to obtain adequate results during operational transients.

  18. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test ground for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  19. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  20. Optical Characterization and Energy Simulation of Glazing for High-Performance Windows

    International Nuclear Information System (INIS)

    Jonsson, Andreas

    2010-01-01

    This thesis focuses on one important component of the energy system - the window. Windows are installed in buildings mainly to create visual contact with the surroundings and to let in daylight, and should also be heat and sound insulating. This thesis covers four important aspects of windows: antireflection coatings, switchable coatings, energy simulations and optical measurements. Energy simulations have been used to compare different windows and also to estimate the performance of smart or switchable windows, whose transmittance can be regulated. The results from this thesis show the potential of the emerging technology of smart windows, not only from a daylight and an energy perspective, but also for comfort and well-being. The importance of a well-functioning control system for such windows is pointed out. To fulfill all requirements of modern windows, they often have two or more panes. Each glass surface leads to reflection of light and therefore less daylight is transmitted. It is therefore of interest to find ways to increase the transmittance. In this thesis antireflection coatings, similar to those found on eye-glasses and LCD screens, have been investigated. For large area applications such as windows, it is necessary to use techniques which can easily be adapted to large scale manufacturing at low cost. Such a technique is dip-coating in a sol-gel of porous silica. Antireflection coatings have been deposited on glass and plastic materials to study both visual and energy performance, and it has been shown that antireflection coatings increase the transmittance of windows without negatively affecting the thermal insulation and the energy efficiency. Optical measurements are important for quantifying product properties for comparisons and evaluations. It is important that new measurement routines are simple and applicable to standard commercial instruments. Different systematic error sources for optical measurements of patterned light diffusing samples using

  1. Cavitation performance improvement of high specific speed mixed-flow pump

    International Nuclear Information System (INIS)

    Chen, T; Sun, Y B; Wu, D Z; Wang, L Q

    2012-01-01

    Cavitation performance improvement of large hydraulic machinery such as pumps and turbines has been a hot topic for decades. During the design process, in order to minimize size, weight and cost, centrifugal and mixed-flow pump impellers are required to operate at the highest possible rotational speed. The rotational speed is limited by the phenomenon of cavitation. The hydraulic model of a high-speed mixed-flow pump with large flow rate and high pumping head, designed based on the traditional method, always involves poor cavitation performance. In this paper, on the basis of the same hydraulic design parameters, two hydraulic models of a high-speed mixed-flow pump were designed using different methods. In order to investigate the cavitation and hydraulic performance of the two models, computational fluid dynamics (CFD) was adopted for internal flow simulation of the high specific speed mixed-flow pump. Based on the results of the numerical simulation, the influences of the impeller parameters and the three-dimensional configuration on the pressure distribution of the blades' suction surfaces were analyzed. The numerical simulation results show a better pressure distribution and a lower pressure drop around the leading edge of the improved model. The research results could provide references for the design and optimization of anti-cavitation blades.

  2. A Collaborative Extensible User Environment for Simulation and Knowledge Management

    Energy Technology Data Exchange (ETDEWEB)

    Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.; Schuchardt, Karen L.; Guillen, Zoe C.; Sivaramakrishnan, Chandrika; Gorton, Ian

    2015-06-01

    In scientific simulation, scientists use measured data to create numerical models, execute simulations and analyze results from advanced simulators executing on high performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna that permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework, which provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described, and its usage is demonstrated through the creation and execution of a 3D subsurface simulation.

  3. Direct numerical simulations of turbulent lean premixed combustion

    International Nuclear Information System (INIS)

    Sankaran, Ramanan; Hawkes, Evatt R; Chen, Jacqueline H; Lu Tianfeng; Law, Chung K

    2006-01-01

    In recent years, due to the advent of high-performance computers and advanced numerical algorithms, direct numerical simulation (DNS) of combustion has emerged as a valuable computational research tool, in concert with experimentation. The role of DNS in delivering new scientific insight into turbulent combustion is illustrated using results from a recent 3D turbulent premixed flame simulation. To understand the influence of turbulence on the flame structure, a 3D fully-resolved DNS of a spatially-developing lean methane-air turbulent Bunsen flame was performed in the thin reaction zones regime. A reduced chemical model for methane-air chemistry consisting of 13 resolved species, 4 quasi-steady state species and 73 elementary reactions was developed specifically for the current simulation. The data is analyzed to study possible influences of turbulence on the flame thickness. The results show that the average flame thickness increases, in qualitative agreement with several experimental results.

  4. Simulations of High Speed Fragment Trajectories

    Science.gov (United States)

    Yeh, Peter; Attaway, Stephen; Arunajatesan, Srinivasan; Fisher, Travis

    2017-11-01

    Shrapnel fragments from an explosion are capable of traveling at supersonic speeds and over distances much farther than expected due to aerodynamic interactions. Predicting the trajectories and stable tumbling modes of arbitrarily shaped fragments is a fundamental problem applicable to range safety calculations, damage assessment, and military technology. Traditional approaches rely on characterizing fragment flight using a single drag coefficient, which may be inaccurate for fragments with large aspect ratios. In our work we develop a procedure to simulate trajectories of arbitrarily shaped fragments with higher fidelity using high performance computing. We employ a two-step approach in which the force and moment coefficients are first computed as a function of orientation using compressible computational fluid dynamics. The force and moment data are then input into a six-degree-of-freedom rigid body dynamics solver to integrate trajectories in time. Results of these high fidelity simulations allow us to further understand the flight dynamics and tumbling modes of a single fragment. Furthermore, we use these results to determine the validity and uncertainty of inexpensive methods such as the single drag coefficient model.
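
    The two-step approach described above (tabulated aerodynamic coefficients from CFD, then time integration of the equations of motion) can be caricatured with a reduced planar point-mass sketch. The Cd(orientation) table, mass, area and tumbling rate below are made-up stand-ins for the CFD output, and the full six-degree-of-freedom rigid-body solver of the paper is not reproduced.

    ```python
    # Reduced two-step sketch: a tabulated drag coefficient versus orientation
    # (standing in for the CFD step) feeds a planar trajectory integrator.
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha_tab = np.linspace(0.0, np.pi, 19)                  # orientation angle [rad]
    cd_tab = 0.8 + 0.6 * np.sin(alpha_tab) ** 2              # hypothetical CFD output
    rho, area, mass, g, spin = 1.225, 1e-3, 0.05, 9.81, 20.0  # assumed SI values

    def rhs(t, y):
        x, z, vx, vz = y
        alpha = (spin * t) % np.pi                           # prescribed tumbling
        cd = np.interp(alpha, alpha_tab, cd_tab)
        v = np.hypot(vx, vz)
        drag = 0.5 * rho * cd * area * v                     # drag force divided by |v|
        return [vx, vz, -drag * vx / mass, -g - drag * vz / mass]

    sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0, 800.0, 100.0], max_step=1e-3)
    print(f"downrange distance after 5 s: {sol.y[0, -1]:.1f} m")
    ```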

  5. Microsurgical Performance After Sleep Interruption: A NeuroTouch Simulator Study.

    Science.gov (United States)

    Micko, Alexander; Knopp, Karoline; Knosp, Engelbert; Wolfsberger, Stefan

    2017-10-01

    In times of the ubiquitous debate about doctors' working hour restrictions, it is still questionable whether a physician's performance is impaired by high workload and long shifts. In this study, we evaluated the impact of sleep interruption on neurosurgical performance. Ten medical students and 10 neurosurgical residents were tested on the virtual-reality simulator NeuroTouch by performing an identical microsurgical task, well rested (baseline test) and after sleep interruption at night (stress test). Deviation of total score, timing, and excessive force on tissue were evaluated. In addition, vital parameters and self-assessment were analyzed. After sleep interruption, the total performance score increased significantly (45.1 vs. 48.7, baseline vs. stress test, P = 0.048) while timing remained stable (10.1 vs. 10.4 minutes for baseline vs. stress test, P > 0.05) for both students and residents. Excessive force decreased in both groups during the stress test for the nondominant hand (P = 0.05). For the dominant hand, an increase of excessive force was encountered in the group of residents (P = 0.05). In contrast to these results, participants of both groups assessed their performance as worse during the stress test. In our study, we found an increase of neurosurgical simulator performance in neurosurgical residents and medical students under simulated night shift conditions. Further, microsurgical dexterity remained unchanged. Based on our results and the data in the available literature, we cannot confirm that working hour restrictions will have a positive effect on neurosurgical performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Impact of Loss Synchronization on Reliable High Speed Networks: A Model Based Simulation

    Directory of Open Access Journals (Sweden)

    Suman Kumar

    2014-01-01

    Full Text Available The contemporary nature of network evolution demands simulation models that are flexible, scalable, and easily implementable. In this paper, we propose a fluid-based model for performance analysis of reliable high speed networks. In particular, this paper aims to study the dynamic relationship between congestion control algorithms and queue management schemes, in order to develop a better understanding of the causal linkages between the two. We propose a loss synchronization module which is user configurable. We validate our model through simulations under controlled settings. Also, we present a performance analysis to provide insights into two important issues concerning 10 Gbps high speed networks: (i) the impact of bottleneck buffer size on the performance of a 10 Gbps high speed network and (ii) the impact of the level of loss synchronization on link utilization-fairness tradeoffs. The practical impact of the proposed work is to provide design guidelines along with a powerful simulation tool to protocol designers and network developers.
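
    The interplay the paper studies between congestion control, queue management and loss synchronization can be hinted at with a generic, textbook-style sketch (this is not the authors' fluid model): several AIMD flows share a drop-tail bottleneck, and a synchronization factor decides what fraction of flows back off at each loss event, which in turn affects link utilization.

    ```python
    # Toy AIMD/drop-tail simulation with a tunable loss synchronization factor.
    # All parameters are illustrative, not taken from the paper.
    import numpy as np

    def simulate(sync_fraction=0.5, n_flows=10, capacity=1250.0, rtt=0.1,
                 q_limit=500.0, dt=1e-3, t_end=60.0):
        w = np.linspace(2.0, 4.0, n_flows)      # per-flow windows [packets]
        q = 0.0                                 # bottleneck queue [packets]
        util = []
        for _ in np.arange(0.0, t_end, dt):
            w += dt / rtt                       # additive increase: +1 packet per RTT
            rate = w.sum() / rtt                # aggregate sending rate [pkt/s]
            q = max(0.0, q + (rate - capacity) * dt)
            if q >= q_limit:                    # drop-tail loss event
                hit = np.random.random(n_flows) < sync_fraction
                w[hit] *= 0.5                   # synchronized flows halve their windows
                q = 0.9 * q_limit
            util.append(min(rate, capacity) / capacity)
        return float(np.mean(util))

    print({s: round(simulate(sync_fraction=s), 3) for s in (0.2, 0.6, 1.0)})
    ```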

  7. SCIENTIFIC BASIS OF DENTISTRY

    Directory of Open Access Journals (Sweden)

    Yegane GÜVEN

    2017-10-01

    Full Text Available Technological and scientific innovations have increased exponentially over the past years in the dentistry profession. In this article, these developments are evaluated both in terms of clinical practice and their place in the educational program. The effects of the biologic and digital revolutions on dental education and daily clinical practice are also reviewed. Biomimetics, personalized dental medicine, regenerative dentistry, nanotechnology, high-end simulations providing virtual reality, genomic information, and stem cell studies will gain more importance in the coming years, moving dentistry to a different dimension.

  8. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  9. Scientific performances of the XAA1.2 front-end chip for silicon microstrip detectors

    International Nuclear Information System (INIS)

    Del Monte, Ettore; Soffitta, Paolo; Morelli, Ennio; Pacciani, Luigi; Porrovecchio, Geiland; Rubini, Alda; Uberti, Olga; Costa, Enrico; Di Persio, Giuseppe; Donnarumma, Immacolata; Evangelista, Yuri; Feroci, Marco; Lazzarotto, Francesco; Mastropietro, Marcello; Rapisarda, Massimo

    2007-01-01

    The XAA1.2 is a custom ASIC chip for silicon microstrip detectors adapted by Ideas for the SuperAGILE instrument on board the AGILE space mission. The chip is equipped with 128 input channels, each one containing a charge preamplifier, shaper, peak detector and stretcher. The most important features of the ASIC are the extended linearity, low noise and low power consumption. The XAA1.2 underwent extensive laboratory testing in order to study its commandability and functionality and evaluate its scientific performances. In this paper we describe the XAA1.2 features, report the laboratory measurements and discuss the results emphasizing the scientific performances in the context of the SuperAGILE front-end electronics

  10. Kinetic Energy from Supernova Feedback in High-resolution Galaxy Simulations

    Science.gov (United States)

    Simpson, Christine M.; Bryan, Greg L.; Hummels, Cameron; Ostriker, Jeremiah P.

    2015-08-01

    We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a cartesian grid by directly altering grid cells’ mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (~10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 10⁹ M⊙ dwarf halo. We find that in high-density media (≳50 cm⁻³) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.
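
    The bookkeeping behind the method sketched above (alter cell velocities so that a prescribed amount of kinetic energy is deposited around the explosion site) can be illustrated as follows. The grid size, injection radius and uniform radial kick speed are simplifying assumptions for illustration; the paper's resolution-dependent kinetic fraction and mass redistribution are not reproduced.

    ```python
    # Illustrative deposition of a target kinetic energy as radial velocity kicks
    # in the cells around an explosion site (uniform kick speed, zero initial motion).
    import numpy as np

    def inject_kinetic_energy(mass, velocity, center, target_ke, radius=2):
        """mass: (n,n,n) cell masses; velocity: (n,n,n,3); center: cell index."""
        idx = np.indices(mass.shape).transpose(1, 2, 3, 0)   # cell coordinates
        offset = idx - np.asarray(center)
        dist = np.linalg.norm(offset, axis=-1)
        sel = (dist > 0) & (dist <= radius)
        r_hat = np.zeros_like(velocity)
        r_hat[sel] = offset[sel] / dist[sel, None]            # outward unit vectors
        # Kick speed chosen so that sum(0.5 * m * dv^2) over selected cells = target_ke.
        dv = np.sqrt(2.0 * target_ke / mass[sel].sum())
        velocity[sel] += dv * r_hat[sel]
        return velocity

    rho = np.ones((16, 16, 16))
    vel = np.zeros((16, 16, 16, 3))
    inject_kinetic_energy(rho, vel, center=(8, 8, 8), target_ke=1.0)
    print(0.5 * np.sum(rho[..., None] * vel ** 2))            # ~1.0 for zero initial motion
    ```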

  11. GROMACS 4.5: A high-throughput and highly parallel open source molecular simulation toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Pronk, Sander [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Pall, Szilard [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Schulz, Roland [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Larsson, Per [Univ. of Virginia, Charlottesville, VA (United States); Bjelkmar, Par [Science for Life Lab., Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden); Apostolov, Rossen [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Shirts, Michael R. [Univ. of Virginia, Charlottesville, VA (United States); Smith, Jeremy C. [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kasson, Peter M. [Univ. of Virginia, Charlottesville, VA (United States); van der Spoel, David [Science for Life Lab., Stockholm (Sweden); Uppsala Univ., Uppsala (Sweden); Hess, Berk [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Lindahl, Erik [Science for Life Lab., Stockholm (Sweden); KTH Royal Institute of Technology, Stockholm (Sweden); Stockholm Univ., Stockholm (Sweden)

    2013-02-13

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on a massive scale in clusters, web servers, distributed computing or cloud resources. We therefore present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations.

  12. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
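
    The structure of such a model (compute time inflated by memory-bandwidth contention among the threads sharing a node, plus a parameterized latency/bandwidth communication term) can be written down in a few lines. All coefficients below are placeholders meant to be fitted from STREAM and MPI benchmark measurements; they are not values from the paper.

    ```python
    # Toy performance model: contended compute time plus a communication term.
    def predicted_runtime(threads_per_node, work_per_core_s=10.0,
                          bw_one_thread=8.0, bw_node_total=20.0,
                          comm_rounds=100, latency_s=2e-6,
                          msg_bytes=1.0e6, link_bytes_per_s=2.0e9):
        """Predicted wall time per work block under weak scaling (toy model)."""
        # Per-thread sustained bandwidth shrinks once the node's aggregate
        # STREAM bandwidth is saturated, inflating the compute time.
        per_thread_bw = min(bw_one_thread, bw_node_total / threads_per_node)
        t_compute = work_per_core_s * (bw_one_thread / per_thread_bw)
        # Parameterized communication: latency plus size/bandwidth per exchange.
        t_comm = comm_rounds * (latency_s + msg_bytes / link_bytes_per_s)
        return t_compute + t_comm

    for threads in (1, 2, 4, 8, 16):
        print(f"{threads:2d} threads/node -> {predicted_runtime(threads):7.2f} s")
    ```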

  14. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  15. The effects of fatigue on performance in simulated nursing work.

    Science.gov (United States)

    Barker, Linsey M; Nussbaum, Maury A

    2011-09-01

    Fatigue is associated with increased rates of medical errors and healthcare worker injuries, yet existing research in this sector has not considered multiple dimensions of fatigue simultaneously. This study evaluated hypothesised causal relationships between mental and physical fatigue and performance. High and low levels of mental and physical fatigue were induced in 16 participants during simulated nursing work tasks in a laboratory setting. Task-induced changes in fatigue dimensions were quantified using both subjective and objective measures, as were changes in performance on physical and mental tasks. Completing the simulated work tasks increased total fatigue, mental fatigue and physical fatigue in all experimental conditions. Higher physical fatigue adversely affected measures of physical and mental performance, whereas higher mental fatigue had a positive effect on one measure of mental performance. Overall, these results suggest causal effects between manipulated levels of mental and physical fatigue and task-induced changes in mental and physical performance. STATEMENT OF RELEVANCE: Nurse fatigue and performance has implications for patient and provider safety. Results from this study demonstrate the importance of a multidimensional view of fatigue in understanding the causal relationships between fatigue and performance. The findings can guide future work aimed at predicting fatigue-related performance decrements and designing interventions.

  16. Magnetic field simulation and shimming analysis of 3.0T superconducting MRI system

    Science.gov (United States)

    Yue, Z. K.; Liu, Z. Z.; Tang, G. S.; Zhang, X. C.; Duan, L. J.; Liu, W. C.

    2018-04-01

    The 3.0T superconducting magnetic resonance imaging (MRI) system has become the mainstream of modern clinical MRI systems because of its high field intensity and high degree of uniformity and stability. It has broad prospects in scientific research and other fields. We analyze the principles of magnet design in this paper. We also perform the magnetic field simulation and shimming analysis of the first 3.0T/850 superconducting MRI system in the world using the Ansoft Maxwell simulation software. We guide the production and optimization of the prototype based on the results of the simulation analysis. The magnetic field strength, magnetic field uniformity and magnetic field stability of the prototype are thus guided to achieve the expected targets.

  17. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Maxine D. [Acting Director, EVL; Leigh, Jason [PI

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally, and help train the next-generation workforce.

  18. Design of DSP-based high-power digital solar array simulator

    Science.gov (United States)

    Zhang, Yang; Liu, Zhilong; Tong, Weichao; Feng, Jian; Ji, Yibo

    2013-12-01

    With the increase of global energy consumption, research on photovoltaic (PV) systems is receiving more and more attention. Research on digital high-power solar array simulators provides technical support for research on high-power grid-connected PV systems. This paper introduces a design scheme for a high-power digital solar array simulator based on the TMS320F28335 DSP. A DC-DC full-bridge topology is used in the system's main circuit. The switching frequency of the IGBT is 25 kHz. The maximum output voltage is 900 V and the maximum output current is 20 A. The simulator can store pre-set solar panel I-V curves, each composed of 128 discrete points. When the system is running, the main circuit voltage and current values are fed back to the DSP in real time by the voltage and current sensors. Through an incremental PI algorithm, the DSP controls the simulator in a closed-loop fashion. Experimental data show that the simulator's output voltage and current follow the preset solar panel I-V curve. Connected to a high-power inverter, the system becomes a grid-connected PV system. The inverter can find the simulator's maximum power point, and the output power can be stabilized at the maximum power point (MPP).
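
    The control idea described above (a stored 128-point I-V characteristic, voltage/current feedback, and an incremental PI adjustment) can be sketched as follows. The curve shape, gains, loop rate and duty-cycle limits are placeholder assumptions, not the DSP firmware.

    ```python
    # Sketch of table-lookup I-V tracking with an incremental PI controller.
    import numpy as np

    V_OC, I_SC = 900.0, 20.0
    v_points = np.linspace(0.0, V_OC, 128)
    i_points = I_SC * (1.0 - (v_points / V_OC) ** 8)         # stand-in I-V curve shape

    class IncrementalPI:
        def __init__(self, kp=0.002, ki=0.05, dt=1.0 / 25000.0):   # assumed 25 kHz loop
            self.kp, self.ki, self.dt, self.prev_err, self.duty = kp, ki, dt, 0.0, 0.5
        def step(self, err):
            # Incremental form: the output change depends on the change in error.
            self.duty += self.kp * (err - self.prev_err) + self.ki * self.dt * err
            self.duty = min(0.95, max(0.05, self.duty))
            self.prev_err = err
            return self.duty

    pi = IncrementalPI()
    v_meas, i_meas = 600.0, 5.0                              # pretend sensor readings
    i_target = np.interp(v_meas, v_points, i_points)         # look up the stored curve
    duty = pi.step(i_target - i_meas)
    print(f"target {i_target:.2f} A -> new duty cycle {duty:.3f}")
    ```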

  19. Simulated astigmatism impairs academic-related performance in children.

    Science.gov (United States)

    Narayanasamy, Sumithira; Vincent, Stephen J; Sampson, Geoff P; Wood, Joanne M

    2015-01-01

    Astigmatism is an important refractive condition in children. However, the functional impact of uncorrected astigmatism in this population is not well established, particularly with regard to academic performance. This study investigated the impact of simulated bilateral astigmatism on academic-related tasks before and after sustained near work in children. Twenty visually normal children (mean age: 10.8 ± 0.7 years; six males and 14 females) completed a range of standardised academic-related tests with and without 1.50 D of simulated bilateral astigmatism (with both academic-related tests and the visual condition administered in a randomised order). The simulated astigmatism was induced using a positive cylindrical lens while maintaining a plano spherical equivalent. Performance was assessed before and after 20 min of sustained near work, during two separate testing sessions. Academic-related measures included a standardised reading test (the Neale Analysis of Reading Ability), visual information processing tests (Coding and Symbol Search subtests from the Wechsler Intelligence Scale for Children) and a reading-related eye movement test (the Developmental Eye Movement test). Each participant was systematically assigned either with-the-rule (WTR, axis 180°) or against-the-rule (ATR, axis 90°) simulated astigmatism to evaluate the influence of axis orientation on any decrements in performance. Reading, visual information processing and reading-related eye movement performance were all significantly impaired by both simulated bilateral astigmatism (p < 0.05). Simulated astigmatism led to a reduction of between 5% and 12% in performance across the academic-related outcome measures, but there was no significant effect of the axis (WTR or ATR) of astigmatism (p > 0.05). Simulated bilateral astigmatism impaired children's performance on a range of academic-related outcome measures irrespective of the orientation of the astigmatism. These findings have

  20. Proving test on the performance of a Multiple-Excitation Simulator

    International Nuclear Information System (INIS)

    Fujita, Katsuhisa; Ito, Tomohiro; Kojima, Nobuyuki; Sasaki, Yoichi; Abe, Hiroshi; Kuroda, Katsuhiko

    1995-01-01

    A seismic excitation test on large-scale piping systems is scheduled to be carried out by the Nuclear Power Engineering Corporation (NUPEC) using the large-scale, high-performance vibration table at the Tadotsu Engineering Laboratory, under the sponsorship of the Ministry of International Trade and Industry (MITI). In the test, the piping systems simulate the main steam piping system and the main feed water piping system of nuclear power plants. In this study, a fundamental test was carried out to prove the performance of the Multiple Excitation Simulator, which consists of a hydraulic actuator and its control system. An L-shaped piping system and a hydraulic actuator were installed on the shaking table. Acceleration and displacement generated by the actuator were measured. The performance of the actuator and the control system was discussed by comparing the measured values with the target values on the time histories and the response spectra of the acceleration. As a result, it was proved that the actuator and the control system have good performance and will be applicable to the verification test.

  1. Development of a common data model for scientific simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, J. [Los Alamos National Lab., NM (United States); Butler, D.M. [Limit Point Systems, Inc. (United States); Matarazzo, C.; Miller, M. [Lawrence Livermore National Lab., CA (United States); Schoof, L. [Sandia National Lab., Albuquerque, NM (United States)

    1999-06-01

    The problem of sharing data among scientific simulation models is a difficult and persistent one. Computational scientists employ an enormous variety of discrete approximations in modeling physical processes on computers. Problems occur when models based on different representations are required to exchange data with one another, or with some other software package. Within the DOE's Accelerated Strategic Computing Initiative (ASCI), a cross-disciplinary group called the Data Models and Formats (DMF) group has been working to develop a common data model. The current model is comprised of several layers of increasing semantic complexity. One of these layers is an abstract model based on set theory and topology called the fiber bundle kernel (FBK). This layer provides the flexibility needed to describe a wide range of mesh-approximated functions as well as other entities. This paper briefly describes the ASCI common data model, its mathematical basis, and ASCI prototype development. These prototypes include an object-oriented data management library developed at Los Alamos called the Common Data Model Library or CDMlib, the Vector Bundle API from the Lawrence Livermore Laboratory, and the DMF API from Sandia National Laboratory.
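
    A very small illustration of the fiber-bundle view mentioned above is given below: a field is a base space (here, mesh cell ids) together with a fiber of values attached to each base element. This is only a conceptual sketch, not CDMlib, the Vector Bundle API or the DMF API.

    ```python
    # Conceptual sketch of a mesh-approximated field as (base space, fiber values).
    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass
    class Field:
        base: Tuple[int, ...]                 # ids of mesh cells (the base space)
        fiber: Dict[int, Tuple[float, ...]]   # value(s) attached to each cell

        def restrict(self, cell_ids):
            """A sub-field over a subset of the base space."""
            keep = tuple(c for c in self.base if c in set(cell_ids))
            return Field(keep, {c: self.fiber[c] for c in keep})

    temperature = Field(base=(0, 1, 2, 3),
                        fiber={0: (300.0,), 1: (310.0,), 2: (305.0,), 3: (299.0,)})
    print(temperature.restrict([1, 2]))
    ```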

  2. MAPPS (Maintenance Personnel Performance Simulation): a computer simulation model for human reliability analysis

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.

    1985-01-01

    A computer model capable of generating reliable estimates of human performance measures in the nuclear power plant (NPP) maintenance context has been developed, sensitivity tested, and evaluated. The model, entitled MAPPS (Maintenance Personnel Performance Simulation), is of the simulation type and is task-oriented. It addresses a number of person-machine, person-environment, and person-person variables and is capable of providing the user with a rich spectrum of important performance measures, including the mean time for successful task performance by a maintenance team and the maintenance team's probability of task success. These two measures are particularly important as input to probabilistic risk assessment (PRA) studies, which were the primary impetus for the development of MAPPS. The simulation nature of the model, along with its generous input parameters and output variables, allows its usefulness to extend beyond its input to PRA.

  3. Alcohol consumption for simulated driving performance: A systematic review.

    Science.gov (United States)

    Rezaee-Zavareh, Mohammad Saeid; Salamati, Payman; Ramezani-Binabaj, Mahdi; Saeidnejad, Mina; Rousta, Mansoureh; Shokraneh, Farhad; Rahimi-Movaghar, Vafa

    2017-06-01

    Alcohol consumption can lead to risky driving and increase the frequency of traffic accidents, injuries and mortalities. The main purpose of our study was to compare simulated driving performance between two groups of drivers, one that consumed alcohol and the other that did not, using a systematic review. In this systematic review, electronic resources and databases including Medline via Ovid SP, EMBASE via Ovid SP, PsycINFO via Ovid SP, PubMed, Scopus, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCOhost were comprehensively and systematically searched. Randomized controlled clinical trials that compared simulated driving performance between two such groups of drivers were included. Lane position standard deviation (LPSD), mean of lane position deviation (MLPD), speed, mean of speed deviation (MSD), standard deviation of speed deviation (SDSD), number of accidents (NA) and line crossing (LC) were considered as the main parameters evaluating outcomes. After title and abstract screening, the articles were enrolled for data extraction and evaluated for risk of bias. Thirteen papers were included in our qualitative synthesis. All included papers were classified as having a high risk of bias. Alcohol consumption mostly deteriorated the following performance outcomes, in descending order: SDSD, LPSD, speed, MLPD, LC and NA. Our systematic review had troublesome heterogeneity. Alcohol consumption may decrease simulated driving performance in people who consumed alcohol compared with those who did not, via changes in SDSD, LPSD, speed, MLPD, LC and NA. More well-designed randomized controlled clinical trials are recommended. Copyright © 2017. Production and hosting by Elsevier B.V.

  4. Alcohol consumption for simulated driving performance: A systematic review

    Institute of Scientific and Technical Information of China (English)

    Mohammad Saeid Rezaee-Zavareh; Payman Salamati; Mahdi Ramezani-Binabaj; Mina Saeidnejad; Mansoureh Rousta; Farhad Shokraneh; Vafa Rahimi-Movaghar

    2017-01-01

    Purpose: Alcohol consumption can lead to risky driving and increase the frequency of traffic accidents, injuries and mortalities. The main purpose of our study was to compare simulated driving performance between two groups of drivers, one that consumed alcohol and the other that did not, using a systematic review. Methods: In this systematic review, electronic resources and databases including Medline via Ovid SP, EMBASE via Ovid SP, PsycINFO via Ovid SP, PubMed, Scopus, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCOhost were comprehensively and systematically searched. Randomized controlled clinical trials that compared simulated driving performance between two such groups of drivers were included. Lane position standard deviation (LPSD), mean of lane position deviation (MLPD), speed, mean of speed deviation (MSD), standard deviation of speed deviation (SDSD), number of accidents (NA) and line crossing (LC) were considered as the main parameters evaluating outcomes. After title and abstract screening, the articles were enrolled for data extraction and evaluated for risk of bias. Results: Thirteen papers were included in our qualitative synthesis. All included papers were classified as having a high risk of bias. Alcohol consumption mostly deteriorated the following performance outcomes, in descending order: SDSD, LPSD, speed, MLPD, LC and NA. Our systematic review had troublesome heterogeneity. Conclusion: Alcohol consumption may decrease simulated driving performance in people who consumed alcohol compared with those who did not, via changes in SDSD, LPSD, speed, MLPD, LC and NA. More well-designed randomized controlled clinical trials are recommended.

  5. Simulation of ODE/PDE models with MATLAB, OCTAVE and SCILAB scientific and engineering applications

    CERN Document Server

    Vande Wouwer, Alain; Vilas, Carlos

    2014-01-01

    Simulation of ODE/PDE Models with MATLAB®, OCTAVE and SCILAB shows the reader how to exploit a fuller array of numerical methods for the analysis of complex scientific and engineering systems than is conventionally employed. The book is dedicated to numerical simulation of distributed parameter systems described by mixed systems of algebraic equations, ordinary differential equations (ODEs) and partial differential equations (PDEs). Special attention is paid to the numerical method of lines (MOL), a popular approach to the solution of time-dependent PDEs, which proceeds in two basic steps: spatial discretization and time integration. Besides conventional finite-difference and -element techniques, more advanced spatial-approximation methods are examined in some detail, including nonoscillatory schemes and adaptive-grid approaches. A MOL toolbox has been developed within MATLAB®/OCTAVE/SCILAB. In addition to a set of spatial approximations and time integrators, this toolbox includes a collection of applicatio...
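
    In the spirit of the book's two-step recipe (spatial discretization followed by time integration), the following minimal method-of-lines sketch discretizes the 1-D heat equation with finite differences and hands the resulting ODE system to a stiff integrator, using SciPy here rather than the MATLAB/OCTAVE/SCILAB toolbox the book describes.

    ```python
    # Minimal method-of-lines example: 1-D heat equation with Dirichlet boundaries.
    import numpy as np
    from scipy.integrate import solve_ivp

    n, length, alpha = 100, 1.0, 0.01
    dx = length / (n + 1)
    x = np.linspace(dx, length - dx, n)       # interior nodes, u=0 at both ends

    def rhs(t, u):
        lap = np.empty_like(u)
        lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
        lap[0] = -2.0 * u[0] + u[1]           # boundary u(0)=0
        lap[-1] = u[-2] - 2.0 * u[-1]         # boundary u(L)=0
        return alpha * lap / dx**2

    u0 = np.sin(np.pi * x)
    sol = solve_ivp(rhs, (0.0, 5.0), u0, method="BDF")   # stiff ODE system
    print(f"peak temperature after 5 s: {sol.y[:, -1].max():.4f}")
    ```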

  6. Processes Utilized by High School Students Reading Scientific Text

    Science.gov (United States)

    Clinger, Alicia Farr

    2014-01-01

    In response to an increased emphasis on disciplinary literacy in the secondary science classroom, an investigation of the literacy processes utilized by high school students while reading scientific text was undertaken. A think-aloud protocol was implemented to collect data on the processes students used when not prompted while reading a magazine…

  7. Simulation of a high efficiency multi-bed adsorption heat pump

    International Nuclear Information System (INIS)

    TeGrotenhuis, W.E.; Humble, P.H.; Sweeney, J.B.

    2012-01-01

    Attaining high energy efficiency with adsorption heat pumps is challenging due to thermodynamic losses that occur when the sorbent beds are thermally cycled without effective heat recuperation. The multi-bed concept described here enables high efficiency by effectively transferring heat from beds being cooled to beds being heated. A simplified lumped-parameter model and detailed finite element analysis are used to simulate a sorption compressor, which is used to project the overall heat pump coefficient of performance. Results are presented for ammonia refrigerant and a nano-structured monolithic carbon sorbent specifically modified for the application. The effects of bed geometry and number of beds on system performance are explored, and the majority of the performance benefit is obtained with four beds. Results indicate that a COP of 1.24 based on heat input is feasible at AHRI standard test conditions for residential HVAC equipment. When compared on a basis of primary energy input, performance equivalent to SEER 13 or 14 is theoretically attainable with this system. - Highlights: ► A multi-bed concept for adsorption heat pumps is capable of high efficiency. ► Modeling is used to simulate sorption compressor and overall heat pump performance. ► Results are presented for ammonia refrigerant and a nano-structured monolithic carbon sorbent. ► The majority of the efficiency benefit is obtained with four beds. ► Predicted COP as high as 1.24 for cooling is comparable to SEER 13 or 14 for electric heat pumps.

  8. Improving UV Resistance of High Performance Fibers

    Science.gov (United States)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation of most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of the photons of UV light is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to maintain the key advantage of high performance fibers, namely their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is extruding a sheath of low density polyethylene (LDPE), loaded with different percentages of rutile TiO2 nanoparticles, around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  9. Comparative Performance of Four Single Extreme Outlier Discordancy Tests from Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Surendra P. Verma

    2014-01-01

    Full Text Available Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ=0 and ε=±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15>N14>N8.
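
    A small-scale version of this kind of Monte Carlo power study, restricted to the Grubbs single-outlier statistic and using far fewer replications than the paper's 20,000,000, can be sketched as follows; the sample size, slippage delta and replication count are illustrative choices.

    ```python
    # Monte Carlo estimate of the power of the Grubbs single-outlier test.
    import numpy as np

    rng = np.random.default_rng(1)

    def grubbs_stat(x):
        """max |x_i - mean| / s, computed row-wise for a 2-D array of samples."""
        return (np.max(np.abs(x - x.mean(axis=-1, keepdims=True)), axis=-1)
                / x.std(axis=-1, ddof=1))

    def power(n=10, delta=4.0, reps=200_000, alpha=0.05):
        # Critical value from the null distribution (all observations clean).
        null = grubbs_stat(rng.standard_normal((reps, n)))
        crit = np.quantile(null, 1.0 - alpha)
        # Contaminated samples: one observation slipped in location by delta.
        contaminated = rng.standard_normal((reps, n))
        contaminated[:, 0] += delta
        return float(np.mean(grubbs_stat(contaminated) > crit))

    print(f"estimated power of Grubbs N2 at n=10, delta=4: {power():.3f}")
    ```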

  10. Computational steering of GEM based detector simulations

    Science.gov (United States)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This may result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.
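
    The original work couples the simulation to VisIt's in-situ interface; that API is not reproduced here. The sketch below (a generic, hypothetical Python illustration) only shows the basic steering pattern: a long-running simulation loop that periodically publishes its live state for a viewer and polls for updated parameters. The file names and the advance() routine are placeholders.

    import json, pathlib, time

    STATE_FILE = pathlib.Path("live_state.json")      # read by an external viewer
    CONTROL_FILE = pathlib.Path("steering.json")      # written by the user to steer

    def advance(state, params):
        """Placeholder for one simulation step (e.g. one field/avalanche update)."""
        state["step"] += 1
        state["gain"] = params.get("voltage", 3000.0) * 1e-3   # toy relation only
        return state

    def run(steps=1000, publish_every=10):
        state, params = {"step": 0, "gain": 0.0}, {}
        for _ in range(steps):
            if CONTROL_FILE.exists():                 # pick up steering commands
                params = json.loads(CONTROL_FILE.read_text())
            state = advance(state, params)
            if state["step"] % publish_every == 0:    # publish live data
                STATE_FILE.write_text(json.dumps(state))
            time.sleep(0.01)                          # stand-in for real compute

    if __name__ == "__main__":
        run()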

  11. Scientific work environments in the next decade

    Science.gov (United States)

    Gomez, Julian E.

    1989-01-01

    The application of contemporary computer graphics to scientific visualization is described, with emphasis on nonintuitive problems. A radically different approach is proposed which centers on the idea of the scientist being in the simulation display space rather than observing it on a screen. Interaction is performed with nonstandard input devices to preserve the feeling of being immersed in the three-dimensional display space. Construction of such a system could begin now with currently available technology.

  12. Return on Scientific Investment – RoSI: a PMO dynamical index proposal for scientific projects performance evaluation and management

    Directory of Open Access Journals (Sweden)

    Cristofer André Caous

    2012-06-01

    Full Text Available Objective: To propose a measure (index) of expected risks to evaluate and follow up the performance analysis of research projects involving financial and adequate structure parameters for their development. Methods: A ranking of acceptable results regarding research projects with complex variables was used as an index to gauge project performance. In order to implement this method, the ulcer index was applied as the basic model to accommodate the following variables: costs, high-impact publication, fund raising, and patent registry. The proposed structured analysis, named here RoSI (Return on Scientific Investment), comprises a pipeline of analysis to characterize risk, based on a modeling tool that combines multiple variables interacting in semi-quantitative environments. Results: This method was tested with data from three different projects in our Institution (projects A, B and C). Different curves reflected the ulcer indexes, identifying the project that may have a minor risk (project C) related to development and expected results according to initial or full investment. Conclusion: The results showed that this model contributes significantly to the analysis of risk and planning, as well as to the definition of necessary investments that consider contingency actions, with benefits to the different stakeholders: the investor or donor, the project manager and the researchers.
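
    The ulcer index underlying RoSI is, in its classic form, the root-mean-square percentage drawdown from a running maximum. The sketch below (a minimal Python illustration, not the authors' multi-variable implementation) computes that classic index for a single cumulative project series, for example cumulative funding or a milestone score; the series shown are made up.

    import numpy as np

    def ulcer_index(series):
        """Root-mean-square percentage drawdown from the running maximum."""
        values = np.asarray(series, dtype=float)
        running_max = np.maximum.accumulate(values)
        drawdown_pct = 100.0 * (values - running_max) / running_max
        return float(np.sqrt(np.mean(drawdown_pct ** 2)))

    # Hypothetical cumulative performance of two projects (a higher index means a riskier path)
    project_a = [100, 120, 115, 130, 90, 140, 150]
    project_c = [100, 110, 120, 125, 130, 140, 150]
    print(ulcer_index(project_a), ulcer_index(project_c))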

  13. The Effect of Natural or Simulated Altitude Training on High-Intensity Intermittent Running Performance in Team-Sport Athletes: A Meta-Analysis.

    Science.gov (United States)

    Hamlin, Michael J; Lizamore, Catherine A; Hopkins, Will G

    2018-02-01

    While adaptation to hypoxia at natural or simulated altitude has long been used with endurance athletes, it has only recently gained popularity for team-sport athletes. To analyse the effect of hypoxic interventions on high-intensity intermittent running performance in team-sport athletes. A systematic literature search of five journal databases was performed. Percent change in performance (distance covered) in the Yo-Yo intermittent recovery test (level 1 and level 2 were used without differentiation) in hypoxic (natural or simulated altitude) and control (sea level or normoxic placebo) groups was meta-analyzed with a mixed model. The modifying effects of study characteristics (type and dose of hypoxic exposure, training duration, post-altitude duration) were estimated with fixed effects, random effects allowed for repeated measurement within studies and residual real differences between studies, and the standard-error weighting factors were derived or imputed via standard deviations of change scores. Effects and their uncertainty were assessed with magnitude-based inference, with a smallest important improvement of 4% estimated via between-athlete standard deviations of performance at baseline. Ten studies qualified for inclusion, but two were excluded owing to small sample size and risk of publication bias. Hypoxic interventions occurred over a period of 7-28 days, and the range of total hypoxic exposure (in effective altitude-hours) was 4.5-33 km h in the intermittent-hypoxia studies and 180-710 km h in the live-high studies. There were 11 control and 15 experimental study-estimates in the final meta-analysis. Training effects were moderate and very likely beneficial in the control groups at 1 week (20 ± 14%, percent estimate, ± 90% confidence limits) and 4-week post-intervention (25 ± 23%). The intermittent and live-high hypoxic groups experienced additional likely beneficial gains at 1 week (13 ± 16%; 13 ± 15%) and 4-week post

  14. Teaching childbirth with high-fidelity simulation. Is it better observing the scenario during the briefing session?

    Science.gov (United States)

    Cuerva, Marcos J; Piñel, Carlos S; Martin, Lourdes; Espinosa, Jose A; Corral, Octavio J; Mendoza, Nicolás

    2018-02-12

    The design of optimal courses for undergraduate obstetric teaching is a relevant question. This study evaluates two different designs of a simulator-based learning activity on childbirth with regard to respect for the patient, obstetric manoeuvres, interpretation of cardiotocography tracings (CTG) and infection prevention. This randomised experimental study consisted of two groups of undergraduate students, differing in the content of their briefing sessions, who performed two simulator-based learning activities on childbirth. The first group's briefing session included observation of a scenario properly performed by the teachers according to the Spanish clinical practice guidelines on care in normal childbirth, whereas the second group's did not, and those students observed the properly performed scenario only after the simulation. The group that observed a properly performed scenario after the simulation obtained worse grades during the simulation, but better grades during the debriefing and evaluation. Simulator use in childbirth may be more fruitful when medical students observe correct performance at the completion of the scenario rather than at the start. Impact statement What is already known on this subject? There is a scarcity of literature about the design of optimal high-fidelity simulation training in childbirth. It is known that preparing simulator-based learning activities is a complex process. Simulator-based learning includes the following steps: briefing, simulation, debriefing and evaluation. The most important part of high-fidelity simulations is the debriefing. A good briefing and simulation are of high relevance in order to have a fruitful debriefing session. What do the results of this study add? Our study describes a full simulator-based learning activity on childbirth that can be reproduced in similar facilities. The findings of this study add that high-fidelity simulation training in

  15. Scientific data management challenges, technology and deployment

    CERN Document Server

    Rotem, Doron

    2010-01-01

    Dealing with the volume, complexity, and diversity of data currently being generated by scientific experiments and simulations often causes scientists to waste productive time. Scientific Data Management: Challenges, Technology, and Deployment describes cutting-edge technologies and solutions for managing and analyzing vast amounts of data, helping scientists focus on their scientific goals. The book begins with coverage of efficient storage systems, discussing how to write and read large volumes of data without slowing the simulation, analysis, or visualization processes. It then focuses on the efficient data movement and management of storage spaces and explores emerging database systems for scientific data. The book also addresses how to best organize data for analysis purposes, how to effectively conduct searches over large datasets, how to successfully automate multistep scientific process workflows, and how to automatically collect metadata and lineage information. This book provides a comprehensive u...

  16. Collective efficacy in a high-fidelity simulation of an airline operations center

    Science.gov (United States)

    Jinkerson, Shanna

    This study investigated the relationships between collective efficacy, teamwork, and team performance. Participants were placed into teams, where they worked together in a high-fidelity simulation of an airline operations center. Each individual was assigned a different role to represent different jobs within an airline (Flight Operations Coordinator, Crew Scheduling, Maintenance, Weather, Flight Scheduling, or Flight Planning.) Participants completed a total of three simulations with an After Action Review between each. Within this setting, both team performance and teamwork behaviors were shown to be positively related to expectations for subsequent performance (collective efficacy). Additionally, teamwork and collective efficacy were not shown to be concomitantly related to subsequent team performance. A chi-square test was used to evaluate existence of performance spirals, and they were not supported. The results of this study were likely impacted by lack of power, as well as a lack of consistency across the three simulations.

  17. EFFECT SCIENTIFIC INQUIRY TEACHING MODELS AND SCIENTIFIC ATTITUDE TO PHYSICS STUDENT OUTCOMES

    Directory of Open Access Journals (Sweden)

    Dian Clara Natalia Sihotang

    2014-12-01

    Full Text Available The objectives of this study were to determine whether: (1) the achievement of students taught using the Scientific Inquiry Teaching Model is better than that of students taught using Direct Instruction; (2) the achievement of students with a high scientific attitude is better than that of students with a low scientific attitude; and (3) there is an interaction between the Scientific Inquiry Teaching Model and scientific attitude with respect to student achievement. The results of the research are: (1) the achievement of students taught through the Scientific Inquiry Teaching Model is better than that of students taught through Direct Instruction; (2) the achievement of students with a high scientific attitude is better than that of students with a low scientific attitude; and (3) there was an interaction between the Scientific Inquiry Teaching Model and scientific attitude on student achievement, indicating that this model is better suited to students who have a high scientific attitude.

  18. Thermomechanical simulations and experimental validation for high speed incremental forming

    Science.gov (United States)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material presents the same performance as in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.

  19. Status, performance and scientific highlights from the MAGIC telescope system

    Energy Technology Data Exchange (ETDEWEB)

    Doert, Marlene [Technische Universitaet Dortmund (Germany); Ruhr-Universitaet Bochum (Germany); Collaboration: MAGIC-Collaboration

    2015-07-01

    The MAGIC telescopes are a system of two 17 m Imaging Air Cherenkov Telescopes, which are located at 2200 m above sea level at the Roque de Los Muchachos Observatory on the Canary Island of La Palma. In this presentation, we report on recent scientific highlights gained from MAGIC observations in the galactic and the extragalactic regime. We also present the current status and performance of the MAGIC system after major hardware upgrades in the years 2011 to 2014 and give an overview of future plans.

  20. Towards Online Visualization and Interactive Monitoring of Real-Time CFD Simulations on Commodity Hardware

    Directory of Open Access Journals (Sweden)

    Nils Koliha

    2015-09-01

    Full Text Available Real-time rendering in the realm of computational fluid dynamics (CFD) in particular and scientific high performance computing (HPC) in general is a comparatively young field of research, as the complexity of most problems with practical relevance is too high for a real-time numerical simulation. However, recent advances in HPC and the development of very efficient numerical techniques allow running first optimized numerical simulations in or near real-time, which in turn requires integrated and optimized visualization techniques that do not affect performance. In this contribution, we present concepts, implementation details and several application examples of a minimally-invasive, efficient visualization tool for the interactive monitoring of 2D and 3D turbulent flow simulations on commodity hardware. The numerical simulations are conducted with ELBE, an efficient lattice Boltzmann environment based on NVIDIA CUDA (Compute Unified Device Architecture), which provides optimized numerical kernels for 2D and 3D computational fluid dynamics with fluid-structure interactions and turbulence.
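
    ELBE itself is an optimized CUDA code and is not reproduced here. The NumPy sketch below only illustrates the per-time-step update (streaming, moment evaluation, BGK collision) of a D2Q9 lattice Boltzmann solver on a periodic domain; after each step the velocity field is available in memory, which is what an in-situ renderer such as the one described above would visualize. Grid size and relaxation time are arbitrary.

    import numpy as np

    # D2Q9 lattice: discrete velocities and weights
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

    def equilibrium(rho, ux, uy):
        """Second-order equilibrium distributions."""
        cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
        usq = 1.5 * (ux**2 + uy**2)
        return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

    def lbm_step(f, tau=0.6):
        """One lattice Boltzmann step: streaming + BGK collision (periodic boundaries)."""
        for i in range(9):                                   # streaming
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        rho = f.sum(axis=0)                                  # macroscopic moments
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        f += (equilibrium(rho, ux, uy) - f) / tau            # collision
        return f, rho, ux, uy

    nx, ny = 128, 64
    f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
    for step in range(100):
        f, rho, ux, uy = lbm_step(f)
        # (ux, uy) could be handed to an on-line renderer here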

  1. Team Culture and Business Strategy Simulation Performance

    Science.gov (United States)

    Ritchie, William J.; Fornaciari, Charles J.; Drew, Stephen A. W.; Marlin, Dan

    2013-01-01

    Many capstone strategic management courses use computer-based simulations as core pedagogical tools. Simulations are touted as assisting students in developing much-valued skills in strategy formation, implementation, and team management in the pursuit of superior strategic performance. However, despite their rich nature, little is known regarding…

  2. Abrupt climate change and high to low latitude teleconnections as simulated in climate models

    DEFF Research Database (Denmark)

    Cvijanovic, Ivana

    High to low latitude atmospheric teleconnections have been a topic of increasing scientific interest since it was shown that high latitude extratropical forcing can induce tropical precipitation shifts through atmosphere-surface ocean interactions. In this thesis, several aspects of high to low... of the present day atmospheric mid-latitude energy transport compared to that of the Last Glacial Maximum, suggesting its ability to reorganize more easily and thereby dampen high latitude temperature anomalies that could arise from changes in the oceanic transport. The role of tropical SSTs in the tropical precipitation shifts was further re-examined in idealized simulations with the fixed tropical sea surface temperatures, showing that the SST changes are fundamental to the tropical precipitation shifts. Regarding the high latitude energy loss, it was shown that the main energy compensation comes from...

  3. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit.

    Science.gov (United States)

    Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R; Smith, Jeremy C; Kasson, Peter M; van der Spoel, David; Hess, Berk; Lindahl, Erik

    2013-04-01

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. GROMACS is an open source and free software available from http://www.gromacs.org. Supplementary data are available at Bioinformatics online.

  4. Performance evaluation of sea surface simulation methods for target detection

    Science.gov (United States)

    Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi

    2017-11-01

    With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve detection performance. Many features can be learned from training images by machines automatically. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key point for achieving high fidelity. In this paper, two spectra-based height field generation methods are evaluated. A comparison between the linear superposition and linear filter methods is made quantitatively with a statistical model. 3D ocean scene simulation results show the different features of the two methods, which can serve as a reference for synthesizing sea surface target images under different ocean conditions.
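
    For reference, the linear superposition method mentioned above builds a surface realization as a sum of spectral components with random phases. The sketch below (an illustrative 1D Python example, not the authors' implementation) generates an elevation time series from a Pierson-Moskowitz spectrum; the spectrum choice, wind speed and discretization are assumptions, and a full scene simulation would use a 2D directional wavenumber spectrum instead.

    import numpy as np

    def pierson_moskowitz(omega, U=10.0, g=9.81):
        """One-sided Pierson-Moskowitz spectrum S(omega); U is the wind speed (assumed)."""
        alpha, beta = 8.1e-3, 0.74
        S = np.zeros_like(omega)
        nz = omega > 0
        S[nz] = alpha * g**2 / omega[nz]**5 * np.exp(-beta * (g / (U * omega[nz]))**4)
        return S

    def elevation_linear_superposition(t, n_modes=512, omega_max=3.0, U=10.0, seed=0):
        """Sea surface elevation via linear superposition of random-phase components."""
        rng = np.random.default_rng(seed)
        omega = np.linspace(omega_max / n_modes, omega_max, n_modes)
        domega = omega[1] - omega[0]
        amp = np.sqrt(2.0 * pierson_moskowitz(omega, U) * domega)   # component amplitudes
        phase = rng.uniform(0.0, 2.0 * np.pi, n_modes)              # independent random phases
        return (amp[None, :] * np.cos(np.outer(t, omega) + phase[None, :])).sum(axis=1)

    t = np.arange(0.0, 600.0, 0.5)
    eta = elevation_linear_superposition(t)                         # elevation samples [m]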

  5. Evaluating TCMS Train-to-Ground communication performances based on the LTE technology and discreet event simulations

    DEFF Research Database (Denmark)

    Bouaziz, Maha; Yan, Ying; Kassab, Mohamed

    2018-01-01

    ...(Long Term Evolution) network as an alternative communication technology, instead of GSM-R (Global System for Mobile communications-Railway), because of some capacity and capability limits. In a first step, a pure simulation is used to evaluate the network load for a high-speed scenario, when the LTE network is shared between the train and different passengers. The simulation is based on the discrete-events network simulator Riverbed Modeler. Next, the second step focusses on a co-simulation testbed, to evaluate performances with real traffic based on Hardware-In-The-Loop and OpenAirInterface modules. Preliminary simulation and co-simulation results show that LTE provides good performance for the TCMS traffic exchange in terms of packet delay and data integrity...

  6. Evaluation of medical research performance--position paper of the Association of the Scientific Medical Societies in Germany (AWMF).

    Science.gov (United States)

    Herrmann-Lingen, Christoph; Brunner, Edgar; Hildenbrand, Sibylle; Loew, Thomas H; Raupach, Tobias; Spies, Claudia; Treede, Rolf-Detlef; Vahl, Christian-Friedrich; Wenz, Hans-Jürgen

    2014-01-01

    The evaluation of medical research performance is a key prerequisite for the systematic advancement of medical faculties, research foci, academic departments, and individual scientists' careers. However, it is often based on vaguely defined aims and questionable methods and can thereby lead to unwanted regulatory effects. The current paper aims at defining the position of German academic medicine toward the aims, methods, and consequences of its evaluation. During the Berlin Forum of the Association of the Scientific Medical Societies in Germany (AWMF) held on 18 October 2013, international experts presented data on methods for evaluating medical research performance. Subsequent discussions among representatives of relevant scientific organizations and within three ad-hoc writing groups led to a first draft of this article. Further discussions within the AWMF Committee for Evaluation of Performance in Research and Teaching and the AWMF Executive Board resulted in the final consented version presented here. The AWMF recommends modifications to the current system of evaluating medical research performance. Evaluations should follow clearly defined and communicated aims and consist of both summative and formative components. Informed peer reviews are valuable but feasible in longer time intervals only. They can be complemented by objective indicators. However, the Journal Impact Factor is not an appropriate measure for evaluating individual publications or their authors. The scientific "impact" rather requires multidimensional evaluation. Indicators of potential relevance in this context may include, e.g., normalized citation rates of scientific publications, other forms of reception by the scientific community and the public, and activities in scientific organizations, research synthesis and science communication. In addition, differentiated recommendations are made for evaluating the acquisition of third-party funds and the promotion of junior scientists. With the

  7. The economic scientific research, a production neo-factor

    Directory of Open Access Journals (Sweden)

    Elena Ciucur

    2007-12-01

    Full Text Available Scientific research represents a modern production neo-factor that implies two groups of coordinates: preparation and scientific research. Scientific research represents a complex of elements that confer a new orientation of high performance and is materialized in resources and new availabilities brought into active shape by the contribution of the creators and by their attraction, in a specific way, into the economic circuit. It is the creator of new ideas, lifting performance and understanding to the highest international standards of competitive economic efficiency. At present, the role of scientific research faces new challenges generated by the stage of society. The paper proposes a unitary, coherent scientific research and educational system, created in corresponding proportions, based on the type, level and utility of the system, by the state, the economic-social environment and the citizen himself.

  8. Development and verification of a high performance multi-group SP3 transport capability in the ARTEMIS core simulator

    International Nuclear Information System (INIS)

    Van Geemert, Rene

    2008-01-01

    For satisfaction of future global customer needs, dedicated efforts are being coordinated internationally and pursued continuously at AREVA NP. The currently ongoing CONVERGENCE project is committed to the development of the ARCADIA® next generation core simulation software package. ARCADIA® will be put to global use by all AREVA NP business regions, for the entire spectrum of core design processes, licensing computations and safety studies. As part of the currently ongoing trend towards more sophisticated neutronics methodologies, an SP3 nodal transport concept has been developed for ARTEMIS, which is the steady-state and transient core simulation part of ARCADIA®. For enabling a high computational performance, the SPN calculations are accelerated by applying multi-level coarse mesh re-balancing. In the current implementation, SP3 is about 1.4 times as expensive computationally as SP1 (diffusion). The developed SP3 solution concept is foreseen as the future computational workhorse for many-group 3D pin-by-pin full core computations by ARCADIA®. With the entire numerical workload being highly parallelizable through domain decomposition techniques, associated CPU-time requirements that adhere to the efficiency needs in the nuclear industry can be expected to become feasible in the near future. The accuracy enhancement obtainable by using SP3 instead of SP1 has been verified by a detailed comparison of ARTEMIS 16-group pin-by-pin SPN results with KAERI's DeCart reference results for the 2D pin-by-pin Purdue UO2/MOX benchmark. This article presents the accuracy enhancement verification and quantifies the achieved ARTEMIS-SP3 computational performance for a number of 2D and 3D multi-group and multi-box (up to pin-by-pin) core computations. (authors)
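
    The SP3 solver itself is not reproduced here. As a point of reference for what a core simulator solves, the sketch below implements the much simpler SP1 baseline that the abstract compares against: a one-group, 1D finite-difference diffusion eigenvalue problem solved with power iteration. All cross-section values are invented for illustration.

    import numpy as np

    def diffusion_keff(n=200, length=100.0, D=1.2, sig_a=0.012, nu_sig_f=0.015):
        """Power iteration for the 1D, one-group diffusion (SP1) eigenvalue problem."""
        h = length / n
        # Finite-difference operator A = -D d^2/dx^2 + sig_a with zero-flux boundaries
        A = (np.diag(np.full(n, 2.0 * D / h**2 + sig_a))
             + np.diag(np.full(n - 1, -D / h**2), 1)
             + np.diag(np.full(n - 1, -D / h**2), -1))
        phi, k = np.ones(n), 1.0
        for _ in range(200):
            fission_old = nu_sig_f * phi
            phi = np.linalg.solve(A, fission_old / k)      # losses = production / k
            k *= (nu_sig_f * phi).sum() / fission_old.sum()
            phi /= np.linalg.norm(phi)                     # keep the flux normalized
        return k, phi

    k_eff, flux = diffusion_keff()
    print(round(k_eff, 5))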

  9. Turbocharged molecular discovery of OLED emitters: from high-throughput quantum simulation to highly efficient TADF devices

    Science.gov (United States)

    Gómez-Bombarelli, Rafael; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D.; Ha, Dong-Gwang; Einzinger, Markus; Wu, Tony; Baldo, Marc A.; Aspuru-Guzik, Alán.

    2016-09-01

    Discovering new OLED emitters requires many experiments to synthesize candidates and test performance in devices. Large scale computer simulation can greatly speed this search process but the problem remains challenging enough that brute force application of massive computing power is not enough to successfully identify novel structures. We report a successful High Throughput Virtual Screening study that leveraged a range of methods to optimize the search process. The generation of candidate structures was constrained to contain combinatorial explosion. Simulations were tuned to the specific problem and calibrated with experimental results. Experimentalists and theorists actively collaborated such that experimental feedback was regularly utilized to update and shape the computational search. Supervised machine learning methods prioritized candidate structures prior to quantum chemistry simulation to prevent wasting compute on likely poor performers. With this combination of techniques, each multiplying the strength of the search, this effort managed to navigate an area of molecular space and identify hundreds of promising OLED candidate structures. An experimentally validated selection of this set shows emitters with external quantum efficiencies as high as 22%.
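
    The machine-learning prioritization step can be pictured as a surrogate-model ranking loop. The snippet below (an illustrative Python/scikit-learn sketch, not the authors' pipeline) trains a regressor on candidates that already have quantum-chemistry results and ranks the untested candidates so that only the most promising ones are sent on to expensive quantum simulation; the descriptors, target values and sizes are placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Placeholder molecular descriptors and a computed target property (e.g. a
    # TD-DFT-estimated quantity) for the candidates that were already simulated.
    n_done, n_pool, n_features = 500, 5000, 64
    X_done = rng.random((n_done, n_features))
    y_done = rng.random(n_done)                     # stand-in for the computed property
    X_pool = rng.random((n_pool, n_features))       # candidates not yet simulated

    # Train a surrogate on the simulated candidates
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(X_done, y_done)

    # Rank the untested pool by predicted promise and keep the top fraction
    predicted = surrogate.predict(X_pool)
    selected = np.argsort(predicted)[::-1][:100]    # indices to send to quantum simulation
    print(selected[:10])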

  10. Contribution to the Development of Simulation Model of Ship Turbine

    Directory of Open Access Journals (Sweden)

    Božić Ratko

    2015-01-01

    Full Text Available Simulation modelling, performed with the System Dynamics modelling approach and intensive use of computers, is one of the most convenient and most successful scientific methods for analysing the performance dynamics of nonlinear and very complex natural, technical and organizational systems [1]. The purpose of this work is to demonstrate the successful application of system dynamics simulation modelling to analysing the performance dynamics of a complex ship propulsion system. A gas turbine is a complex non-linear system, which needs to be systematically investigated as a unit consisting of a number of subsystems and elements, which are linked by cause-effect (UPV) feedback loops (KPD), both within the propulsion system and with the relevant surroundings. In this paper the authors present an efficient application of the scientific method for the study of complex dynamic systems known as qualitative and quantitative simulation in the System Dynamics methodology. The gas turbine will be represented by a set of non-linear differential equations, after which mental-verbal structural models and flowcharts in System Dynamics symbols will be produced, and the performance dynamics under load conditions will be simulated in the POWERSIM simulation language.
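
    As a flavour of the stock-and-flow formulation used in System Dynamics tools such as POWERSIM, the sketch below integrates a deliberately simple, hypothetical rotor-speed model (one stock fed by a turbine-torque inflow and drained by a load-torque outflow) with explicit Euler steps. The equations and constants are illustrative only and are not taken from the paper.

    import numpy as np

    def simulate_rotor(t_end=150.0, dt=0.01, J=50.0):
        """Toy stock-and-flow model: J * d(omega)/dt = Q_turbine(t) - Q_load(omega)."""
        n = int(t_end / dt)
        t = np.linspace(0.0, t_end, n)
        omega = np.zeros(n)                                  # stock: shaft speed [rad/s]
        omega[0] = 10.0
        for i in range(n - 1):
            q_turbine = 400.0 if t[i] > 5.0 else 200.0       # step change in driving torque
            q_load = 2.0 * omega[i]                          # load torque grows with speed
            omega[i + 1] = omega[i] + dt * (q_turbine - q_load) / J   # Euler update
        return t, omega

    t, omega = simulate_rotor()
    print(omega[-1])   # settles near q_turbine / 2.0 = 200 rad/s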

  11. Scientific codes developed and used at GRS. Nuclear simulation chain

    Energy Technology Data Exchange (ETDEWEB)

    Schaffrath, Andreas; Sonnenkalb, Martin; Sievers, Juergen; Luther, Wolfgang; Velkov, Kiril [Gesellschaft fuer Anlagen und Reaktorsicherheit (GRS) gGmbH, Garching/Muenchen (Germany). Forschungszentrum

    2016-05-15

    Over 60 technical experts of the reactor safety research division of the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH are developing and validating reliable methods and computer codes - summarized under the term nuclear simulation chain - for the safety-related assessment for all types of nuclear power plants (NPP) and other nuclear facilities considering the current state of science and technology. This nuclear simulation chain has to be able to simulate and assess all relevant physical processes and phenomena for all operating states and (severe) accidents. In the present contribution, the nuclear simulation chain developed and applied by GRS as well as selected examples of its application are presented. The latter demonstrate impressively the width of its scope and its performance. The GRS codes can be passed on request to other (national as well as international) organizations. This contributes to a worldwide increase of the nuclear safety standards. The code transfer is especially important for developing and emerging countries lacking the financial means and/or the necessary know-how for this purpose. At the end of this contribution, the respective course of action is described.

  12. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. The goal of this project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific
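
    A minimal illustration of executing a DAG-shaped workflow is sketched below. This is a generic Python example using the standard-library graphlib module, not SWAMP's actual interface; the task names and the run_task() action are hypothetical.

    from graphlib import TopologicalSorter

    # Hypothetical workflow: acquire -> (preprocess, calibrate) -> simulate -> visualize
    dependencies = {
        "preprocess": {"acquire"},
        "calibrate": {"acquire"},
        "simulate": {"preprocess", "calibrate"},
        "visualize": {"simulate"},
    }

    def run_task(name):
        # Stand-in for staging data, submitting a batch job, or calling a service
        print(f"running {name}")

    sorter = TopologicalSorter(dependencies)
    sorter.prepare()
    while sorter.is_active():
        for task in sorter.get_ready():     # tasks whose dependencies are satisfied;
            run_task(task)                  # these could be dispatched in parallel
            sorter.done(task)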

  13. High performance coronagraphy for direct imaging of exoplanets

    Directory of Open Access Journals (Sweden)

    Guyon O.

    2011-07-01

    Full Text Available Coronagraphy has recently been an extremely active field of research, with several high performance concepts proposed, and several new coronagraphs tested in laboratories and telescopes. Coronagraph concepts can be grouped into a few broad categories: Lyot-type coronagraphs, pupil apodization and nulling interferometers. Among existing coronagraph concepts, several approach the fundamental performance limit imposed by the physical nature of light. To achieve their full potential, coronagraphs require exquisite wavefront control and calibration. This has been, and still is, the main bottleneck for the scientifically productive use of coronagraphs on ground-based telescopes. New and promising wavefront sensing techniques suitable for high contrast imaging have however been developed in the last few years and are starting to be realized in laboratories. I will review some of these enabling technologies, and show that coronagraphs are now ready for “prime time” on existing and future telescopes.

  14. High performance computing applied to simulation of the flow in pipes; Computacao de alto desempenho aplicada a simulacao de escoamento em dutos

    Energy Technology Data Exchange (ETDEWEB)

    Cozin, Cristiane; Lueders, Ricardo; Morales, Rigoberto E.M. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Dept. de Engenharia Mecanica

    2008-07-01

    In recent years, computer clusters have emerged as a real alternative for solving problems which require high performance computing. Consequently, the development of new applications has been driven. Among them, flow simulation represents a real computational burden, especially for large systems. This work presents a study of using parallel computing for numerical fluid flow simulation in pipelines. A mathematical flow model is numerically solved. In general, this procedure leads to a tridiagonal system of equations suitable to be solved by a parallel algorithm. In this work, this is accomplished by a parallel odd-even reduction method found in the literature, which is implemented in the Fortran programming language. A computational platform composed of twelve processors was used. Many measurements of CPU times for different tridiagonal system sizes and numbers of processors were obtained, highlighting the communication time between processors as an important issue to be considered when evaluating the performance of parallel applications. (author)
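
    The Fortran implementation referenced above is not shown in the paper excerpt; the Python sketch below illustrates the serial structure of the odd-even (cyclic) reduction algorithm for a tridiagonal system. In the parallel version, each reduction level and the back-substitution can be distributed across processors, which is where the communication costs discussed above arise.

    import numpy as np

    def cyclic_reduction(a, b, c, d):
        """Solve a tridiagonal system (sub a, diag b, super c, rhs d) by odd-even reduction.
        Convention: a[0] = 0 and c[-1] = 0."""
        n = len(b)
        if n == 1:
            return d / b
        if n == 2:
            det = b[0] * b[1] - c[0] * a[1]
            return np.array([(d[0] * b[1] - c[0] * d[1]) / det,
                             (b[0] * d[1] - a[1] * d[0]) / det])
        ra, rb, rc, rd = [], [], [], []
        for i in range(1, n, 2):            # eliminate the even-indexed unknowns
            alpha = a[i] / b[i - 1]
            gamma = c[i] / b[i + 1] if i + 1 < n else 0.0
            ra.append(-alpha * a[i - 1])
            rb.append(b[i] - alpha * c[i - 1] - (gamma * a[i + 1] if i + 1 < n else 0.0))
            rc.append(-gamma * c[i + 1] if i + 1 < n else 0.0)
            rd.append(d[i] - alpha * d[i - 1] - (gamma * d[i + 1] if i + 1 < n else 0.0))
        y = cyclic_reduction(np.array(ra), np.array(rb), np.array(rc), np.array(rd))
        x = np.empty(n)
        x[1::2] = y                         # odd-indexed unknowns from the reduced system
        for i in range(0, n, 2):            # back-substitute the even-indexed unknowns
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i + 1 < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
        return x

    # Small correctness check against a dense solve
    n = 9
    rng = np.random.default_rng(0)
    b = 4.0 + rng.random(n); a = np.r_[0.0, rng.random(n - 1)]; c = np.r_[rng.random(n - 1), 0.0]
    d = rng.random(n)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d)))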

  15. Highly immersive virtual reality laparoscopy simulation: development and future aspects.

    Science.gov (United States)

    Huber, Tobias; Wunderling, Tom; Paschold, Markus; Lang, Hauke; Kneist, Werner; Hansen, Christian

    2018-02-01

    Virtual reality (VR) applications with head-mounted displays (HMDs) have had an impact on information and multimedia technologies. The current work aimed to describe the process of developing a highly immersive VR simulation for laparoscopic surgery. We combined a VR laparoscopy simulator (LapSim) and a VR-HMD to create a user-friendly VR simulation scenario. Continuous clinical feedback was an essential aspect of the development process. We created an artificial VR (AVR) scenario by integrating the simulator video output with VR game components of figures and equipment in an operating room. We also created a highly immersive VR surrounding (IVR) by integrating the simulator video output with a [Formula: see text] video of a standard laparoscopy scenario in the department's operating room. Clinical feedback led to optimization of the visualization, synchronization, and resolution of the virtual operating rooms (in both the IVR and the AVR). Preliminary testing results revealed that individuals experienced a high degree of exhilaration and presence, with rare events of motion sickness. The technical performance showed no significant difference compared to that achieved with the standard LapSim. Our results provided a proof of concept for the technical feasibility of a custom highly immersive VR-HMD setup. Future technical research is needed to improve the visualization, immersion, and capability of interacting within the virtual scenario.

  16. Maintenance Personnel Performance Simulation (MAPPS) model

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place

  17. Management of scientific and engineering data collected during site characterization of a potential high-level waste repository

    International Nuclear Information System (INIS)

    Newbury, C.M.; Heitland, G.W.

    1992-01-01

    This paper discusses the characterization of Yucca Mountain as a potential site for a high-level nuclear waste repository, which encompasses many diverse investigations to determine the nature of the site. Laboratory and on-site investigations are being conducted of the geology, hydrology, mineralogy, paleoclimate, geotechnical properties, and past use of the area, to name a few. Effective use of the data from these investigations requires development of a system for the collection, storage, and dissemination of those scientific and engineering data needed to support model development, design, and performance assessment. The time and budgetary constraints associated with this project make sharing of technical data within the geoscience community absolutely critical to the successful solution of the complex scientific problem challenging us

  18. Scientific report 1997

    International Nuclear Information System (INIS)

    Gosset, J.; Gueneau, C.; Doizi, D.

    1998-01-01

    This book contains technical and scientific papers on the main work of the Direction of the Fuel Cycle (DCC) in France. The fields of study are: the front end of the nuclear fuel cycle, with theoretical studies (plasma simulation) and technological developments and instrumentation (laser diodes, carbide plasma projection, carbon-13 enrichment); the back end of the nuclear fuel cycle, with theoretical studies (Eu3+ ion complexation simulation, decay simulation, uranium and plutonium diffusion studies, electrolyser operation simulation), scenario studies (recycling, waste management) and experimental studies; dismantling and cleaning (soil cleaning, surface-active agents for decontamination, fault tree analysis); and analysis with expert systems and mass spectrometry. (A.L.B.)

  19. LIAR: A COMPUTER PROGRAM FOR THE SIMULATION AND MODELING OF HIGH PERFORMANCE LINACS

    International Nuclear Information System (INIS)

    Adolphsen, Chris

    2003-01-01

    The computer program LIAR ("LInear Accelerator Research code") is a numerical simulation and tracking program for linear colliders. The LIAR project was started at SLAC in August 1995 in order to provide a computing and simulation tool that specifically addresses the needs of high energy linear colliders. LIAR is designed to be used for a variety of different linear accelerators. It has been applied to and checked against the existing Stanford Linear Collider (SLC) as well as the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS). The program includes wakefield effects, a 4D coupled beam description, specific optimization algorithms and other advanced features. We describe the most important concepts and highlights of the program. After having presented the LIAR program at the LINAC96 and PAC97 conferences, we now introduce it to the European particle accelerator community.

  20. A Comparison Study of Augmented Reality versus Interactive Simulation Technology to Support Student Learning of a Socio-Scientific Issue

    Science.gov (United States)

    Chang, Hsin-Yi; Hsu, Ying-Shao; Wu, Hsin-Kai

    2016-01-01

    We investigated the impact of an augmented reality (AR) versus interactive simulation (IS) activity incorporated in a computer learning environment to facilitate students' learning of a socio-scientific issue (SSI) on nuclear power plants and radiation pollution. We employed a quasi-experimental research design. Two classes (a total of 45…

  1. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation with analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models over standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  2. Development of the McGill simulator for endoscopic sinus surgery: a new high-fidelity virtual reality simulator for endoscopic sinus surgery.

    Science.gov (United States)

    Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A

    2014-01-01

    The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.

  3. Highly automated driving, secondary task performance, and driver state.

    Science.gov (United States)

    Merat, Natasha; Jamson, A Hamish; Lai, Frank C H; Carsten, Oliver

    2012-10-01

    A driving simulator study compared the effect of changes in workload on performance in manual and highly automated driving. Changes in driver state were also observed by examining variations in blink patterns. With the addition of a greater number of advanced driver assistance systems in vehicles, the driver's role is likely to alter in the future from an operator in manual driving to a supervisor of highly automated cars. Understanding the implications of such advancements on drivers and road safety is important. A total of 50 participants were recruited for this study and drove the simulator in both manual and highly automated mode. As well as comparing the effect of adjustments in driving-related workload on performance, the effect of a secondary Twenty Questions Task was also investigated. In the absence of the secondary task, drivers' response to critical incidents was similar in manual and highly automated driving conditions. The worst performance was observed when drivers were required to regain control of driving in the automated mode while distracted by the secondary task. Blink frequency patterns were more consistent for manual than automated driving but were generally suppressed during conditions of high workload. Highly automated driving did not have a deleterious effect on driver performance, when attention was not diverted to the distracting secondary task. As the number of systems implemented in cars increases, an understanding of the implications of such automation on drivers' situation awareness, workload, and ability to remain engaged with the driving task is important.

  4. Key performance indicators for successful simulation projects

    OpenAIRE

    Jahangirian, M; Taylor, SJE; Young, T; Robinson, S

    2016-01-01

    There are many factors that may contribute to the successful delivery of a simulation project. To provide a structured approach to assessing the impact various factors have on project success, we propose a top-down framework whereby 15 Key Performance Indicators (KPI) are developed that represent the level of successfulness of simulation projects from various perspectives. They are linked to a set of Critical Success Factors (CSF) as reported in the simulation literature. A single measure cal...

  5. 2D simulation and performance evaluation of bifacial rear local contact c-Si solar cells under variable illumination conditions

    KAUST Repository

    Katsaounis, Theodoros; Kotsovos, Konstantinos; Gereige, Issam; Al-Saggaf, Ahmed; Tzavaras, Athanasios

    2017-01-01

    A customized 2D computational tool has been developed to simulate bifacial rear local contact PERC type PV structures based on the numerical solution of the transport equations through the finite element method. Simulations were performed under various device material parameters and back contact geometry configurations in order to optimize bifacial solar cell performance under different simulated illumination conditions. Bifacial device maximum power output was also compared with the monofacial equivalent one and the industrial standard Al-BSF structure. The performance of the bifacial structure during highly diffused irradiance conditions commonly observed in the Middle East region due to high concentrations of airborne dust particles was also investigated. Simulation results demonstrated that such conditions are highly favorable for the bifacial device because of the significantly increased diffuse component of the solar radiation which enters the back cell surface.

  7. 2006 XSD Scientific Software Workshop report.

    Energy Technology Data Exchange (ETDEWEB)

    Evans, K., Jr.; De Carlo, F.; Jemian, P.; Lang, J.; Lienert, U.; Maclean, J.; Newville, M.; Tieman, B.; Toby, B.; van Veenendaal, B.; Univ. of Chicago

    2006-01-22

    In May of 2006, a committee was formed to assess the fundamental needs and opportunities in scientific software for x-ray data reduction, analysis, modeling, and simulation. This committee held a series of discussions throughout the summer, conducted a poll of the members of the x-ray community, and held a workshop. This report details the findings and recommendations of the committee. Each experiment performed at the APS requires three crucial ingredients: the powerful x-ray source, an optimized instrument to perform measurements, and computer software to acquire, visualize, and analyze the experimental observations. While the APS has invested significant resources in the accelerator, investment in other areas such as scientific software for data analysis and visualization has lagged behind. This has led to the adoption of a wide variety of software with variable levels of usability. In order to maximize the scientific output of the APS, it is essential to support the broad development of real-time analysis and data visualization software. As scientists attack problems of increasing sophistication and deal with larger and more complex data sets, software is playing an ever more important role. Furthermore, our need for excellent and flexible scientific software can only be expected to increase, as the upgrade of the APS facility and the implementation of advanced detectors create a host of new measurement capabilities. New software analysis tools must be developed to take full advantage of these capabilities. It is critical that the APS take the lead in software development and the implementation of theory to software to ensure the continued success of this facility. The topics described in this report are relevant to the APS today and critical for the APS upgrade plan. Implementing these recommendations will have a positive impact on the scientific productivity of the APS today and will be even more critical in the future.

  8. Analysis of the lack of scientific and technological talents of high-level women in China

    Science.gov (United States)

    Lin, Wang

    2017-08-01

    The growth and development of high-level female scientific and technological talent has become a global problem and faces severe challenges. The shortage of high-level women in science and technology is a worldwide issue, and how to recruit and help female scientific and technological talent grow is drawing increasing attention. To find out the main reasons for this shortage, this paper analyses the impact of gender discrimination on the lack of high-level female scientific and technological talent and the impact of disciplinary differences on women's roles. The main reasons identified are: women's natural disadvantage in mathematical thinking; childbearing; the influence of traditional culture on the role of women; and the impact of values.

  9. 76 FR 72678 - Atlantic Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering...

    Science.gov (United States)

    2011-11-25

    ... require scientists to report their activities associated with these tags. Examples of research conducted... stock assessments. The public display and scientific research quotas for sandbar sharks are now limited... Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering Permits; Letters...

  10. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, by using standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck process. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost, while maintaining high-I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept in the magnetic disk for fast retrieval. The optical disks are used as archive

  11. PREFACE: International conference on Computer Simulation in Physics and beyond (CSP2015)

    Science.gov (United States)

    2016-02-01

    The International conference on Computer Simulations in Physics and beyond (CSP2015) was held from 6-10 September 2015 at the campus of the Moscow Institute for Electronics and Mathematics (MIEM), National Research University Higher School of Economics, Moscow. Computer simulations are an increasingly popular tool for scientific research, supplementing experimental and analytical research. The main goal of the conference is to contribute to the development of methods and algorithms which take into account trends in hardware development and which may help with intensive research. The conference also gave senior scientists and students the opportunity to speak to each other and exchange ideas and views on developments in the area of high-performance computing in science. We would like to take this opportunity to thank our sponsors: the Russian Foundation for Basic Research, Federal Agency of Scientific Organizations, and Higher School of Economics.

  12. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.
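
    As a rough illustration of the kind of model described above, the sketch below combines a compute term, a memory term throttled by per-node STREAM bandwidth shared among cores, and a parameterized communication term. The functional form and every parameter value are illustrative assumptions, not the authors' actual framework.

```python
def predicted_time(flops_per_core, peak_flops, bytes_per_core,
                   stream_bw_node, cores_per_node, msg_bytes,
                   latency_s, network_bw):
    """Toy weak-scaling estimate: compute- or memory-bound work plus communication.

    Memory-bandwidth contention is modeled by dividing the sustained per-node
    STREAM bandwidth equally among the cores of a node.
    """
    t_compute = flops_per_core / peak_flops
    t_memory = bytes_per_core / (stream_bw_node / cores_per_node)
    t_comm = latency_s + msg_bytes / network_bw
    return max(t_compute, t_memory) + t_comm

# Example: 1 GFLOP and 4 GB of memory traffic per core on a 16-core node with
# 40 GB/s sustained bandwidth, exchanging 8 MB halos over a 5 GB/s link.
print(predicted_time(1e9, 10e9, 4e9, 40e9, 16, 8e6, 2e-6, 5e9))
```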

  13. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application, the Gyrokinetic Toroidal Code (GTC) in magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  14. Parallel processing is good for your scientific codes...But massively parallel processing is so much better

    International Nuclear Information System (INIS)

    Thomas, B.; Domain, Ch.; Souffez, Y.; Eon-Duval, P.

    1998-01-01

Harnessing the power of many computers to solve difficult scientific problems concurrently is one of the most innovative trends in High Performance Computing. At EDF, we have invested in parallel computing and have achieved significant results. First, we improved the processing speed of strategic codes in order to extend their scope. Then we turned to numerical simulations at the atomic scale. These computations, which we never dreamt of before, provided us with a better understanding of metallurgical phenomena. More precisely, we were able to trace defects in alloys that are used in nuclear power plants. (author)

  15. Development of a High-Resolution Climate Model for Future Climate Change Projection on the Earth Simulator

    Science.gov (United States)

    Kanzawa, H.; Emori, S.; Nishimura, T.; Suzuki, T.; Inoue, T.; Hasumi, H.; Saito, F.; Abe-Ouchi, A.; Kimoto, M.; Sumi, A.

    2002-12-01

The world's fastest supercomputer, the Earth Simulator (total peak performance 40 TFLOPS), has recently become available for climate research in Yokohama, Japan. We are planning to conduct a series of future climate change projection experiments on the Earth Simulator with a high-resolution coupled ocean-atmosphere climate model. The main scientific aims for the experiments are to investigate 1) the change in global ocean circulation with an eddy-permitting ocean model, 2) the regional details of the climate change including Asian monsoon rainfall pattern, tropical cyclones and so on, and 3) the change in natural climate variability with a high-resolution model of the coupled ocean-atmosphere system. To meet these aims, an atmospheric GCM, CCSR/NIES AGCM, with T106 (~1.1°) horizontal resolution and 56 vertical layers is to be coupled with an oceanic GCM, COCO, with ~0.28° x 0.19° horizontal resolution and 48 vertical layers. This coupled ocean-atmosphere climate model, named MIROC, also includes a land-surface model, a dynamic-thermodynamic sea-ice model, and a river routing model. The poles of the oceanic model grid system are rotated from the geographic poles so that they are placed in the Greenland and Antarctic land masses to avoid the singularity of the grid system. Each of the atmospheric and the oceanic parts of the model is parallelized with the Message Passing Interface (MPI) technique. The coupling of the two is to be done in a Multiple Program Multiple Data (MPMD) fashion. A 100-model-year integration will be possible in one actual month with 720 vector processors (which is only 14% of the full resources of the Earth Simulator).

  16. High performance APCS conceptual design and evaluation scoping study

    International Nuclear Information System (INIS)

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation with current and refined input assumptions and calculations can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies in existing designs, or performing facility design and permitting activities.

  17. Mechanism change in a simulation of peer review: from junk support to elitism.

    Science.gov (United States)

    Paolucci, Mario; Grimaldo, Francisco

    2014-01-01

Peer review works as the hinge of the scientific process, mediating between research and the awareness/acceptance of its results. While it might seem obvious that science would regulate itself scientifically, the consensus on peer review is eroding; a deeper understanding of its workings and potential alternatives is sorely needed. Employing a theoretical approach supported by agent-based simulation, we examined computational models of peer review, performing what we propose to call redesign, that is, the replication of simulations using different mechanisms. Here, we show that we are able to obtain the high sensitivity to rational cheating that is reported in the literature. We also show that this result appears to be fragile against small variations in mechanisms. Therefore, we argue that exploration of the parameter space is not enough if we want to support theoretical statements with simulation, and that exploration at the level of mechanisms is needed. These findings also support prudence in the application of simulation results based on single mechanisms, and endorse the use of complex agent platforms that encourage experimentation with diverse mechanisms.
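
    A generic flavor of such an agent-based peer-review model can be sketched as follows. The mechanism (honest reviewers report quality plus noise, while 'rational cheaters' always report the minimum score) and all parameter values are illustrative assumptions, not the mechanisms redesigned in the paper.

```python
import random

def run_round(n_papers=200, cheat_frac=0.2, reviewers_per_paper=3,
              accept_frac=0.3, noise=0.1, seed=0):
    """Toy peer-review round: fraction of truly top papers that get accepted."""
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_papers)]
    scores = []
    for q in quality:
        reviews = []
        for _ in range(reviewers_per_paper):
            if rng.random() < cheat_frac:
                reviews.append(0.0)                      # cheating review
            else:
                reviews.append(q + rng.gauss(0, noise))  # honest, noisy review
        scores.append(sum(reviews) / len(reviews))
    n_accept = int(accept_frac * n_papers)
    by_score = sorted(range(n_papers), key=lambda i: scores[i], reverse=True)
    by_quality = sorted(range(n_papers), key=lambda i: quality[i], reverse=True)
    overlap = set(by_score[:n_accept]) & set(by_quality[:n_accept])
    return len(overlap) / n_accept

if __name__ == "__main__":
    for frac in (0.0, 0.1, 0.3):
        print(f"cheater fraction {frac:.1f}: accuracy {run_round(cheat_frac=frac):.2f}")
```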

  18. Investigating Assessment Bias for Constructed Response Explanation Tasks: Implications for Evaluating Performance Expectations for Scientific Practice

    Science.gov (United States)

    Federer, Meghan Rector

    frequently incorporate multivalent concepts into explanations of change, resulting in explanatory practices that were scientifically non-normative. However, use of follow-up question approaches was found to resolve this source of bias and thereby increase the validity of inferences about student understanding. The second study focused on issues of item and instrument structure, specifically item feature effects and item position effects, which have been shown to influence measures of student performance across assessment tasks. Results indicated that, along the instrument item sequence, items with similar surface features produced greater sequencing effects than sequences of items with dissimilar surface features. This bias could be addressed by use of a counterbalanced design (i.e., Latin Square) at the population level of analysis. Explanation scores were also highly correlated with student verbosity, despite verbosity being an intrinsically trivial aspect of explanation quality. Attempting to standardize student response length was one proposed solution to the verbosity bias. The third study explored gender differences in students' performance on constructed-response explanation tasks using impact (i.e., mean raw scores) and differential item function (i.e., item difficulties) patterns. While prior research in science education has suggested that females tend to perform better on constructed-response items, the results of this study revealed no overall differences in gender achievement. However, evaluation of specific item features patterns suggested that female respondents have a slight advantage on unfamiliar explanation tasks. That is, male students tended to incorporate fewer scientifically normative concepts (i.e., key concepts) than females for unfamiliar taxa. Conversely, females tended to incorporate more scientifically non-normative ideas (i.e., naive ideas) than males for familiar taxa. Together these results indicate that gender achievement differences for this

  19. Simulation and performance of brushless DC motor actuators

    OpenAIRE

    Gerba, Alex

    1985-01-01

The simulation model for a Brushless D.C. Motor and the associated commutation power conditioner transistor model are presented. The necessary conditions for maximum power output while operating at steady-state speed with sinusoidally distributed air-gap flux are developed. Comparisons of the simulated model with the measured performance of a typical motor are made both on time-response waveforms and on average performance characteristics. These preliminary results indicate good ...

  20. SLC injector simulation and tuning for high charge transport

    International Nuclear Information System (INIS)

    Yeremian, A.D.; Miller, R.H.; Clendenin, J.E.; Early, R.A.; Ross, M.C.; Turner, J.L.; Wang, J.W.

    1992-01-01

    We have simulated the SLC injector from the thermionic gun through the first accelerating section and used the resulting parameters to tune the injector for optimum performance and high charge transport. Simulations are conducted using PARMELA, a three-dimensional space-charge model. The magnetic field profile due to the existing magnetic optics is calculated using POISSON, while SUPERFISH is used to calculate the space harmonics of the various bunchers and the accelerator cavities. The initial beam conditions in the PARMELA code are derived from the EGUN model of the gun. The resulting injector parameters from the PARMELA simulation are used to prescribe experimental settings of the injector components. The experimental results are in agreement with the results of the integrated injector model. (Author) 5 figs., 7 refs

  1. The advanced computational testing and simulation toolkit (ACTS)

    International Nuclear Information System (INIS)

    Drummond, L.A.; Marques, O.

    2002-01-01

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  2. The advanced computational testing and simulation toolkit (ACTS)

    Energy Technology Data Exchange (ETDEWEB)

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  3. The effects of bedrest on crew performance during simulated shuttle reentry. Volume 2: Control task performance

    Science.gov (United States)

    Jex, H. R.; Peters, R. A.; Dimarco, R. J.; Allen, R. W.

    1974-01-01

A simplified space shuttle reentry simulation performed on the NASA Ames Research Center Centrifuge is described. Anticipating potentially deleterious effects of physiological deconditioning from orbital living (simulated here by 10 days of enforced bedrest) upon a shuttle pilot's ability to manually control his aircraft (should that be necessary in an emergency), a comprehensive battery of measurements was made roughly every 1/2 minute on eight military pilot subjects, over two 20-minute reentry Gz vs. time profiles, one peaking at 2 Gz and the other at 3 Gz. Alternate runs were made without and with g-suits to test the help or interference offered by such protective devices to manual control performance. A very demanding two-axis control task was employed, with a subcritical instability in the pitch axis to force a high attentional demand and a severe loss-of-control penalty. The results show that pilots experienced in high Gz flying can easily handle the shuttle manual control task during 2 Gz or 3 Gz reentry profiles, provided the degree of physiological deconditioning is no more than that induced by these 10 days of enforced bedrest.
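
    The 'subcritical instability' element referred to above is, in essence, a first-order divergent plant that the operator must continually stabilize. The sketch below, with purely illustrative parameter values and a simple delayed-proportional-feedback stand-in for the pilot, shows how added operator delay turns an easily controlled unstable element into an uncontrollable one.

```python
import numpy as np

def max_excursion(lmbda=2.0, gain=4.0, delay_steps=5, dt=0.01, t_end=20.0, seed=0):
    """Toy tracking task: unstable plant x' = lambda*x + u, with the operator
    modeled as delayed proportional feedback u = -gain * x(t - delay)."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x = np.zeros(n)
    x[0] = 0.01
    for k in range(1, n):
        x_delayed = x[max(k - 1 - delay_steps, 0)]
        u = -gain * x_delayed
        disturbance = 0.01 * rng.standard_normal()
        x[k] = x[k - 1] + dt * (lmbda * x[k - 1] + u + disturbance)
    return float(np.max(np.abs(x)))

for steps in (5, 50):   # effective operator delays of 0.05 s and 0.5 s
    print(f"delay {steps * 0.01:.2f} s -> max |x| = {max_excursion(delay_steps=steps):.3g}")
```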

  4. An Analysis of the Supports and Constraints for Scientific Discussion in High School Project-Based Science

    Science.gov (United States)

    Alozie, Nonye M.; Moje, Elizabeth Birr; Krajcik, Joseph S.

    2010-01-01

    One goal of project-based science is to promote the development of scientific discourse communities in classrooms. Holding rich high school scientific discussions is challenging, especially when the demands of content and norms of high school science pose challenges to their enactment. There is little research on how high school teachers enact…

  5. Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data

    Science.gov (United States)

    Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.

    2017-12-01

With growing attention to the ocean and the rapid development of marine detection, there are increasing demands for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing and a rapid grid-oriented strategy, a series of efficient and high-quality visualization methods that can deal with large-scale and multi-dimensional marine data in different environmental circumstances has been proposed in this paper. Firstly, a high-quality seawater simulation is realized using an FFT algorithm, bump mapping and texture animation. Secondly, large-scale multi-dimensional marine hydrological environmental data is visualized using 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data is simulated with an improved Delaunay algorithm, a surface reconstruction algorithm, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying marine environment simulation, but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is built on the OSG 3D rendering engine. It is integrated with the marine visualization methods mentioned above and shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. Such an application provides valuable reference and decision-making information for understanding the progress of a deep-water oil spill, which is helpful for ocean disaster forecasting, warning and emergency response.
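
    The FFT-based seawater simulation mentioned above amounts to shaping a random spectrum in wavenumber space and inverse-transforming it into a height field. Below is a minimal sketch using an ad hoc, Phillips-like spectrum; the exact spectrum, scaling and animation in the paper may differ.

```python
import numpy as np

def ocean_heightfield(n=256, wind_scale=8.0, seed=0):
    """Toy FFT ocean: shape complex white noise with a Phillips-like spectrum
    in wavenumber space, then inverse-transform to a real height field."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx ** 2 + ky ** 2)
    k[0, 0] = 1e-6                                      # avoid division by zero at DC
    spectrum = np.exp(-1.0 / (wind_scale * k) ** 2) / k ** 4
    phases = np.exp(2j * np.pi * rng.random((n, n)))
    return np.real(np.fft.ifft2(np.sqrt(spectrum) * phases))

height = ocean_heightfield()
print(height.shape, float(height.std()))
```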

  6. FY01 Supplemental Science and Performance Analyses, Volume 1: Scientific Bases and Analyses, Part 1 and 2

    International Nuclear Information System (INIS)

    Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23 013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054] [DIRS 124754]). By making the large amount of information developed on Yucca Mountain available in stages, the DOE intends to provide the public and interested parties with time to review the available materials and to formulate

  7. Predictors of laparoscopic simulation performance among practicing obstetrician gynecologists.

    Science.gov (United States)

    Mathews, Shyama; Brodman, Michael; D'Angelo, Debra; Chudnoff, Scott; McGovern, Peter; Kolev, Tamara; Bensinger, Giti; Mudiraj, Santosh; Nemes, Andreea; Feldman, David; Kischak, Patricia; Ascher-Walsh, Charles

    2017-11-01

    While simulation training has been established as an effective method for improving laparoscopic surgical performance in surgical residents, few studies have focused on its use for attending surgeons, particularly in obstetrics and gynecology. Surgical simulation may have a role in improving and maintaining proficiency in the operating room for practicing obstetrician gynecologists. We sought to determine if parameters of performance for validated laparoscopic virtual simulation tasks correlate with surgical volume and characteristics of practicing obstetricians and gynecologists. All gynecologists with laparoscopic privileges (n = 347) from 5 academic medical centers in New York City were required to complete a laparoscopic surgery simulation assessment. The physicians took a presimulation survey gathering physician self-reported characteristics and then performed 3 basic skills tasks (enforced peg transfer, lifting/grasping, and cutting) on the LapSim virtual reality laparoscopic simulator (Surgical Science Ltd, Gothenburg, Sweden). The association between simulation outcome scores (time, efficiency, and errors) and self-rated clinical skills measures (self-rated laparoscopic skill score or surgical volume category) were examined with regression models. The average number of laparoscopic procedures per month was a significant predictor of total time on all 3 tasks (P = .001 for peg transfer; P = .041 for lifting and grasping; P simulation performance as it correlates to active physician practice, further studies may help assess skill and individualize training to maintain skill levels as case volumes fluctuate. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Development of high performance scientific components for interoperability of computing packages

    Energy Technology Data Exchange (ETDEWEB)

    Gulabani, Teena Pratap [Iowa State Univ., Ames, IA (United States)

    2008-01-01

Three major high-performance quantum chemistry packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integrating large quantum chemistry packages allows resource sharing and thus avoids reinventing the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  9. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Science.gov (United States)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
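
    The core numerical step described above, a Cholesky factorization followed by triangular solves, can be sketched on the CPU with SciPy as below; the GPU path in the paper uses CUDA libraries instead, and the toy dense SPD matrix here merely stands in for the sparse finite-element system.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Toy symmetric positive-definite system standing in for the FEM matrix.
n = 500
rng = np.random.default_rng(1)
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # SPD by construction
b = rng.standard_normal(n)

c, low = cho_factor(A)             # Cholesky decomposition (CPU)
x = cho_solve((c, low), b)         # forward/backward triangular solves
print(np.allclose(A @ x, b))       # True
```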

  10. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. In order to fill this gap, the COD agent-based simulation model is presented here and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from being new in organization science, but what is remarkable is that now it has the strength of an algorithmic demonstration instead of being based on the authoritativeness of some scholar or on some episodic empirical finding. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, then the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.
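
    A back-of-the-envelope way to see why the three interdependence modes order this way is to count the coordination events each mode requires per task; the counting rule below is an illustrative assumption, not the COD model's actual complexity measure.

```python
def coordination_events(n_agents, mode):
    """Toy count of pairwise coordination events per task for each
    interdependence mode (illustrative only)."""
    if mode == "parallel":      # agents work independently
        return 0
    if mode == "sequential":    # each agent hands off to the next
        return n_agents - 1
    if mode == "reciprocal":    # every ordered pair must mutually adjust
        return n_agents * (n_agents - 1)
    raise ValueError(mode)

for mode in ("parallel", "sequential", "reciprocal"):
    print(mode, coordination_events(6, mode))
```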

  11. XVis: Visualization for the Extreme-Scale Scientific-Computation Ecosystem: Year-end report FY17.

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Rogers, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Childs, Hank [Univ. of Oregon, Eugene, OR (United States); Ma, Kwan-Liu [Univ. of California, Davis, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-01

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. This project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis.

  12. Proficiency performance benchmarks for removal of simulated brain tumors using a virtual reality simulator NeuroTouch.

    Science.gov (United States)

    AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F

    2015-01-01

    Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. This
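
    One of the efficiency metrics listed above, the instrument total tip path length, reduces to summing the distances between successive sampled tip positions; a minimal sketch (not the NeuroTouch implementation) is shown below.

```python
import numpy as np

def tip_path_length(positions):
    """Total tip path length from sampled 3-D instrument positions.
    Illustrative computation on hypothetical samples."""
    p = np.asarray(positions, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))

samples = [(0, 0, 0), (1, 0, 0), (1, 2, 0), (1, 2, 2)]
print(tip_path_length(samples))  # 1 + 2 + 2 = 5.0
```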

  13. Measurements and simulation-based optimization of TIGRESS HPGe detector array performance

    International Nuclear Information System (INIS)

    Schumaker, M.A.

    2005-01-01

    TIGRESS is a new γ-ray detector array being developed for installation at the new ISAC-II facility at TRIUMF in Vancouver. When complete, it will consist of twelve large-volume segmented HPGe clover detectors, fitted with segmented Compton suppression shields. The combined operation of prototypes of both a TIGRESS detector and a suppression shield has been tested. Peak-to-total ratios, relative photopeak efficiencies, and energy resolution functions have been determined in order to characterize the performance of TIGRESS. This information was then used to refine a GEANT4 simulation of the full detector array. Using this simulation, methods to overcome the degradation of the photopeak efficiency and peak-to-total response that occurs with high γ-ray multiplicity events were explored. These methods take advantage of the high segmentation of both the HPGe clovers and the suppression shields to suppress or sum detector interactions selectively. For a range of γ-ray energies and multiplicities, optimal analysis methods have been determined, which has resulted in significant gains in the expected performance of TIGRESS. (author)

  14. Performance of technology-driven simulators for medical students--a systematic review.

    Science.gov (United States)

    Michael, Michael; Abboudi, Hamid; Ker, Jean; Shamim Khan, Mohammed; Dasgupta, Prokar; Ahmed, Kamran

    2014-12-01

Simulation-based education has evolved as a key training tool in high-risk industries such as aviation and the military. In parallel with these industries, the benefits of incorporating specialty-oriented simulation training within medical schools are vast. Adoption of simulators into medical school education programs has shown great promise and has the potential to revolutionize modern undergraduate education. An English literature search was carried out using the MEDLINE, EMBASE, and PsycINFO databases to identify all randomized controlled studies pertaining to "technology-driven" simulators used in undergraduate medical education. A validity framework incorporating the "framework for technology enhanced learning" report by the Department of Health, United Kingdom, was used to evaluate the capabilities of each technology-driven simulator. Information was collected regarding the simulator type, characteristics, and brand name. Where possible, we extracted information from the studies on the simulators' performance with respect to validity status, reliability, feasibility, education impact, acceptability, and cost effectiveness. We identified 19 studies, analyzing simulators for medical students across a variety of procedure-based specialties, including cardiovascular (n = 2), endoscopy (n = 3), laparoscopic surgery (n = 8), vascular access (n = 2), ophthalmology (n = 1), obstetrics and gynecology (n = 1), anesthesia (n = 1), and pediatrics (n = 1). Incorporation of simulators has so far been on an institutional level; no national or international trends have yet emerged. Simulators are capable of providing a highly educational and realistic experience for medical students within a variety of specialty-oriented teaching sessions. Further research is needed to establish how best to incorporate simulators into a more primary stage of medical education: preclinical and clinical undergraduate medicine. Copyright © 2014 Elsevier Inc. All rights

  15. Milking performance evaluation and factors affecting milking claw vacuum levels with flow simulator.

    Science.gov (United States)

    Enokidani, Masafumi; Kawai, Kazuhiro; Shinozuka, Yasunori; Watanabe, Aiko

    2017-08-01

    Milking performance of milking machines that matches the production capability of dairy cows is important in reducing the risk of mastitis, particularly in high-producing cows. This study used a simulated milking device to examine the milking performance of the milking system of 73 dairy farms and to analyze the factors affecting claw vacuum. Mean claw vacuum and range of fluctuation of claw vacuum (claw vacuum range) were measured at three different flow rates: 5.7, 7.6 and 8.7 kg/min. At the highest flow rate, only 16 farms (21.9%) met both standards of mean claw vacuum ≥35 kPa and claw vacuum range ≤ 7 kPa, showing that milking systems currently have poor milking performance. The factors affecting mean claw vacuum were claw type, milk-meter and vacuum shut-off device; the factor affecting claw vacuum range was claw type. Examination of the milking performance of the milking system using a simulated milking device allows an examination of the performance that can cope with high producing cows, indicating the possibility of reducing the risk of mastitis caused by inappropriate claw vacuum. © 2016 Japanese Society of Animal Science.

  16. Immersive visualization of rail simulation data.

    Science.gov (United States)

    2016-01-01

The prime objective of this project was to create scientific, immersive visualizations of a rail simulation. This project is part of a larger initiative that consists of three distinct parts. The first step consists of performing a finite element a...

  17. Simulation of press-forming for automobile part using ultra high tension steel

    Directory of Open Access Journals (Sweden)

    Tanabe I.

    2012-08-01

Full Text Available In recent years, ultra high tension steel has gradually been used in the automobile industry. The development of press-forming technology is now essential because of its high productivity and high product quality. In this study, tensile tests were performed with a view to understanding the material properties. Press-forming tests were then carried out with regard to the behaviors of spring back and deep-drawability, and to manufacture a real product. The ultra high tension steel used in the experiments had a thickness of 1 mm and a tensile strength of 1000 MPa. Finally, simulations of spring back, deep-drawability and manufacturing a real product in ultra high tension steel were conducted and evaluated in order to calculate the optimum press-forming conditions and the optimum shape of the die. FEM with non-linear dynamic analysis using Euler-Lagrange elements was used for the simulations. It is concluded from the results that (1) the simulations conformed to the results of the experiments and (2) the simulations proved very effective for calculating the optimum press conditions and die shape.

  18. Cognitive load predicts point-of-care ultrasound simulator performance.

    Science.gov (United States)

    Aldekhyl, Sara; Cavalcanti, Rodrigo B; Naismith, Laura M

    2018-02-01

    The ability to maintain good performance with low cognitive load is an important marker of expertise. Incorporating cognitive load measurements in the context of simulation training may help to inform judgements of competence. This exploratory study investigated relationships between demographic markers of expertise, cognitive load measures, and simulator performance in the context of point-of-care ultrasonography. Twenty-nine medical trainees and clinicians at the University of Toronto with a range of clinical ultrasound experience were recruited. Participants answered a demographic questionnaire then used an ultrasound simulator to perform targeted scanning tasks based on clinical vignettes. Participants were scored on their ability to both acquire and interpret ultrasound images. Cognitive load measures included participant self-report, eye-based physiological indices, and behavioural measures. Data were analyzed using a multilevel linear modelling approach, wherein observations were clustered by participants. Experienced participants outperformed novice participants on ultrasound image acquisition. Ultrasound image interpretation was comparable between the two groups. Ultrasound image acquisition performance was predicted by level of training, prior ultrasound training, and cognitive load. There was significant convergence between cognitive load measurement techniques. A marginal model of ultrasound image acquisition performance including prior ultrasound training and cognitive load as fixed effects provided the best overall fit for the observed data. In this proof-of-principle study, the combination of demographic and cognitive load measures provided more sensitive metrics to predict ultrasound simulator performance. Performance assessments which include cognitive load can help differentiate between levels of expertise in simulation environments, and may serve as better predictors of skill transfer to clinical practice.
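
    The multilevel (mixed-effects) analysis described above, with observations clustered by participant, can be sketched with statsmodels on synthetic stand-in data; the variable names and effect sizes below are invented for illustration and do not reproduce the study's dataset or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: repeated scanning tasks clustered by participant.
rng = np.random.default_rng(0)
n_participants, n_tasks = 30, 6
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_tasks),
    "prior_training": np.repeat(rng.integers(0, 2, n_participants), n_tasks),
    "cognitive_load": rng.normal(5.0, 1.0, n_participants * n_tasks),
})
df["acquisition_score"] = (60 + 10 * df["prior_training"]
                           - 4 * df["cognitive_load"]
                           + rng.normal(0.0, 5.0, len(df)))

# Random intercept per participant; fixed effects for training and load.
model = smf.mixedlm("acquisition_score ~ prior_training + cognitive_load",
                    df, groups=df["participant"])
print(model.fit().summary())
```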

  19. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    Directory of Open Access Journals (Sweden)

    Mark James Abraham

    2015-09-01

Full Text Available GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types and preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work at every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. The latest best-in-class compressed trajectory storage format is supported.

  20. Nuclear Power Plant Simulation Game.

    Science.gov (United States)

    Weiss, Fran

    1979-01-01

    Presents a nuclear power plant simulation game which is designed to involve a class of 30 junior or senior high school students. Scientific, ecological, and social issues covered in the game are also presented. (HM)

  1. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    KAUST Repository

    Vignal, Philippe

    2014-06-06

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation, and the phase-field crystal equation as test cases. These two models allow us to highlight some of the main advantages that we have access to while using PetIGA for scientific computing.
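
    For reference, a common form of the Cahn-Hilliard equation targeted by such phase-field solvers is written below; the specific free energy, mobility and weak form used in the PetIGA test case may differ.

```latex
% Cahn-Hilliard equation: conserved dynamics driven by the chemical potential mu,
% with mobility M, interface parameter kappa, and a double-well free energy f(c).
\begin{align}
  \frac{\partial c}{\partial t} &= \nabla \cdot \bigl( M \, \nabla \mu \bigr), \\
  \mu &= f'(c) - \kappa \, \nabla^{2} c, \qquad f(c) = \tfrac{1}{4}\, c^{2} (1 - c)^{2}.
\end{align}
```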

  2. The ADAQ framework: An integrated toolkit for data acquisition and analysis with real and simulated radiation detectors

    International Nuclear Information System (INIS)

    Hartwig, Zachary S.

    2016-01-01

    The ADAQ framework is a collection of software tools that is designed to streamline the acquisition and analysis of radiation detector data produced in modern digital data acquisition (DAQ) systems and in Monte Carlo detector simulations. The purpose of the framework is to maximize user scientific productivity by minimizing the effort and expertise required to fully utilize radiation detectors in a variety of scientific and engineering disciplines. By using a single set of tools to span the real and simulation domains, the framework eliminates redundancy and provides an integrated workflow for high-fidelity comparison between experimental and simulated detector performance. Built on the ROOT data analysis framework, the core of the ADAQ framework is a set of C++ and Python libraries that enable high-level control of digital DAQ systems and detector simulations with data stored into standardized binary ROOT files for further analysis. Two graphical user interface programs utilize the libraries to create powerful tools: ADAQAcquisition handles control and readout of real-world DAQ systems and ADAQAnalysis provides data analysis and visualization methods for experimental and simulated data. At present, the ADAQ framework supports digital DAQ hardware from CAEN S.p.A. and detector simulations performed in Geant4; however, the modular design will facilitate future extension to other manufacturers and simulation platforms. - Highlights: • A new software framework for radiation detector data acquisition and analysis. • Integrated acquisition and analysis of real-world and simulated detector data. • C++ and Python libraries for data acquisition hardware control and readout. • Graphical program for control and readout of digital data acquisition hardware. • Graphical program for comprehensive analysis of real-world and simulated data.

  3. The ADAQ framework: An integrated toolkit for data acquisition and analysis with real and simulated radiation detectors

    Energy Technology Data Exchange (ETDEWEB)

    Hartwig, Zachary S., E-mail: hartwig@mit.edu

    2016-04-11

    The ADAQ framework is a collection of software tools that is designed to streamline the acquisition and analysis of radiation detector data produced in modern digital data acquisition (DAQ) systems and in Monte Carlo detector simulations. The purpose of the framework is to maximize user scientific productivity by minimizing the effort and expertise required to fully utilize radiation detectors in a variety of scientific and engineering disciplines. By using a single set of tools to span the real and simulation domains, the framework eliminates redundancy and provides an integrated workflow for high-fidelity comparison between experimental and simulated detector performance. Built on the ROOT data analysis framework, the core of the ADAQ framework is a set of C++ and Python libraries that enable high-level control of digital DAQ systems and detector simulations with data stored into standardized binary ROOT files for further analysis. Two graphical user interface programs utilize the libraries to create powerful tools: ADAQAcquisition handles control and readout of real-world DAQ systems and ADAQAnalysis provides data analysis and visualization methods for experimental and simulated data. At present, the ADAQ framework supports digital DAQ hardware from CAEN S.p.A. and detector simulations performed in Geant4; however, the modular design will facilitate future extension to other manufacturers and simulation platforms. - Highlights: • A new software framework for radiation detector data acquisition and analysis. • Integrated acquisition and analysis of real-world and simulated detector data. • C++ and Python libraries for data acquisition hardware control and readout. • Graphical program for control and readout of digital data acquisition hardware. • Graphical program for comprehensive analysis of real-world and simulated data.

  4. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    Science.gov (United States)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), though merely as a case study approach. Thanks to the big leap in computational performance of the K computer, we could greatly increase the number of cases of MJO events for numerical simulations, in addition to extending the integration time and horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with that of the relatively coarser operational models currently in use. The impacts of the sub-kilometer resolution simulation and the multi-decadal simulations using NICAM are also reviewed.

  5. Status report on high fidelity reactor simulation

    International Nuclear Information System (INIS)

    Palmiotti, G.; Smith, M.; Rabiti, C.; Lewis, E.; Yang, W.; Leclere, M.; Siegel, A.; Fischer, P.; Kaushik, D.; Ragusa, J.; Lottes, J.; Smith, B.

    2006-01-01

    This report presents the effort under way at Argonne National Laboratory toward a comprehensive, integrated computational tool intended mainly for the high-fidelity simulation of sodium-cooled fast reactors. The main activities carried out involved neutronics, thermal hydraulics, coupling strategies, software architecture, and high-performance computing. A new neutronics code, UNIC, is being developed. The first phase involves the application of a spherical harmonics method to a general, unstructured three-dimensional mesh. The method also has been interfaced with a method of characteristics. The spherical harmonics equations were implemented in a stand-alone code that was then used to solve several benchmark problems. For thermal hydraulics, a computational fluid dynamics code called Nek5000, developed in the Mathematics and Computer Science Division for coupled hydrodynamics and heat transfer, has been applied to a single-pin, periodic cell in the wire-wrap geometry typical of advanced burner reactors. Numerical strategies for multiphysics coupling have been considered and higher-accuracy efficient methods proposed to finely simulate coupled neutronic/thermal-hydraulic reactor transients. Initial steps have been taken in order to couple UNIC and Nek5000, and simplified problems have been defined and solved for testing. Furthermore, we have begun developing a lightweight computational framework, based in part on carefully selected open source tools, to nonobtrusively and efficiently integrate the individual physics modules into a unified simulation tool

  6. Integrated heat transport simulation of high ion temperature plasma of LHD

    International Nuclear Information System (INIS)

    Murakami, S.; Yamaguchi, H.; Sakai, A.

    2014-10-01

A first dynamical simulation of a high ion temperature plasma with carbon pellet injection in LHD is performed with the integrated simulation GNET-TD + TASK3D. The NBI heating deposition of the time-evolving plasma is evaluated by the 5D drift kinetic equation solver GNET-TD, and the heat transport of the multi-ion-species plasma (e, H, He, C) is studied with the integrated transport simulation code TASK3D. Achievement of the high ion temperature plasma is attributed to 1) the increase of heating power per ion due to the temporal increase of the effective charge, 2) the reduction of effective neoclassical transport with impurities, and 3) the reduction of turbulent transport. The reduction of turbulent transport is the most significant contribution to achieving the high ion temperature, and the reduction of turbulent transport relative to the L-mode plasma (normal hydrogen plasma) is evaluated to be a factor of about five using the integrated heat transport simulation code. Applying the Zeff-dependent turbulence reduction model, we obtain a time behavior of the ion temperature after the C pellet injection similar to the experimental results. (author)

  7. Parallel PDE-Based Simulations Using the Common Component Architecture

    International Nuclear Information System (INIS)

    McInnes, Lois C.; Allan, Benjamin A.; Armstrong, Robert; Benson, Steven J.; Bernholdt, David E.; Dahlgren, Tamara L.; Diachin, Lori; Krishnan, Manoj Kumar; Kohl, James A.; Larson, J. Walter; Lefantzi, Sophia; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G.; Ray, Jaideep; Zhou, Shujia

    2006-01-01

    The complexity of parallel PDE-based simulations continues to increase as multimodel, multiphysics, and multi-institutional projects become widespread. A goal of component based software engineering in such large-scale simulations is to help manage this complexity by enabling better interoperability among various codes that have been independently developed by different groups. The Common Component Architecture (CCA) Forum is defining a component architecture specification to address the challenges of high-performance scientific computing. In addition, several execution frameworks, supporting infrastructure, and general purpose components are being developed. Furthermore, this group is collaborating with others in the high-performance computing community to design suites of domain-specific component interface specifications and underlying implementations. This chapter discusses recent work on leveraging these CCA efforts in parallel PDE-based simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. We explain how component technology helps to address the different challenges posed by each of these applications, and we highlight how component interfaces built on existing parallel toolkits facilitate the reuse of software for parallel mesh manipulation, discretization, linear algebra, integration, optimization, and parallel data redistribution. We also present performance data to demonstrate the suitability of this approach, and we discuss strategies for applying component technologies to both new and existing applications

  8. Large-scale computation at PSI scientific achievements and future requirements

    International Nuclear Information System (INIS)

    Adelmann, A.; Markushin, V.

    2008-11-01

Computational modelling and simulation are among the disciplines that have seen the most dramatic growth in capabilities in the 20th century. Within the past two decades, scientific computing has become an important contributor to all scientific research programs. Computational modelling and simulation are particularly indispensable for solving research problems that are unsolvable by traditional theoretical and experimental approaches, hazardous to study, or time consuming or expensive to solve by traditional means. Many such research areas are found in PSI's research portfolio. Advances in computing technologies (including hardware and software) during the past decade have set the stage for a major step forward in modelling and simulation. We have now arrived at a situation where we have a number of otherwise unsolvable problems, where simulations are as complex as the systems under study. In 2008 the High-Performance Computing (HPC) community entered the petascale era with the heterogeneous Opteron/Cell machine, called Roadrunner, built by IBM for the Los Alamos National Laboratory. We are on the brink of a time where the availability of many hundreds of thousands of cores will open up new challenging possibilities in physics, algorithms (numerical mathematics) and computer science. However, to deliver on this promise, it is not enough to provide 'peak' performance in terms of peta-flops, the maximum theoretical speed a computer can attain. Most important, this must be translated into a corresponding increase in the capabilities of scientific codes. This is a daunting problem that can only be solved by increasing investment in hardware, in the accompanying system software that enables the reliable use of high-end computers, in scientific competence, i.e. the mathematical (parallel) algorithms that are the basis of the codes, and in education. In the case of Switzerland, the white paper 'Swiss National Strategic Plan for High Performance Computing and Networking

  9. Large-scale computation at PSI scientific achievements and future requirements

    Energy Technology Data Exchange (ETDEWEB)

    Adelmann, A.; Markushin, V

    2008-11-15

    Computational modelling and simulation are among the disciplines that have seen the most dramatic growth in capabilities in the 20th century. Within the past two decades, scientific computing has become an important contributor to all scientific research programs. Computational modelling and simulation are particularly indispensable for solving research problems that are unsolvable by traditional theoretical and experimental approaches, hazardous to study, or time-consuming or expensive to solve by traditional means. Many such research areas are found in PSI's research portfolio. Advances in computing technologies (including hardware and software) during the past decade have set the stage for a major step forward in modelling and simulation. We have now arrived at a situation where we have a number of otherwise unsolvable problems, where simulations are as complex as the systems under study. In 2008 the High-Performance Computing (HPC) community entered the petascale era with the heterogeneous Opteron/Cell machine, called Roadrunner, built by IBM for the Los Alamos National Laboratory. We are on the brink of a time when the availability of many hundreds of thousands of cores will open up new challenging possibilities in physics, algorithms (numerical mathematics) and computer science. However, to deliver on this promise, it is not enough to provide 'peak' performance in terms of petaflops, the maximum theoretical speed a computer can attain. Most importantly, this must be translated into a corresponding increase in the capabilities of scientific codes. This is a daunting problem that can only be solved by increasing investment in hardware, in the accompanying system software that enables the reliable use of high-end computers, in scientific competence, i.e. the mathematical (parallel) algorithms that are the basis of the codes, and in education. In the case of Switzerland, the white paper 'Swiss National Strategic Plan for High Performance Computing

  10. Micro-Vibration Performance Prediction of SEPTA24 Using SMeSim (RUAG Space Mechanism Simulator Tool)

    Science.gov (United States)

    Omiciuolo, Manolo; Lang, Andreas; Wismer, Stefan; Barth, Stephan; Szekely, Gerhard

    2013-09-01

    Scientific space missions are currently challenging the performance of their payloads. That performance can be dramatically restricted by micro-vibration loads generated by any moving parts of the satellite, and thus by Solar Array Drive Assemblies (SADAs) too. Micro-vibration prediction of SADAs is therefore very important to support their design and optimization in the early stages of a programme. The Space Mechanism Simulator (SMeSim) tool, developed by RUAG, enhances the capability of analysing the micro-vibration emissivity of a SADA under a specified set of boundary conditions. The tool is developed in the Matlab/Simulink® environment through a library of blocks simulating the different components a SADA is made of. The modular architecture of the blocks, assembled by the user, and the set-up of the boundary conditions allow time-domain and frequency-domain analyses of a rigid multi-body model with concentrated flexibilities and coupled electronic control of the mechanism. SMeSim is used to model the SEPTA24 Solar Array Drive Mechanism and predict its micro-vibration emissivity. SMeSim and the return of experience earned throughout its development and use can now support activities such as verification by analysis of micro-vibration emissivity requirements and/or design optimization to minimize the micro-vibration emissivity of a SADA.

  11. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    Science.gov (United States)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and the behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and operates successfully on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good agreement with the test data, validating the high simulation accuracy.
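
    As a rough illustration of the statistics involved, the NumPy sketch below estimates an after-pulsing probability from an assumed number of avalanche carriers, a per-carrier trapping probability, an exponential de-trapping time constant, a dead time and an avalanche triggering probability, cross-checking a simple closed-form estimate against a Monte Carlo run. All parameter values are invented placeholders; this is not the paper's Verilog-A model.

```python
"""Toy Monte Carlo estimate of SPAD after-pulsing probability (illustrative only).

Assumed simple model: during an avalanche, N_CARRIERS cross the junction, each is
trapped with probability P_TRAP; trapped carriers are released with an exponential
de-trapping time constant TAU_DETRAP; a carrier released after the dead time T_DEAD
re-triggers an avalanche with probability P_TRIGGER.  All numbers are made-up
illustrative values, not parameters from the paper.
"""
import numpy as np

rng = np.random.default_rng(0)

N_CARRIERS = 5.0e5      # carriers flowing during one avalanche (assumed)
P_TRAP     = 1.0e-6     # per-carrier trapping probability (assumed)
TAU_DETRAP = 50e-9      # de-trapping time constant [s] (assumed)
T_DEAD     = 100e-9     # detector dead time [s] (assumed)
P_TRIGGER  = 0.8        # avalanche triggering probability (assumed)

# Closed-form estimate: expected number of "dangerous" released carriers is Poisson-like.
lam = N_CARRIERS * P_TRAP * P_TRIGGER * np.exp(-T_DEAD / TAU_DETRAP)
p_ap_analytic = 1.0 - np.exp(-lam)

# Monte Carlo check of the same toy model.
trials = 200_000
n_trapped = rng.binomial(int(N_CARRIERS), P_TRAP, size=trials)
afterpulse = np.zeros(trials, dtype=bool)
for i, n in enumerate(n_trapped):
    if n == 0:
        continue
    release = rng.exponential(TAU_DETRAP, size=n)
    triggers = (release > T_DEAD) & (rng.random(n) < P_TRIGGER)
    afterpulse[i] = triggers.any()

print(f"analytic    P_ap = {p_ap_analytic:.4f}")
print(f"Monte Carlo P_ap = {afterpulse.mean():.4f}")
```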

  12. High-performance computational fluid dynamics: a custom-code approach

    International Nuclear Information System (INIS)

    Fannon, James; Náraigh, Lennon Ó; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain

    2016-01-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order both to validate its accuracy and to investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing. (paper)
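
    TPLS itself is a Fortran 90/MPI code and is not reproduced here. The serial NumPy sketch below only illustrates the simplest physics mentioned above: it integrates the 1D pressure-driven laminar channel-flow equation du/dt = G/ρ + ν d²u/dy² to steady state and checks the result against the analytic Poiseuille profile. Grid size, viscosity and pressure gradient are arbitrary illustrative values.

```python
"""Minimal serial sketch of pressure-driven laminar channel flow (not TPLS itself).

Integrates du/dt = G/rho + nu * d2u/dy2 with no-slip walls to steady state and
compares against the analytic Poiseuille profile u(y) = G/(2*mu) * y * (H - y).
"""
import numpy as np

H, NY = 1.0, 101                 # channel height and number of grid points
RHO, NU = 1.0, 1.0e-2            # density and kinematic viscosity
G = 1.0                          # imposed pressure-gradient magnitude, -dP/dx

y = np.linspace(0.0, H, NY)
dy = y[1] - y[0]
dt = 0.4 * dy**2 / NU            # explicit diffusion stability limit

u = np.zeros(NY)                 # no-slip walls: u[0] = u[-1] = 0
for _ in range(200_000):
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy**2
    u_new = u.copy()
    u_new[1:-1] += dt * (G / RHO + NU * lap)
    if np.max(np.abs(u_new - u)) < 1e-10:   # converged to steady state
        u = u_new
        break
    u = u_new

mu = RHO * NU
u_exact = G / (2.0 * mu) * y * (H - y)
print("max relative error vs Poiseuille profile:",
      np.max(np.abs(u - u_exact)) / u_exact.max())
```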

  13. High-performance computational fluid dynamics: a custom-code approach

    Science.gov (United States)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFD) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order both to validate its accuracy and to investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFD, while also providing insight for those interested in more general aspects of high-performance computing.

  14. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  15. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  16. Simulation of Martian EVA at the Mars Society Arctic Research Station

    Science.gov (United States)

    Pletser, V.; Zubrin, R.; Quinn, K.

    The Mars Society has established a Mars Arctic Research Station (M.A.R.S.) on Devon Island, North of Canada, in the middle of the Haughton crater formed by the impact of a large meteorite several million years ago. The site was selected for its similarities with the surface of the Mars planet. During the Summer 2001, the MARS Flashline Research Station supported an extended international simulation campaign of human Mars exploration operations. Six rotations of six person crews spent up to ten days each at the MARS Flashline Research Station. International crews, of mixed gender and professional qualifications, conducted various tasks as a Martian crew would do and performed scientific experiments in several fields (Geophysics, Biology, Psychology). One of the goals of this simulation campaign was to assess the operational and technical feasibility of sustaining a crew in an autonomous habitat, conducting a field scientific research program. Operations were conducted as they would be during a Martian mission, including Extra-Vehicular Activities (EVA) with specially designed unpressurized suits. The second rotation crew conducted seven simulated EVAs for a total of 17 hours, including motorized EVAs with All Terrain Vehicles, to perform field scientific experiments in Biology and Geophysics. Some EVAs were highly successful. For some others, several problems were encountered related to hardware technical failures and to bad weather conditions. The paper will present the experiment programme conducted at the Mars Flashline Research Station, the problems encountered and the lessons learned from an EVA operational point of view. Suggestions to improve foreseen Martian EVA operations will be discussed.

  17. Wall modeling for the simulation of highly non-isothermal unsteady flows

    International Nuclear Information System (INIS)

    Devesa, A.

    2006-12-01

    Flows in the nuclear industry are most often characterized by high Reynolds numbers, density variations (at low Mach numbers) and highly unsteady behaviour (low to moderate frequencies). High Reynolds numbers are unaffordable for direct numerical simulation (DNS), so simulations must either be performed by solving averaged equations (RANS) or by resolving only the large eddies (LES), both using a wall model. A first part of this thesis dealt with the derivation and testing of two variable-density wall models: an algebraic law (CWM) and a zonal approach dedicated to LES (TBLE-ρ). These models were validated in quasi-isothermal cases before being used in academic and industrial non-isothermal flows, with satisfactory results. Then, a numerical experiment on pulsed passive scalars was performed by DNS, in which two forcing conditions were considered: oscillations imposed in the outer flow, and oscillations originating at the wall. Several frequencies and amplitudes of oscillation were taken into account in order to gain insight into unsteady effects in the boundary layer and to create a database for validating wall models in such a context. The temporal behaviour of the two wall models (algebraic and zonal) was studied and showed that the zonal model produces better results when used in the simulation of unsteady flows. (author)
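
    The thesis's CWM and TBLE-ρ formulations are not reproduced here. The sketch below shows only the generic ingredient of an algebraic wall model: given the resolved velocity at the first off-wall point, the friction velocity u_τ is obtained by a Newton iteration on the logarithmic law of the wall, which then supplies the wall shear stress to the outer solver. The constants κ = 0.41 and B = 5.2 are the usual incompressible values, and the sample inputs are arbitrary.

```python
"""Generic algebraic wall-model ingredient (illustrative; not the thesis's CWM/TBLE-rho).

Given the velocity u1 at wall distance y1, solve the log law
    u1/u_tau = (1/kappa) * ln(y1 * u_tau / nu) + B
for the friction velocity u_tau by Newton iteration, then return the wall shear stress.
"""
import math

KAPPA, B = 0.41, 5.2


def friction_velocity(u1, y1, nu, u_tau0=None, tol=1e-10, max_iter=50):
    u_tau = u_tau0 or max(1e-6, 0.05 * u1)        # crude initial guess
    for _ in range(max_iter):
        yplus = y1 * u_tau / nu
        f = u1 / u_tau - (math.log(yplus) / KAPPA + B)
        df = -u1 / u_tau**2 - 1.0 / (KAPPA * u_tau)   # d f / d u_tau
        step = f / df
        u_tau -= step
        if abs(step) < tol * u_tau:
            break
    return u_tau


if __name__ == "__main__":
    nu = 1.5e-5                  # air, m^2/s
    u1, y1 = 10.0, 1.0e-3        # sampled LES velocity and wall distance (assumed)
    rho = 1.2
    u_tau = friction_velocity(u1, y1, nu)
    print(f"u_tau = {u_tau:.4f} m/s, y+ = {y1 * u_tau / nu:.1f}, "
          f"tau_wall = {rho * u_tau**2:.4f} Pa")
```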

  18. High-Speed, High-Performance DQPSK Optical Links with Reduced Complexity VDFE Equalizers

    Directory of Open Access Journals (Sweden)

    Maki Nanou

    2017-02-01

    Optical transmission technologies optimized for optical network segments sensitive to power consumption and cost comprise modulation formats with direct-detection technologies. Specifically, non-return-to-zero differential quaternary phase shift keying (NRZ-DQPSK) in deployed fiber plants, combined with high-performance, low-complexity electronic equalizers to compensate residual impairments at the receiver end, can prove a viable solution for high-performance, high-capacity optical links. Joint processing of the constructive and the destructive signals at the single-ended DQPSK receiver provides improved performance compared to the balanced configuration, however at the expense of higher hardware requirements, a fact that cannot be neglected, especially in the case of high-speed optical links. To overcome this bottleneck, the use of partially joint constructive/destructive DQPSK equalization is investigated in this paper. Symbol-by-symbol equalization is performed by means of Volterra decision-feedback-type equalizers, driven by a reduced subset of signals selected from the constructive and the destructive ports of the optical detectors. The proposed approach offers a low-complexity alternative for electronic equalization, without sacrificing much of the performance compared to the fully deployed counterpart. The efficiency of the proposed equalizers is demonstrated by means of computer simulation in a typical optical transmission scenario.
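
    The paper's equalizers act on the constructive/destructive outputs of an optical DQPSK receiver, and reproducing that link is beyond a short example. The NumPy sketch below only illustrates the bare mechanics of a symbol-by-symbol Volterra decision-feedback equalizer on a toy real-valued channel with a second-order nonlinearity, adapted with LMS. Channel coefficients, tap counts and the step size are invented for illustration, and the feature set is a much-reduced stand-in for the reduced signal subsets discussed above.

```python
"""Toy symbol-by-symbol Volterra decision-feedback equalizer (didactic sketch only).

BPSK symbols pass through a channel with linear ISI plus a 2nd-order nonlinearity;
the equalizer forms linear and 2nd-order Volterra terms from the received samples,
adds decision-feedback taps from past hard decisions, and adapts with LMS.
"""
import numpy as np

rng = np.random.default_rng(1)

N = 20_000
x = rng.choice([-1.0, 1.0], size=N)                    # BPSK symbols

# Toy channel: linear ISI + quadratic distortion + noise (negative indices just wrap).
y = np.zeros(N)
for n in range(N):
    lin = x[n] + 0.5 * x[n - 1] + 0.2 * x[n - 2]
    y[n] = lin + 0.3 * (x[n] + x[n - 1]) ** 2 + 0.05 * rng.standard_normal()

L_FF, L_FB, MU, N_TRAIN = 3, 2, 0.005, 5_000


def features(y, n, past_decisions):
    ff = np.array([y[n - k] for k in range(L_FF)])     # linear feed-forward taps
    iu = np.triu_indices(L_FF)
    volterra = np.outer(ff, ff)[iu]                    # 2nd-order product terms
    return np.concatenate(([1.0], ff, volterra, past_decisions))


dim = 1 + L_FF + L_FF * (L_FF + 1) // 2 + L_FB
w = np.zeros(dim)
decisions = np.zeros(N)
errors = 0
for n in range(L_FF, N):
    phi = features(y, n, decisions[n - L_FB:n][::-1])  # most recent decision first
    z = w @ phi
    d_hat = 1.0 if z >= 0 else -1.0
    ref = x[n] if n < N_TRAIN else d_hat               # training, then decision-directed
    w += MU * (ref - z) * phi                          # LMS update
    decisions[n] = d_hat
    if n >= N_TRAIN:
        errors += d_hat != x[n]

print(f"decision-directed BER: {errors / (N - N_TRAIN):.4e}")
```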

  19. Measured and simulated performance of Compton-suppressed TIGRESS HPGe clover detectors

    Science.gov (United States)

    Schumaker, M. A.; Hackman, G.; Pearson, C. J.; Svensson, C. E.; Andreoiu, C.; Andreyev, A.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Boston, A. J.; Chakrawarthy, R. S.; Churchman, R.; Drake, T. E.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hyland, B.; Jones, B.; Maharaj, R.; Morton, A. C.; Phillips, A. A.; Sarazin, F.; Scraggs, H. C.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Watters, L. M.

    2007-01-01

    Tests of the performance of a 32-fold segmented HPGe clover detector coupled to a 20-fold segmented Compton-suppression shield, which form a prototype element of the TRIUMF-ISAC Gamma-Ray Escape-Suppressed Spectrometer (TIGRESS), have been made. Peak-to-total ratios and relative efficiencies have been measured for a variety of γ-ray energies. These measurements were used to validate a GEANT4 simulation of the TIGRESS detectors, which was then used to create a simulation of the full 12-detector array. Predictions of the expected performance of TIGRESS are presented. These predictions indicate that TIGRESS will be capable, for single 1 MeV γ rays, of absolute detection efficiencies of 17% and 9.4%, and peak-to-total ratios of 54% and 61% for the "high-efficiency" and "optimized peak-to-total" configurations of the array, respectively.
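
    GEANT4 and the TIGRESS geometry are of course not reproduced here. The snippet below only shows how the quoted figures of merit, peak-to-total ratio and absolute photopeak efficiency, are typically extracted from a list of simulated (or measured) energy deposits; the toy "detector response" that generates the deposits is invented for illustration, and the numbers are not TIGRESS values.

```python
"""Extracting a peak-to-total ratio and absolute efficiency from energy deposits.

The deposit list is generated by a toy response model (full-energy events plus a flat
Compton continuum plus misses); in practice it would come from a GEANT4 simulation or
from measured spectra.  All numbers are illustrative.
"""
import numpy as np

rng = np.random.default_rng(2)

E_GAMMA = 1000.0            # keV, emitted gamma-ray energy
N_EMITTED = 1_000_000       # gamma rays thrown at the array (toy value)

# Toy response: 15% full-energy deposits, 20% partial (Compton) deposits, rest escape.
kind = rng.choice(["peak", "compton", "miss"], size=N_EMITTED, p=[0.15, 0.20, 0.65])
deposits = np.where(
    kind == "peak", rng.normal(E_GAMMA, 1.5, N_EMITTED),            # sigma = 1.5 keV
    np.where(kind == "compton", rng.uniform(50.0, 800.0, N_EMITTED), 0.0),
)
deposits = deposits[deposits > 20.0]          # detection threshold, keV

in_peak = np.abs(deposits - E_GAMMA) < 4.0    # photopeak gate
peak_to_total = in_peak.sum() / deposits.size
abs_efficiency = in_peak.sum() / N_EMITTED

print(f"peak-to-total ratio : {peak_to_total:.3f}")
print(f"absolute efficiency : {abs_efficiency:.3f}")
```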

  20. Numerical Simulation of Thermal Performance of Glass-Fibre-Reinforced Polymer

    Science.gov (United States)

    Zhao, Yuchao; Jiang, Xu; Zhang, Qilin; Wang, Qi

    2017-10-01

    Glass-Fibre-Reinforced Polymer (GFRP), a developing construction material, has seen rapidly increasing application in civil engineering, and especially in bridge engineering, in recent years, so far mainly as decorative material and reinforcing bars. Compared with traditional construction materials, this kind of composite has clear advantages such as high strength, low density, resistance to corrosion and ease of processing. Different processing methods, such as pultrusion and resin transfer moulding (RTM), form members of the desired shape directly from the raw material; meanwhile GFRP, as a polymer composite, possesses several particular physical and mechanical properties, of which its thermal behaviour is one. The polymer matrix behaves differently after heating, giving the composite a potential for hot processing but also poor fire resistance. This paper focuses on the thermal performance of GFRP panels, and corresponding studies are conducted. First, a dynamic thermomechanical analysis (DMA) experiment is conducted to obtain the glass transition temperature (Tg) of the GFRP under study, and the curve of bending elastic modulus versus temperature is calculated from the experimental data. The values of other thermal parameters are then estimated from the DMA experiment and the literature, and numerical simulations are conducted for two conditions: (1) the heat transfer process in a GFRP panel heated directly on the surface to above Tg, and hot processing under this temperature field; (2) the physical and mechanical performance of a GFRP panel under fire conditions. Condition (1) is mainly used to guide the development of high-temperature processing equipment, while condition (2) indicates that GFRP's performance under fire is unsatisfactory, so measures must be taken when it is adopted. Since composite materials' properties differ from each other

  1. AUTOMATION OF CONTROL OF THE BUSINESS PROCESS OF PUBLISHING SCIENTIFIC JOURNALS

    Directory of Open Access Journals (Sweden)

    O. Yu. Sakaliuk

    2016-09-01

    We consider the automation of the business process of publishing scientific journals. The paper describes the publishing activities of the Odessa National Academy of Food Technology (ONAFT) and the automation of its business processes. A complex of business process models for publishing scientific journals is developed. The organizational structure of ONAFT's Coordinating Centre of Scientific Journals' Publishing is analyzed and modelled. Simulation of the process models is conducted in the eEPC and BPMN business process notations. Database design, creation of the file structure and development of the AIS interface are also carried out, and interaction with a webcam is implemented. Based on the justification of the feasibility of the software development and on the performance assessment summarized in a radar (petal) chart, it is safe to say that the automated mode is much more efficient than the manual mode. The developed software will accelerate the development of ONAFT's scientific periodicals, which in turn will improve the academy's ratings at the global level and enhance its image and credibility.

  2. SLC injector simulation and tuning for high charge transport

    International Nuclear Information System (INIS)

    Yeremian, A.D.; Miller, R.H.; Clendenin, J.E.; Early, R.A.; Ross, M.C.; Turner, J.L.; Wang, J.W.

    1992-08-01

    We have simulated the SLC injector from the thermionic gun through the first accelerating section and used the resulting parameters to tune the injector for optimum performance and high charge transport. Simulations are conducted using PARMELA, a three-dimensional ray-trace code with a two-dimensional space-charge model. The magnetic field profile due to the existing magnetic optics is calculated using POISSON, while SUPERFISH is used to calculate the space harmonics of the various bunchers and the accelerator cavities. The initial beam conditions in the PARMELA code are derived from the EGUN model of the gun. The resulting injector parameters from the PARMELA simulation are used to prescribe experimental settings of the injector components. The experimental results are in agreement with the results of the integrated injector model

  3. Simulation studies for a high resolution time projection chamber at the international linear collider

    Energy Technology Data Exchange (ETDEWEB)

    Muennich, A.

    2007-03-26

    The International Linear Collider (ILC) is planned to be the next large accelerator. The ILC will be able to perform high-precision measurements only possible in the clean environment of electron-positron collisions. In order to reach this high accuracy, the requirements on detector performance are challenging. Several detector concepts are currently under study. Understanding the detector and its performance will be crucial for extracting the desired physics results from the data. To optimise the detector design, simulation studies are needed. Simulation packages like GEANT4 make it possible to model the detector geometry and simulate the energy deposit in the different materials. However, the detector response, taking into account the transport of the produced charge to the readout devices and the effects of the readout electronics, cannot be described in detail. These processes in the detector change the measured position of the energy deposit relative to its point of origin. The determination of this detector response is the task of detailed simulation studies, which have to be carried out for each subdetector. A high-resolution Time Projection Chamber (TPC) with gas amplification based on micro-pattern gas detectors is one of the options for the main tracking system at the ILC. In the present thesis a detailed simulation tool to study the performance of a TPC was developed. Its goal is to find the optimal settings to reach an excellent momentum and spatial resolution. After an introduction to the present status of particle physics and the ILC project, with special focus on the TPC as central tracker, the simulation framework is presented. The basic simulation methods and implemented processes are introduced. Within this stand-alone simulation framework, each electron produced by primary ionisation is transferred through the gas volume and amplified using Gas Electron Multipliers (GEMs). The output format of the simulation is identical to the raw data from a

  4. Living high-training low: effect on erythropoiesis and aerobic performance in highly-trained swimmers

    DEFF Research Database (Denmark)

    Robach, P.; Schmitt, L.; Brugniaux, J.V.

    2006-01-01

    The "living high-training low" model (LHTL), i.e., training in normoxia but sleeping/living in hypoxia, is designed to improve the athletes' performance. However, LHTL efficacy still remains controversial and also little is known about the duration of its potential benefit. This study tested whether LHTL enhances aerobic performance in athletes, and if any positive effect may last for up to 2 weeks after the LHTL intervention. Eighteen swimmers trained for 13 days at 1,200 m while sleeping/living at 1,200 m in ambient air (control, n=9) or in hypoxic rooms (LHTL, n=9; 5 days at a simulated altitude of 2,500 m followed by 8 days at a simulated altitude of 3,000 m, 16 h day(-1)). Measures were done before, 1-2 days after (POST-1) and 2 weeks after the intervention (POST-15). Aerobic performance was assessed from two swimming trials, exploring VO(2max) and endurance performance (2,000-m time trial), respectively...

  5. Scientific Services on the Cloud

    Science.gov (United States)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications for parallel and distributed computation. To this day, scientific applications remain some of the most compute-intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos Roadrunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high-performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  6. Dose rate laser simulation tests adequacy: Shadowing and high intensity effects analysis

    International Nuclear Information System (INIS)

    Nikiforov, A.Y.; Skorobogatov, P.K.

    1996-01-01

    The adequacy of laser-based simulation of flash X-ray effects in microcircuits may be corrupted mainly by laser radiation shadowing by the metallization and by non-linear absorption in the high-intensity range. A joint numerical solution of the optical equations and the fundamental system of equations in a two-dimensional approximation was performed to establish the range of applicability of laser simulation. As a result, the correspondence between equivalent dose rate and laser intensity was established, taking into account the shadowing as well as the high-intensity effects. The simulation adequacy was verified in the range up to 4·10^11 rad(Si)/s with comparative laser tests of a specially designed test structure.

  7. A high-performance channel engineered charge-plasma-based MOSFET with high-κ spacer

    Science.gov (United States)

    Shan, Chan; Wang, Ying; Luo, Xin; Bao, Meng-tian; Yu, Cheng-hao; Cao, Fei

    2017-12-01

    In this paper, the performance of graded channel double-gate MOSFET (GC-DGFET) that utilizes the charge-plasma concept and a high-κ spacer is investigated through 2-D device simulations. The results demonstrate that GC-DGFET with high-κ spacer can effectively improve the ON-state driving current (ION) and reduce the OFF-leakage current (IOFF). We find that reduction of the initial energy barrier between the source and channel is the origin of this ION enhancement. The reason for the IOFF reduction is identified to be the extension of the effective channel length owing to the fringing field via high-κ spacers. Consequently, these devices offer enhanced performance by reducing the total gate-to-gate capacitance (Cgg) and decreasing the intrinsic delay (τ).

  8. Framework Application for Core Edge Transport Simulation (FACETS)

    Energy Technology Data Exchange (ETDEWEB)

    Krasheninnikov, Sergei; Pigarov, Alexander

    2011-10-15

    The FACETS (Framework Application for Core-Edge Transport Simulations) project of the Scientific Discovery through Advanced Computing (SciDAC) Program was aimed at providing high-fidelity whole-tokamak modeling for the U.S. magnetic fusion energy program and ITER by coupling separate components for the core region, edge region, and wall, with realistic plasma particle and power sources and turbulent transport simulation. The project also aimed at developing advanced numerical algorithms, efficient implicit coupling methods, and software tools utilizing the leadership-class computing facilities under Advanced Scientific Computing Research (ASCR). The FACETS project was conducted by a multi-disciplinary, multi-institutional team; the lead PI was J.R. Cary (Tech-X Corp.). In the FACETS project, the Applied Plasma Theory Group at the MAE Department of UCSD developed the Wall and Plasma-Surface Interaction (WALLPSI) module, performed its validation against experimental data, and integrated it into the developed framework. WALLPSI is a one-dimensional, coarse-grained, reaction/advection/diffusion code applied to each material boundary cell in the common modeling domain for a tokamak. It incorporates an advanced model for plasma particle transport and retention in the solid matter of plasma-facing components, simulation of plasma heat power load handling, calculation of erosion/deposition, and simulation of synergistic effects in strong plasma-wall coupling.
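
    WALLPSI itself is not publicly sketched here; the NumPy fragment below only illustrates the generic numerical core named above, an explicit 1D advection/diffusion/reaction update for a particle concentration in a wall cell, with an incident plasma flux imposed as the surface boundary condition and a sink at the back face. All coefficients are invented placeholders.

```python
"""Generic explicit 1D advection-diffusion-reaction step (illustrative, not WALLPSI).

dc/dt = D d2c/dx2 - v dc/dx - k c,   with an incident-flux boundary at x = 0.
"""
import numpy as np

NX, L = 200, 1.0e-3                  # grid cells and wall thickness [m]
D, V, K = 1.0e-9, 1.0e-6, 1.0e-2     # diffusivity, drift velocity, loss rate (assumed)
FLUX = 1.0e20                        # incident particle flux at the surface [m^-2 s^-1]

x = np.linspace(0.0, L, NX)
dx = x[1] - x[0]
dt = 0.2 * min(dx**2 / D, dx / V, 1.0 / K)     # explicit stability limit

c = np.zeros(NX)
for _ in range(50_000):
    diff = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    adv = -V * (c[2:] - c[:-2]) / (2.0 * dx)   # central advection (cell Peclet << 2 here)
    react = -K * c[1:-1]
    c[1:-1] += dt * (diff + adv + react)
    c[0] = c[1] + FLUX * dx / D                # -D dc/dx = FLUX at the plasma-facing surface
    c[-1] = 0.0                                # perfect sink at the back face

print(f"surface concentration ~ {c[0]:.3e} m^-3, wall inventory ~ {c.sum() * dx:.3e} m^-2")
```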

  9. Internal criteria for scientific choice: an evaluation of research in high-energy physics using electron accelerators

    International Nuclear Information System (INIS)

    Martin, B.R.; Irvine, J.

    1981-01-01

    The economic situation of scientific research is now very different from what it was in the early 1960s when Dr. Alvin Weinberg opened the debate on the criteria for scientific choice. Annual rates of growth of 10 per cent. or more in the budget for science were then common in most Western countries, while today scientists face the prospect of no growth at all or even a decline. Some progress has also been made in developing techniques for the evaluation of the scientific performance of research groups. These two facts make it interesting to reconsider the question of scientific choice. (author)

  10. First experiences of high-fidelity simulation training in junior nursing students in Korea.

    Science.gov (United States)

    Lee, Suk Jeong; Kim, Sang Suk; Park, Young-Mi

    2015-07-01

    This study was conducted to explore first experiences of high-fidelity simulation training in Korean nursing students, in order to develop and establish more effective guidelines for future simulation training in Korea. Thirty-three junior nursing students participated in high-fidelity simulation training for the first time. Using both qualitative and quantitative methods, data were collected from reflective journals and questionnaires of simulation effectiveness after simulation training. Descriptive statistics were used to analyze simulation effectiveness and content analysis was performed with the reflective journal data. Five dimensions and 31 domains, both positive and negative experiences, emerged from qualitative analysis: (i) machine-human interaction in a safe environment; (ii) perceived learning capability; (iii) observational learning; (iv) reconciling practice with theory; and (v) follow-up debriefing effect. More than 70% of students scored high on increased ability to identify changes in the patient's condition, critical thinking, decision-making, effectiveness of peer observation, and debriefing in effectiveness of simulation. This study reported both positive and negative experiences of simulation. The results of this study could be used to set the level of task difficulty in simulation. Future simulation programs can be designed by reinforcing the positive experiences and modifying the negative results. © 2014 The Authors. Japan Journal of Nursing Science © 2014 Japan Academy of Nursing Science.

  11. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
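
    The proxy applications in the paper target CPU, GPU and Xeon Phi hardware and are not reproduced here. The Python sketch below only mimics the two algorithmic ideas named above: MOC tracks of uneven length are handed out dynamically to a pool of workers (task-based load balancing), and the transport sweep along each segment is vectorized over energy groups using the standard characteristic update ψ_out = ψ_in·exp(-Σ_t·s) + (q/Σ_t)(1 - exp(-Σ_t·s)). Cross sections, sources and track lengths are random stand-ins, and in CPython the thread pool illustrates the scheduling pattern rather than delivering real speedup.

```python
"""Sketch of task-based track parallelism with a vectorized MOC inner loop.

Each 'track' is a sequence of segment lengths; for every segment the angular flux in
all energy groups is attenuated at once (vectorized over groups).  Tracks are handed
out dynamically to a thread pool.  All data are random stand-ins, not a reactor model.
"""
from concurrent.futures import ThreadPoolExecutor
import numpy as np

rng = np.random.default_rng(3)

N_GROUPS, N_TRACKS = 64, 2_000
SIGMA_T = rng.uniform(0.1, 1.0, N_GROUPS)          # total cross section per group
SOURCE = rng.uniform(0.5, 1.5, N_GROUPS)           # flat isotropic source per group

# Variable-length tracks -> uneven work, which is why dynamic scheduling helps.
tracks = [rng.uniform(0.01, 0.5, rng.integers(50, 500)) for _ in range(N_TRACKS)]


def sweep_track(segments):
    """Transport sweep along one track; the inner update is vectorized over groups."""
    psi = np.zeros(N_GROUPS)                        # vacuum incoming angular flux
    scalar_flux = np.zeros(N_GROUPS)
    for s in segments:
        att = np.exp(-SIGMA_T * s)
        psi_out = psi * att + (SOURCE / SIGMA_T) * (1.0 - att)
        scalar_flux += 0.5 * (psi + psi_out) * s    # crude track-averaged contribution
        psi = psi_out
    return scalar_flux


with ThreadPoolExecutor(max_workers=8) as pool:     # tracks dispatched as independent tasks
    total_flux = sum(pool.map(sweep_track, tracks))

print("group-0 accumulated flux:", total_flux[0])
```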

  12. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  13. Comparison of driving simulator performance and neuropsychological testing in narcolepsy.

    Science.gov (United States)

    Kotterba, Sylvia; Mueller, Nicole; Leidag, Markus; Widdig, Walter; Rasche, Kurt; Malin, Jean-Pierre; Schultze-Werninghaus, Gerhard; Orth, Maritta

    2004-09-01

    Daytime sleepiness and cataplexy can increase automobile accident rates in narcolepsy. Several countries have produced guidelines for issuing a driving licence. The aim of the study was to compare driving simulator performance and neuropsychological test results in narcolepsy in order to evaluate their predictive value regarding driving ability. Thirteen patients with narcolepsy (age: 41.5+/-12.9 years) and 10 healthy controls (age: 55.1+/-7.8 years) were investigated. Vigilance, alertness and divided attention were assessed by computer-assisted neuropsychological testing. In a driving simulator, patients and controls had to drive on a highway for 60 min (mean speed of 100 km/h); different weather and daytime conditions and obstacles were presented. Epworth Sleepiness Scale scores were significantly raised in the patients (narcolepsy patients: 16.7+/-5.1, controls: 6.6+/-3.6). Divided attention (56.9+/-25.4) and vigilance (58.7+/-26.8) scores were in the normal range, although with high inter-individual differences. There was no correlation between driving performance and neuropsychological test results or ESS score. Neuropsychological test results did not change significantly at follow-up. The difficulties encountered by narcolepsy patients in remaining alert may account for sleep-related motor vehicle accidents. Driving simulator investigations are more closely related to real traffic situations than isolated neuropsychological tests. At the present time the driving simulator seems to be a useful instrument for judging driving ability, especially in cases with ambiguous neuropsychological results.

  14. Design and Simulation of a High Performance Emergency Data Delivery Protocol

    DEFF Research Database (Denmark)

    Swartz, Kevin; Wang, Di

    2007-01-01

    The purpose of this project was to design a high-performance data delivery protocol, capable of delivering data as quickly as possible to a base station or target node. The protocol was designed particularly for wireless network topologies, but could also be applied to a wired system. An emergency is defined as any event with high priority that needs to be handled immediately. It is assumed that this emergency event is important enough that energy efficiency is not a factor in the protocol. The desired effect is delivery to the base station as fast as possible, for rapid event handling.

  15. Scientific-creative thinking and academic achievement

    Directory of Open Access Journals (Sweden)

    Rosario Bermejo

    2014-07-01

    The aim of this work is to study the relationship between the scientific-creative thinking construct and academic performance in a sample of adolescents. In addition, the reliability of the scientific-creative thinking instrument is tested. The sample was composed of 98 students (aged 12-16 years) attending a secondary school in the Murcia Region (Spain). The instruments used were: (a) the Scientific-Creative Thinking Test designed by Hu and Adey (2002), adapted to the Spanish culture by the High Abilities research team at Murcia University; the test is composed of 7 tasks based on the Scientific Creative Structure Model and assesses the dimensions of fluency, flexibility and originality; (b) the General and Factorial Intelligence Test (IGF/5r; Yuste, 2002), which assesses general intelligence and logical, verbal, numerical and spatial reasoning; (c) students' academic achievement by domain (scientific-technological, social-linguistic and artistic). The results showed positive and statistically significant correlations between the scientific-creative tasks and academic achievement in the different domains.

  16. DNS/LES Simulations of Separated Flows at High Reynolds Numbers

    Science.gov (United States)

    Balakumar, P.

    2015-01-01

    Direct numerical simulations (DNS) and large-eddy simulations (LES) of flow through a periodic channel with a constriction are performed using the dynamic Smagorinsky model at two Reynolds numbers, 2800 and 10595. The LES equations are solved using higher-order compact schemes. DNS is performed for the lower Reynolds number case using a fine grid, and the data are used to validate the LES results obtained with a coarse and a medium-size grid. LES is also performed for the higher Reynolds number case using a coarse and a medium-size grid. The results are compared with an existing reference data set. The DNS and LES results agree well with the reference data. Reynolds stresses, sub-grid eddy viscosity, and the budgets for the turbulent kinetic energy are also presented. It is found that the turbulent fluctuations in the normal and spanwise directions have the same magnitude. The turbulent kinetic energy budget shows that the production peaks near the separation-point region and that the production-to-dissipation ratio is very high, on the order of five, in this region. It is also observed that the production is balanced by the advection, diffusion, and dissipation in the shear-layer region. The dominant term is the turbulent diffusion, which is about two times the molecular dissipation.

  17. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem - the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable high-performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open-source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high-performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines.
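
    Parallel R itself is an R package and its API is not reproduced here. As a language-neutral illustration of the same idea, scattering an embarrassingly parallel statistical routine across workers, the sketch below distributes a bootstrap of the mean over a process pool; the data and resample counts are arbitrary.

```python
"""Illustration of embarrassingly parallel statistical analysis (analogy to Parallel R).

A bootstrap confidence interval for the mean is computed by scattering resampling
work across a pool of worker processes.
"""
from concurrent.futures import ProcessPoolExecutor
import numpy as np


def bootstrap_chunk(args):
    """Compute n_resamples bootstrap means of data, using an independent seed."""
    data, n_resamples, seed = args
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, data.size, size=(n_resamples, data.size))
    return data[idx].mean(axis=1)


def parallel_bootstrap(data, n_resamples=20_000, workers=4):
    per_worker = n_resamples // workers
    jobs = [(data, per_worker, seed) for seed in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(bootstrap_chunk, jobs))
    return np.concatenate(parts)


if __name__ == "__main__":          # guard required for process pools on spawn-based OSes
    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=2_000)
    means = parallel_bootstrap(data)
    lo, hi = np.percentile(means, [2.5, 97.5])
    print(f"sample mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```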

  18. Numerical simulation of realistic high-temperature superconductors

    International Nuclear Information System (INIS)

    1997-01-01

    One of the main obstacles in the development of practical high-temperature superconducting (HTS) materials is dissipation, caused by the motion of magnetic flux quanta called vortices. Numerical simulations provide a promising new approach for studying these vortices. By exploiting the extraordinary memory and speed of massively parallel computers, researchers can obtain the extremely fine temporal and spatial resolution needed to model complex vortex behavior. The results may help identify new mechanisms to increase the current-carrying capabilities and to predict the performance characteristics of HTS materials intended for industrial applications.

  19. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex

  20. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    International Nuclear Information System (INIS)

    Brown, D.L.

    2009-01-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  1. Toward high-efficiency and detailed Monte Carlo simulation study of the granular flow spallation target

    Science.gov (United States)

    Cai, Han-Jie; Zhang, Zhi-Lei; Fu, Fen; Li, Jian-Yang; Zhang, Xun-Chao; Zhang, Ya-Ling; Yan, Xue-Song; Lin, Ping; Xv, Jian-Ya; Yang, Lei

    2018-02-01

    The dense granular flow spallation target is a new target concept chosen for the Accelerator-Driven Subcritical (ADS) project in China. For the R&D of this target concept, a dedicated Monte Carlo (MC) program named GMT was developed to perform simulation studies of the beam-target interaction. Owing to the complexity of the target geometry, the computational cost of the MC simulation of particle tracks is very high. Thus, improvement of computational efficiency will be essential for detailed MC simulation studies of the dense granular target. Here we present the special design of the GMT program and its high-efficiency performance. In addition, the speedup potential of the GPU-accelerated spallation models is discussed.

  2. The Roles of Sparse Direct Methods in Large-scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-06-27

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence for iterative methods. We have deployed our direct-methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.
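
    The TOPS solvers (e.g. SuperLU_DIST) are full-scale distributed-memory libraries. The small SciPy fragment below only illustrates the basic pattern on a toy discretized PDE: factor the sparse matrix once with a direct solver (SciPy's bundled SuperLU interface, splu) and reuse cheap triangular solves, with an unpreconditioned Krylov solve shown alongside for contrast. The matrix is a standard 2D Laplacian, not one of the application systems mentioned above.

```python
"""Sparse direct factorization vs. an unpreconditioned iterative solve (small demo).

Builds a 2D Poisson matrix via Kronecker products, counts the iterations of
unpreconditioned CG, and contrasts them with a single sparse LU factorization
followed by a cheap triangular solve.
"""
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100                                             # grid points per side
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()         # standard 5-point 2D Laplacian
b = np.ones(A.shape[0])

# Unpreconditioned CG: count iterations to the default tolerance.
iters = []
x_cg, info = spla.cg(A, b, maxiter=2000, callback=lambda xk: iters.append(1))

# Sparse direct: factor once (SuperLU), then solve via triangular substitutions.
lu = spla.splu(A)
x_lu = lu.solve(b)

print(f"CG : info={info}, iterations={len(iters)}, "
      f"residual={np.linalg.norm(A @ x_cg - b):.2e}")
print(f"LU : residual={np.linalg.norm(A @ x_lu - b):.2e}")
```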

  3. The Roles of Sparse Direct Methods in Large-scale Simulations

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Gao, Weiguo; Husbands, Parry J.R.; Yang, Chao; Ng, Esmond G.

    2005-01-01

    Sparse systems of linear equations and eigen-equations arise at the heart of many large-scale, vital simulations in DOE. Examples include the Accelerator Science and Technology SciDAC (Omega3P code, electromagnetic problem) and the Center for Extended Magnetohydrodynamic Modeling SciDAC (NIMROD and M3D-C1 codes, fusion plasma simulation). The Terascale Optimal PDE Simulations (TOPS) project is providing high-performance sparse direct solvers, which have had significant impacts on these applications. Over the past several years, we have been working closely with the other SciDAC teams to solve their large, sparse matrix problems arising from discretization of the partial differential equations. Most of these systems are very ill-conditioned, resulting in extremely poor convergence for iterative methods. We have deployed our direct-methods techniques in these applications, which achieved significant scientific results as well as performance gains. These successes were made possible through the SciDAC model of computer scientists and application scientists working together to take full advantage of terascale computing systems and new algorithms research.

  4. Performance Modeling and Optimization of a High Energy CollidingBeam Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

    An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems errors of 29% are higher, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.
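
    The paper's models are calibrated from micro-benchmarks on specific SMP and 3D-torus machines and include contention terms fitted to those systems. The fragment below only writes down the generic shape of such a model, a latency/bandwidth cost per pairwise exchange multiplied by a crude contention factor, and evaluates it for a few process counts; the constants are placeholders, not the measured machine parameters from the paper.

```python
"""Generic latency/bandwidth model for all-to-all personalized communication (AAPC).

T(P, m) ~ (P - 1) * (alpha + m * beta) * contention(P)
with alpha the per-message latency, beta the per-byte time, and a contention factor
that grows when many messages share links or node adapters.  Constants are placeholders.
"""

ALPHA = 5.0e-6        # per-message latency [s] (placeholder)
BETA = 1.0 / 1.0e9    # per-byte transfer time [s/B] (placeholder, ~1 GB/s links)


def aapc_time(p, msg_bytes, contention_exponent=0.0):
    """Predicted AAPC time for p processes exchanging msg_bytes with every peer.

    contention_exponent = 0 models an ideal non-blocking network; a positive value
    crudely models node-adapter or torus-link contention growing with p.
    """
    contention = p ** contention_exponent
    return (p - 1) * (ALPHA + msg_bytes * BETA) * contention


if __name__ == "__main__":
    for p in (64, 256, 1024):
        ideal = aapc_time(p, 64 * 1024)
        congested = aapc_time(p, 64 * 1024, contention_exponent=0.3)
        print(f"P={p:5d}: ideal {ideal * 1e3:7.2f} ms, "
              f"with contention {congested * 1e3:7.2f} ms")
```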

  5. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    International Nuclear Information System (INIS)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-01-01

    An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems errors of 29% are higher, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  6. High performance parallel computing of flows in complex geometries: II. Applications

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F; Poinsot, T

    2009-01-01

    Present regulations in terms of pollutant emissions, noise and economic constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than only isolated components. However, these aspects are still not well taken into account by the numerical approaches, nor well understood, whatever design stage is considered. The main challenge is essentially due to the computational requirements implied by such complex systems if they are to be simulated on supercomputers. This paper shows how these challenges can be addressed by using parallel computing platforms for distinct elements of more complex systems as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large-eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc). Some examples of the difficulties with grid generation and data analysis are also presented when dealing with these complex industrial applications.

  7. Space plasma simulation chamber

    International Nuclear Information System (INIS)

    1986-01-01

    Scientific results of experiments and tests of instruments performed with the Space Plasma Simulation Chamber and its facility are reviewed in the following six categories: 1. Tests of instruments on board rockets, satellites and balloons. 2. Plasma wave experiments. 3. Measurements of plasma particles. 4. Optical measurements. 5. Plasma production. 6. Space plasma simulations. This facility has been managed under the Laboratory Space Plasma Committee since 1969 and used by scientists in cooperative programs with universities and institutes all over the country. A list of publications is attached. (author)

  8. Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation

    Science.gov (United States)

    2016-11-01

    The goal of this work is to incorporate evaporation models into future simulations of turbulent jet sprays and to develop a predictive theory for comparison to laboratory measurements of turbulent diesel sprays.

  9. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had significant impact on the warfighter and have fundamentally changed the role of high-performance computing in SSA.

  10. Prospective randomized comparison of standard didactic lecture versus high-fidelity simulation for radiology resident contrast reaction management training.

    Science.gov (United States)

    Wang, Carolyn L; Schopp, Jennifer G; Petscavage, Jonelle M; Paladin, Angelisa M; Richardson, Michael L; Bush, William H

    2011-06-01

    The objective of our study was to assess whether high-fidelity simulation-based training is more effective than traditional didactic lecture to train radiology residents in the management of contrast reactions. This was a prospective study of 44 radiology residents randomized into a simulation group versus a lecture group. All residents attended a contrast reaction didactic lecture. Four months later, baseline knowledge was assessed with a written test, which we refer to as the "pretest." After the pretest, the 21 residents in the lecture group attended a repeat didactic lecture and the 23 residents in the simulation group underwent high-fidelity simulation-based training with five contrast reaction scenarios. Next, all residents took a second written test, which we refer to as the "posttest." Two months after the posttest, both groups took a third written test, which we refer to as the "delayed posttest," and underwent performance testing with a high-fidelity severe contrast reaction scenario graded on predefined critical actions. There was no statistically significant difference between the simulation and lecture group pretest, immediate posttest, or delayed posttest scores. The simulation group performed better than the lecture group on the severe contrast reaction simulation scenario (p = 0.001). The simulation group reported improved comfort in identifying and managing contrast reactions and administering medications after the simulation training (p ≤ 0.04) and was more comfortable than the control group (p = 0.03), which reported no change in comfort level after the repeat didactic lecture. When compared with didactic lecture, high-fidelity simulation-based training of contrast reaction management shows equal results on written test scores but improved performance during a high-fidelity severe contrast reaction simulation scenario.

  11. Network effects on scientific collaborations.

    Directory of Open Access Journals (Sweden)

    Shahadat Uddin

    BACKGROUND: The analysis of co-authorship networks aims at exploring the impact of network structure on the outcome of scientific collaborations and research publications. However, little is known about which network properties are associated with authors who have an increased number of joint publications and are highly cited. METHODOLOGY/PRINCIPAL FINDINGS: Measures of social network analysis (SNA), for example network centrality and tie strength, have been utilized extensively in the current co-authorship literature to explore different behavioural patterns of co-authorship networks. Using three SNA measures (i.e., degree centrality, closeness centrality and betweenness centrality), we explore scientific collaboration networks to understand factors influencing performance (i.e., citation count) and formation (tie strength) between authors of such networks. A citation count is the number of times an article is cited by other articles. We use a co-authorship dataset of the research field of 'steel structure' for the years 2005 to 2009. To measure the strength of scientific collaboration between two authors, we consider the number of articles co-authored by them. In this study, we examine how the citation count of a scientific publication is influenced by different centrality measures of its co-author(s) in a co-authorship network. We further analyze the impact of the network positions of authors on the strength of their scientific collaborations. We use both correlation and regression methods for data analysis leading to statistical validation. We identify that the citation count of a research article is positively correlated with the degree centrality and betweenness centrality values of its co-author(s). Also, we reveal that the degree centrality and betweenness centrality values of authors in a co-authorship network are positively correlated with the strength of their scientific collaborations. CONCLUSIONS/SIGNIFICANCE: Authors' network positions in co
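
    As a toy illustration of the three SNA measures named above, the sketch below computes degree, closeness and betweenness centrality with networkx on an invented co-authorship graph whose edge weights stand for tie strength; it is not the study's 'steel structure' dataset or analysis pipeline.

```python
# A toy illustration (not the study's dataset): the three centrality measures
# named in the abstract, computed with networkx on an invented co-authorship
# graph. Edge weights stand for tie strength (number of co-authored articles).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 3), ("A", "C", 1), ("B", "C", 2), ("C", "D", 1), ("D", "E", 4),
])

degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

for author in G.nodes:
    print(f"{author}: degree={degree[author]:.2f} "
          f"closeness={closeness[author]:.2f} betweenness={betweenness[author]:.2f}")
```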

  12. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, a full 3D device simulation tool, the quasi-3D simulation method gives results for the potential and current distributions of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, while the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, placing no demands on high-end computing hardware, and being easy to operate. (semiconductor integrated circuits)

  13. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, a full 3D device simulation tool, the quasi-3D simulation method gives results for the potential and current distributions of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, while the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, placing no demands on high-end computing hardware, and being easy to operate. (semiconductor integrated circuits)

  14. ATES/heat pump simulations performed with ATESSS code

    Science.gov (United States)

    Vail, L. W.

    1989-01-01

    Modifications to the Aquifer Thermal Energy Storage System Simulator (ATESSS) allow simulation of aquifer thermal energy storage (ATES)/heat pump systems. The heat pump algorithm requires a coefficient of performance (COP) relationship of the form COP = COP_base + α (T_ref − T_base). Initial applications of the modified ATESSS code to synthetic building load data for two sizes of buildings in two U.S. cities showed an insignificant performance advantage of a series ATES heat pump system over a conventional groundwater heat pump system. The addition of algorithms for a cooling tower and solar array improved performance slightly. Small values of α in the COP relationship are the principal reason for the limited improvement in system performance. Future studies at Pacific Northwest Laboratory (PNL) are planned to investigate methods to increase system performance using alternative system configurations and operations scenarios.
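
    The COP relationship quoted in the record is simple enough to encode directly; in the sketch below the values of COP_base, alpha and T_base are invented for illustration and are not taken from the ATESSS study.

```python
# A minimal sketch of the quoted COP relationship,
# COP = COP_base + alpha * (T_ref - T_base); all numeric values below are
# illustrative placeholders, not parameters from the ATESSS study.

def heat_pump_cop(t_ref, cop_base=3.0, alpha=0.05, t_base=10.0):
    """Linear COP model: COP rises or falls with the reference temperature."""
    return cop_base + alpha * (t_ref - t_base)

for t in (5.0, 10.0, 20.0):
    print(f"T_ref = {t:4.1f} degC -> COP = {heat_pump_cop(t):.2f}")
```

    With a small alpha, as the abstract notes, the COP barely responds to the warmer source water an ATES system provides, which is why the simulated advantage over a conventional groundwater heat pump was insignificant.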

  15. Efficient Use of Distributed Systems for Scientific Applications

    Science.gov (United States)

    Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques

    2000-01-01

    Distributed computing has been regarded as the future of high performance computing. Nationwide high-speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with element counts ranging from 11,451 for the Barth4 mesh to 30,269 for the Barth5 mesh. Future work with PART entails using the tool with an integrated application requiring
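
    PART itself is not reproduced in the record, but its core idea, partitioning by simulated annealing with a cost that reflects heterogeneous processor speeds as well as the edge cut, can be sketched as follows; the cost function, cooling schedule and all inputs are simplifying assumptions made for illustration.

```python
# A simplified sketch of the idea behind PART (not the tool itself):
# simulated-annealing partitioning of an element graph onto processors with
# heterogeneous speeds. Cost = load imbalance (work scaled by speed) + edge cut.
import math
import random

def cost(assign, edges, weights, speeds, cut_penalty=0.5):
    load = [0.0] * len(speeds)
    for elem, part in enumerate(assign):
        load[part] += weights[elem] / speeds[part]   # slower CPU -> larger effective load
    cut = sum(1 for u, v in edges if assign[u] != assign[v])
    return (max(load) - min(load)) + cut_penalty * cut

def anneal(edges, weights, speeds, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    n, p = len(weights), len(speeds)
    assign = [rng.randrange(p) for _ in range(n)]
    current = cost(assign, edges, weights, speeds)
    best, best_cost = list(assign), current
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9          # linear cooling
        elem, new_part = rng.randrange(n), rng.randrange(p)
        old_part = assign[elem]
        if new_part == old_part:
            continue
        assign[elem] = new_part
        candidate = cost(assign, edges, weights, speeds)
        if candidate <= current or rng.random() < math.exp((current - candidate) / temp):
            current = candidate                        # accept the move
            if candidate < best_cost:
                best, best_cost = list(assign), candidate
        else:
            assign[elem] = old_part                    # reject the move
    return best, best_cost

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]       # element adjacency (toy mesh)
weights = [1.0, 2.0, 1.0, 3.0]                         # per-element work
speeds = [1.0, 2.0]                                    # relative processor speeds
print(anneal(edges, weights, speeds))
```

    A tool like PART would additionally weight the cut terms by local and wide-area network performance and run the annealing itself in parallel, as the abstract describes.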

  16. High-Performance Tiled WMS and KML Web Server

    Science.gov (United States)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
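
    The module's source is not part of the record, but the central operation of any tiled WMS server, snapping a GetMap bounding box onto a fixed request grid, can be illustrated as below; the grid origin and tile size are assumed values, not the module's actual configuration.

```python
# An illustration of the central tiled-WMS operation, not the module's actual
# code: mapping a GetMap bounding box onto a fixed tile grid. The grid origin
# and tile size are assumptions.

TILE_DEG = 0.3515625                  # hypothetical tile size in degrees
WORLD_WEST, WORLD_NORTH = -180.0, 90.0

def tile_indices(bbox):
    """bbox = (west, south, east, north) in degrees; returns the (col, row)
    of the tile containing the request's north-west corner."""
    west, south, east, north = bbox
    col = int((west - WORLD_WEST) / TILE_DEG)
    row = int((WORLD_NORTH - north) / TILE_DEG)
    return col, row

print(tile_indices((-122.34, 37.26, -121.99, 37.61)))
```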

  17. Simulation and performance of brushless dc motor actuators

    Science.gov (United States)

    Gerba, A., Jr.

    1985-12-01

    The simulation model for a brushless DC motor and the associated commutation power conditioner transistor model are presented. The necessary conditions for maximum power output while operating at steady-state speed with a sinusoidally distributed air-gap flux are developed. Comparison of the simulated model with the measured performance of a typical motor is made both on time-response waveforms and on average performance characteristics. These preliminary results indicate good agreement. Plans for model improvement and for testing of a motor-driven positioning device for model evaluation are outlined.
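
    The record gives no model equations, so the following is only a generic first-order DC-machine sketch (back-EMF plus torque balance, integrated with forward Euler) to indicate the kind of model such a simulation builds on; it omits the commutation power-conditioner transistor model entirely, and every parameter value is invented.

```python
# A generic first-order DC-machine sketch, intended only to indicate the kind
# of model such a simulation builds on. It omits the commutation stage and all
# parameter values below are invented.

V, R = 24.0, 0.5          # supply voltage [V], winding resistance [ohm]
KT, KE = 0.05, 0.05       # torque constant [N*m/A], back-EMF constant [V*s/rad]
J, B = 1e-4, 1e-5         # rotor inertia, viscous friction
dt, omega = 1e-4, 0.0     # time step [s], initial speed [rad/s]

for _ in range(5000):                    # 0.5 s of simulated time
    current = (V - KE * omega) / R       # electrical equation (inductance neglected)
    torque = KT * current                # electromagnetic torque
    omega += dt * (torque - B * omega) / J

print(f"steady-state speed ~ {omega:.0f} rad/s")
```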

  18. SALTON SEA SCIENTIFIC DRILLING PROJECT: SCIENTIFIC PROGRAM.

    Science.gov (United States)

    Sass, J.H.; Elders, W.A.

    1986-01-01

    The Salton Sea Scientific Drilling Project was spudded on 24 October 1985 and reached a total depth of 10,564 ft (3.2 km) on 17 March 1986. There followed a period of logging, a flow test, and downhole scientific measurements. The scientific goals were integrated smoothly with the engineering and economic objectives of the program, and the ideal of 'science driving the drill' in continental scientific drilling projects was achieved in large measure. The principal scientific goals of the project were to study the physical and chemical processes involved in an active, magmatically driven hydrothermal system. To facilitate these studies, high priority was attached to four areas of sample and data collection, namely: (1) core and cuttings, (2) formation fluids, (3) geophysical logging, and (4) downhole physical measurements, particularly temperatures and pressures.

  19. Effectiveness of the use of question-driven levels of inquiry based instruction (QD-LOIBI) assisted visual multimedia supported teaching material on enhancing scientific explanation ability senior high school students

    Science.gov (United States)

    Suhandi, A.; Muslim; Samsudin, A.; Hermita, N.; Supriyatman

    2018-05-01

    In this study, the effectiveness of using Question-Driven Levels of Inquiry Based Instruction (QD-LOIBI) assisted by visual-multimedia-supported teaching materials to enhance senior high school students' scientific explanation ability has been studied. QD-LOIBI was designed following the five levels of inquiry proposed by Wenning. The visual multimedia used in the teaching materials included images (photos), virtual simulations and videos of phenomena. The QD-LOIBI-assisted teaching materials supported by visual multimedia were tried out on senior high school students at one high school in one district in West Java. A quasi-experimental method was used, with one experimental group (n = 31) and one control group (n = 32). The experimental group was given QD-LOIBI-assisted teaching material supported by visual multimedia, whereas the control group was given QD-LOIBI-assisted teaching materials not supported by visual multimedia. Data on scientific explanation ability in both groups were collected with an essay-form scientific explanation test on the kinetic theory of gases. The results showed that the number of students whose category and quality of scientific explanation improved was greater in the experimental class than in the control class. These results indicate that the use of multimedia-supported instructional materials developed for the implementation of QD-LOIBI can improve students' ability to provide explanations supported by scientific evidence gained from practicum activities and by applicable concepts, laws, principles or theories.

  20. Evaluation of the Thermo Scientific SureTect Listeria species assay. AOAC Performance Tested Method 071304.

    Science.gov (United States)

    Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron

    2014-01-01

    The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted under the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996, including amendment 1:2004, in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection (POD) statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference methods for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, and stainless steel, and for low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay
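
    The record refers to probability of detection (POD) statistical analysis. The sketch below shows only the basic arithmetic, a binomial POD estimate with a 95% Wilson score interval and dPOD as the difference between methods, under the assumption that this simple form suffices for illustration; it is not the AOAC validation workbook, and the counts are hypothetical.

```python
# Illustration only (not the AOAC validation workbook): POD = x/n with a 95%
# Wilson score interval, and dPOD as the difference between candidate and
# reference methods. The replicate counts are hypothetical.
import math

def pod_ci(x, n, z=1.96):
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, max(0.0, centre - half), min(1.0, centre + half)

candidate = pod_ci(18, 20)   # e.g. 18 of 20 spiked replicates detected
reference = pod_ci(12, 20)
print("candidate POD %.2f (95%% CI %.2f-%.2f)" % candidate)
print("reference POD %.2f (95%% CI %.2f-%.2f)" % reference)
print("dPOD %.2f" % (candidate[0] - reference[0]))
```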