WorldWideScience

Sample records for high-performance scientific simulation

  1. A high performance scientific cloud computing environment for materials simulations

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  2. A high performance scientific cloud computing environment for materials simulations

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
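
    Illustrative note: the record above describes automatic creation of virtual clusters on Amazon EC2. A minimal, hedged sketch of that general idea using the boto3 library follows; it is not the authors' SCC toolset, and the AMI ID, key pair and instance type are hypothetical placeholders (AWS credentials and region are assumed to be configured).

      # Minimal sketch: launch a small "virtual cluster" on EC2 with boto3.
      # Not the authors' SCC toolset; AMI ID, key name and type are placeholders.
      import boto3

      ec2 = boto3.resource("ec2", region_name="us-east-1")

      instances = ec2.create_instances(
          ImageId="ami-0123456789abcdef0",   # hypothetical scientific VM image
          InstanceType="c5.xlarge",
          MinCount=4, MaxCount=4,            # a 4-node virtual cluster
          KeyName="my-keypair",              # hypothetical key pair
      )

      # Wait until all nodes are running, then collect their addresses
      for inst in instances:
          inst.wait_until_running()
          inst.reload()
          print(inst.id, inst.private_ip_address)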

  3. High performance electromagnetic simulation tools

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm and a parallel planar generalized Yee algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution is obtained in toto. This powerful simulation tool has enabled research on the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.
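
    For illustration of the FDTD idea named in this record (not the parallel iPSC/860 codes themselves), a minimal one-dimensional Yee update in normalized units might look like the following sketch.

      # Minimal 1D FDTD (Yee) update sketch in normalized units; illustrative
      # only, not the parallel iPSC/860 implementation described above.
      import numpy as np

      nx, nt = 200, 500
      ez = np.zeros(nx)          # electric field on integer grid points
      hy = np.zeros(nx - 1)      # magnetic field on the staggered half grid

      for n in range(nt):
          hy += 0.5 * (ez[1:] - ez[:-1])          # update H from curl of E
          ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])    # update E from curl of H
          ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source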

  4. NCI's Transdisciplinary High Performance Scientific Data Platform

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable across different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  5. 3rd International Conference on High Performance Scientific Computing

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference was organized by the Hanoi Institute of Mathematics, the Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program "Complex Processes: Modeling, Simulation and Optimization", and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  6. Component-based software for high-performance scientific computing

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  7. Component-based software for high-performance scientific computing

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  8. 5th International Conference on High Performance Scientific Computing

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  9. 6th International Conference on High Performance Scientific Computing

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, sports, psychology, transport, scheduling, and in...

  10. High-performance scientific computing in the cloud

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  11. High Performance Data Distribution for Scientific Community

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA or JAXA need solutions for distributing data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that addresses this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on the data servers and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform a single file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the aforementioned features. HIDDRA has been recognized by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in the journal Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009.
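
    As a loose illustration of multi-source downloading (not the HIDDRA engine itself), the sketch below fetches the same file from several mirrors in parallel and keeps the first copy that arrives; the URLs are hypothetical placeholders.

      # Illustrative parallel multi-mirror download; not HIDDRA itself.
      from concurrent.futures import ThreadPoolExecutor, as_completed
      from urllib.request import urlopen

      mirrors = [
          "https://archive-a.example.org/data/obs_001.fits",   # hypothetical
          "https://archive-b.example.org/data/obs_001.fits",   # hypothetical
      ]

      def fetch(url):
          with urlopen(url, timeout=30) as resp:
              return url, resp.read()

      # Query all mirrors at once; the first success wins, giving a crude
      # form of the multi-source fault tolerance described in the record.
      with ThreadPoolExecutor(max_workers=len(mirrors)) as pool:
          futures = [pool.submit(fetch, u) for u in mirrors]
          for fut in as_completed(futures):
              try:
                  url, payload = fut.result()
                  print(f"got {len(payload)} bytes from {url}")
                  break
              except OSError as err:
                  print(f"mirror failed: {err}")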

  12. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  13. Language interoperability for high-performance parallel scientific components

    Elliot, N; Kohn, S; Smolinski, B

    1999-01-01

    With the increasing complexity and interdisciplinary nature of scientific applications, code reuse is becoming increasingly important in scientific computing. One method for facilitating code reuse is the use of component technologies, which have been used widely in industry. However, components have only recently worked their way into scientific computing. Language interoperability is an important underlying technology for these component architectures. In this paper, we present an approach to language interoperability for a high-performance, parallel component architecture being developed by the Common Component Architecture (CCA) group. Our approach is based on Interface Definition Language (IDL) techniques. We have developed a Scientific Interface Definition Language (SIDL), as well as bindings to C and Fortran. We have also developed a SIDL compiler and run-time library support for reference counting, reflection, object management, and exception handling (Babel). Results from using Babel to call a standard numerical solver library (written in C) from C and Fortran show that the cost of using Babel is minimal, whereas the savings in development time and the benefits of object-oriented development support for C and Fortran far outweigh the costs.
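
    Babel and SIDL are not reproduced here; as a loose analogy for the cross-language call into a C solver that the record describes, the hedged sketch below uses Python's ctypes instead. The library name and function signature are hypothetical, and this is not the Babel/SIDL machinery.

      # Loose analogy only: calling a C numerical routine from another language.
      # This is ctypes, not Babel/SIDL; library and symbol names are hypothetical.
      import ctypes

      lib = ctypes.CDLL("./libsolver.so")        # hypothetical C solver library
      lib.solve.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
      lib.solve.restype = ctypes.c_int

      n = 4
      vec = (ctypes.c_double * n)(1.0, 2.0, 3.0, 4.0)
      status = lib.solve(vec, n)                 # solver overwrites vec in place
      print(status, list(vec))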

  14. High Performance Object-Oriented Scientific Programming in Fortran 90

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  15. Scientific computer simulation review

    Kaizer, Joshua S.; Heller, A. Kevin; Oberkampf, William L.

    2015-01-01

    Before the results of a scientific computer simulation are used for any purpose, it should be determined if those results can be trusted. Answering that question of trust is the domain of scientific computer simulation review. There is limited literature that focuses on simulation review, and most is specific to the review of a particular type of simulation. This work is intended to provide a foundation for a common understanding of simulation review. This is accomplished through three contributions. First, scientific computer simulation review is formally defined. This definition identifies the scope of simulation review and provides the boundaries of the review process. Second, maturity assessment theory is developed. This development clarifies the concepts of maturity criteria, maturity assessment sets, and maturity assessment frameworks, which are essential for performing simulation review. Finally, simulation review is described as the application of a maturity assessment framework. This is illustrated through evaluating a simulation review performed by the U.S. Nuclear Regulatory Commission. In making these contributions, this work provides a means for a more objective assessment of a simulation’s trustworthiness and takes the next step in establishing scientific computer simulation review as its own field. - Highlights: • We define scientific computer simulation review. • We develop maturity assessment theory. • We formally define a maturity assessment framework. • We describe simulation review as the application of a maturity framework. • We provide an example of a simulation review using a maturity framework

  16. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  17. High-Performance Beam Simulator for the LANSCE Linac

    Pang, Xiaoying; Rybarcyk, Lawrence J.; Baily, Scott A.

    2012-01-01

    A high performance multiparticle tracking simulator is currently under development at Los Alamos. The heart of the simulator is based upon the beam dynamics simulation algorithms of the PARMILA code, but implemented in C++ on Graphics Processing Unit (GPU) hardware using NVIDIA's CUDA platform. Linac operating set points are provided to the simulator via the EPICS control system so that changes of the real time linac parameters are tracked and the simulation results updated automatically. This simulator will provide valuable insight into the beam dynamics along a linac in pseudo real-time, especially where direct measurements of the beam properties do not exist. Details regarding the approach, benefits and performance are presented.

  18. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne and Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also for the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  19. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilise current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  20. Crystal and molecular simulation of high-performance polymers.

    Colquhoun, H M; Williams, D J

    2000-03-01

    Single-crystal X-ray analyses of oligomeric models for high-performance aromatic polymers, interfaced to computer-based molecular modeling and diffraction simulation, have enabled the determination of a range of previously unknown polymer crystal structures from X-ray powder data. Materials which have been successfully analyzed using this approach include aromatic polyesters, polyetherketones, polythioetherketones, polyphenylenes, and polycarboranes. Pure macrocyclic homologues of noncrystalline polyethersulfones afford high-quality single crystals-even at very large ring sizes-and have provided the first examples of a "protein crystallographic" approach to the structures of conventionally amorphous synthetic polymers.

  1. Simulation model of a twin-tail, high performance airplane

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner loop augmentation in the up and away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
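
    The aerodynamic table look-up with linear interpolation described above can be sketched as follows; this is an illustrative stand-in, not the F/A-18 database, and the coefficient values are placeholders.

      # Sketch of a wind-tunnel-style table look-up with bilinear interpolation
      # over angle of attack and sideslip; coefficient values are placeholders.
      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      alpha = np.linspace(-10.0, 90.0, 21)    # angle of attack, degrees
      beta = np.linspace(-20.0, 20.0, 9)      # sideslip, degrees
      cl_table = np.outer(np.sin(np.radians(alpha)), np.cos(np.radians(beta)))

      cl = RegularGridInterpolator((alpha, beta), cl_table, method="linear")
      print(cl([[12.5, -3.0]]))               # CL at alpha = 12.5 deg, beta = -3 deg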

  2. MUMAX: A new high-performance micromagnetic simulation tool

    Vansteenkiste, A.; Van de Wiele, B.

    2011-01-01

    We present MUMAX, a general-purpose micromagnetic simulation tool running on graphical processing units (GPUs). MUMAX is designed for high-performance computations and specifically targets large simulations. In that case speedups of over a factor of 100x can be obtained compared to the CPU-based OOMMF program developed at NIST. MUMAX aims to be general and broadly applicable. It solves the classical Landau-Lifshitz equation taking into account the magnetostatic, exchange and anisotropy interactions, thermal effects and spin-transfer torque. Periodic boundary conditions can optionally be imposed. A spatial discretization using finite differences in two or three dimensions can be employed. MUMAX is publicly available as open-source software. It can thus be freely used and extended by the community. Due to its high computational performance, MUMAX should open up the possibility of running extensive simulations that would be nearly inaccessible with typical CPU-based simulators. - Highlights: → Novel, open-source micromagnetic simulator on GPU hardware. → Speedup of ≈100x compared to other widely used tools. → Extensively validated against standard problems. → Makes previously infeasible simulations accessible.
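
    For reference, the equation such micromagnetic codes integrate can be written in a standard Landau-Lifshitz form (given here in textbook notation, not quoted from the MUMAX paper), with gamma the gyromagnetic ratio, alpha a dimensionless damping constant and M_s the saturation magnetization:

      % Standard Landau-Lifshitz form (textbook notation, not from the paper)
      \frac{\partial \mathbf{M}}{\partial t}
        = -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
          - \frac{\gamma \alpha}{M_s}\, \mathbf{M} \times
            \left( \mathbf{M} \times \mathbf{H}_{\mathrm{eff}} \right)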

  3. Simulations of KSTAR high performance steady state operation scenarios

    Na, Yong-Su; Kessel, C.E.; Park, J.M.; Yi, Sumin; Kim, J.Y.; Becoulet, A.; Sips, A.C.C.

    2009-01-01

    We report the results of predictive modelling of high performance steady state operation scenarios in KSTAR. Firstly, the capabilities of steady state operation are investigated with time-dependent simulations using a free-boundary plasma equilibrium evolution code coupled with transport calculations. Secondly, the reproducibility of high performance steady state operation scenarios developed in the DIII-D tokamak, of similar size to KSTAR, is investigated using experimental data taken from DIII-D. Finally, the capability of ITER-relevant steady state operation is investigated in KSTAR. It is found that KSTAR is able to establish high performance steady state operation scenarios: β_N above 3, H_98(y,2) up to 2.0, f_BS up to 0.76 and f_NI equal to 1.0. In this work, a realistic density profile is newly introduced for predictive simulations by employing a scaling law for the density peaking factor. The influence of the current ramp-up scenario and the transport model is discussed with respect to the fusion performance and the non-inductive current drive fraction in the transport simulations. As observed in the experiments, both the heating and the plasma current waveforms in the current ramp-up phase have a strong effect on the q-profile, the fusion performance and the non-inductive current drive fraction in the current flattop phase. A criterion in terms of q_min is found for establishing ITER-relevant steady state operation scenarios. This will provide a guideline for designing the current ramp-up phase in KSTAR. It is observed that the transport model also affects the predicted fusion performance as well as the non-inductive current drive fraction. The Weiland transport model predicts the highest fusion performance and non-inductive current drive fraction in KSTAR; in contrast, the GLF23 model yields the lowest. ITER-relevant advanced scenarios cannot be obtained with the GLF23 model under the conditions given in this work.

  4. High performance ultrasonic field simulation on complex geometries

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent work that aims at similar performance on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivision to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time-consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations including mono-element and multiple-element transducers, homogeneous and meshed 3D CAD specimens, isotropic and anisotropic materials, and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.

  5. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  6. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  7. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)]

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  8. Mixed-Language High-Performance Computing for Plasma Simulations

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
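
    The compute-heavy inner loops that such a PIC code delegates to Fortran can be illustrated by the hedged sketch below (cloud-in-cell charge deposition and a leapfrog push for a 1D electrostatic model); it is not the paper's Java/Fortran code, and the field array is simply given here.

      # Sketch of the compute-heavy inner loops of a 1D electrostatic PIC step
      # (cloud-in-cell deposition and leapfrog push); illustrative only.
      import numpy as np

      ng, npart, dx, dt = 64, 10_000, 1.0, 0.1
      x = np.random.uniform(0, ng * dx, npart)    # particle positions
      v = np.random.normal(0.0, 1.0, npart)       # particle velocities
      efield = np.zeros(ng)                       # grid electric field (given)

      # Linear (cloud-in-cell) charge deposition onto the periodic grid
      cell = (x / dx).astype(int) % ng
      frac = x / dx - np.floor(x / dx)
      rho = np.zeros(ng)
      np.add.at(rho, cell, 1.0 - frac)
      np.add.at(rho, (cell + 1) % ng, frac)

      # Gather the field at particle positions and push (charge/mass = 1)
      e_part = (1.0 - frac) * efield[cell] + frac * efield[(cell + 1) % ng]
      v += e_part * dt
      x = (x + v * dt) % (ng * dx)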

  9. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for...

  10. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  11. RAPPORT: running scientific high-performance computing applications on the cloud.

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  12. High-performance dual-speed CCD camera system for scientific imaging

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual-speed readout at 1 x 10^6 or 5 x 10^6 pixels per second, 12-bit digital gray scale, high performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 x 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  13. Scientific Modeling and simulations

    Diaz de la Rubia, Tomás

    2009-01-01

    Showcases the conceptual advantages of modeling which, coupled with the unprecedented computing power available through simulations, allow scientists to tackle the formidable problems of our society, such as the search for hydrocarbons, understanding the structure of a virus, or the intersection between simulations and real data in extreme environments.

  14. High performance real-time flight simulation at NASA Langley

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computations and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computations to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  15. High performance stream computing for particle beam transport simulations

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
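
    The transport idea can be illustrated with simple linear transfer matrices (a drift and a thin quadrupole); this hedged sketch is neither MAD nor the GPU code from the record, and the element parameters are arbitrary.

      # Linear single-particle transport with transfer matrices; illustrative
      # only, not MAD or the GPU implementation described above.
      import numpy as np

      def drift(L):
          return np.array([[1.0, L], [0.0, 1.0]])

      def thin_quad(f):
          return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

      # A tiny FODO-like line: drift, focusing quad, drift, defocusing quad
      line = [drift(1.0), thin_quad(0.5), drift(1.0), thin_quad(-0.5)]

      # Track many particles at once: each column is a state (x, x')
      particles = np.random.normal(0.0, 1e-3, size=(2, 10_000))
      for element in line:
          particles = element @ particles
      print(particles.std(axis=1))   # beam size and divergence after the line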

  16. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    Wang, Teng [Auburn University, Auburn, Alabama; Oral, H Sarp [ORNL; Wang, Yandong [Auburn University, Auburn, Alabama; Settlemyer, Bradley W [ORNL; Atchley, Scott [ORNL; Yu, Weikuan [Auburn University, Auburn, Alabama

    2014-01-01

    The growth of computing power on large-scale systems requires a commensurate high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a novel system is imperative to temporarily buffer the bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5 times on leadership computer systems.

  17. High performance thermal stress analysis on the earth simulator

    Noriyuki, Kushida; Hiroshi, Okuda; Genki, Yagawa

    2003-01-01

    In this study, a thermal stress finite element analysis code optimized for the Earth Simulator was developed. A processor node of the Earth Simulator is an 8-way vector processor, and each processor can communicate using the Message Passing Interface. Thus, there are two ways to parallelize the finite element method on the Earth Simulator. The first method is to assign one processor to one sub-domain, and the second is to assign one node (= 8 processors) to one sub-domain, i.e. shared-memory parallelization within a node. Considering that the preconditioned conjugate gradient (PCG) method, one of the suitable linear equation solvers for large-scale parallel finite element methods, shows better convergence behavior when the number of domains is smaller, we decided to employ PCG with hybrid parallelization, based on combined shared- and distributed-memory parallelization. It is often said that it is hard to obtain good parallel or vector performance, since the finite element method is based on unstructured grids. In such a situation, reordering is essential to improve computational performance [2]. In this study, we used three reordering methods, i.e. Reverse Cuthill-McKee (RCM), cyclic multicolor (CM) and diagonal jagged descending storage (DJDS) [3]. RCM provides good convergence of the incomplete lower-upper (ILU) PCG, but causes load imbalance. On the other hand, CM provides good load balance, but worsens the convergence of ILU PCG when the vector length is long. Therefore, we used a combined RCM and CM method. DJDS is a method of storing sparse matrices such that a longer vector length can be obtained. For efficient inter-node parallelization, partitioning methods such as recursive coordinate bisection (RCB) or MeTIS have been used. Computational performance on practical large-scale engineering problems will be shown at the meeting. (author)
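
    The preconditioned conjugate gradient method mentioned above can be sketched as follows; this hedged example uses a simple Jacobi (diagonal) preconditioner rather than the ILU preconditioner and reorderings of the record, and is not the Earth Simulator code.

      # Preconditioned conjugate gradient with a Jacobi preconditioner;
      # illustrative stand-in, not the Earth Simulator ILU-PCG solver.
      import numpy as np

      def pcg(A, b, tol=1e-8, max_iter=1000):
          x = np.zeros_like(b)
          r = b - A @ x
          Minv = 1.0 / np.diag(A)        # Jacobi preconditioner
          z = Minv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = Minv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test matrix
      b = np.array([1.0, 2.0])
      print(pcg(A, b))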

  18. High performance computer code for molecular dynamics simulations

    Levay, I.; Toekesi, K.

    2007-01-01

    Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) to provide three-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays, it is common to use graphics devices for intensive computational problems. There are several ways to use this extreme processing performance, but never before has it been so easy to program these devices. The Compute Unified Device Architecture (CUDA) introduced by nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS, ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds it up: the code runs 10 times faster in the critical calculation code segment. Although the GPU is a very powerful tool, it has a strongly parallel structure. This means that we have to create an algorithm which works on several processors without deadlock. Our code currently uses 256 threads together with shared and constant on-chip memory instead of global memory, which is 100 times slower. It is possible to implement the entire algorithm on the GPU, therefore we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs the same instructions.
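
    The core time-integration step of such an MD code can be illustrated with a velocity-Verlet sketch; this is a hedged stand-in with a placeholder harmonic force, not the authors' C++/CUDA implementation.

      # Velocity-Verlet integration sketch for an MD-style simulation; the
      # harmonic tether force is a placeholder, not a real interatomic potential.
      import numpy as np

      n, dt, k, m = 100, 0.01, 1.0, 1.0
      pos = np.random.rand(n, 3)
      vel = np.zeros((n, 3))

      def forces(r):
          return -k * r                  # placeholder potential

      f = forces(pos)
      for step in range(1000):
          pos += vel * dt + 0.5 * (f / m) * dt ** 2
          f_new = forces(pos)
          vel += 0.5 * (f + f_new) / m * dt
          f = f_new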

  19. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.

  20. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    Bao, Kai; Yan, Mi; Allen, Rebecca; Salama, Amgad; Lu, Ligang; Jordan, Kirk E.; Sun, Shuyu; Keyes, David E.

    2015-01-01

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems

  1. Scientific Programming with High Performance Fortran: A Case Study Using the xHPF Compiler

    Eric De Sturler

    1997-01-01

    Recently, the first commercial High Performance Fortran (HPF) subset compilers have appeared. This article reports on our experiences with the xHPF compiler of Applied Parallel Research, version 1.2, for the Intel Paragon. At this stage, we do not expect very high performance from our HPF programs, even though performance will eventually be of paramount importance for the acceptance of HPF. Instead, our primary objective is to study how to convert large Fortran 77 (F77) programs to HPF such that the compiler generates reasonably efficient parallel code. We report on a case study that identifies several problems when parallelizing code with HPF; most of these problems affect current HPF compiler technology in general, although some are specific to the xHPF compiler. We discuss our solutions from the perspective of the scientific programmer, and present timing results on the Intel Paragon. The case study comprises three programs of different complexity with respect to parallelization. We use the dense matrix-matrix product to show that the distribution of arrays and the order of nested loops significantly influence the performance of the parallel program. We use Gaussian elimination with partial pivoting to study the parallelization strategy of the compiler. There are various ways to structure this algorithm for a particular data distribution. This example shows how much effort may be demanded from the programmer to support the compiler in generating an efficient parallel implementation. Finally, we use a small application to show that the more complicated structure of a larger program may introduce problems for the parallelization, even though all subroutines of the application are easy to parallelize by themselves. The application consists of a finite volume discretization on a structured grid and a nested iterative solver. Our case study shows that it is possible to obtain reasonably efficient parallel programs with xHPF, although the compiler

  2. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    Abdelfattah, Ahmad

    2015-01-15

    High performance computing (HPC) platforms are evolving to more heterogeneous configurations to support the workloads of various applications. The current hardware landscape is composed of traditional multicore CPUs equipped with hardware accelerators that can handle high levels of parallelism. Graphical Processing Units (GPUs) are popular high performance hardware accelerators in modern supercomputers. GPU programming has a different model than that for CPUs, which means that many numerical kernels have to be redesigned and optimized specifically for this architecture. GPUs usually outperform multicore CPUs in some compute-intensive and massively parallel applications that have regular processing patterns. However, most scientific applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved. This dissertation presents a uniform design strategy for optimizing critical memory-bound kernels on GPUs. Based on hierarchical register blocking, double buffering and latency hiding techniques, this strategy leverages the performance of a wide range of standard numerical kernels found in dense and sparse linear algebra libraries. The work presented here focuses on matrix-vector multiplication kernels (MVM) as representative and most important memory-bound operations in this context. Each kernel inherits the benefits of the proposed strategies. By exposing a proper set of tuning parameters, the strategy is flexible enough to suit different types of matrices, ranging from large dense matrices to sparse matrices with dense block structures, while high performance is maintained. Furthermore, the tuning parameters are used to maintain the relative performance across different GPU architectures. Multi-GPU acceleration is proposed to scale the performance on several devices. The

  3. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing sizes through the network shows no sign of abating. Hence, we need high-bandwidth, high-speed networks such as ESnet (the Energy Sciences Network). Network reservation systems such as ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System) establish secure virtual circuits with guaranteed bandwidth at a certain time, for a certain bandwidth and length of time. OSCARS checks network availability and capacity for the specified period of time, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no further suggestion is returned to the user. Further, there is no way, from the user's viewpoint, to make an optimal choice. We report a new algorithm, where the user specifies the total volume that needs to be transferred, a maximum bandwidth that he/she can use, and a desired time period within which the transfer should be done. The algorithm can find alternate allocation possibilities, including the earliest time for completion or the shortest transfer duration, leaving the choice to the user. We present a novel approach for path finding in time-dependent networks, and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high performance data transfers over storage and network resources.
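
    The core of the user-facing idea can be illustrated with a much simplified, hypothetical sketch: given a total volume, the user's bandwidth cap and a time-varying availability profile for a single path, compute the earliest completion time. The real algorithm works on a time-dependent network graph and also returns shortest-duration options:

        # Hypothetical simplification of the reservation search: earliest
        # completion of a transfer of `volume_gbits`, under a user bandwidth cap,
        # given intervals (t_start, t_end, free_bandwidth_gbps) covering the window.
        def earliest_completion(volume_gbits, user_cap_gbps, profile):
            remaining = volume_gbits
            for t0, t1, free in profile:
                rate = min(user_cap_gbps, free)
                if rate <= 0:
                    continue
                capacity = rate * (t1 - t0)
                if capacity >= remaining:
                    return t0 + remaining / rate      # finishes inside this interval
                remaining -= capacity
            return None                               # cannot finish inside the window

        # 8 TB (64000 Gbit) to move, user capped at 10 Gb/s, link partly busy
        # during the first hour of the requested window.
        profile = [(0, 3600, 4.0), (3600, 14400, 10.0)]
        print(earliest_completion(64000, 10.0, profile))   # 8560.0 seconds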

  4. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources

    Boswell, Paul G.; Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.

    2013-01-01

    High-performance liquid chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet many of the existing HPLC simulators are either too expensive, outdated, or lack many important features necessary to make them widely useful for educational purposes. Here, a free, open-source HPLC simulator is…

  5. Progress on H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations

    Adelmann, Andreas; Gsell, Achim; Oswald, Benedikt; Schietinger, Thomas; Bethel, Wes; Shalf, John; Siegerist, Cristina; Stockinger, Kurt

    2007-01-01

    Significant problems facing all experimental and computational sciences arise from growing data size and complexity. Common to all these problems is the need to perform efficient data I/O on diverse computer architectures. In our scientific application, the largest parallel particle simulations generate vast quantities of six-dimensional data. Such a simulation run produces data for an aggregate data size up to several TB per run. Motivated by the need to address data I/O and access challenges, we have implemented H5Part, an open source data I/O API that simplifies the use of the Hierarchical Data Format v5 library (HDF5). HDF5 is an industry standard for high performance, cross-platform data storage and retrieval that runs on all contemporary architectures from large parallel supercomputers to laptops. H5Part, which is oriented to the needs of the particle physics and cosmology communities, provides support for parallel storage and retrieval of particles, structured and in the future unstructured meshes. In this paper, we describe recent work focusing on I/O support for particles and structured meshes and provide data showing performance on modern supercomputer architectures like the IBM POWER 5
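
    The particle layout described above can be illustrated with plain HDF5 through h5py. This is a sketch of the storage pattern only; the real H5Part API is a thin C library on top of HDF5, and the "Step#n" group naming used here is an assumption made for illustration:

        # One group per time step, one 1-D dataset per particle attribute.
        # (Parallel runs would open the file with h5py's 'mpio' driver.)
        import h5py
        import numpy as np

        n_particles, n_steps = 100_000, 5
        with h5py.File("particles.h5", "w") as f:
            for step in range(n_steps):
                g = f.create_group(f"Step#{step}")
                for name in ("x", "y", "z", "px", "py", "pz"):
                    g.create_dataset(name, data=np.random.rand(n_particles))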

  6. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In terms of high performance computation capability, GPUs lead to much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  7. High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I; TOPICAL

    SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.

    2001-01-01

    This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases

  8. LIAR -- A computer program for the modeling and simulation of high performance linacs

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Amongst others, it addresses the needs of state-of-the-art linear colliders where low emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straight-forward access to its internal FORTRAN data structures. The program can easily be extended and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm

  9. Statistical physics of fracture: scientific discovery through high-performance computing

    Kumar, Phani; Nukala, V V; Simunovic, Srdan; Mills, Richard T

    2006-01-01

    The paper presents the state-of-the-art algorithmic developments for simulating the fracture of disordered quasi-brittle materials using discrete lattice systems. Large scale simulations are often required to obtain accurate scaling laws; however, due to computational complexity, the simulations using the traditional algorithms were limited to small system sizes. We have developed two algorithms: a multiple sparse Cholesky downdating scheme for simulating 2D random fuse model systems, and a block-circulant preconditioner for simulating 2D random fuse model systems. Using these algorithms, we were able to simulate fracture of largest ever lattice system sizes (L = 1024 in 2D, and L = 64 in 3D) with extensive statistical sampling. Our recent simulations on 1024 processors of Cray-XT3 and IBM Blue-Gene/L have further enabled us to explore fracture of 3D lattice systems of size L = 200, which is a significant computational achievement. These largest ever numerical simulations have enhanced our understanding of physics of fracture; in particular, we analyze damage localization and its deviation from percolation behavior, scaling laws for damage density, universality of fracture strength distribution, size effect on the mean fracture strength, and finally the scaling of crack surface roughness

  10. Aging analysis of high performance FinFET flip-flop under Dynamic NBTI simulation configuration

    Zainudin, M. F.; Hussin, H.; Halim, A. K.; Karim, J.

    2018-03-01

    A mechanism known as Negative-Bias Temperature Instability (NBTI) degrades the main electrical parameters of a circuit, especially its performance. So far, circuit designs available at present have focused only on high performance without considering the circuit's reliability and robustness. In this paper, the main circuit performance figures of a high performance FinFET flip-flop, such as delay time and power, were studied in the presence of NBTI degradation. The aging analysis was verified using a 16nm High Performance Predictive Technology Model (PTM) and the different commands available in Synopsys HSPICE. The results show that the circuit under longer dynamic NBTI simulation exhibits the largest increase in gate delay and decrease in average power between the fresh simulation and the aged stress time under nominal conditions. In addition, the circuit performance under varied stress conditions, such as temperature and negative gate stress bias, was also studied.

  11. Development of high performance scientific components for interoperability of computing packages

    Gulabani, Teena Pratap [Iowa State Univ., Ames, IA (United States)

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software design of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  12. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, with the metric being Floating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
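
    As a reminder of how the FLOPS metric is obtained, the sketch below times a dense matrix-matrix product and converts the operation count (about 2*n^3 for an n x n multiply) into GFLOP/s. Benchmarking parallel CEM codes, as argued above, would additionally track memory, disk and network utilization:

        import time
        import numpy as np

        n = 2000
        a, b = np.random.rand(n, n), np.random.rand(n, n)
        t0 = time.perf_counter()
        c = a @ b                                  # ~2*n**3 floating-point operations
        elapsed = time.perf_counter() - t0
        print(f"{2 * n**3 / elapsed / 1e9:.1f} GFLOP/s")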

  13. The computer program LIAR for the simulation and modeling of high performance linacs

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-07-01

    High performance linear accelerators are the central components of the proposed next generation of linear colliders. They must provide acceleration of up to 750 GeV per beam while maintaining small normalized emittances. Standard simulation programs, mainly developed for storage rings, did not meet the specific requirements for high performance linacs with high bunch charges and strong wakefields. The authors present the program LIAR (LInear Accelerator Research code), which includes single- and multi-bunch wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. LIAR has been applied to and checked against the existing Stanford Linear Collider (SLC), the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS) at SLAC. Its modular structure allows easy extension for different purposes. The program is available for UNIX workstations and Windows PCs.

  14. High Performance Wideband CMOS CCI and its Application in Inductance Simulator Design

    ARSLAN, E.

    2012-08-01

    In this paper, a new, differential pair based, low-voltage, high performance and wideband CMOS first generation current conveyor (CCI) is proposed. The proposed CCI has high voltage swings on ports X and Y and very low equivalent impedance on port X due to the super source follower configuration. It also has high voltage swings (close to the supply voltages) on the input and output ports and wideband current and voltage transfer ratios. Furthermore, two novel grounded inductance simulator circuits are proposed as application examples. Using HSpice, it is shown that the simulation results of the proposed CCI and also of the presented inductance simulators are in very good agreement with the expected ones.

  15. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation.

    Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S

    2013-01-08

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
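
    A minimal usage sketch through OpenMM's Python layer is shown below. The import paths follow recent OpenMM releases (the OpenMM 4 API described in the paper lived under the older simtk package), "input.pdb" is a placeholder for a solvated, periodic structure, and the force field files are standard ones shipped with OpenMM:

        from openmm import LangevinMiddleIntegrator
        from openmm.app import PDBFile, ForceField, Simulation, PME
        from openmm.unit import kelvin, picosecond, picoseconds

        pdb = PDBFile("input.pdb")                        # placeholder input structure
        forcefield = ForceField("amber14-all.xml", "amber14/tip3p.xml")
        system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME)
        integrator = LangevinMiddleIntegrator(300 * kelvin,
                                              1 / picosecond,
                                              0.002 * picoseconds)
        sim = Simulation(pdb.topology, system, integrator)  # hardware platform chosen at run time
        sim.context.setPositions(pdb.positions)
        sim.minimizeEnergy()
        sim.step(1000)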

  16. High performance simulation for the Silva project using the tera computer

    Bergeaud, V.; La Hargue, J.P.; Mougery, F. [CS Communication and Systemes, 92 - Clamart (France); Boulet, M.; Scheurer, B. [CEA Bruyeres-le-Chatel, 91 - Bruyeres-le-Chatel (France); Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A. [CEA Saclay, 91 - Gif sur Yvette (France)

    2003-07-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues for optimizing the parallelization of the PRODIGE code on TERA. Thus, we discuss the advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it proved fruitful to tune the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI-IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments with performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of the laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)

  17. High performance simulation for the Silva project using the tera computer

    Bergeaud, V.; La Hargue, J.P.; Mougery, F.; Boulet, M.; Scheurer, B.; Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A.

    2003-01-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues for optimizing the parallelization of the PRODIGE code on TERA. Thus, we discuss the advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it proved fruitful to tune the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI-IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments with performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of the laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)

  18. High performance cellular level agent-based simulation with FLAME for the GPU.

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  19. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    Bao, Kai

    2015-10-26

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example for coupling reservoir simulation and MD, although the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well-demonstrated with several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of

  20. High performance MRI simulations of motion on multi-GPU systems.

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echoes formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. The incorporation

  1. Reusable Object-Oriented Solutions for Numerical Simulation of PDEs in a High Performance Environment

    Andrea Lani

    2006-01-01

    Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability, in order to ease the integration of new functionalities and algorithms. While designing similar frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms respectively.
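
    The run-time flexibility gained from a Strategy-style pattern can be sketched in a few lines. This is a conceptual Python analogue, not COOLFluiD code: the generic driver never changes when a new numerical method (strategy) is plugged in:

        class TimeStepper:
            """Strategy interface: advance the state u by dt for a right-hand side f."""
            def step(self, f, u, dt):
                raise NotImplementedError

        class ForwardEuler(TimeStepper):
            def step(self, f, u, dt):
                return u + dt * f(u)

        class RK2(TimeStepper):
            def step(self, f, u, dt):
                k1 = f(u)
                k2 = f(u + dt * k1)
                return u + 0.5 * dt * (k1 + k2)

        def integrate(f, u0, dt, steps, method):
            u = u0
            for _ in range(steps):
                u = method.step(f, u, dt)   # the driver stays generic
            return u

        # du/dt = -u integrated to t = 1; the exact value is exp(-1) ~ 0.3679
        print(integrate(lambda u: -u, 1.0, 0.01, 100, ForwardEuler()))
        print(integrate(lambda u: -u, 1.0, 0.01, 100, RK2()))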

  2. GPU-based high performance Monte Carlo simulation in neutron transport

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  3. GPU-based high performance Monte Carlo simulation in neutron transport

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  4. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.

    2018-04-01

    Nuclear reactor two-phase flows remain a great engineering challenge, where the high-resolution two-phase flow database which can inform practical model development is still sparse due to the extreme reactor operation conditions and measurement difficulties. Owing to the rapid growth of computing power, the direct numerical simulation (DNS) is enjoying a renewed interest in investigating the related flow problems. A combination between DNS and an interface tracking method can provide a unique opportunity to study two-phase flows based on first principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has been focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as the advanced post-processing toolkit for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of DNS database in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, the single-and two-phase analysis results are presented for turbulent flows within the pressurized water reactor (PWR) core geometries. The related simulations are possible to carry out only with the world leading HPC platforms. These simulations are allowing more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.

  5. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster

  6. Computational Simulations and the Scientific Method

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
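
    An example of the kind of small, independently repeatable component test advocated here is sketched below. The model function (Sutherland's law for gas viscosity) is a hypothetical stand-in chosen only to make the fixture concrete; the point is that the test pins the model to a known reference value and a qualitative property:

        import math

        def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
            # Sutherland's law for the dynamic viscosity of air (Pa*s) at temperature T (K).
            return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

        def test_reference_point():
            # At the reference temperature the model must return the reference value.
            assert math.isclose(sutherland_viscosity(273.15), 1.716e-5, rel_tol=1e-12)

        def test_monotonic_in_temperature():
            # For gases, viscosity should increase with temperature.
            assert sutherland_viscosity(400.0) > sutherland_viscosity(300.0)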

  7. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high performance computing code for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structures and implemented the data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel and sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.

  8. Simulation of the High Performance Time to Digital Converter for the ATLAS Muon Spectrometer trigger upgrade

    Meng, X.T.; Levin, D.S.; Chapman, J.W.; Zhou, B.

    2016-01-01

    The ATLAS Muon Spectrometer endcap thin-Resistive Plate Chamber trigger project complements the New Small Wheel endcap Phase-1 upgrade for higher luminosity LHC operation. These new trigger chambers, located in a high rate region of ATLAS, will improve overall trigger acceptance and reduce the fake muon trigger incidence. These chambers must generate a low level muon trigger to be delivered to a remote high level processor within a stringent latency requirement of 43 bunch crossings (1075 ns). To help meet this requirement, the High Performance Time to Digital Converter (HPTDC), a multi-channel ASIC designed by the CERN Microelectronics group, has been proposed for the digitization of the fast front end detector signals. This paper investigates the HPTDC performance in the context of the overall muon trigger latency, employing detailed behavioral Verilog simulations in which the latency in triggerless mode is measured for a range of configurations and under realistic hit rate conditions. The simulation results show that various HPTDC operational configurations, including leading-edge and pair measurement modes, can provide high efficiency (>98%) to capture and digitize hits within a time interval satisfying the Phase-1 latency tolerance.

  9. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

    NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphic Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction has been carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other with contact, friction, and cohesional forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time. The figure

  10. COMSOL-PHREEQC: a tool for high performance numerical simulation of reactive transport phenomena

    Nardi, Albert; Vries, Luis Manuel de; Trinchero, Paolo; Idiart, Andres; Molinero, Jorge

    2012-01-01

    Document available in extended abstract form only. Comsol Multiphysics (COMSOL, from now on) is a powerful Finite Element software environment for the modelling and simulation of a large number of physics-based systems. The user can apply variables, expressions or numbers directly to solid and fluid domains, boundaries, edges and points, independently of the computational mesh. COMSOL then internally compiles a set of equations representing the entire model. The availability of extremely powerful pre and post processors makes COMSOL a numerical platform well known and extensively used in many branches of sciences and engineering. On the other hand, PHREEQC is a freely available computer program for simulating chemical reactions and transport processes in aqueous systems. It is perhaps the most widely used geochemical code in the scientific community and is openly distributed. The program is based on equilibrium chemistry of aqueous solutions interacting with minerals, gases, solid solutions, exchangers, and sorption surfaces, but also includes the capability to model kinetic reactions with rate equations that are user-specified in a very flexible way by means of Basic statements directly written in the input file. Here we present COMSOL-PHREEQC, a software interface able to communicate and couple these two powerful simulators by means of a Java interface. The methodology is based on Sequential Non Iterative Approach (SNIA), where PHREEQC is compiled as a dynamic subroutine (iPhreeqc) that is called by the interface to solve the geochemical system at every element of the finite element mesh of COMSOL. The numerical tool has been extensively verified by comparison with computed results of 1D, 2D and 3D benchmark examples solved with other reactive transport simulators. COMSOL-PHREEQC is parallelized so that CPU time can be highly optimized in multi-core processors or clusters. Then, fully 3D detailed reactive transport problems can be readily simulated by means of
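
    The SNIA coupling described above amounts to an operator-splitting loop: transport first, then an independent geochemical step at every node. The sketch below shows this structure only; solve_transport and equilibrate_chemistry are placeholders standing in for the COMSOL solve and the iPhreeqc call, respectively:

        def snia_step(concentrations, dt, solve_transport, equilibrate_chemistry):
            # 1) conservative transport of all aqueous components (no reactions)
            transported = solve_transport(concentrations, dt)
            # 2) local geochemical step, node by node (embarrassingly parallel)
            return [equilibrate_chemistry(c, dt) for c in transported]

        def run(c0, dt, n_steps, solve_transport, equilibrate_chemistry):
            c = c0
            for _ in range(n_steps):
                c = snia_step(c, dt, solve_transport, equilibrate_chemistry)
            return c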

  11. libRoadRunner: a high performance SBML simulation and analysis library.

    Somogyi, Endre T; Bouteiller, Jean-Marie; Glazier, James A; König, Matthias; Medley, J Kyle; Swat, Maciej H; Sauro, Herbert M

    2015-10-15

    This article presents libRoadRunner, an extensible, high-performance, cross-platform, open-source software library for the simulation and analysis of models expressed using the Systems Biology Markup Language (SBML). SBML is the most widely used standard for representing dynamic networks, especially biochemical networks. libRoadRunner is fast enough to support large-scale problems such as tissue models, studies that require large numbers of repeated runs and interactive simulations. libRoadRunner is a self-contained library, able to run both as a component inside other tools via its C++ and C bindings, and interactively through its Python interface. Its Python Application Programming Interface (API) is similar to the APIs of MATLAB (www.mathworks.com) and SciPy (http://www.scipy.org/), making it fast and easy to learn. libRoadRunner uses a custom Just-In-Time (JIT) compiler built on the widely used LLVM JIT compiler framework. It compiles SBML-specified models directly into native machine code for a variety of processors, making it appropriate for solving extremely large models or repeated runs. libRoadRunner is flexible, supporting the bulk of the SBML specification (except for delay and non-linear algebraic equations) including several SBML extensions (composition and distributions). It offers multiple deterministic and stochastic integrators, as well as tools for steady-state analysis, stability analysis and structural analysis of the stoichiometric matrix. libRoadRunner binary distributions are available for Mac OS X, Linux and Windows. The library is licensed under Apache License Version 2.0. libRoadRunner is also available for ARM-based computers such as the Raspberry Pi. http://www.libroadrunner.org provides online documentation, full build instructions, binaries and a git source repository.
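
    A minimal session through the Python interface looks roughly like the following; "model.xml" is a placeholder SBML file, and exact call names can differ slightly between libRoadRunner versions:

        import roadrunner

        rr = roadrunner.RoadRunner("model.xml")   # load and JIT-compile an SBML model
        result = rr.simulate(0, 50, 200)          # start time, end time, number of points
        print(result[:5])                         # time course: time column + species columns
        rr.reset()
        print(rr.steadyState())                   # steady-state solver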

  12. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    uninterrupted pipeline from toy/teaching codes to high-performance, extreme-scale solves. StagBLDemo replicates the functionality of an advanced MATLAB-style regional geodynamics code, thus providing users with a concrete procedure to exceed the performance and scalability limitations of smaller-scale tools.

  13. A High Performance Chemical Simulation Preprocessor and Source Code Generator, Phase I

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are a critical component of aerospace research, Earth systems research, and energy research. These simulations enable a...

  14. A lattice-particle approach for the simulation of fracture processes in fiber-reinforced high-performance concrete

    Montero-Chacón, F.; Schlangen, H.E.J.G.; Medina, F.

    2013-01-01

    The use of fiber-reinforced high-performance concrete (FRHPC) is becoming more extended; therefore it is necessary to develop tools to simulate and better understand its behavior. In this work, a discrete model for the analysis of fracture mechanics in FRHPC is presented. The plain concrete matrix,

  15. Optimized Parallel Discrete Event Simulation (PDES) for High Performance Computing (HPC) Clusters

    Abu-Ghazaleh, Nael

    2005-01-01

    The aim of this project was to study the communication subsystem performance of state of the art optimistic simulator Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES...

  16. Software Engineering for Scientific Computer Simulations

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  17. Use of high performance networks and supercomputers for real-time flight simulation

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  18. Design and Simulation of a High Performance Emergency Data Delivery Protocol

    Swartz, Kevin; Wang, Di

    2007-01-01

    The purpose of this project was to design a high performance data delivery protocol, capable of delivering data as quickly as possible to a base station or target node. This protocol was designed particularly for wireless network topologies, but could also be applied to a wired system. An emergency is defined as any event with high priority that needs to be handled immediately. It is assumed that this emergency event is important enough that energy efficiency is not a factor in our protocol. The desired effect is delivery to the base station as fast as possible for rapid event handling.

  19. A Grid-Based Cyber Infrastructure for High Performance Chemical Dynamics Simulations

    Khadka Prashant

    2008-10-01

    Chemical dynamics simulation is an effective means to study atomic level motions of molecules, collections of molecules, liquids, surfaces, interfaces of materials, and chemical reactions. To make chemical dynamics simulations globally accessible to a broad range of users, recently a cyber infrastructure was developed that provides an online portal to VENUS, a popular chemical dynamics simulation program package, to allow people to submit simulation jobs that will be executed on the web server machine. In this paper, we report new developments of the cyber infrastructure for the improvement of its quality of service by dispatching the submitted simulation jobs from the web server machine onto a cluster of workstations for execution, and by adding an animation tool, which is optimized for animating the simulation results. The separation of the server machine from the simulation-running machine improves the service quality by increasing the capacity to serve more requests simultaneously with even reduced web response time, and allows the execution of large scale, time-consuming simulation jobs on the powerful workstation cluster. With the addition of an animation tool, the cyber infrastructure automatically converts, upon the selection of the user, some simulation results into an animation file that can be viewed on usual web browsers without requiring installation of any special software on the user computer. Since animation is essential for understanding the results of chemical dynamics simulations, this animation capacity provides a better way for understanding simulation details of the chemical dynamics. By combining computing resources at locations under different administrative controls, this cyber infrastructure constitutes a grid environment providing physically and administratively distributed functionalities through a single easy-to-use online portal.

  20. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Top, Philip [Lawrence Livermore National Laboratories; Smith, Steve [Lawrence Livermore National Laboratories; Daily, Jeff [Pacific Northwest National Laboratory; Fuller, Jason [Pacific Northwest National Laboratory

    2017-10-12

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
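
    One requirement listed above, co-iterating among federates until the exchanged values converge at each time step, can be sketched generically. This is a conceptual illustration, not the HELICS API; transmission_solve and distribution_solve are hypothetical placeholder solvers:

        def co_iterate(transmission_solve, distribution_solve, v_guess,
                       tol=1e-6, max_iter=50):
            v = v_guess
            for _ in range(max_iter):
                p_load = distribution_solve(v)        # distribution sees a boundary voltage
                v_new = transmission_solve(p_load)    # transmission sees an aggregate load
                if abs(v_new - v) < tol:
                    return v_new                      # converged interface value for this step
                v = v_new
            raise RuntimeError("co-iteration did not converge at this time step")

        # toy fixed-point example converging to ~1.0 per-unit voltage
        print(co_iterate(lambda p: 1.05 - 0.05 * p, lambda v: 1.0 / v, 0.9))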

  1. Inductively coupled plasma emission spectrometric detection of simulated high performance liquid chromatographic peaks

    Fraley, D.M.; Yates, D.; Manahan, S.E.

    1979-01-01

    Because of its multielement capability, element-specificity, and low detection limits, inductively coupled plasma optical emission spectrometry (ICP) is a very promising technique for the detection of specific elemental species separated by high performance liquid chromatography (HPLC). This paper evaluates ICP as a detector for HPLC peaks containing specific elements. Detection limits for a number of elements have been evaluated in terms of the minimum detectable concentration of the element at the chromatographic peak maximum. The elements studied were Al, As, B, Ba, Ca, Cd, Co, Cr, Cu, Fe, K, Li, Mg, Mn, Mo, Na, Ni, P, Pb, Sb, Se, Sr, Ti, V, and Zn. In addition, ICP was compared with atomic absorption spectrometry for the detection of HPLC peaks composed of EDTA and NTA chelates of copper. Furthermore, ICP was compared to UV solution absorption for the detection of copper chelates. 6 figures, 4 tables

  2. Multi-scale high-performance fluid flow: Simulations through porous media

    Perović, Nevena

    2016-08-03

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.
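
    As a reminder of the coarse-scale model in this coupling, Darcy's law relates the flux to the pressure gradient through the permeability and viscosity. The worked example below uses illustrative values, not data from the paper:

        # Darcy flux q = -(k / mu) * dp/dx for a 1-D pressure drop.
        k = 1.0e-12       # permeability, m^2
        mu = 1.0e-3       # dynamic viscosity of water, Pa*s
        dp = -2.0e4       # pressure change over the sample, Pa
        L = 0.5           # sample length, m

        q = -(k / mu) * (dp / L)           # superficial velocity, m/s
        print(f"Darcy flux: {q:.2e} m/s")  # 4.00e-05 m/s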

  3. Multi-scale high-performance fluid flow: Simulations through porous media

    Perović, Nevena; Frisch, Jérôme; Salama, Amgad; Sun, Shuyu; Rank, Ernst; Mundani, Ralf Peter

    2016-01-01

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.

  4. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    Bao, Kai

    2013-01-01

    The present work describes a parallel computational framework for CO2 sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel HPC systems. In this framework, a parallel reservoir simulator, Reservoir Simulation Toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, while the molecular dynamics simulations are performed to provide the required physical parameters. Numerous technologies from different fields are employed to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large scale CO2 sequestration for long-term storage in the subsurface geological formations, such as depleted reservoirs and deep saline aquifers, which has been proposed as one of the most attractive and practical solutions to reduce the CO2 emission problem to address the global-warming threat. To effectively solve such problems, fine grids and accurate prediction of the properties of fluid mixtures are essential for accuracy. In this work, the CO2 sequestration is presented as our first example to couple the reservoir simulation and molecular dynamics, while the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical process in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability are observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well demonstrated with several experiments with hundreds of millions to a billion cells. To our best knowledge, the work represents the first attempt to couple the reservoir simulation and molecular simulation for large scale modeling. Due to the complexity of the subsurface systems

  5. Investigating the Mobility of Light Autonomous Tracked Vehicles using a High Performance Computing Simulation Capability

    Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco

    2012-01-01

    This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles, which weigh less than 100 lb and move on deformable and rough terrain that is feature-rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high-mobility lightweight robot, where objects such as a boulder or a ditch that could otherwise be considered small for a truck or tank become major obstacles that can impede the mobility of the light autonomous vehicle and negatively impact the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine. Chrono::Engine relies on parallel execution on Graphics Processing Unit (GPU) cards.

  6. Development and testing of high performance pseudo random number generator for Monte Carlo simulation

    Chakraborty, Brahmananda

    2009-01-01

    Random numbers play an important role in any Monte Carlo simulation. The accuracy of the results depends on the quality of the sequence of random numbers employed in the simulation, namely the randomness of the numbers, the uniformity of their distribution, the absence of correlation and a long period. In a typical Monte Carlo simulation of particle transport in a nuclear reactor core, the history of a particle from its birth in a fission event until its death by an absorption or leakage event is tracked. The geometry of the core and the surrounding materials is exactly modeled in the simulation. To track a neutron history one needs random numbers for determining the inter-collision distance, the nature of the collision, the direction of the scattered neutron, etc. Neutrons are tracked in batches; in one batch approximately 2000-5000 neutrons are tracked. The statistical accuracy of the results of the simulation depends on the total number of particles tracked (the number of particles in one batch multiplied by the number of batches). The number of histories to be generated is usually large for a typical radiation transport problem. To track a very large number of histories one needs to generate a long sequence of independent random numbers. In other words, the cycle length of the random number generator (RNG) should be greater than the total number of random numbers required for simulating the given transport problem. The number of bits of the machine generally limits the cycle length: for a binary machine of p bits the maximum cycle length is 2^p. To achieve a longer cycle length on the same machine one has to use either register arithmetic or bit manipulation techniques.
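
    The cycle-length limit discussed above can be illustrated with a minimal linear congruential generator (LCG). This is only a toy Python sketch for illustration, not the generator developed in the report; the multiplier and increment are the well-known MMIX constants.

```python
# Illustrative sketch (not the generator from the report): a 64-bit linear
# congruential generator (LCG) whose period is bounded by the machine word
# size, i.e. at most 2**64 states for a 64-bit state word.
class LCG64:
    # Multiplier/increment taken from Knuth's MMIX LCG; any full-period
    # parameters would do for this illustration.
    A = 6364136223846793005
    C = 1442695040888963407
    MASK = (1 << 64) - 1

    def __init__(self, seed=1):
        self.state = seed & self.MASK

    def next_uniform(self):
        # Advance the state modulo 2**64 and map it to a float in [0, 1).
        self.state = (self.A * self.state + self.C) & self.MASK
        return self.state / float(1 << 64)

rng = LCG64(seed=12345)
samples = [rng.next_uniform() for _ in range(5)]
print(samples)  # five pseudo-random numbers in [0, 1)
```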

  7. Finite element simulations and experiments of ballistic impacts on high performance PE composite material

    Herlaar, K.; Jagt-Deutekom, M.J. van der; Jacobs, M.J.N.

    2005-01-01

    The use of lightweight composite armour concepts is essential for the protection of future combat systems, both vehicles and personnel. The design of such armour systems is challenging due to the complex material behaviour. Finite element simulations can be used to help understand the important

  8. High performance discrete event simulations to evaluate complex industrial systems, the case of automatic

    Hoekstra, A.G.; Dorst, L.; Bergman, M.; Lagerberg, J.; Visser, A.; Yakali, H.; Groen, F.; Hertzberger, L.O.

    1997-01-01

    We have developed a Modelling and Simulation platform for technical evaluation of Electronic Toll Collection on Motor Highways. This platform is used in a project of the Dutch government to assess the technical feasibility of Toll Collection systems proposed by industry. Motivated by this work we

  9. Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes

    Tabares-Velasco, Paulo Cesar

    2016-09-01

    Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and detailed understanding of building characteristics. Accurate modeling of the building thermal response and of material properties for thermally massive walls or advanced materials like phase change materials (PCMs) is critically important.

  10. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    Bao, Kai; Yan, Mi; Lu, Ligang; Allen, Rebecca; Salam, Amgad; Jordan, Kirk E.; Sun, Shuyu

    2013-01-01

    multicomponent compositional flow simulation to handle more complicated physical process in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our

  11. H5Part A Portable High Performance Parallel Data Interface for Particle Simulations

    Adelmann, Andreas; Shalf, John M; Siegerist, Cristina

    2005-01-01

    The largest parallel particle simulations, in six-dimensional phase space, generate vast amounts of data. It is also desirable to share data and data analysis tools such as ParViT (Particle Visualization Toolkit) among other groups who are working on particle-based accelerator simulations. We define a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5) as well as an API that simplifies the reading/writing of the data to the HDF5 file format. HDF5 offers a self-describing machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on a variety of supercomputing systems and works equally well on laptop computers. The API is available for C, C++, and Fortran codes. The file format will enable disparate research groups with very different simulation implementations to share data transparently and share data analysis tools. For instance, the common file format will enable groups that depend on completely different simulation implementations to share c...
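
    As an illustration of the kind of layout H5Part defines on top of HDF5 (one group per time step holding flat per-particle datasets), the following Python sketch uses the h5py package rather than the actual H5Part C/C++/Fortran API; the dataset names and sizes are placeholders.

```python
# A minimal sketch (using h5py, not the actual H5Part API) of an
# H5Part-style layout: one group per time step, each holding flat
# per-particle datasets inside a self-describing HDF5 file.
import numpy as np
import h5py

n_particles = 1000
with h5py.File("particles.h5part", "w") as f:
    for step in range(3):
        grp = f.create_group(f"Step#{step}")          # one group per step
        for name in ("x", "y", "z", "px", "py", "pz"):
            grp.create_dataset(name, data=np.random.rand(n_particles))

# Reading back is symmetric: any tool that understands HDF5 can inspect
# the file, e.g. f["Step#1/x"][...]
```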

  12. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of parallelization techniques and of a hybrid simulation model for the δf Monte Carlo transport method, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, the development of the transport code using HPF is reported. Optimization techniques used to achieve both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  13. LIAR: A COMPUTER PROGRAM FOR THE SIMULATION AND MODELING OF HIGH PERFORMANCE LINACS

    Adolphsen, Chris

    2003-01-01

    The computer program LIAR ('LInear Accelerator Research code') is a numerical simulation and tracking program for linear colliders. The LIAR project was started at SLAC in August 1995 in order to provide a computing and simulation tool that specifically addresses the needs of high energy linear colliders. LIAR is designed to be used for a variety of different linear accelerators. It has been applied to and checked against the existing Stanford Linear Collider (SLC) as well as the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS). The program includes wakefield effects, a 4D coupled beam description, specific optimization algorithms and other advanced features. We describe the most important concepts and highlights of the program. After having presented the LIAR program at the LINAC96 and PAC97 conferences, we now introduce it to the European particle accelerator community.

  14. C-STARS Baltimore Simulation Center Military Trauma Training Program: Training for High Performance Trauma Teams

    2013-09-19

    simulation room and intermittent access to conference and debriefing space. While the C-STARS program had priority for access to this space, it had to ... Input (if required): Skin cool to touch; Temp: 35.8 C; FAST positive for splenic injury; 500 mL blood in vac ... Correctly interpret radiography ... shoveling snow and the pain continued. He is moderately obese, does not exercise, uses EtOH frequently and has a 35-year history of tobacco use

  15. Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation

    2016-11-01

    ... models into future simulations of turbulent jet sprays and develop a predictive theory for comparison to measurements in the laboratory of turbulent diesel sprays.

  16. Simulation-Driven Development and Optimization of a High-Performance Six-Dimensional Wrist Force/Torque Sensor

    Qiaokang LIANG

    2010-05-01

    Full Text Available This paper describes the Simulation-Driven Development and Optimization (SDDO) of a six-dimensional force/torque sensor with high performance. Through the implementation of the SDDO, the developed sensor simultaneously possesses high sensitivity, linearity, stiffness and repeatability, which is hard to achieve for traditional force/torque sensors. The integrated approach provided by the ANSYS software was used to streamline and speed up the process chain and thereby to deliver results significantly faster than traditional approaches. The calibration experiments show impressive characteristics; therefore the developed force/torque sensor can be usefully employed in industry, and the design methods can also be applied to the development of industrial products.

  17. High Performance Computation of a Jet in Crossflow by Lattice Boltzmann Based Parallel Direct Numerical Simulation

    Jiang Lei

    2015-01-01

    Full Text Available Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction multiple thread) characteristic of the GPU matches the parallelism of the LBM well, which leads to the high efficiency of the GPU-based LBM solver. With the present GPU settings (6 Nvidia Tesla K20M), the present DNS simulation can be completed in several hours. A grid system of 1.5 × 10^8 is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set as 3.3. The jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures (the counter-rotating vortex pair (CRVP), shear-layer vortices and horseshoe vortices) are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities such as the Reynolds stresses are also displayed. Coherent structures are revealed at a very fine resolution based on the second invariant of the velocity gradients.
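
    For readers unfamiliar with the method, the sketch below shows a single D2Q9 lattice Boltzmann BGK collide-and-stream step in plain NumPy. It is a schematic serial illustration only, not the multi-GPU solver of the paper; the grid size, relaxation time and the periodic (boundary-free) streaming are assumptions.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([(0,0),(1,0),(0,1),(-1,0),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

nx, ny, tau = 200, 100, 0.6                    # grid size and relaxation time (assumed)
f = np.ones((9, nx, ny)) * w[:, None, None]    # start from rest (rho = 1, u = 0)

def equilibrium(rho, ux, uy):
    # Standard second-order D2Q9 equilibrium distribution
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

def step(f):
    # Macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision towards local equilibrium
    f += -(1.0 / tau) * (f - equilibrium(rho, ux, uy))
    # Streaming by periodic shifts (jet inflow/outflow boundaries omitted here)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

for _ in range(10):
    f = step(f)
```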

  18. Simulation and high performance computing-Building a predictive capability for fusion

    Strand, P.I.; Coelho, R.; Coster, D.; Eriksson, L.-G.; Imbeaux, F.; Guillerminet, Bernard

    2010-01-01

    The Integrated Tokamak Modelling Task Force (ITM-TF) is developing an infrastructure in which the validation needs, formulated in terms of multi-device data access and detailed physics comparisons aiming at the inclusion of synthetic diagnostics in the simulation chain, are key components. As the activity and the modelling tools are aimed at general use, although focused on ITER plasmas, a device-independent approach to data transport and a standardized approach to data management (data structures, naming, and access) are being developed in order to allow cross-validation between different fusion devices using a single toolset. Extensive work has already gone into, and is continuing to go into, the development of standardized descriptions of the data (Consistent Physical Objects). The longer-term aim is a complete simulation platform which is expected to last, and be extended in different ways, for the coming 30 years. The technical underpinning is therefore of vital importance. In particular, the platform needs to be extensible and open-ended in order to take full advantage not only of today's most advanced technologies but also of future developments. As a full, comprehensive prediction of ITER physics rapidly becomes expensive in terms of computing resources, the simulation framework needs to be able to use both grid and HPC computing facilities. Hence data access and code coupling technologies are required to be available for a heterogeneous, possibly distributed, environment. The developments in this area are pursued in a separate project, EUFORIA (EU Fusion for ITER Applications), which is providing about 15 professional person years (ppy) per annum from 14 different institutes. The range and size of the activity is not only technically challenging but is providing some unique management challenges, in that a large and geographically distributed team (a truly pan-European set of researchers) needs to be coordinated on a fairly detailed

  19. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) The Comprehensive Test Ban Treaty and (2) The Stockpile Life Extension Program which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve the confidence in the computational results by demonstration and documentation of the predictive capability of electrical circuit codes and the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan

  20. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  1. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  2. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    Mark James Abraham

    2015-09-01

    Full Text Available GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, and preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. The latest best-in-class compressed trajectory storage format is supported.

  3. Performance of space charge simulations using High Performance Computing (HPC) cluster

    Bartosik, Hannes; CERN. Geneva. ATS Department

    2017-01-01

    In 2016 a collaboration agreement between CERN and Istituto Nazionale di Fisica Nucleare (INFN) through its Centro Nazionale Analisi Fotogrammi (CNAF, Bologna) was signed [1], which foresaw the purchase and installation of a cluster of 20 nodes with 32 cores each, connected with InfiniBand, at CNAF for the use of CERN members to develop parallelized codes as well as conduct massive simulation campaigns with the already available parallelized tools. As outlined in [1], after the installation and the set up of the first 12 nodes, the green light to proceed with the procurement and installation of the next 8 nodes can be given only after successfully passing an acceptance test based on two specific benchmark runs. This condition is necessary to consider the first batch of the cluster operational and complying with the desired performance specifications. In this brief note, we report the results of the above mentioned acceptance test.

  4. Optical Characterization and Energy Simulation of Glazing for High-Performance Windows

    Jonsson, Andreas

    2010-01-01

    This thesis focuses on one important component of the energy system - the window. Windows are installed in buildings mainly to create visual contact with the surroundings and to let in daylight, and should also be heat and sound insulating. This thesis covers four important aspects of windows: antireflection and switchable coatings, energy simulations and optical measurements. Energy simulations have been used to compare different windows and also to estimate the performance of smart or switchable windows, whose transmittance can be regulated. The results from this thesis show the potential of the emerging technology of smart windows, not only from a daylight and an energy perspective, but also for comfort and well-being. The importance of a well functioning control system for such windows, is pointed out. To fulfill all requirements of modern windows, they often have two or more panes. Each glass surface leads to reflection of light and therefore less daylight is transmitted. It is therefore of interest to find ways to increase the transmittance. In this thesis antireflection coatings, similar to those found on eye-glasses and LCD screens, have been investigated. For large area applications such as windows, it is necessary to use techniques which can easily be adapted to large scale manufacturing at low cost. Such a technique is dip-coating in a sol-gel of porous silica. Antireflection coatings have been deposited on glass and plastic materials to study both visual and energy performance and it has been shown that antireflection coatings increase the transmittance of windows without negatively affecting the thermal insulation and the energy efficiency. Optical measurements are important for quantifying product properties for comparisons and evaluations. It is important that new measurement routines are simple and applicable to standard commercial instruments. Different systematic error sources for optical measurements of patterned light diffusing samples using

  5. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    One of the most common challenges in hydrodynamic modelling is the trade-off one must make between highly resolved simulations and the time required for their computation. In the particular case of urban floods, modelers are often forced to simplify the complex geometries of the problem, or to include some of its hydrodynamic effects implicitly, due to the typically very large spatial scales involved and limited computational resources. At CEris - Instituto Superior Técnico, Universidade de Lisboa - the STAV-2D shallow-water model, particularly suited for strong transient flows in complex and dynamic geometries, has been under development over the past few years (Canelas et al., 2013 & Conde et al., 2013). The model is based on an explicit, first-order 2DH finite-volume discretization scheme for unstructured triangular meshes, in which a flux-splitting technique is paired with a reviewed Roe-Riemann solver, yielding a model applicable to discontinuous flows over time-evolving geometries. STAV-2D features solid transport in both Eulerian and Lagrangian forms, with the former aimed at describing the transport of fine natural sediments and the latter at large individual debris. The model has been validated with theoretical solutions and laboratory experiments (Canelas et al., 2013 & Conde et al., 2015). This work presents our most recent effort in STAV-2D: the re-design of the code in a modern object-oriented parallel framework for heterogeneous computations on CPUs and GPUs. The programming language of choice for this re-design was C++, due to its wide support of established and emerging parallel programming interfaces. The current implementation of STAV-2D provides two different levels of parallel granularity: inter-node and intra-node. Inter-node parallelism is achieved by distributing a simulation across a set of worker nodes, with communication between nodes being explicitly managed through MPI. At this level, the main difficulty is associated with the

  6. A high performance computing framework for physics-based modeling and simulation of military ground vehicles

    Negrut, Dan; Lamb, David; Gorsich, David

    2011-06-01

    This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Five components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping, i.e., spatial subdivision, discussed above (pre-processing); (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution the components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to benefit from a two-order-of-magnitude gain in efficiency on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not imply its endorsement, recommendation, or favoring by the United States Army. The views and

  7. Applying GIS and high performance agent-based simulation for managing an Old World Screwworm fly invasion of Australia.

    Welch, M C; Kwan, P W; Sajeev, A S M

    2014-10-01

    Agent-based modelling has proven to be a promising approach for developing rich simulations of complex phenomena that provide decision support functions across a broad range of areas, including the biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national-scale, agent-based simulation of an incursion of Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of the massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries. Copyright © 2014 International Atomic Energy Agency. Published by Elsevier B.V. All rights reserved.
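
    The following Python sketch illustrates, in a very reduced form, what a stochastic lifecycle-and-dispersal step for such an agent population can look like. All stage structure, survival rates and dispersal distances are placeholder assumptions, not OWS fly biology, and the implementation is serial NumPy rather than CUDA.

```python
# An illustrative, much-simplified stochastic lifecycle and dispersal step for
# a population of fly agents; all parameters below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
stage = np.zeros(n, dtype=int)            # 0 egg, 1 larva, 2 pupa, 3 adult
x = rng.uniform(0, 100.0, n)              # positions on a hypothetical 100 km grid
y = rng.uniform(0, 100.0, n)

survival = np.array([0.8, 0.85, 0.9, 0.95])   # daily survival per stage (assumed)
advance_p = np.array([0.3, 0.2, 0.25, 0.0])   # daily chance to reach the next stage

def daily_step(stage, x, y):
    alive = rng.random(stage.size) < survival[stage]          # stochastic survival
    stage, x, y = stage[alive], x[alive], y[alive]
    promote = rng.random(stage.size) < advance_p[stage]       # stage progression
    stage = np.minimum(stage + promote, 3)
    adults = stage == 3                                        # only adults disperse
    x[adults] += rng.normal(0.0, 2.0, adults.sum())            # ~2 km/day spread (assumed)
    y[adults] += rng.normal(0.0, 2.0, adults.sum())
    return stage, x, y

for day in range(30):
    stage, x, y = daily_step(stage, x, y)
print("agents alive after 30 days:", stage.size)
```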

  8. Studying Scientific Discovery by Computer Simulation.

    1983-03-30

    Mendel's laws of inheritance, the law of Gay-Lussac for gaseous reactions, the law of Dulong and Petit, the derivation of atomic weights by Avogadro ... Keywords: scientific discovery; intrinsic properties; physical laws; extensive terms; data-driven heuristics; intensive terms; theory-driven heuristics; conservation laws ... Scientific discovery

  9. Numerical simulation of turbulent combustion: Scientific challenges

    Ren, ZhuYin; Lu, Zhen; Hou, LingYun; Lu, LiuYan

    2014-08-01

    Predictive simulation of engine combustion is key to understanding the underlying complicated physicochemical processes, improving engine performance, and reducing pollutant emissions. Critical issues such as turbulence modeling, turbulence-chemistry interaction, and the accommodation of detailed chemical kinetics in complex flows remain challenging and essential for high-fidelity combustion simulation. This paper reviews the current status of the state-of-the-art large eddy simulation (LES)/probability density function (PDF)/detailed chemistry approach that can address these three challenging modelling issues. The PDF method as a subgrid model for LES is formulated and the hybrid mesh-particle method for LES/PDF simulations is described. Then the development needs in micro-mixing models for PDF simulations of turbulent premixed combustion are identified. Finally, the different acceleration methods for detailed chemistry are reviewed and a combined strategy is proposed for further development.

  10. Cellulose Nanocrystal Templated Graphene Nanoscrolls for High Performance Supercapacitors and Hydrogen Storage: An Experimental and Molecular Simulation Study.

    Dhar, Prodyut; Gaur, Surendra Singh; Kumar, Amit; Katiyar, Vimal

    2018-03-01

    Graphene nanoscrolls (GNS), due to their remarkably interesting properties, have attracted significant interest, with applications in various engineering sectors. However, the uncontrolled morphologies, poor yield and low quality of GNS produced through traditional routes are the major associated challenges. We demonstrate a sustainable approach of utilizing bio-derived cellulose nanocrystals (CNCs) as templates for the fabrication of GNS with tunable morphological dimensions ranging from the micron to the nanoscale (controlled length 1 μm), along with encapsulation of catalytically active metallic species in the scroll interlayers. The surface-modified magnetic CNCs act as structure-directing agents which provide enough momentum to initiate the self-scrolling of graphene through van der Waals forces and π-π interactions, the mechanism of which is demonstrated through experimental and molecular simulation studies. The proposed approach to GNS fabrication provides the flexibility to tune the physico-chemical properties of GNS by simply varying the interlayer spacing, scrolling density and fraction of encapsulated metallic nanoparticles. The hybrid GNS with confined palladium or platinum nanoparticles (at a low loading of ~1 wt.%) shows enhanced hydrogen storage capacity (~0.2 wt.% at ~20 bar and ~273 K) and excellent supercapacitance behavior (~223-357 F/g) over prolonged cycles (retention ~93.5-96.4% at ~10000 cycles). The current strategy of utilizing bio-based templates can be further extended to incorporate complex architectures or nanomaterials in the GNS core or interlayers, which will potentially broaden its applications in the fabrication of high-performance devices.

  11. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Dębski Roman

    2014-09-01

    Full Text Available Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example this is the minimum-time ski line, represented as a piecewise-linear function (a method for the elimination of unfeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity). Possible extensions of the basic algorithm are also described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy and cost efficient (which is crucial in embedded systems, e.g., trajectory planners of autonomous robots). The presented algorithms can be applied to many trajectory optimization problems, including those having a black-box represented performance measure.
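
    A minimal Python sketch of the underlying grid-graph dynamic-programming idea follows. The segment cost used here (Euclidean length divided by a constant speed) is a placeholder for the simulation-based time measure of the paper, and the layer and lateral discretisation values are assumed.

```python
# The course is discretised into layers, each layer into lateral positions,
# and the minimum-cost piecewise-linear trajectory is found by dynamic programming.
import numpy as np

n_layers, n_lateral = 20, 31
layer_spacing, lateral_spacing, speed = 10.0, 1.0, 15.0    # metres, m/s (assumed)

def segment_time(j_from, j_to):
    dy = (j_to - j_from) * lateral_spacing
    return np.hypot(layer_spacing, dy) / speed

cost = np.full(n_lateral, np.inf)          # best time to reach each node of a layer
cost[n_lateral // 2] = 0.0                 # start in the middle of the first layer
parent = np.zeros((n_layers, n_lateral), dtype=int)

for layer in range(1, n_layers):
    new_cost = np.full(n_lateral, np.inf)
    for j_to in range(n_lateral):
        for j_from in range(n_lateral):
            t = cost[j_from] + segment_time(j_from, j_to)
            if t < new_cost[j_to]:
                new_cost[j_to], parent[layer, j_to] = t, j_from
    cost = new_cost

# Backtrack the approximate global optimum (the piecewise-linear line)
j = int(np.argmin(cost))
line = [j]
for layer in range(n_layers - 1, 0, -1):
    j = parent[layer, j]
    line.append(j)
print("minimum time:", cost.min(), "line:", line[::-1])
```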

  12. A Mesoscopic Simulation for the Early-Age Shrinkage Cracking Process of High Performance Concrete in Bridge Engineering

    Guodong Li

    2017-01-01

    Full Text Available On a mesoscopic level, high performance concrete (HPC) was assumed to be a heterogeneous composite material consisting of aggregates, mortar, and pores. A concrete mesoscopic structure model was established based on CT image reconstruction. By combining this model with continuum mechanics, damage mechanics, and fracture mechanics, a relatively complete system for concrete mesoscopic mechanics analysis was established to simulate the process of early-age shrinkage cracking in HPC. This process was based on the dispersion crack model. The results indicated that the interface between the aggregate and the mortar was the point at which shrinkage cracks in HPC initiated. The locations of early-age shrinkage cracks in HPC were associated with the spacing and size of the aggregate particles. However, the magnitude of the shrinkage deformation of the mortar was related to the extent of concrete cracking and was independent of the crack position. Whereas lower water-to-cement ratios can improve the early strength of concrete, this ratio cannot control early-age shrinkage cracks in HPC.

  13. Software quality and process improvement in scientific simulation codes

    Ambrosiano, J.; Webster, R. [Los Alamos National Lab., NM (United States)]

    1997-11-01

    This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.

  14. Cray XT4: An Early Evaluation for Petascale Scientific Simulation

    Alam, Sadaf R.; Barrett, Richard F.; Fahey, Mark R.; Kuehn, Jeffery A.; Sankaran, Ramanan; Worley, Patrick H.; Larkin, Jeffrey M.

    2007-01-01

    The scientific simulation capabilities of next generation high-end computing technology will depend on striking a balance among memory, processor, I/O, and local and global network performance across the breadth of the scientific simulation space. The Cray XT4 combines commodity AMD dual core Opteron processor technology with the second generation of Cray's custom communication accelerator in a system design whose balance is claimed to be driven by the demands of scientific simulation. This paper presents an evaluation of the Cray XT4 using microbenchmarks to develop a controlled understanding of individual system components, providing the context for analyzing and comprehending the performance of several petascale-ready applications. Results gathered from several strategic application domains are compared with observations on the previous generation Cray XT3 and other high-end computing systems, demonstrating performance improvements across a wide variety of application benchmark problems.

  15. Scientific computing and algorithms in industrial simulations projects and products of Fraunhofer SCAI

    Schüller, Anton; Schweitzer, Marc

    2017-01-01

    The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for our partners. SCAI develops numerical techniques, parallel algorithms and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art metho...

  16. Arx: a toolset for the efficient simulation and direct synthesis of high-performance signal processing algorithms

    Hofstra, K.L.; Gerez, Sabih H.

    2007-01-01

    This paper addresses the efficient implementation of high-performance signal-processing algorithms. In the early stages of such designs many computation-intensive simulations may be necessary. This calls for hardware description formalisms targeted for efficient simulation (such as the programming

  17. Computer simulations and the changing face of scientific experimentation

    Duran, Juan M

    2013-01-01

    Computer simulations have become a central tool for scientific practice. Their use has replaced, in many cases, standard experimental procedures. This is to say nothing of cases where the target system is empirical but there are no techniques for direct manipulation of the system, such as astronomical observation. In these cases, computer simulations have proved to be of central importance. The question about their use and implementation, therefore, is not only a technical one but represents a challenge for the humanities as well. In this volume, scientists, historians, and philosophers joi

  18. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework: Preprint

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Top, Philip [Lawrence Livermore National Laboratories; Smith, Steve [Lawrence Livermore National Laboratories; Daily, Jeff [Pacific Northwest National Laboratory; Fuller, Jason [Pacific Northwest National Laboratory

    2017-09-12

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.

  19. Comparison of turbulence measurements from DIII-D low-mode and high-performance plasmas to turbulence simulations and models

    Rhodes, T.L.; Leboeuf, J.-N.; Sydora, R.D.; Groebner, R.J.; Doyle, E.J.; McKee, G.R.; Peebles, W.A.; Rettig, C.L.; Zeng, L.; Wang, G.

    2002-01-01

    Measured turbulence characteristics (correlation lengths, spectra, etc.) in low-confinement (L-mode) and high-performance plasmas in the DIII-D tokamak [Luxon et al., Proceedings Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] show many similarities with the characteristics determined from turbulence simulations. Radial correlation lengths Δr of density fluctuations from L-mode discharges are found to be numerically similar to the ion poloidal gyroradius ρ_θ,s, or 5-10 times the ion gyroradius ρ_s, over the radial region 0.2 < r/a. To determine whether Δr scales as ρ_θ,s or as 5-10 times ρ_s, an experiment was performed which modified ρ_θ,s while keeping other plasma parameters approximately fixed. It was found that the experimental Δr did not scale as ρ_θ,s, which was similar to low-resolution UCAN simulations. Finally, both experimental measurements and gyrokinetic simulations indicate a significant reduction in the radial correlation length from high-performance quiescent double barrier discharges, as compared to normal L-mode, consistent with reduced transport in these high-performance plasmas

  20. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All of these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
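
    The dataflow idea can be illustrated with a toy Python example: independent simulation tasks are submitted, and a combining step depends on all of them, so the scheduler can run the independent parts with maximal parallelism. This is not the Copernicus API; the task bodies and the thread-pool executor are stand-ins.

```python
# A toy sketch of the dataflow idea (not the Copernicus API): each task
# declares its inputs, and anything whose inputs are ready runs in parallel.
from concurrent.futures import ThreadPoolExecutor
import random, time

def run_simulation(seed):
    time.sleep(0.1)                       # stand-in for an MD simulation
    return random.Random(seed).gauss(0.0, 1.0)

def combine(results):
    return sum(results) / len(results)    # e.g. an ensemble-averaged observable

with ThreadPoolExecutor(max_workers=8) as pool:
    # The "dataflow": 16 independent simulations, then one combine step that
    # depends on all of them.  Because the dependencies are explicit, the
    # independent parts are scheduled with maximal parallelism automatically.
    sim_futures = [pool.submit(run_simulation, seed) for seed in range(16)]
    observable = combine([f.result() for f in sim_futures])

print("ensemble-averaged observable:", observable)
```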

  1. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, the practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met at the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.
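
    For orientation, the sketch below shows the kind of genetic-algorithm baseline that NIMS is compared against, applied to a deliberately simplified surrogate problem: choose a treatment level per subwatershed to meet a single load target at minimum cost. The linear load model, costs and GA settings are all illustrative assumptions, not the LSPC watershed model or the actual study configuration.

```python
# Schematic GA on a made-up linear surrogate: pick a treatment level per
# subwatershed to meet a load target at minimum cost.  All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_levels = 28, 5
unit_cost = rng.uniform(0.5, 3.0, n_sub)          # cost per treatment level (assumed)
removal = rng.uniform(5.0, 20.0, n_sub)           # load removed per level (assumed)
baseline_load, target_load = removal.sum() * 4.0, removal.sum() * 2.5

def fitness(pop):
    load = baseline_load - (pop * removal).sum(axis=1)
    cost = (pop * unit_cost).sum(axis=1)
    penalty = np.maximum(load - target_load, 0.0) * 100.0   # infeasibility penalty
    return cost + penalty

pop = rng.integers(0, n_levels, size=(200, n_sub))
for generation in range(300):
    f = fitness(pop)
    parents = pop[np.argsort(f)[:100]]                       # truncation selection
    cut = rng.integers(1, n_sub, 100)
    children = np.where(np.arange(n_sub) < cut[:, None],
                        parents, parents[::-1])               # one-point crossover
    mutate = rng.random(children.shape) < 0.02
    children[mutate] = rng.integers(0, n_levels, mutate.sum())
    pop = np.vstack([parents, children])

best = pop[np.argmin(fitness(pop))]
print("best cost found:", (best * unit_cost).sum())
```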

  2. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to

  3. High performance shallow water kernels for parallel overland flow simulations based on FullSWOF2D

    Wittmann, Roland

    2017-01-25

    We describe code optimization and parallelization procedures applied to the sequential overland flow solver FullSWOF2D. Major difficulties when simulating overland flows arise from dealing with high-resolution datasets of large-scale areas, which cannot be computed on a single node either due to the limited amount of memory or due to too many (time step) iterations resulting from the CFL condition. We address these issues in terms of two major contributions. First, we demonstrate a generic step-by-step transformation of the second-order finite volume scheme in FullSWOF2D towards MPI parallelization. Second, the computational kernels are optimized by the use of templates and a portable vectorization approach. We discuss the load imbalance of the flux computation due to dry and wet cells and propose a solution using an efficient cell-counting approach. Finally, scalability results are shown for different test scenarios along with a flood simulation benchmark using the Shaheen II supercomputer.
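
    The wet-cell counting idea for mitigating the dry/wet load imbalance can be sketched as follows: instead of giving each rank the same number of rows, split the domain so that each rank receives roughly the same number of wet (computationally active) cells. The row-strip decomposition and the random wet/dry mask below are illustrative assumptions, not the FullSWOF2D implementation.

```python
# Partition the domain into row strips so that each MPI rank gets roughly the
# same number of wet cells, rather than the same number of rows.
import numpy as np

rng = np.random.default_rng(0)
wet = rng.random((1200, 800)) < 0.3        # hypothetical wet/dry mask of the domain

def balanced_row_partition(wet_mask, n_ranks):
    wet_per_row = wet_mask.sum(axis=1)
    cumulative = np.cumsum(wet_per_row)
    total = cumulative[-1]
    # Row index at which each rank's share of the total wet-cell work is reached
    targets = total * np.arange(1, n_ranks) / n_ranks
    splits = np.searchsorted(cumulative, targets)
    return np.split(np.arange(wet_mask.shape[0]), splits)

parts = balanced_row_partition(wet, n_ranks=8)
print([int(wet[rows].sum()) for rows in parts])   # wet cells per rank, roughly equal
```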

  4. High performance pseudo-analytical simulation of multi-object adaptive optics over multi-GPU systems

    Abdelfattah, Ahmad; Gendron, Éric; Gratadour, Damien; Keyes, David E.; Ltaief, Hatem; Sevin, Arnaud; Vidal, Fabrice

    2014-01-01

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique dedicated to the special case of wide-field multi-object spectrographs (MOS). It applies dedicated wavefront corrections to numerous independent tiny patches spread over a large field of view (FOV). The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. The output of this study helps the design of a new instrument called MOSAIC, a multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT). We have developed a novel hybrid pseudo-analytical simulation scheme that allows us to accurately simulate in detail the tomographic problem. The main challenge resides in the computation of the tomographic reconstructor, which involves pseudo-inversion of a large dense symmetric matrix. The pseudo-inverse is computed using an eigenvalue decomposition, based on the divide and conquer algorithm, on multicore systems with multi-GPUs. Thanks to a new symmetric matrix-vector product (SYMV) multi-GPU kernel, our overall implementation scores significant speedups over standard numerical libraries on multicore, like Intel MKL, and up to 60% speedups over the standard MAGMA implementation on 8 Kepler K20c GPUs. At 40,000 unknowns, this appears to be the largest-scale tomographic AO matrix solver submitted to computation, to date, to our knowledge and opens new research directions for extreme scale AO simulations. © 2014 Springer International Publishing Switzerland.
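
    The core linear-algebra step described above, the pseudo-inverse of a large dense symmetric matrix via an eigenvalue decomposition, can be sketched in a few lines of NumPy. The paper performs this with a divide-and-conquer eigensolver distributed over multiple GPUs; here np.linalg.eigh and the matrix size are stand-ins.

```python
# Serial NumPy sketch: pseudo-inverse of a dense symmetric matrix via its
# eigenvalue decomposition (a stand-in for the multi-GPU divide-and-conquer
# eigensolver used in the paper).
import numpy as np

def symmetric_pinv(A, rcond=1e-12):
    # Eigendecomposition of the symmetric matrix: A = V diag(w) V^T
    w, V = np.linalg.eigh(A)
    # Invert only the numerically significant eigenvalues
    keep = np.abs(w) > rcond * np.abs(w).max()
    w_inv = np.where(keep, 1.0 / np.where(keep, w, 1.0), 0.0)
    return (V * w_inv) @ V.T

rng = np.random.default_rng(0)
B = rng.standard_normal((500, 400))
A = B @ B.T                      # dense, symmetric, rank-deficient test matrix
R = symmetric_pinv(A)            # reconstructor-like pseudo-inverse
print(np.allclose(A @ R @ A, A, atol=1e-6))   # pseudo-inverse property holds
```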

  5. Development and testing of high-performance fuel pin simulators for boiling experiments in liquid metal flow

    Casal, V.

    1976-01-01

    There are unknown phenomena concerning local and integral boiling events in the core of sodium-cooled fast breeder reactors. Therefore, out-of-pile boiling experiments have been performed at GfK using electrically heated dummies of fuel element bundles. The success of these tests and the amount of information derived from them depend exclusively on the successful simulation of the fuel pins by electrically heated rods with regard to the essential physical properties. The report deals with the development and testing of heater rods for sodium boiling experiments in bundles including up to 91 heated pins.

  6. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Ernesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than

  7. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
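
    A one-dimensional sketch of the staggered-grid velocity-pressure formulation mentioned above is given below for orientation. The production solver is three-dimensional, higher order and massively parallel; the material values, source wavelet and grid parameters here are illustrative assumptions.

```python
# 1-D staggered-grid velocity-pressure acoustic finite-difference sketch.
import numpy as np

nx, dx, dt, nt = 2000, 10.0, 1e-3, 1500
rho, vp = 1000.0, 2000.0                 # density (kg/m^3) and P-wave speed (m/s), assumed
kappa = rho * vp**2                      # bulk modulus

p = np.zeros(nx)                         # pressure at integer grid points
v = np.zeros(nx - 1)                     # particle velocity at half-grid points

for it in range(nt):
    # Update velocity from the pressure gradient (staggered in space and time)
    v -= dt / rho * np.diff(p) / dx
    # Update pressure from the velocity divergence
    p[1:-1] -= dt * kappa * np.diff(v) / dx
    # Inject a 25 Hz Ricker-like source at the centre of the model
    t = it * dt
    arg = (np.pi * 25.0 * (t - 0.04))**2
    p[nx // 2] += (1.0 - 2.0 * arg) * np.exp(-arg)

print("peak pressure after", nt, "steps:", p.max())
```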

  8. Implementation of a Monte Carlo simulation environment for fully 3D PET on a high-performance parallel platform

    Zaidi, H; Morel, Christian

    1998-01-01

    This paper describes the implementation of the Eidolon Monte Carlo program designed to simulate fully three-dimensional (3D) cylindrical positron tomographs on a MIMD parallel architecture. The original code was written in Objective-C and developed under the NeXTSTEP development environment. The different steps involved in porting the software to a parallel architecture based on PowerPC 604 processors running under AIX 4.1 are presented. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are described. The computing time decreased linearly with the number of computing nodes. The improved time performance resulting from the parallelisation of the Monte Carlo calculations makes it an attractive tool for modelling photon transport in 3D positron tomography. The parallelisation paradigm used in this work is independent of the chosen parallel architecture
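
    Because photon histories are statistically independent, the parallelisation strategy amounts to distributing batches of histories over computing nodes, which is why the computing time scales with the number of nodes. The Python sketch below illustrates this with a trivial placeholder "detector acceptance" test and multiprocessing workers; it is not the Eidolon code.

```python
# Toy illustration of distributing independent photon-history batches over
# worker processes; the acceptance test is a placeholder, not a PET model.
import numpy as np
from multiprocessing import Pool

def simulate_batch(args):
    seed, n_photons = args
    rng = np.random.default_rng(seed)
    # Emit photons isotropically and count those within a crude axial acceptance.
    cos_theta = rng.uniform(-1.0, 1.0, n_photons)
    accepted = np.abs(cos_theta) < 0.3
    return int(accepted.sum())

if __name__ == "__main__":
    batches = [(seed, 1_000_000) for seed in range(16)]
    with Pool(processes=8) as pool:          # more workers -> proportionally less wall time
        counts = pool.map(simulate_batch, batches)
    print("detected fraction:", sum(counts) / (16 * 1_000_000))
```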

  9. Development and verification of a high performance multi-group SP3 transport capability in the ARTEMIS core simulator

    Van Geemert, Rene

    2008-01-01

    For the satisfaction of future global customer needs, dedicated efforts are being coordinated internationally and pursued continuously at AREVA NP. The currently ongoing CONVERGENCE project is committed to the development of the ARCADIA® next generation core simulation software package. ARCADIA® will be put to global use by all AREVA NP business regions, for the entire spectrum of core design processes, licensing computations and safety studies. As part of the currently ongoing trend towards more sophisticated neutronics methodologies, an SP3 nodal transport concept has been developed for ARTEMIS, which is the steady-state and transient core simulation part of ARCADIA®. To enable high computational performance, the SPN calculations are accelerated by applying multi-level coarse mesh re-balancing. In the current implementation, SP3 is about 1.4 times as expensive computationally as SP1 (diffusion). The developed SP3 solution concept is foreseen as the future computational workhorse for many-group 3D pin-by-pin full core computations by ARCADIA®. With the entire numerical workload being highly parallelizable through domain decomposition techniques, the associated CPU-time requirements that adhere to the efficiency needs of the nuclear industry can be expected to become feasible in the near future. The accuracy enhancement obtainable by using SP3 instead of SP1 has been verified by a detailed comparison of ARTEMIS 16-group pin-by-pin SPN results with KAERI's DeCart reference results for the 2D pin-by-pin Purdue UO2/MOX benchmark. This article presents the accuracy enhancement verification and quantifies the achieved ARTEMIS-SP3 computational performance for a number of 2D and 3D multi-group and multi-box (up to pin-by-pin) core computations. (authors)

  10. Computer simulation analysis on the machinability of alumina dispersion enforced copper alloy for high performance compact heat exchanger

    Ishiyama, Shintaro; Muto, Yasushi

    2001-01-01

    A feasibility study of an HTGR-GT (High Temperature Gas cooled Reactor-Gas Turbine) system is examining the application of the high-strength, high-thermal-conductivity alumina dispersion-strengthened copper alloy AL-25 to the ultra-fine rectangular plate fins of the system's recuperator. It is, however, very difficult to manufacture such ultra-fine fins by large-scale plastic deformation of the hard and brittle AL-25 foil. Therefore, in the present study, to establish a fine-fin manufacturing technology for AL-25 foil, the forming process was first simulated by large-scale elasto-plastic finite element analysis (FEM) and a forming limit was estimated. Manufacturing equipment implementing the new process suggested by these analytical results was then built, and manufacturing experiments on AL-25 foil were carried out. The following conclusions were obtained. (1) The forming of a fine rectangular fin (fin height x pitch x thickness = 3 mm x 4 mm x 0.156 mm) from AL-25 foil (thickness 0.156 mm) was simulated by large-scale elasto-plastic FEM using the double-action processing method, and manufacturing of such a fin was found to be possible under suitable conditions; by building trial double-action processing equipment and carrying out manufacturing tests with it, an R-part radius of 0.8 mm and a die clearance of 0.25 mm were found to be the best values. (2) A fine rectangular fin with height x pitch x thickness of 3 mm x 4 mm x (0.156 mm ± 0.001 mm) was successfully manufactured from the AL-25 foil. (3) The deformation process and the thickness change during forming of the AL-25 foil estimated by the large-scale elasto-plastic FEM showed good agreement with the results of the forming experiments.

  11. Scientific and Computational Challenges of the Fusion Simulation Program (FSP)

    Tang, William M.

    2011-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) - a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  12. Scientific and computational challenges of the fusion simulation program (FSP)

    Tang, William M.

    2011-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Program (FSP) - a major national initiative in the United States with the primary objective being to enable scientific discovery of important new plasma phenomena with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically-confined fusion plasmas that are properly validated against experiments in regimes relevant for producing practical fusion energy. It is expected to provide a suite of advanced modeling tools for reliably predicting fusion device behavior with comprehensive and targeted science-based simulations of nonlinearly-coupled phenomena in the core plasma, edge plasma, and wall region on time and space scales required for fusion energy production. As such, it will strive to embody the most current theoretical and experimental understanding of magnetic fusion plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing the ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices with high physics fidelity on all relevant time and space scales. From a computational perspective, this will demand computing resources in the petascale range and beyond together with the associated multi-core algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative experiment involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics modeling projects (e

  13. Enabling high performance computational science through combinatorial algorithms

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  14. Enabling high performance computational science through combinatorial algorithms

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.
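
    Parallel graph coloring, one of the CSCAPES topics above, builds on the sequential greedy kernel sketched below; parallel variants color blocks of vertices speculatively and then resolve conflicts, but share this basic step. This is a minimal illustration only, not the CSCAPES software.

```python
# Minimal greedy graph coloring sketch: assign to each vertex the smallest
# color not used by its already-colored neighbors.
def greedy_coloring(adjacency):
    """adjacency: dict mapping vertex -> iterable of neighbor vertices."""
    color = {}
    for v in adjacency:
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:            # smallest color absent among neighbors
            c += 1
        color[v] = c
    return color

# toy 5-cycle: an odd cycle needs 3 colors
graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_coloring(graph))
```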

  15. Viscoelastic Waves Simulation in a Blocky Medium with Fluid-Saturated Interlayers Using High-Performance Computing

    Sadovskii, Vladimir; Sadovskaya, Oxana

    2017-04-01

    A thermodynamically consistent approach to the description of linear and nonlinear wave processes in a blocky medium, which consists of a large number of elastic blocks interacting with each other via pliant interlayers, is proposed. The mechanical properties of the interlayers are defined by means of rheological schemes of different levels of complexity. Elastic interaction between the blocks is considered in the framework of the linear elasticity theory [1]. The effects of viscoelastic shear in the interblock interlayers are taken into consideration using the Poynting-Thomson rheological scheme. The model of an elastic porous material is used in the interlayers, where the pores collapse if an abrupt compressive stress is applied. On the basis of the Biot equations for a fluid-saturated porous medium, a new mathematical model of a blocky medium is worked out, in which the interlayers provide a convective fluid motion due to the external perturbations. The collapse of pores is modeled within the generalized rheological approach, wherein the mechanical properties of a material are simulated using four rheological elements. Three of them are the traditional elastic, viscous and plastic elements; the fourth element is the so-called rigid contact [2], which is used to describe the behavior of materials with different resistance to tension and compression. Thermodynamic consistency of the equations in the interlayers with the equations in the blocks guarantees fulfillment of the energy conservation law for the blocky medium as a whole, i.e. the kinetic and potential energy of the system is the sum of the kinetic and potential energies of the blocks and interlayers. As a result of the discretization of the equations of the model, a robust computational algorithm is constructed, which is stable thanks to the thermodynamic consistency of the finite difference equations at the discrete level. The splitting method with respect to the spatial variables and the Godunov discontinuity decay scheme are used in the blocks, the
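
    The Poynting-Thomson (standard linear solid) scheme used for the viscoelastic shear in the interlayers can be illustrated with a scalar stress update: an elastic spring in parallel with a Maxwell element, integrated with an exponential update. The moduli, relaxation time and strain rate below are arbitrary illustration values, not interlayer data from the study.

```python
import numpy as np

# Scalar sketch of a Poynting-Thomson (standard linear solid) stress update:
# a spring E_inf in parallel with a Maxwell element (stiffness E1, relaxation
# time tau).  All numbers are arbitrary illustration values.
E_inf, E1, tau = 1.0e9, 4.0e9, 1.0e-3     # Pa, Pa, s (assumed)
dt, nt = 1.0e-5, 2000
eps_rate = 1.0e-2                          # constant applied strain rate [1/s]

eps, sig_v = 0.0, 0.0
for n in range(nt):
    deps = eps_rate * dt
    eps += deps
    # exponential (midpoint) update of the Maxwell overstress:
    # d(sig_v)/dt = E1 * d(eps)/dt - sig_v / tau
    sig_v = sig_v * np.exp(-dt / tau) + E1 * deps * np.exp(-0.5 * dt / tau)

sigma = E_inf * eps + sig_v                # total stress of the parallel assembly
print("stress after %.1f ms: %.3e Pa" % (nt * dt * 1e3, sigma))
```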

  16. Scientific and computational challenges of the fusion simulation project (FSP)

    Tang, W M

    2008-01-01

    This paper highlights the scientific and computational challenges facing the Fusion Simulation Project (FSP). The primary objective is to develop advanced software designed to use leadership-class computers for carrying out multiscale physics simulations to provide information vital to delivering a realistic integrated fusion simulation model with unprecedented physics fidelity. This multiphysics capability will be unprecedented in that in the current FES applications domain, the largest-scale codes are used to carry out first-principles simulations of mostly individual phenomena in realistic 3D geometry while the integrated models are much smaller-scale, lower-dimensionality codes with significant empirical elements used for modeling and designing experiments. The FSP is expected to be the most up-to-date embodiment of the theoretical and experimental understanding of magnetically confined thermonuclear plasmas and to provide a living framework for the simulation of such plasmas as the associated physics understanding continues to advance over the next several decades. Substantive progress on answering the outstanding scientific questions in the field will drive the FSP toward its ultimate goal of developing a reliable ability to predict the behavior of plasma discharges in toroidal magnetic fusion devices on all relevant time and space scales. From a computational perspective, the fusion energy science application goal to produce high-fidelity, whole-device modeling capabilities will demand computing resources in the petascale range and beyond, together with the associated multicore algorithmic formulation needed to address burning plasma issues relevant to ITER - a multibillion dollar collaborative device involving seven international partners representing over half the world's population. Even more powerful exascale platforms will be needed to meet the future challenges of designing a demonstration fusion reactor (DEMO). Analogous to other major applied physics

  17. High performance computing applied to simulation of the flow in pipes; Computacao de alto desempenho aplicada a simulacao de escoamento em dutos

    Cozin, Cristiane; Lueders, Ricardo; Morales, Rigoberto E.M. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Dept. de Engenharia Mecanica

    2008-07-01

    In recent years, computer clusters have emerged as a real alternative for solving problems that require high performance computing, which has driven the development of new applications. Among them, flow simulation represents a real computational burden, especially for large systems. This work presents a study of the use of parallel computing for numerical fluid flow simulation in pipelines. A mathematical flow model is solved numerically; in general, this procedure leads to a tridiagonal system of equations suitable for solution by a parallel algorithm. In this work, this is accomplished by a parallel odd-even reduction method found in the literature, implemented in the Fortran programming language. A computational platform composed of twelve processors was used. Measurements of CPU times for different tridiagonal system sizes and numbers of processors were obtained, highlighting the communication time between processors as an important issue to be considered when evaluating the performance of parallel applications. (author)
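
    The odd-even (cyclic) reduction idea referred to above can be sketched as follows: even-indexed unknowns are retained by eliminating their odd-indexed neighbours, and each reduced equation is formed independently, which is what makes the method parallelizable. In this sketch the half-size system is solved directly rather than recursively, the coefficients are illustrative, and NumPy stands in for the cited Fortran implementation.

```python
import numpy as np

# One odd-even (cyclic) reduction step for a tridiagonal system
#   a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],  a[0] = c[-1] = 0.
def solve_tridiag_odd_even(a, b, c, d):
    n = len(b)
    keep = np.arange(0, n, 2)                      # even-indexed unknowns kept
    im, ip = np.clip(keep - 1, 0, n - 1), np.clip(keep + 1, 0, n - 1)
    # multipliers that eliminate the odd-indexed neighbours
    alpha = np.where(keep > 0, a[keep] / b[im], 0.0)
    gamma = np.where(keep < n - 1, c[keep] / b[ip], 0.0)
    ra = -alpha * a[im]                            # couples to x[i-2]
    rb = b[keep] - alpha * c[im] - gamma * a[ip]
    rc = -gamma * c[ip]                            # couples to x[i+2]
    rd = d[keep] - alpha * d[im] - gamma * d[ip]
    # dense solve of the reduced half-size tridiagonal system (for brevity)
    A_red = np.diag(rb) + np.diag(ra[1:], -1) + np.diag(rc[:-1], 1)
    x = np.zeros(n)
    x[keep] = np.linalg.solve(A_red, rd)
    # back-substitute the eliminated odd-indexed unknowns
    odd = np.arange(1, n, 2)
    right = np.where(odd + 1 < n, x[np.clip(odd + 1, 0, n - 1)], 0.0)
    x[odd] = (d[odd] - a[odd] * x[odd - 1] - c[odd] * right) / b[odd]
    return x

# small illustrative diagonally dominant system
n = 9
a = np.full(n, -1.0); a[0] = 0.0
c = np.full(n, -1.0); c[-1] = 0.0
b = np.full(n, 4.0)
d = np.arange(1.0, n + 1.0)
x = solve_tridiag_odd_even(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print("residual check:", np.allclose(A @ x, d))
```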

  18. Development of a common data model for scientific simulations

    Ambrosiano, J. [Los Alamos National Lab., NM (United States); Butler, D.M. [Limit Point Systems, Inc. (United States); Matarazzo, C.; Miller, M. [Lawrence Livermore National Lab., CA (United States); Schoof, L. [Sandia National Lab., Albuquerque, NM (United States)

    1999-06-01

    The problem of sharing data among scientific simulation models is a difficult and persistent one. Computational scientists employ an enormous variety of discrete approximations in modeling physical processes on computers. Problems occur when models based on different representations are required to exchange data with one another, or with some other software package. Within the DOE's Accelerated Strategic Computing Initiative (ASCI), a cross-disciplinary group called the Data Models and Formats (DMF) group has been working to develop a common data model. The current model comprises several layers of increasing semantic complexity. One of these layers is an abstract model based on set theory and topology called the fiber bundle kernel (FBK). This layer provides the flexibility needed to describe a wide range of mesh-approximated functions as well as other entities. This paper briefly describes the ASCI common data model, its mathematical basis, and ASCI prototype development. These prototypes include an object-oriented data management library developed at Los Alamos called the Common Data Model Library or CDMlib, the Vector Bundle API from the Lawrence Livermore Laboratory, and the DMF API from Sandia National Laboratory.

  19. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. ... Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy ...

  20. A predictive analytic model for high-performance tunneling field-effect transistors approaching non-equilibrium Green's function simulations

    Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard

    2015-01-01

    A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that provides deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore quantitatively how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact the TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.
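
    The role of the elliptic complex-band curvature can be illustrated with a generic WKB estimate of the band-to-band tunneling probability across a uniform field. This is a textbook-style sketch, not the authors' calibrated compact model; the bandgap, reduced tunneling mass and field strength below are assumed values.

```python
import numpy as np

# Generic WKB sketch of band-to-band tunneling through an elliptic (two-band)
# complex-band dispersion under a uniform electric field F.
hbar = 1.054571817e-34          # J*s
q = 1.602176634e-19             # C
Eg = 0.36 * q                   # bandgap [J] (InAs-like value, assumed)
m_r = 0.023 * 9.1093837015e-31  # reduced tunneling mass [kg] (assumed)
F = 1.0e8                       # electric field [V/m] (assumed)

def kappa(E):
    """Imaginary wavevector inside the gap for a two-band elliptic dispersion."""
    return np.sqrt(2.0 * m_r * E * (Eg - E) / Eg) / hbar

# WKB action: 2 * integral of kappa along the tunneling path, with dx = dE/(qF)
E = np.linspace(0.0, Eg, 2001)
action = 2.0 * np.trapz(kappa(E), E) / (q * F)
T_wkb = np.exp(-action)

# closed-form Kane-type expression for the same dispersion, used as a cross-check
T_kane = np.exp(-np.pi * np.sqrt(m_r) * Eg**1.5 / (2.0 * np.sqrt(2.0) * hbar * q * F))
print("numerical WKB T = %.3e, closed form = %.3e" % (T_wkb, T_kane))
```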

  1. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    Brown, Maxine D. [Acting Director, EVL]; Leigh, Jason [PI]

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked, computer cluster and ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system that is advancing scientific research and education in the U.S. and globally, and helping to train the next-generation workforce.

  2. Hazard-to-Risk: High-Performance Computing Simulations of Large Earthquake Ground Motions and Building Damage in the Near-Fault Region

    Miah, M.; Rodgers, A. J.; McCallen, D.; Petersson, N. A.; Pitarka, A.

    2017-12-01

    We are running high-performance computing (HPC) simulations of ground motions for large (magnitude, M=6.5-7.0) earthquakes in the near-fault region and of the response of steel moment frame buildings throughout the near-fault domain. For ground motions, we are using SW4, a fourth order summation-by-parts finite difference time-domain code running on tens of thousands to hundreds of thousands of cores. Earthquake ruptures are generated using the Graves and Pitarka (2017) method. We validated ground motion intensity measurements against Ground Motion Prediction Equations. We considered two events (M=6.5 and 7.0) for vertical strike-slip ruptures with three-dimensional (3D) basin structures, including stochastic heterogeneity. We have also considered M7.0 Hayward Fault rupture scenarios, which affect the San Francisco Bay Area and northern California, using both 1D and 3D earth structure. The dynamic, inelastic response of canonical buildings is computed with NEVADA, a nonlinear, finite-deformation finite element code. Canonical buildings include 3-, 9-, 20- and 40-story steel moment frame buildings. Damage potential is tracked by the peak inter-story drift (PID) ratio, which measures the maximum displacement between adjacent floors of the building and is strongly correlated with damage. PID ratios greater than 1.0 generally indicate non-linear response and permanent deformation of the structure. We also track roof displacement to identify permanent deformation. PID (damage) for a given earthquake scenario (M, slip distribution, hypocenter) is spatially mapped throughout the SW4 domain with 1-2 km resolution. Results show that in the near fault region building damage is correlated with peak ground velocity (PGV), while farther away (> 20 km) it is better correlated with peak ground acceleration (PGA). We also show how simulated ground motions have peaks in the response spectra that shift to longer periods for larger magnitude events and for locations of forward directivity, as has been reported by
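
    The peak inter-story drift bookkeeping described above reduces to differencing floor displacement histories and normalizing by the story height, as in the sketch below. The synthetic displacement signals and the 3.5 m story height are placeholder assumptions, not output of the SW4/NEVADA simulations.

```python
import numpy as np

# Sketch of peak inter-story drift (PID) ratio computation from floor
# displacement time histories.  The histories below are synthetic signals used
# only to exercise the bookkeeping.
rng = np.random.default_rng(0)
n_floors, n_steps, dt = 9, 4000, 0.01
story_height = 3.5                           # m (assumed uniform story height)
t = np.arange(n_steps) * dt

# synthetic absolute displacement of ground level plus each floor [m]
disp = np.array([0.02 * k * np.sin(2 * np.pi * 0.5 * t)
                 + 0.002 * rng.standard_normal(n_steps)
                 for k in range(n_floors + 1)])

drift = np.diff(disp, axis=0)                # relative displacement between floors
pid = np.max(np.abs(drift), axis=1) / story_height * 100.0   # percent

for k, val in enumerate(pid, start=1):
    flag = "  <-- likely nonlinear response" if val > 1.0 else ""
    print("story %2d: PID = %.2f %%%s" % (k, val, flag))
```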

  3. High Performance Marine Vessels

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMVs craft and the differences between them and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface Covers the full range of high performance marine vessel concepts Explains the historical development of various HPMVs Discusses ferries, racing and pleasure craft, as well as utility and military missions High Performance Marine Vessels is an ideal book for student...

  4. High Performance Macromolecular Material

    Forest, M

    2002-01-01

    .... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  5. High performance conductometry

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  6. High performance systems

    Vigil, M.B. [comp.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing given at the High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  7. Danish High Performance Concretes

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University ... concretes, workability, ductility, and confinement problems ...

  8. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    ... Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy ...

  9. Scientific codes developed and used at GRS. Nuclear simulation chain

    Schaffrath, Andreas; Sonnenkalb, Martin; Sievers, Juergen; Luther, Wolfgang; Velkov, Kiril [Gesellschaft fuer Anlagen und Reaktorsicherheit (GRS) gGmbH, Garching/Muenchen (Germany). Forschungszentrum

    2016-05-15

    Over 60 technical experts of the reactor safety research division of the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH are developing and validating reliable methods and computer codes - summarized under the term nuclear simulation chain - for the safety-related assessment of all types of nuclear power plants (NPP) and other nuclear facilities, considering the current state of science and technology. This nuclear simulation chain has to be able to simulate and assess all relevant physical processes and phenomena for all operating states and (severe) accidents. In the present contribution, the nuclear simulation chain developed and applied by GRS as well as selected examples of its application are presented. The latter impressively demonstrate the breadth of its scope and its performance. The GRS codes can be passed on request to other (national as well as international) organizations. This contributes to a worldwide increase in nuclear safety standards. The code transfer is especially important for developing and emerging countries lacking the financial means and/or the necessary know-how for this purpose. At the end of this contribution, the respective course of action is described.

  10. High-Performance Networking

    CERN. Geneva

    2003-01-01

    The series will start with an historical introduction about what people saw as high performance message communication in their time and how that developed into what is known today as standard computer network communication. It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols as well as relevant details of the Wide Area Network (WAN) standards concerned, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  11. High Performance Concrete

    Traian Oneţ

    2009-01-01

    The paper presents the latest studies and research carried out in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to point out the advantages and inconveniences when a particular concrete type is used. Two concrete recipes are presented, namely one for the concrete used in rigid road pavements and another one for self-compacting concrete.

  12. High performance polymeric foams

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high-temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  13. High performance in software development

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever attempted. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from distributed storage and the large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  14. Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-01-01

    high-performance many-core platforms using the Open Computing Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from predictions of approximate theories and clarify the time-scale of the transfer-process. We investigate the impact of resonantly

  15. Verification of Scientific Simulations via Hypothesis-Driven Comparative and Quantitative Visualization

    Ahrens, James P [ORNL; Heitmann, Katrin [ORNL; Petersen, Mark R [ORNL; Woodring, Jonathan [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Ahrens, Christine [Los Alamos National Laboratory (LANL); Hsu, Chung-Hsing [ORNL; Geveci, Berk [ORNL

    2010-11-01

    This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.

  16. Clojure high performance programming

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code.This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  17. High performance data transfer

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software, which has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and its ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved between clusters almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.

  18. High performance sapphire windows

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  19. High Performance Parallel Processing (HPPP) Finite Element Simulation of Fluid Structure Interactions Final Report CRADA No. TC-0824-94-A

    Couch, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ziegler, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2018-01-24

    This project was a multi-partner CRADA, a partnership between Alcoa and LLNL. Alcoa developed a system of numerical simulation modules that provided accurate and efficient three-dimensional modeling of combined fluid dynamics and structural response.

  20. High performance computing on vector systems

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  1. Simulating experiments using a Comsol application for teaching scientific research methods

    Schijndel, van A.W.M.

    2015-01-01

    For universities it is important to teach the principles of scientific methods as early as possible. However, when performing experiments, students need to have some knowledge and skills before they start taking measurements. In this case, Comsol can be helpful by simulating the experiments before

  2. High-performance computing using FPGAs

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  3. A new high-performance 3D multiphase flow code to simulate volcanic blasts and pyroclastic density currents: example from the Boxing Day event, Montserrat

    Ongaro, T. E.; Clarke, A.; Neri, A.; Voight, B.; Widiwijayanti, C.

    2005-12-01

    For the first time, the dynamics of directed blasts from explosive lava-dome decompression have been investigated by means of transient, multiphase flow simulations in 2D and 3D. Multiphase flow models developed for the analysis of pyroclastic dispersal from explosive eruptions have so far been limited to 2D axisymmetric or Cartesian formulations, which cannot properly account for important 3D features of the volcanic system such as complex morphology and fluid turbulence. Here we use a new parallel multiphase flow code, named PDAC (Pyroclastic Dispersal Analysis Code) (Esposti Ongaro et al., 2005), able to simulate the transient 3D thermofluid-dynamics of pyroclastic dispersal produced by collapsing columns and volcanic blasts. The code solves the equations of the multiparticle flow model of Neri et al. (2003) on 3D domains extending up to several kilometres and includes a new description of the boundary conditions over topography, which is automatically acquired from a DEM. The initial conditions are represented by a compact volume of gas and pyroclasts, with clasts of different sizes and densities, at high temperature and pressure. Different dome porosities and pressurization models were tested in 2D to assess the sensitivity of the results to the distribution of initial gas pressure and to the total mass and energy stored in the dome, prior to 3D modeling. The simulations used topographies appropriate for the 1997 Boxing Day directed blast on Montserrat, which eradicated the village of St. Patricks. Some simulations tested the runout of pyroclastic density currents over the ocean surface, corresponding to observations of over-water surges to distances of several kilometres at both locations. The PDAC code was used to perform 3D simulations of the explosive event on the actual volcano topography. The results highlight the strong topographic control on the propagation of the dense pyroclastic flows, the triggering of thermal instabilities, and the elutriation

  4. DoSSiER: Database of Scientific Simulation and Experimental Results

    Wenzel, Hans; Genser, Krzysztof; Elvira, Daniel; Pokorski, Witold; Carminati, Federico; Konstantinov, Dmitri; Ribon, Alberto; Folger, Gunter; Dotti, Andrea

    2017-01-01

    The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application. In addition, a web service allows for programmatic access to the repository to extract records in JSON or XML exchange formats. In this article, we describe the functionality and the current status of the various components of DoSSiER as well as the technology choices we made.

  5. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods together with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)
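
    Of the methods listed above, kinetic Monte Carlo is the simplest to illustrate: the residence-time sketch below lets a single defect hop on a 1D lattice with two competing event rates. The rates are arbitrary assumptions; production radiation-damage KMC codes track many defect species and reactions.

```python
import numpy as np

# Minimal residence-time kinetic Monte Carlo sketch: one defect hopping on a
# 1D lattice with two competing event rates (values assumed for illustration).
rng = np.random.default_rng(42)
rates = np.array([1.0e3, 2.5e2])   # [hop left, hop right] events per second
position, time = 0, 0.0

for step in range(10000):
    total = rates.sum()
    # pick an event with probability proportional to its rate
    event = rng.choice(len(rates), p=rates / total)
    position += -1 if event == 0 else +1
    # advance the clock by an exponentially distributed residence time
    time += rng.exponential(1.0 / total)

print("final position %d after %.4f s of simulated time" % (position, time))
```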

  6. The development of high performance numerical simulation code for transient groundwater flow and reactive solute transport problems based on local discontinuous Galerkin method

    Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji

    2009-01-01

    The authors develop a numerical code based on the Local Discontinuous Galerkin Method for transient groundwater flow and reactive solute transport problems, in order to make it possible to carry out three-dimensional performance assessments of radioactive waste repositories at the earliest stage possible. The Local Discontinuous Galerkin Method is a mixed finite element method that is more accurate than standard finite element methods. In this paper, the developed numerical code is applied to several problems for which analytical solutions are available, in order to examine its accuracy and flexibility. The results of the simulations show that the new code gives highly accurate numerical solutions. (author)

  7. High performance germanium MOSFETs

    Saraswat, Krishna [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)]. E-mail: saraswat@stanford.edu; Chui, Chi On [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Krishnamohan, Tejas [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Kim, Donghyun [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Nayfeh, Ammar [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Pethe, Abhijit [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)

    2006-12-15

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (~2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices.

  8. High performance germanium MOSFETs

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit

    2006-01-01

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices.

  9. Designing and simulation smart multifunctional continuous logic device as a basic cell of advanced high-performance sensor systems with MIMO-structure

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2015-01-01

    We propose the design and simulation of hardware realizations of smart multifunctional continuous logic devices (SMCLD) as advanced basic cells of sensor systems with MIMO structure for image processing and interconnection. The SMCLD realize functions of two-valued, multi-valued and continuous logic with current inputs and current outputs. Such advanced basic cells also perform nonlinear time-pulse transformation, analog-to-digital conversion and neural logic. These elements have a number of advantages: high speed and reliability, simplicity, small power consumption and a high integration level. The SMCLD design is based on current mirrors realized with 1.5 μm CMOS transistors. With only 50-70 transistors, one photodiode and one LED, the proposed circuits are quite compact. Simulation results for NOT, MIN, MAX, equivalence (EQ), normalized summation, averaging and other functions implemented by the SMCLD show that the level of the logical variables can range from 0.1 μA to 10 μA in low-power variants. The SMCLD have a power consumption below 1 mW and a processing time of about 1-11 μs at supply voltages of 2.4-3.3 V.

  10. High performance parallel I/O

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  11. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
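
    The structure described above, a sequential walk along each characteristic track with a wide vectorizable update over energy groups, can be sketched with the standard flat-source MOC segment update. The cross sections, sources and segment lengths below are random placeholders, and angular weights are omitted; this is an illustration of the loop structure, not the cited proxy applications.

```python
import numpy as np

# MOC sweep sketch: the outer loop walks the segments of one characteristic
# track sequentially, while each update is vectorized over energy groups
# (the wide SIMD inner loop described above).  All data are random placeholders.
rng = np.random.default_rng(1)
n_groups, n_segments = 64, 200
sigma_t = rng.uniform(0.1, 2.0, size=(n_segments, n_groups))   # total XS [1/cm]
q = rng.uniform(0.0, 1.0, size=(n_segments, n_groups))         # flat source
length = rng.uniform(0.1, 1.0, size=n_segments)                # segment length [cm]

psi = np.zeros(n_groups)              # incoming angular flux for this track
tally = np.zeros_like(sigma_t)        # segment-averaged flux times track length

for s in range(n_segments):           # sequential along the track
    tau = sigma_t[s] * length[s]
    delta = (psi - q[s] / sigma_t[s]) * (1.0 - np.exp(-tau))   # vector over groups
    tally[s] += delta / sigma_t[s] + q[s] * length[s] / sigma_t[s]
    psi -= delta                      # outgoing flux feeds the next segment

print("mean tallied value: %.4f" % tally.mean())
```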

  12. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake and tsunami related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we will touch upon the issue of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are to be shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms have been developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and includes the possibility of solving both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as to achieve optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve this problem and SVD-analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
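
    The SVD analysis and quasi-solution mentioned above can be illustrated on a small ill-conditioned linear system: the singular-value spectrum quantifies the degree of ill-posedness, and truncating it yields a stable quasi-solution. The Hilbert-type matrix and noise level below are illustrative assumptions, not a tsunami-source kernel.

```python
import numpy as np

# Sketch of SVD-based regularization for an ill-posed linear inverse problem
# A m = d: small singular values reveal ill-posedness, and a truncated-SVD
# "quasi-solution" discards them.
rng = np.random.default_rng(3)
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # ill-conditioned
m_true = np.sin(np.linspace(0.0, np.pi, n))
d = A @ m_true + 1e-6 * rng.standard_normal(n)                   # noisy data

U, s, Vt = np.linalg.svd(A)
print("condition number: %.2e" % (s[0] / s[-1]))

def tsvd_solution(k):
    """Quasi-solution keeping only the k largest singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

for k in (n, 6):
    err = np.linalg.norm(tsvd_solution(k) - m_true) / np.linalg.norm(m_true)
    print("k = %2d  relative error = %.3f" % (k, err))
```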

  13. NMR spectrometers. Current status and assessment of demand for high-resolution NMR spectrometers and for high-performance, solid NMR spectrometers at the scientific colleges and other research institutes in the Federal Republic of Germany. Pt. 1

    Schmidt, K.

    1989-01-01

    The survey includes high-resolution NMR spectrometers for liquids and solutions with magnetic field strengths of 11.7 Tesla and more (proton frequencies from 500 to 600 MHz) as well as high-performance solid-state NMR spectrometers with field strengths of at least 6.3 Tesla (proton frequencies of 270 MHz and more). The results presented, which were obtained from manufacturers' documents, respect the manufacturers' need for confidentiality: market shares and sites are not listed. (DG)

  14. Improving the trust in results of numerical simulations and scientific data analytics

    Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil [Argonne National Lab. (ANL), Argonne, IL (United States); Hovland, Paul [Argonne National Lab. (ANL), Argonne, IL (United States); Peterka, Tom [Argonne National Lab. (ANL), Argonne, IL (United States); Phillips, Carolyn [Argonne National Lab. (ANL), Argonne, IL (United States); Snir, Marc [Argonne National Lab. (ANL), Argonne, IL (United States); Wild, Stefan [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-04-30

    This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general

  15. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was prepared in serial form. This is the first issue, giving an overview of the methods and an introduction to continuum simulation methods. The finite element method is also reviewed as one of their applications. (T. Tanaka)

  16. Designing a High Performance Parallel Personal Cluster

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, and competition for resources have been some of the reasons why the scientifi...

  17. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability to cover the range of wave propagation required for nuclear explosion monitoring (NEM) from the buried nuclear device to the seismic sensor. The goal of this work is to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of an UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNE's in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak

  18. Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

    Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2008-01-01

    Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset
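    The following sketch (not the authors' code) illustrates the thresholding-plus-histogramming pattern described above, with NumPy boolean masks standing in for FastBit index/query selection; the particle fields, thresholds, and bin counts are made up.

```python
# Sketch of the thresholding-and-histogramming pattern described above, with
# NumPy boolean masks standing in for FastBit index/query selection.
# Column names and thresholds are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
particles = {
    "px": rng.normal(0.0, 1.0, n),        # momentum components
    "py": rng.normal(0.0, 1.0, n),
    "energy": rng.exponential(5.0, n),    # arbitrary energy scale
}

# "Query": select the high-energy, forward-going beam candidates.
mask = (particles["energy"] > 20.0) & (particles["px"] > 2.0)

# Multi-dimensional histogram of the selected subset, the kind of summary
# that drives histogram-based parallel coordinates.
hist, edges = np.histogramdd(
    np.column_stack([particles["px"][mask], particles["py"][mask]]),
    bins=(64, 64),
)
print(f"{mask.sum()} particles selected; histogram shape {hist.shape}")
```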

  19. Computer simulation, rhetoric, and the scientific imagination how virtual evidence shapes science in the making and in the news

    Roundtree, Aimee Kendall

    2013-01-01

    Computer simulations help advance climatology, astrophysics, and other scientific disciplines. They are also at the crux of several high-profile cases of science in the news. How do simulation scientists, with little or no direct observations, make decisions about what to represent? What is the nature of simulated evidence, and how do we evaluate its strength? Aimee Kendall Roundtree suggests answers in Computer Simulation, Rhetoric, and the Scientific Imagination. She interprets simulations in the sciences by uncovering the argumentative strategies that underpin the production and disseminati

  20. Open Knee: Open Source Modeling & Simulation to Enable Scientific Discovery and Clinical Care in Knee Biomechanics

    Erdemir, Ahmet

    2016-01-01

    Virtual representations of the knee joint can provide clinicians, scientists, and engineers the tools to explore mechanical function of the knee and its tissue structures in health and disease. Modeling and simulation approaches such as finite element analysis also provide the possibility to understand the influence of surgical procedures and implants on joint stresses and tissue deformations. A large number of knee joint models are described in the biomechanics literature. However, freely accessible, customizable, and easy-to-use models are scarce. Availability of such models can accelerate clinical translation of simulations, where labor intensive reproduction of model development steps can be avoided. The interested parties can immediately utilize readily available models for scientific discovery and for clinical care. Motivated by this gap, this study aims to describe an open source and freely available finite element representation of the tibiofemoral joint, namely Open Knee, which includes detailed anatomical representation of the joint's major tissue structures, their nonlinear mechanical properties and interactions. Three use cases illustrate customization potential of the model, its predictive capacity, and its scientific and clinical utility: prediction of joint movements during passive flexion, examining the role of meniscectomy on contact mechanics and joint movements, and understanding anterior cruciate ligament mechanics. A summary of scientific and clinically directed studies conducted by other investigators are also provided. The utilization of this open source model by groups other than its developers emphasizes the premise of model sharing as an accelerator of simulation-based medicine. Finally, the imminent need to develop next generation knee models are noted. These are anticipated to incorporate individualized anatomy and tissue properties supported by specimen-specific joint mechanics data for evaluation, all acquired in vitro from varying age

  1. Using an Agent-Based Modeling Simulation and Game to Teach Socio-Scientific Topics

    Lori L. Scarlatos

    2014-02-01

    Full Text Available In our modern world, where science, technology and society are tightly interwoven, it is essential that all students be able to evaluate scientific evidence and make informed decisions. Energy Choices, an agent-based simulation with a multiplayer game interface, was developed as a learning tool that models the interdependencies between the energy choices that are made, growth in local economies, and climate change on a global scale. This paper presents the results of pilot testing Energy Choices in two different settings, using two different modes of delivery.
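    A minimal agent-based sketch, not the Energy Choices model itself, can make the described feedback loop concrete: agents repeatedly choose an energy source, individual income accumulates, and shared emissions feed back into everyone's costs. All numbers and rules below are invented for illustration.

```python
# Toy agent-based sketch (not the Energy Choices model): agents pick an
# energy source each round, earning income but adding to shared emissions,
# which in turn feed back into everyone's costs. Numbers are arbitrary.
SOURCES = {"coal": (3.0, 1.0), "solar": (2.0, 0.0)}  # (income, emissions)

class Agent:
    def __init__(self):
        self.wealth = 0.0

    def choose(self, carbon_cost):
        # Pick whichever source nets more once the shared carbon cost applies.
        return max(SOURCES, key=lambda s: SOURCES[s][0] - carbon_cost * SOURCES[s][1])

agents = [Agent() for _ in range(100)]
emissions = 0.0
for year in range(20):
    carbon_cost = 0.05 * emissions          # feedback from the global state
    for a in agents:
        src = a.choose(carbon_cost)
        income, em = SOURCES[src]
        a.wealth += income - carbon_cost * em
        emissions += em
    print(year, round(emissions, 1), round(sum(a.wealth for a in agents), 1))
```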

  2. RavenDB high performance

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial that developers can use to... This book is for developers and software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  3. Using Just-in-Time Information to Support Scientific Discovery Learning in a Computer-Based Simulation

    Hulshof, Casper D.; de Jong, Ton

    2006-01-01

    Students encounter many obstacles during scientific discovery learning with computer-based simulations. It is hypothesized that an effective type of support, one that does not interfere with the scientific discovery learning process, should be delivered on a "just-in-time" basis. This study explores the effect of facilitating access to…

  4. Simulation of ODE/PDE models with MATLAB, OCTAVE and SCILAB scientific and engineering applications

    Vande Wouwer, Alain; Vilas, Carlos

    2014-01-01

    Simulation of ODE/PDE Models with MATLAB®, OCTAVE and SCILAB shows the reader how to exploit a fuller array of numerical methods for the analysis of complex scientific and engineering systems than is conventionally employed. The book is dedicated to numerical simulation of distributed parameter systems described by mixed systems of algebraic equations, ordinary differential equations (ODEs) and partial differential equations (PDEs). Special attention is paid to the numerical method of lines (MOL), a popular approach to the solution of time-dependent PDEs, which proceeds in two basic steps: spatial discretization and time integration. Besides conventional finite-difference and -element techniques, more advanced spatial-approximation methods are examined in some detail, including nonoscillatory schemes and adaptive-grid approaches. A MOL toolbox has been developed within MATLAB®/OCTAVE/SCILAB. In addition to a set of spatial approximations and time integrators, this toolbox includes a collection of applicatio...
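    A minimal method-of-lines example in Python (the book's own toolbox targets MATLAB/OCTAVE/SCILAB and is not reproduced here) shows the two basic steps, spatial discretization followed by time integration, for the 1-D heat equation; the coefficients and grid are arbitrary.

```python
# Minimal method-of-lines sketch: 1-D heat equation u_t = alpha * u_xx with
# Dirichlet boundaries, second-order central differences in space, SciPy in time.
import numpy as np
from scipy.integrate import solve_ivp

alpha, L, nx = 0.01, 1.0, 101
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]

def rhs(t, u):
    dudt = np.zeros_like(u)
    dudt[1:-1] = alpha * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return dudt          # boundary values stay fixed (Dirichlet)

u0 = np.sin(np.pi * x)   # initial temperature profile
sol = solve_ivp(rhs, (0.0, 10.0), u0, method="BDF", t_eval=[0.0, 2.0, 10.0])
print(sol.y[:, -1].max())  # peak temperature after the final time
```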

  5. High performance cloud auditing and applications

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  6. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    Ruebel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat

    2008-01-01

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system

  7. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.

  8. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigations into parallel object-oriented (OO) numerics. The basic goal was to research and utilize the emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO had responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and guidance on OO technologies was Sandia's main contribution to this venture. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. As such, a minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  9. Improving UV Resistance of High Performance Fibers

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, a high strength to weight ratio, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, so as to maintain the key advantage of high performance fibers, namely the high strength to weight ratio. This study involves developing three different types of sheathing. The product of interest to be protected from UV is a braid of PBO. The first approach is extruding a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments, in which the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  10. INL High Performance Building Strategy

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  11. High Performance Bulk Thermoelectric Materials

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors, growth and field emission studies of carbon nanotubes and semiconducting nanowires, high performance thermoelectric materials, and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  12. High-Performance Operating Systems

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  13. Identifying High Performance ERP Projects

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  14. Neo4j high performance

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  15. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This third issue continues the introduction of continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  16. Multicore Challenges and Benefits for High Performance Scientific Computing

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
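    A sketch of the hybrid idea, assuming mpi4py and a threaded BLAS are available: MPI distributes block rows of a matrix across ranks while NumPy's multithreaded BLAS supplies the within-node parallelism. This only illustrates the programming model, not the article's benchmark code.

```python
# Hybrid sketch of a distributed-memory matrix multiply: MPI distributes block
# rows of A across ranks (mpi4py), while NumPy's threaded BLAS supplies the
# within-node multithreading. Run e.g.: mpirun -n 4 python matmul_hybrid.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 2048                                   # global size (divisible by size here)
rows = n // size

rng = np.random.default_rng(rank)
A_local = rng.standard_normal((rows, n))   # this rank's block of rows of A
B = np.empty((n, n))
if rank == 0:
    B[:] = np.random.default_rng(99).standard_normal((n, n))
comm.Bcast(B, root=0)                      # every rank needs all of B

C_local = A_local @ B                      # threaded BLAS does the heavy lifting
C = np.empty((n, n)) if rank == 0 else None
comm.Gather(C_local, C, root=0)            # collect block rows of C on rank 0
if rank == 0:
    print("C shape:", C.shape)
```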

  17. EDITORIAL: High performance under pressure High performance under pressure

    Demming, Anna

    2011-11-01

    The accumulation of charge in certain materials in response to an applied mechanical stress was first discovered in 1880 by Pierre Curie and his brother Paul-Jacques. The effect, piezoelectricity, forms the basis of today's microphones, quartz watches, and electronic components and constitutes an awesome scientific legacy. Research continues to develop further applications in a range of fields including imaging [1, 2], sensing [3] and, as reported in this issue of Nanotechnology, energy harvesting [4]. Piezoelectricity in biological tissue was first reported in 1941 [5]. More recently Majid Minary-Jolandan and Min-Feng Yu at the University of Illinois at Urbana-Champaign in the USA have studied the piezoelectric properties of collagen I [1]. Their observations support the nanoscale origin of piezoelectricity in bone and tendons and also imply the potential importance of the shear load transfer mechanism in mechanoelectric transduction in bone. Shear load transfer has been the principle basis of the nanoscale mechanics model of collagen. The piezoelectric effect in quartz causes a shift in the resonant frequency in response to a force gradient. This has been exploited for sensing forces in scanning probe microscopes that do not need optical readout. Recently researchers in Spain explored the dynamics of a double-pronged quartz tuning fork [2]. They observed thermal noise spectra in agreement with a coupled-oscillators model, providing important insights into the system's behaviour. Nano-electromechanical systems are increasingly exploiting piezoresistivity for motion detection. Observations of the change in a material's resistance in response to the applied stress pre-date the discovery of piezoelectric effect and were first reported in 1856 by Lord Kelvin. Researchers at Caltech recently demonstrated that a bridge configuration of piezoresistive nanowires can be used to detect in-plane CMOS-based and fully compatible with future very-large scale integration of

  18. High performance computing in science and engineering Garching/Munich 2016

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of natural sciences along with theory and experimentation. Particularly high performance computing is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015, the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1. Thereby, the compute capabilities were more than doubled. This book covers the time-frame June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects that are covered in this book, with each one of these projects using at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics simulations, chemistry and material sciences, astrophysics, and life sciences.

  19. High-performance computing in seismology

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  20. High Performance Proactive Digital Forensics

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
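    The paper's parallel iterative-z and information-based detectors are not reproduced here; as a rough stand-in, the sketch below runs a plain z-score outlier filter over batches of synthetic event features in parallel with Python's multiprocessing. Feature names and thresholds are illustrative.

```python
# Stand-in sketch for the proactive-analysis stage: a plain z-score outlier
# filter over per-event features, run in parallel over batches of events.
import numpy as np
from multiprocessing import Pool

def flag_outliers(batch, threshold=3.5):
    """Return indices of events whose feature vector is far from the batch mean."""
    mu = batch.mean(axis=0)
    sigma = batch.std(axis=0) + 1e-12
    z = np.abs((batch - mu) / sigma)
    return np.where(z.max(axis=1) > threshold)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Synthetic "events": bytes transferred, session length, failed logins, ...
    events = rng.standard_normal((100_000, 4))
    events[12345] += 10.0                     # plant one suspicious event
    batches = np.array_split(events, 8)
    with Pool(processes=8) as pool:
        flags = pool.map(flag_outliers, batches)
    total = sum(len(f) for f in flags)
    print(f"{total} suspicious events flagged across {len(batches)} batches")
```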

  1. A Comparison Study of Augmented Reality versus Interactive Simulation Technology to Support Student Learning of a Socio-Scientific Issue

    Chang, Hsin-Yi; Hsu, Ying-Shao; Wu, Hsin-Kai

    2016-01-01

    We investigated the impact of an augmented reality (AR) versus interactive simulation (IS) activity incorporated in a computer learning environment to facilitate students' learning of a socio-scientific issue (SSI) on nuclear power plants and radiation pollution. We employed a quasi-experimental research design. Two classes (a total of 45…

  2. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
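    A minimal sketch of the task-parallel (MPI) analysis step mentioned above, assuming mpi4py is available; the file names and the analyze() placeholder are hypothetical, and the data-movement (HTAR/Globus) and ESGF publishing tools are not shown.

```python
# Sketch of the task-parallel analysis step: each MPI rank takes every size-th
# file from a shared list and analyzes it independently.
from mpi4py import MPI

def analyze(path):
    # Placeholder for a per-file statistic (e.g. an extreme-value diagnostic).
    return len(path)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

files = [f"sim_output_{i:04d}.nc" for i in range(1000)]    # hypothetical names
my_results = [(f, analyze(f)) for f in files[rank::size]]  # round-robin split

all_results = comm.gather(my_results, root=0)
if rank == 0:
    flat = [r for chunk in all_results for r in chunk]
    print(f"analyzed {len(flat)} files on {size} ranks")
```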

  3. Integrated plasma control for high performance tokamaks

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)
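    A toy illustration of the workflow "design a controller against a validated response model, then confirm it in simulation": a PI controller tuned against an assumed first-order plant and checked in a closed-loop time-stepping loop. The plant constants and gains are arbitrary, and this is in no way the DIII-D control design toolset.

```python
# Toy integrated-control workflow: design a PI controller against a first-order
# response model, then confirm tracking in a closed-loop simulation.
import numpy as np

tau, k = 0.5, 2.0            # assumed plant: tau*dy/dt = -y + k*u
kp, ki = 2.0, 4.0            # PI gains chosen against that model
dt, T = 1e-3, 5.0

y, integ = 0.0, 0.0
setpoint = 1.0
history = []
for t in np.arange(0.0, T, dt):
    err = setpoint - y
    integ += err * dt
    u = kp * err + ki * integ                # PI control law
    y += dt * (-y + k * u) / tau             # plant response model (explicit Euler)
    history.append(y)

print(f"final tracking error: {setpoint - history[-1]:.4f}")
```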

  4. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  5. Multi-Language Programming Environments for High Performance Java Computing

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  6. Charlotte: Scientific Modeling and Simulation Under the Software as a Service Paradigm, Phase I

    National Aeronautics and Space Administration — NASA spends considerable effort supporting collaborating researchers. These researchers are interested in interacting with scientific models provided...

  7. Development of high performance cladding

    Kiuchi, Kiyoshi

    2003-01-01

    The development of superior next-generation light water reactors is requested from general viewpoints such as improvement of safety, economics, reduction of radioactive waste and effective utilization of plutonium, by around 2030, when conventional reactor plants should be renovated. Improvement of stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, development of manufacturing technology for the reduced-moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and research on water-materials interaction in the supercritical pressure-water cooled reactor are carried out at the Japan Atomic Energy Research Institute. Stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR). Austenitic stainless steel is superior in anti-irradiation properties, corrosion resistance and mechanical strength. A hard neutron energy spectrum, above 0.1 MeV, occurs in the core of the reduced-moderation light water reactor, as in the liquid metal fast breeder reactor (LMFBR). High performance cladding for the RMWR fuel elements is likewise required to provide anti-irradiation properties, corrosion resistance and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are carried out to study stress corrosion cracking (SCC). Irradiation tests in an LMFBR are intended to obtain irradiation damage data for the cladding materials. (M. Suetake)

  8. High performance fuel technology development

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    ○ Development of High Plasticity and Annular Pellet - Development of strong candidates of ultra high burn-up fuel pellets for a PCI remedy - Development of fabrication technology of annular fuel pellet
    ○ Development of High Performance Cladding Materials - Irradiation test of HANA claddings in Halden research reactor and the evaluation of the in-pile performance - Development of the final candidates for the next generation cladding materials - Development of the manufacturing technology for the dual-cooled fuel cladding tubes
    ○ Irradiated Fuel Performance Evaluation Technology Development - Development of performance analysis code system for the dual-cooled fuel - Development of fuel performance-proving technology
    ○ Feasibility Studies on Dual-Cooled Annular Fuel Core - Analysis on the property of a reactor core with dual-cooled fuel - Feasibility evaluation on the dual-cooled fuel core
    ○ Development of Design Technology for Dual-Cooled Fuel Structure - Definition of technical issues and invention of concept for dual-cooled fuel structure - Basic design and development of main structure components for dual-cooled fuel - Basic design of a dual-cooled fuel rod.

  9. HIGH PERFORMANCE ADVANCED TOKAMAK REGIMES FOR NEXT-STEP EXPERIMENTS

    GREENFIELD, C.M.; MURAKAMI, M.; FERRON, J.R.; WADE, M.R.; LUCE, T.C.; PETTY, C.C.; MENARD, J.E; PETRIE, T.W.; ALLEN, S.L.; BURRELL, K.H.; CASPER, T.A; DeBOO, J.C.; DOYLE, E.J.; GAROFALO, A.M; GORELOV, Y.A; GROEBNER, R.J.; HOBIRK, J.; HYATT, A.W; JAYAKUMAR, R.J; KESSEL, C.E; LA HAYE, R.J; JACKSON, G.L; LOHR, J.; MAKOWSKI, M.A.; PINSKER, R.I.; POLITZER, P.A.; PRATER, R.; STRAIT, E.J.; TAYLOR, T.S; WEST, W.P.

    2003-01-01

    OAK-B135 Advanced Tokamak (AT) research in DIII-D seeks to provide a scientific basis for steady-state high performance operation in future devices. These regimes require high toroidal beta to maximize fusion output and poloidal beta to maximize the self-driven bootstrap current. Achieving these conditions requires integrated, simultaneous control of the current and pressure profiles, and active magnetohydrodynamic (MHD) stability control. The building blocks for AT operation are in hand. Resistive wall mode stabilization via plasma rotation and active feedback with non-axisymmetric coils allows routine operation above the no-wall beta limit. Neoclassical tearing modes are stabilized by active feedback control of localized electron cyclotron current drive (ECCD). Plasma shaping and profile control provide further improvements. Under these conditions, bootstrap supplies most of the current. Steady-state operation requires replacing the remaining Ohmic current, mostly located near the half-radius, with noninductive external sources. In DIII-D this current is provided by ECCD, and nearly stationary AT discharges have been sustained with little remaining Ohmic current. Fast wave current drive is being developed to control the central magnetic shear. Density control, with divertor cryopumps, of AT discharges with edge localized moding (ELMing) H-mode edges facilitates high current drive efficiency at reactor relevant collisionalities. A sophisticated plasma control system allows integrated control of these elements. Close coupling between modeling and experiment is key to understanding the separate elements, their complex nonlinear interactions, and their integration into self-consistent high performance scenarios. Progress on this development, and its implications for next-step devices, will be illustrated by results of recent experiment and simulation efforts

  10. High-Performance Matrix-Vector Multiplication on the GPU

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...
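    A minimal one-thread-per-row matrix-vector kernel, written as CUDA C and launched through CuPy (CuPy and a CUDA-capable GPU are assumed), shows the basic mapping the paper starts from; the paper's tuned Fermi-era kernel is considerably more sophisticated.

```python
# Minimal one-thread-per-row matrix-vector kernel y = A x, expressed as CUDA C
# and launched through CuPy. This only shows the basic thread-to-row mapping.
import cupy as cp

matvec = cp.RawKernel(r"""
extern "C" __global__
void matvec(const double* A, const double* x, double* y, int m, int n) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < m) {
        double acc = 0.0;
        for (int j = 0; j < n; ++j)
            acc += A[row * n + j] * x[j];   // row-major A
        y[row] = acc;
    }
}
""", "matvec")

m, n = 4096, 4096
A = cp.random.standard_normal((m, n))
x = cp.random.standard_normal(n)
y = cp.zeros(m)

threads = 256
blocks = (m + threads - 1) // threads
matvec((blocks,), (threads,), (A, x, y, cp.int32(m), cp.int32(n)))
print(float(cp.abs(y - A @ x).max()))   # check against the library matmul
```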

  11. 8th International Workshop on Parallel Tools for High Performance Computing

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany – which is a forum to discuss the latest advancements in the parallel tools.

  12. High-Performance Phylogeny Reconstruction

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics--one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree-space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly analyze large amounts of data accurately will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, this report describes my scientific, community and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.
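    The "over 13 billion possible trees for 13 organisms" figure follows from the standard count of unrooted binary trees on n labeled leaves, (2n-5)!! = 3·5·7·…·(2n-5); a few lines suffice to verify it.

```python
# Count of unrooted binary trees on n labeled leaves: (2n-5)!!
from math import prod

def num_unrooted_trees(n):
    return prod(range(3, 2 * n - 4, 2)) if n >= 3 else 1

print(num_unrooted_trees(13))   # 13,749,310,575 -- "over 13 billion" for 13 taxa
```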

  13. High-Performance Data Converters

    Steensgaard-Madsen, Jesper

    ...-resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential... -order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers...

  14. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  15. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  16. High performance light water reactor

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners together with the University of Tokyo is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements: - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project. - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design', developed by the University of Tokyo, was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo. A benchmark problem, based on the 'reference design', was defined for neutronics calculations, and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition help to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. Preliminary selection was made for the HPLWR scale

  17. Reduced-order modeling (ROM) for simulation and optimization powerful algorithms as key enablers for scientific computing

    Milde, Anja; Volkwein, Stefan

    2018-01-01

    This edited monograph collects research contributions and addresses the advancement of efficient numerical procedures in the area of model order reduction (MOR) for simulation, optimization and control. The topical scope includes, but is not limited to, new out-of-the-box algorithmic solutions for scientific computing, e.g. reduced basis methods for industrial problems and MOR approaches for electrochemical processes. The target audience comprises research experts and practitioners in the field of simulation, optimization and control, but the book may also be beneficial for graduate students.

  18. Simulator of Cryogenic Process and Refrigeration, and its Control in Scientific-Nuclear Facilities with EcosimPro

    Veleiro Blanco, A. M.

    2011-01-01

    Cryogenic plants and their control in scientific-nuclear facilities are complicated by the large number of variables and the wide range of variation during operation. Initially, the design and control of these systems at CERN were based on steady-state calculations, which did not yield the expected results. Due to their complexity, dynamic simulation is the only way to obtain adequate results during operational transients.

  19. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows the reader to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  20. Indoor Air Quality in High Performance Schools

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  1. High performance computing in linear control

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  2. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  3. Carpet Aids Learning in High Performance Schools

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  4. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Cordes Ben

    2009-01-01

    Full Text Available High-performance reconfigurable computing (HPRC is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.
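    A deliberately simplified, NumPy-only backprojection loop is sketched below to show the per-pixel accumulation that the HPRC implementation parallelizes; the geometry, sampling, and data are synthetic, and interpolation and motion compensation are omitted.

```python
# Highly simplified backprojection sketch: for each pulse, every image pixel
# accumulates the range-compressed sample at its differential range.
import numpy as np

n_pulses, n_samples = 128, 512
dr = 0.5                                   # range-bin spacing [m]
platform = np.column_stack([np.linspace(-50, 50, n_pulses),   # x along track
                            np.full(n_pulses, -500.0),        # y standoff
                            np.full(n_pulses, 300.0)])        # z altitude

rng = np.random.default_rng(1)
pulses = rng.standard_normal((n_pulses, n_samples))           # synthetic data

nx = ny = 64
xs = np.linspace(-16, 16, nx)
ys = np.linspace(-16, 16, ny)
image = np.zeros((ny, nx))

for p in range(n_pulses):                  # each pulse contributes to every pixel
    px, py, pz = platform[p]
    R0 = np.sqrt(px**2 + py**2 + pz**2)    # range to scene center
    for iy, y in enumerate(ys):
        dx = xs - px
        R = np.sqrt(dx**2 + (y - py)**2 + pz**2)               # slant range
        rbin = np.clip(((R - R0) / dr + n_samples // 2).astype(int),
                       0, n_samples - 1)
        image[iy] += pulses[p, rbin]       # nearest-neighbor accumulation
print("image energy:", float((image**2).sum()))
```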

  5. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    2009-03-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to providing large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  6. Visualization and Data Analysis for High-Performance Computing

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  7. The path toward HEP High Performance Computing

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
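
    To make the "scheduling vectors of particles" idea concrete, here is a hedged, greatly simplified Python sketch (not GeantV code): particles are sliced into fixed-size baskets and dispatched to a pool of workers, with a toy vectorized physics step standing in for the real transport kernels; the basket size and worker count are illustrative choices.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def transport_basket(energies, step_length=0.1):
    """Stand-in vectorized physics step applied to a whole basket of tracks at once."""
    return energies * np.exp(-step_length)           # toy exponential energy loss

def schedule_baskets(energies, basket_size=256, workers=4):
    """Slice the particle population into baskets and farm them out to workers."""
    baskets = [energies[i:i + basket_size] for i in range(0, energies.size, basket_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(transport_basket, baskets))
    return np.concatenate(results)

if __name__ == "__main__":
    out = schedule_baskets(np.random.exponential(1.0, size=10_000))
    print(out.size, out.mean())
```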

  8. Micromagnetics on high-performance workstation and mobile computational platforms

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetics is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  9. Blaze-DEMGPU: Modular high performance DEM framework for the GPU architecture

    Nicolin Govender

    2016-01-01

    Full Text Available Blaze-DEMGPU is a modular GPU-based discrete element method (DEM) framework that supports polyhedral-shaped particles. Its high performance is attributed to the lightweight Single Instruction Multiple Data (SIMD) architecture that the GPU offers. Blaze-DEMGPU provides suitable algorithms to conduct DEM simulations on the GPU, and these algorithms can be extended and modified. Since a large number of scientific simulations are particle based, many of the algorithms and strategies for GPU implementation present in Blaze-DEMGPU can be applied to other fields. Blaze-DEMGPU will make it easier for new researchers to use high performance GPU computing as well as stimulate wider GPU research efforts by the DEM community.
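
    The sketch below is only a schematic reminder of what one DEM time step does, using spheres and a linear spring-dashpot normal contact model rather than Blaze-DEMGPU's polyhedral particles or its GPU kernels; all parameter values and the brute-force pair loop are illustrative assumptions.

```python
import numpy as np

def dem_step(pos, vel, radius, mass, dt=1e-4, k_n=1e5, damping=5.0, g=9.81):
    """One explicit DEM step with a linear spring-dashpot normal contact model
    for equal-radius spheres (polyhedral contact detection is far more involved)."""
    n = pos.shape[0]
    force = np.zeros_like(pos)
    force[:, 1] -= mass * g                      # gravity acting in -y
    for i in range(n):                           # O(n^2) pair loop; real codes use grids
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2.0 * radius - dist
            if overlap > 0.0:                    # particles in contact
                normal = d / dist
                rel_vn = np.dot(vel[j] - vel[i], normal)
                fn = (k_n * overlap - damping * rel_vn) * normal
                force[i] -= fn
                force[j] += fn
    vel += force / mass * dt                     # explicit (symplectic Euler) update
    pos += vel * dt
    return pos, vel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.uniform(0.0, 1.0, size=(20, 2))
    v = np.zeros_like(p)
    for _ in range(100):
        p, v = dem_step(p, v, radius=0.05, mass=0.01)
    print(p.mean(axis=0))
```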

  10. High performance carbon nanocomposites for ultracapacitors

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  11. Computer science of the high performance; Informatica del alto rendimiento

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of innovation, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resource management, and the simulation of complex processes in a wide variety of industries. (Author)

  12. Scientific Visualization and Simulation for Multi-dimensional Marine Environment Data

    Su, T.; Liu, H.; Wang, W.; Song, Z.; Jia, Z.

    2017-12-01

    With growing attention on the ocean and the rapid development of marine detection, there are increasing demands for realistic simulation and interactive visualization of the marine environment in real time. Based on advanced technologies such as GPU rendering, CUDA parallel computing and a rapid grid-oriented strategy, a series of efficient and high-quality visualization methods, which can deal with large-scale and multi-dimensional marine data in different environmental circumstances, is proposed in this paper. Firstly, a high-quality seawater simulation is realized with an FFT algorithm, bump mapping and texture animation technology. Secondly, large-scale multi-dimensional marine hydrological environmental data is visualized by 3D interactive technologies and volume rendering techniques. Thirdly, seabed terrain data is simulated with an improved Delaunay algorithm, a surface reconstruction algorithm, a dynamic LOD algorithm and GPU programming techniques. Fourthly, seamless real-time modelling of both ocean and land on a digital globe is achieved with WebGL to meet the requirements of web-based applications. The experiments suggest that these methods not only produce a satisfying marine environment simulation, but also meet the rendering requirements of global multi-dimensional marine data. Additionally, a simulation system for underwater oil spills is established with the OSG 3D-rendering engine. It is integrated with the marine visualization methods mentioned above and shows movement processes, physical parameters, and current velocity and direction for different types of deep-water oil spill particles (oil spill particles, hydrate particles, gas particles, etc.) dynamically and simultaneously in multiple dimensions. With such an application, valuable reference and decision-making information can be provided for understanding the progress of an oil spill in deep water, which is helpful for ocean disaster forecasting, warning and emergency response.
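
    For the FFT-based seawater simulation mentioned in this record, a compact Python/NumPy sketch of the underlying idea is given below; it uses a Phillips-like spectrum and the deep-water dispersion relation, omits the directional wind factor and the Hermitian symmetry a production renderer would enforce, and is in no way the authors' GPU implementation. All parameters are illustrative.

```python
import numpy as np

def ocean_heightfield(n=256, patch=100.0, wind=8.0, t=0.0, g=9.81, seed=0):
    """Toy spectral ocean surface: random phases filtered by a Phillips-like
    spectrum, animated with deep-water dispersion and brought back by inverse FFT."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=patch / n)
    kx, ky = np.meshgrid(k, k)
    k_mag = np.hypot(kx, ky)
    k_mag[0, 0] = 1e-6                            # avoid division by zero at DC
    L = wind**2 / g                               # largest wave driven by the wind speed
    spectrum = np.exp(-1.0 / (k_mag * L) ** 2) / k_mag**4   # Phillips-like shape
    spectrum[0, 0] = 0.0
    h0 = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * np.sqrt(spectrum / 2.0)
    omega = np.sqrt(g * k_mag)                    # deep-water dispersion relation
    h_k = h0 * np.exp(1j * omega * t)             # advance the phases in time
    return np.real(np.fft.ifft2(h_k)) * n * n     # back to a spatial height grid

if __name__ == "__main__":
    print(ocean_heightfield(t=1.0).std())
```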

  13. Delivering high performance BWR fuel reliably

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  14. High-performance liquid chromatography - Ultraviolet method for the determination of total specific migration of nine ultraviolet absorbers in food simulants based on 1,1,3,3-Tetramethylguanidine and organic phase anion exchange solid phase extraction to remove glyceride.

    Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun

    2016-06-17

    The glyceride in oil food simulants usually causes serious interference with target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride in olive oil simulant. In contrast to normal ion exchange carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in an organic phase environment, and the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with aqueous solution could be readily omitted. The method was proved to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%), and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers in both aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples, and UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in olive oil simulant for PE samples. In addition, the OPAE SPE procedure was also applied to efficiently enrich or purify seven antioxidants in olive oil simulant. The results indicate that this procedure will have more extensive applications in the enrichment or purification of extremely weak acidic compounds with phenol hydroxyl groups that are relatively stable in TMG n-hexane solution and that can barely be extracted from vegetable oil. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in harmonising the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, and satellite and other observational data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  16. Alternative High-Performance Ceramic Waste Forms

    Sundaram, S. K. [Alfred Univ., NY (United States)

    2017-02-01

    This final report (M5NU-12-NY-AU # 0202-0410) summarizes the results of the project titled “Alternative High-Performance Ceramic Waste Forms,” funded in FY12 by the Nuclear Energy University Program (NEUP Project # 12-3809) and led by Alfred University in collaboration with Savannah River National Laboratory (SRNL). The overall focus of the project is to advance fundamental understanding of crystalline ceramic waste forms and to demonstrate their viability as alternative waste forms to borosilicate glasses. We processed single- and multi-phase hollandite waste forms based on simulated waste stream compositions provided by SRNL from the advanced fuel cycle initiative (AFCI) aqueous separation process developed in the Fuel Cycle Research and Development (FCR&D) program. For multi-phase simulated waste forms, oxide and carbonate precursors were mixed together via ball milling with deionized water using zirconia media in a polyethylene jar for 2 h. The slurry was dried overnight and then separated from the media. The blended powders were then subjected to melting or spark plasma sintering (SPS). Microstructural evolution and phase assemblages of these samples were studied using x-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive analysis of x-rays (EDAX), wavelength dispersive spectrometry (WDS), transmission electron microscopy (TEM), selected area x-ray diffraction (SAXD), and electron backscatter diffraction (EBSD). These results showed that the processing methods have a significant effect on the microstructure and thus on the performance of these waste forms. The Ce substitution into zirconolite and pyrochlore materials was investigated using a combination of experimental (in situ XRD and x-ray absorption near edge structure (XANES)) and modeling techniques to study these single phases independently. In zirconolite materials, a transition from the 2M to the 4M polymorph was observed with increasing Ce content. The resulting

  17. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex
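
    As one concrete example of the kind of network-structure analysis such a research program entails, the hedged sketch below (the white paper prescribes no particular toolkit; networkx is assumed here purely for illustration) removes nodes by degree or at random and tracks the size of the largest connected component, a standard probe of robustness to failures or attacks.

```python
import random
import networkx as nx

def robustness_curve(graph, fraction=0.3, targeted=True, seed=0):
    """Track giant-component size while removing nodes, a simple probe of how
    robust a network's structure is to random failures or targeted attacks."""
    g = graph.copy()
    random.seed(seed)
    n_remove = int(fraction * g.number_of_nodes())
    sizes = []
    for _ in range(n_remove):
        if targeted:  # attack: remove the current highest-degree node
            node = max(g.degree, key=lambda kv: kv[1])[0]
        else:         # random failure
            node = random.choice(list(g.nodes))
        g.remove_node(node)
        sizes.append(len(max(nx.connected_components(g), key=len)) if g.number_of_nodes() else 0)
    return sizes

if __name__ == "__main__":
    net = nx.erdos_renyi_graph(500, 0.01, seed=1)   # toy stand-in for a real infrastructure network
    print(robustness_curve(net, targeted=True)[:5])
```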

  18. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    Brown, D.L.

    2009-01-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex networked systems

  19. High-performance ceramics. Fabrication, structure, properties

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program "Ceramic High-Performance Materials" pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders, comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing, and leads to issues of materials testing and of design appropriate to the material. The program "Ceramic High-Performance Materials" has resulted in contributions to the understanding of fundamental interrelationships in materials science, which are summarized in the present volume - broken down into eight special aspects. (orig./RHM)

  20. High Performance Grinding and Advanced Cutting Tools

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  1. Strategy Guideline: High Performance Residential Lighting

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  2. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  3. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  4. High Performance Computing Software Applications for Space Situational Awareness

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  5. High performance liquid chromatographic determination of ...

    STORAGESEVER

    2010-02-08

    … high performance liquid chromatography (HPLC) grade … applications. These are important requirements if the reagent is to be applicable to on-line pre- or post-column derivatisation in a possible automation of the analytical…

  6. Analog circuit design designing high performance amplifiers

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  7. Strategies and Experiences Using High Performance Fortran

    Shires, Dale

    2001-01-01

    … High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient high-level Fortran parallel programming language for the latest generation of … been debatable…

  8. Embedded High Performance Scalable Computing Systems

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  9. Gradient High Performance Liquid Chromatography Method ...

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid … nimesulide, phenylephrine hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form. Acta Pol.

  10. High performance computing in Windows Azure cloud

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. By using virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  11. Carbon nanomaterials for high-performance supercapacitors

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially, carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area, excellent electrical and mechanical properties. This article summarizes the recent progresses on the development of high-performance supercapacitors bas...

  12. Delivering high performance BWR fuel reliably

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  13. HPTA: High-Performance Text Analytics

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...

  14. High-performance computing — an overview

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
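
    As a minimal illustration of the data-parallel style mentioned in this overview (the shared-memory and message-passing styles follow the same split/compute/reduce pattern), the hedged Python sketch below computes a vector norm by distributing blocks to worker processes; the block count and data are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def partial_norm(chunk):
    """Work done independently on one block of the data (data parallelism)."""
    return float(np.dot(chunk, chunk))

def parallel_norm(x, workers=4):
    # split the vector into blocks, reduce the partial sums at the end
    chunks = np.array_split(x, workers)
    with Pool(processes=workers) as pool:
        partials = pool.map(partial_norm, chunks)
    return np.sqrt(sum(partials))

if __name__ == "__main__":          # guard required so worker processes import cleanly
    v = np.random.rand(1_000_000)
    print(parallel_norm(v), np.linalg.norm(v))
```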

  15. PetIGA: A framework for high-performance isogeometric analysis

    Dalcin, Lisandro; Collier, N.; Vignal, Philippe; Cortes, Adriano Mauricio; Calo, Victor M.

    2016-01-01

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. We show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large-scale simulations.
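
    To show in miniature what "assembling matrices and vectors from a Galerkin weak form" means, here is a hedged sketch that uses piecewise-linear hat functions and SciPy in place of PetIGA's NURBS basis and PETSc backend; it solves -u'' = f on (0,1) with homogeneous Dirichlet conditions and is an illustration of the idea, not PetIGA's API.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_1d(n_elems=32, f=lambda x: 1.0):
    """Assemble and solve -u'' = f on (0,1), u(0) = u(1) = 0, with linear elements."""
    n_nodes = n_elems + 1
    x = np.linspace(0.0, 1.0, n_nodes)
    h = x[1] - x[0]
    A = lil_matrix((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    for e in range(n_elems):                          # element-by-element assembly
        i, j = e, e + 1
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h  # local stiffness matrix
        xm = 0.5 * (x[i] + x[j])                       # one-point quadrature for the load
        fe = f(xm) * h / 2.0 * np.ones(2)
        for p, gp in enumerate((i, j)):
            b[gp] += fe[p]
            for q, gq in enumerate((i, j)):
                A[gp, gq] += ke[p, q]
    for bc in (0, n_nodes - 1):                        # homogeneous Dirichlet rows
        A[bc, :] = 0.0
        A[bc, bc] = 1.0
        b[bc] = 0.0
    return x, spsolve(A.tocsr(), b)

if __name__ == "__main__":
    x, u = poisson_1d()
    print(np.max(np.abs(u - 0.5 * x * (1.0 - x))))     # compare with the exact solution
```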

  16. PetIGA: A framework for high-performance isogeometric analysis

    Dalcin, L.

    2016-05-25

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. We show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large-scale simulations.

  17. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  18. High Performance Networks From Supercomputing to Cloud Computing

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  19. Wavy channel transistor for area efficient high performance operation

    Fahad, Hossain M.

    2013-04-05

    We report a wavy channel FinFET-like transistor where the channel is wavy to increase its width without any area penalty and thereby increase its drive current. Through simulation and experiments, we show that such a device architecture is capable of high-performance operation compared to conventional FinFETs, with higher area efficiency, lower chip latency and lower power consumption.

  20. High performance bio-integrated devices

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications in particular have attracted much attention with the rise of smartphones, because the coupling of such devices and smartphones enables continuous health monitoring in patients' daily life. In particular, it is expected that high performance biomedical electronics integrated with the human body can open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  1. Strategy Guideline. Partnering for High Performance Homes

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  2. Scientific results and lessons learned from an integrated crewed Mars exploration simulation at the Rio Tinto Mars analogue site

    Orgel, Csilla; Kereszturi, Ákos; Váczi, Tamás; Groemer, Gernot; Sattler, Birgit

    2014-02-01

    Between 15 and 25 April 2011, in the framework of the PolAres programme of the Austrian Space Forum, a five-day field test of the Aouda.X spacesuit simulator was conducted at the Rio Tinto Mars-analogue site in southern Spain. The field crew was supported by a full-scale Mission Control Center (MCC) in Innsbruck, Austria. The field telemetry data were relayed to the MCC, enabling a Remote Science Support (RSS) team to study field data in near-real-time and adjust the flight planning in a flexible manner. We report on the experiences in the fields of robotics, geophysics (Ground Penetrating Radar) and geology, as well as life sciences, in a simulated spaceflight operational environment. Extravehicular Activity (EVA) maps had been prepared using Google Earth and aerial images. The Rio Tinto mining area offers an excellent location for Mars analogue simulations. It is recognised as a terrestrial Mars analogue site because of the presence of jarosite and related sulphates, which have been identified by the NASA Mars Exploration Rover "Opportunity" in the El Capitan region of Meridiani Planum on Mars. The acidic, high ferric-sulphate content water of Rio Tinto is also considered a possible analogue in astrobiology for the analysis of ferric-sulphate-related biochemical pathways and the biomarkers they produce. During our Mars simulation, 18 different types of soil and rock samples were collected by the spacesuit tester. The Raman results confirm the presence of the expected minerals, such as jarosite, different Fe oxides and oxi-hydroxides, pyrite and complex Mg and Ca sulphates. Eight science experiments were conducted in the field. In this contribution we first list the important findings from the management and realisation of the tests, followed by a first summary of the scientific results. Based on these experiences, suggestions for future analogue work are also summarised. We finish with recommendations for future field missions, including the preparation of the experiments

  3. Intra-EVA Space-to-Ground Interactions when Conducting Scientific Fieldwork Under Simulated Mars Mission Constraints

    Beaton, Kara H.; Chappell, Steven P.; Abercromby, Andrew F. J.; Lim, Darlene S. S.

    2018-01-01

    The Biologic Analog Science Associated with Lava Terrains (BASALT) project is a four-year program dedicated to iteratively designing, implementing, and evaluating concepts of operations (ConOps) and supporting capabilities to enable and enhance scientific exploration for future human Mars missions. The BASALT project has incorporated three field deployments during which real (non-simulated) biological and geochemical field science have been conducted at two high-fidelity Mars analog locations under simulated Mars mission conditions, including communication delays and data transmission limitations. BASALT's primary Science objective has been to extract basaltic samples for the purpose of investigating how microbial communities and habitability correlate with the physical and geochemical characteristics of chemically altered basalt environments. Field sites include the active East Rift Zone on the Big Island of Hawai'i, reminiscent of early Mars when basaltic volcanism and interaction with water were widespread, and the dormant eastern Snake River Plain in Idaho, similar to present-day Mars where basaltic volcanism is rare and most evidence for volcano-driven hydrothermal activity is relict. BASALT's primary Science Operations objective has been to investigate exploration ConOps and capabilities that facilitate scientific return during human-robotic exploration under Mars mission constraints. Each field deployment has consisted of ten extravehicular activities (EVAs) on the volcanic flows in which crews of two extravehicular and two intravehicular crewmembers conducted the field science while communicating across time delay and under bandwidth constraints with an Earth-based Mission Support Center (MSC) comprised of expert scientists and operators. Communication latencies of 5 and 15 min one-way light time and low (0.512 Mb/s uplink, 1.54 Mb/s downlink) and high (5.0 Mb/s uplink, 10.0 Mb/s downlink) bandwidth conditions were evaluated. EVA crewmembers communicated

  4. Team Development for High Performance Management.

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  5. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: A Shimadzu HPLC with LC solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase ...

  6. An Introduction to High Performance Fortran

    John Merlin

    1995-01-01

    Full Text Available High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  7. High Performance Work Systems for Online Education

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  8. Debugging a high performance computing program

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
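
    A toy Python version of the grouping idea described in this record is sketched below; the patented implementation works on native call-stack addresses gathered by a parallel debugger, whereas the addresses and thread ids here are entirely hypothetical.

```python
from collections import defaultdict

def group_threads_by_callsite(thread_stacks):
    """Group thread ids by their (hashable) call-address traces so that the odd
    ones out -- e.g. a few threads stuck at a different call site -- stand out."""
    groups = defaultdict(list)
    for tid, addresses in thread_stacks.items():
        groups[tuple(addresses)].append(tid)
    # smallest groups first: an outlier group is the usual suspect for a hang
    return sorted(groups.items(), key=lambda kv: len(kv[1]))

if __name__ == "__main__":
    stacks = {  # hypothetical per-thread lists of calling-instruction addresses
        0: [0x4005D0, 0x400810], 1: [0x4005D0, 0x400810],
        2: [0x4005D0, 0x400810], 3: [0x4005D0, 0x4009A4],  # the stray thread
    }
    for trace, tids in group_threads_by_callsite(stacks):
        print([hex(a) for a in trace], "->", tids)
```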

  9. High Performance Networks for High Impact Science

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  10. Teacher Accountability at High Performing Charter Schools

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  11. Technology Leadership in Malaysia's High Performance School

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    The headmaster, as leader of the school, also plays a role as a technology leader. This applies to the headmasters of high performance schools (HPS) as well. The HPS excel in all aspects of education. In this study, the researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  12. Toward High Performance in Industrial Refrigeration Systems

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  13. Towards high performance in industrial refrigeration systems

    Thybo, C.; Izadi-Zamanabadi, R.; Niemann, Hans Henrik

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different qualities of information/data, are used for fault diagnosis as well as robust control design...

  14. Validated high performance liquid chromatographic (HPLC) method ...

    STORAGESEVER

    2010-02-22

    Feb 22, 2010 … specific and accurate high performance liquid chromatographic method for determination of ZER in micro-volumes … tional medicine as a cure for swelling, sores, loss of appetite and … Receptor Activator for Nuclear Factor κB Ligand … The effect of … be suitable for preclinical pharmacokinetic studies.

  15. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid … response, tailing factor and resolution of six replicate injections was < 3 %. … Cefadroxil monohydrate, human plasma, pharmacokinetics, bioequivalence … Drug-free plasma was obtained from the local … Influence of probenecid on the renal…

  16. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  17. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  18. Comparing Dutch and British high performing managers

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar

  19. Project materials [Commercial High Performance Buildings Project

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  20. High performance structural ceramics for nuclear industry

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

    A family of Saint-Gobain structural ceramic materials and products produced by its High performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing non oxide ceramic based novel materials, processes and products for application in Nuclear, Chemical, Automotive, Defense and Mining industries

  1. A new high performance current transducer

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

    A DC-100 kHz current transducer has been developed using a new technique based on the zero-flux detection principle. It is shown that the new current transducer offers high performance, that its magnetic core need not be selected very stringently, and that it is easy to manufacture.

  2. High Performance Fortran for Aerospace Applications

    Mehrotra, Piyush

    2000-01-01

    … HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data-parallel scientific applications, while delegating to the compiler/runtime system the task…

  3. Quantum Accelerators for High-performance Computing Systems

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed of compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  4. High performance simulation of lattice physics using enhanced transputer arrays

    Hey, A.J.G.; Jesshope, C.R.; Nicole, D.A.

    1986-01-01

    The authors describe an architecture under construction at Southampton using arrays of communicating transputers with enhanced floating-point capabilities. Performance in the Gigaflop range is expected. Algorithms for taking explicit advantage of this MIMD architecture are discussed using the Occam programming paradigm. (Auth.)

  5. Simulating elastic light scattering using high performance computing methods

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the
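
    The essence of the Coupled Dipole method is a dense linear system coupling every dipole to every other one. The sketch below is a greatly simplified scalar toy (the real method uses the full tensor free-space Green's function and vector fields, omitted here) meant only to show where the O(N^2) coupling matrix and the linear solve come from; the polarizability, wavelength and geometry are arbitrary illustrative values.

```python
import numpy as np

def coupled_dipole_scalar(positions, alpha, k, e_inc):
    """Scalar coupled-dipole solve: each dipole responds to the incident field
    plus the fields re-radiated by all other dipoles."""
    n = len(positions)
    g = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(positions[i] - positions[j])
                g[i, j] = np.exp(1j * k * r) / r    # scalar far-field-style coupling
    a = np.eye(n, dtype=complex) - alpha * g
    return np.linalg.solve(a, alpha * e_inc)        # dipole moments P_i

if __name__ == "__main__":
    pts = np.random.default_rng(0).uniform(0.0, 1.0, size=(50, 3))
    k = 2.0 * np.pi / 0.5                           # wavelength 0.5 in the same length units
    e = np.exp(1j * k * pts[:, 2])                  # plane wave travelling along z
    print(np.abs(coupled_dipole_scalar(pts, alpha=0.01, k=k, e_inc=e))[:5])
```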

  6. High performance computing network for cloud environment using simulators

    Singh, N. Ajith; Hemalatha, M.

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new kind of website: the GUI that controls the cloud makes it possible to control the hardware resources and your application directly. The difficult part of cloud computing is deploying it in a real environment. It is difficult to know the exact cost and its requirements until we buy the service, and moreover whether it will support the existing application which is available on traditional...

  7. Evaluation of high-performance computing software

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  8. Architecting Web Sites for High Performance

    Arun Iyengar

    2002-01-01

    Full Text Available Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  9. Monitoring SLAC High Performance UNIX Computing Systems

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface
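
    A hedged sketch of the "script-driven database" idea follows: it reads the XML metric dump that a gmond daemon serves on its TCP port (8649 by default, to the best of my recollection) and flattens it into a SQL table, using sqlite3 as a self-contained stand-in for the MySQL backend described in the record. The XML element and attribute names are assumptions from memory of Ganglia's format and should be checked against the version in use.

```python
import socket
import sqlite3
import xml.etree.ElementTree as ET

GMOND = ("localhost", 8649)   # assumed gmond host/port; adjust for your cluster

def fetch_ganglia_xml():
    """gmond dumps its full metric tree as XML to anyone who connects."""
    with socket.create_connection(GMOND, timeout=5) as sock:
        chunks = []
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def store_metrics(xml_bytes, db_path="ganglia.db"):
    """Flatten <HOST>/<METRIC> elements into a plain SQL table
    (sqlite3 here stands in for the MySQL backend in the record)."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS metrics
                    (host TEXT, name TEXT, value TEXT, units TEXT, reported INTEGER)""")
    root = ET.fromstring(xml_bytes)
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?, ?)",
                         (host.get("NAME"), metric.get("NAME"),
                          metric.get("VAL"), metric.get("UNITS"),
                          int(host.get("REPORTED", 0))))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    store_metrics(fetch_ganglia_xml())
```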

  10. High Performance Interactive System Dynamics Visualization

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.
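
    At its core, a system dynamics model is a set of stocks updated by flows over time. The sketch below is a generic single-stock Euler integration run in parallel over a few parameter sets; it is purely illustrative and is not the NREL framework itself (all names and parameters are invented).

```python
from multiprocessing import Pool

def run_sd_model(params, dt=0.25, t_end=10.0):
    """Euler-integrate a one-stock system dynamics model: dS/dt = inflow - outflow."""
    stock, t, history = params["initial_stock"], 0.0, []
    while t < t_end:
        outflow = params["drain_fraction"] * stock   # outflow proportional to the stock
        stock += dt * (params["inflow_rate"] - outflow)
        t += dt
        history.append((t, stock))
    return history

if __name__ == "__main__":
    # Parallel simulation in miniature: one worker process per scenario.
    scenarios = [{"initial_stock": 100.0, "inflow_rate": 5.0, "drain_fraction": f}
                 for f in (0.05, 0.10, 0.20)]
    with Pool() as pool:
        results = pool.map(run_sd_model, scenarios)
    for cfg, hist in zip(scenarios, results):
        print(cfg["drain_fraction"], round(hist[-1][1], 2))
```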

  11. High Performance Interactive System Dynamics Visualization

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  12. High-performance phase-field modeling

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
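
    For orientation, the Cahn-Hilliard equation referred to above is usually written in the following mixed form (the textbook statement, not necessarily the exact variational formulation used in PetIGA-MF):

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \bigl( M \, \nabla \mu \bigr),
\qquad
\mu = f'(c) - \epsilon^{2} \nabla^{2} c ,
```

    where $c$ is the concentration, $M$ the mobility, $\mu$ the chemical potential, $f(c)$ a double-well free-energy density, and $\epsilon$ a parameter controlling the interface thickness.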

  13. AHPCRC - Army High Performance Computing Research Center

    2010-01-01

    computing. Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net...and reasoning, assistive technologies. FRIEDRICH (FRITZ) PRINZ, Finmeccanica Professor of Engineering, Robert Bosch Chair, Department of Engineering...Army High Performance Computing Research Center, www.ahpcrc.org. BARBARA BRYAN, AHPCRC Research and Outreach Manager, HPTi, (650) 604-3732, bbryan@hpti.com

  14. Governance among Malaysian high performing companies

    Asri Marsidi

    2016-07-01

    Full Text Available Well-performing companies have long been linked with effective governance, which is generally reflected in an effective board of directors. However, many issues concerning the attributes of an effective board remain unresolved. Diversity is now perceived as able to influence corporate performance because it increases the likelihood of meeting the varied needs and demands of diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high-performing companies in Malaysia.

  15. DURIP: High Performance Computing in Biomathematics Applications

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of...

  16. High Performance Computing Operations Review Report

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  17. Planning for high performance project teams

    Reed, W.; Keeney, J.; Westney, R.

    1997-01-01

    Both industry-wide research and corporate benchmarking studies confirm the significant savings in cost and time that result from early planning of a project. Amoco's Team Planning Workshop combines long-term strategic project planning and short-term tactical planning with team building to provide the basis for high performing project teams, better project planning, and effective implementation of the Amoco Common Process for managing projects

  18. vSphere high performance cookbook

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so-common, performance issues and problems. The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  19. High performance work practices, innovation and performance

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from..., and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in the HPWP literature and potential variables that can facilitate or hinder the effects of these practices on innovation and performance...

  20. High Performance Electronics on Flexible Silicon

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer-based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in-depth studies on the electrical, mechanical, and thermal properties of the fabricated devices.

  1. Computational Biology and High Performance Computing 2000

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  2. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  3. Unravelling the structure of matter on high-performance computers

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks - the elementary particles - which interact through the four fundamental forces. In the study of the structure of matter at this level one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and the techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs
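
    Lattice QCD itself is far beyond a few lines of code, but the Monte Carlo technique such simulations rely on can be illustrated on a much simpler lattice model. The sketch below applies the Metropolis algorithm to a 2D Ising model; it is a generic illustration of lattice simulation, not of QCD, and all parameters are arbitrary.

```python
import math
import random

def metropolis_ising(L=16, beta=0.44, sweeps=200):
    """Metropolis Monte Carlo on a 2D Ising lattice (illustrates lattice simulation)."""
    spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            # Sum the four nearest neighbours with periodic boundary conditions.
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb            # energy cost of flipping this spin
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i][j] *= -1                  # accept the flip
    return sum(map(sum, spins)) / (L * L)          # average magnetisation

print(metropolis_ising())
```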

  4. High Performance Computing in Science and Engineering '14

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  5. Fracture modelling of a high performance armour steel

    Skoglund, P.; Nilsson, M.; Tjernberg, A.

    2006-08-01

    The fracture characteristics of the high performance armour steel Armox 500T is investigated. Tensile mechanical experiments using samples with different notch geometries are used to investigate the effect of multi-axial stress states on the strain to fracture. The experiments are numerically simulated and from the simulation the stress at the point of fracture initiation is determined as a function of strain and these data are then used to extract parameters for fracture models. A fracture model based on quasi-static experiments is suggested and the model is tested against independent experiments done at both static and dynamic loading. The results show that the fracture model gives reasonably good agreement between simulations and experiments under both static and dynamic loading conditions. This indicates that multi-axial loading is more important to the strain to fracture than the deformation rate in the investigated loading range. However, on-going work will further characterise the fracture behaviour of Armox 500T.
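
    The abstract does not state the fracture model explicitly; a widely used form for this kind of stress-state- and rate-dependent failure criterion is the Johnson-Cook fracture strain, quoted here only as an example of how triaxiality and strain rate typically enter such models:

```latex
\varepsilon_f \;=\; \bigl[ D_1 + D_2 \exp\!\left( D_3\, \sigma^{*} \right) \bigr]
                    \bigl[ 1 + D_4 \ln \dot{\varepsilon}^{*} \bigr]
                    \bigl[ 1 + D_5\, T^{*} \bigr],
\qquad
\sigma^{*} = \frac{\sigma_m}{\sigma_{eq}} ,
```

    where $\sigma^{*}$ is the stress triaxiality (mean stress over equivalent stress), $\dot{\varepsilon}^{*}$ the normalized strain rate, $T^{*}$ the homologous temperature, and $D_1$ to $D_5$ material constants fitted to notch-test data of the kind described above.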

  6. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  7. Toward a theory of high performance.

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  8. Multi-Language Programming Environments for High Performance Java Computing

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed‐language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM’s High‐Performance Compiler for Java (HPCJ) and IceT’s metacomputing environment.
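
    The JCI tool itself is specific to Java, but the general pattern of calling a native scientific library from a higher-level language can be sketched with Python's ctypes as an analogy. The example below loads the system C math library on a typical Linux or macOS machine; it illustrates mixed-language programming in general, not the JCI interface described in the paper.

```python
import ctypes
import ctypes.util

# Locate and load the native C math library (path resolution is platform dependent).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so values cross the language boundary correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))   # 1.0, computed by the native library
```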

  9. Utilities for high performance dispersion model PHYSIC

    Yamazawa, Hiromi

    1992-09-01

    The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in the study on developing a high performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The procedure of a PHYSIC calculation consists of three steps: preparation of the relevant files, creation and submission of JCL, and graphic output of the results. A user can carry out the above procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  10. Playa: High-Performance Programmable Linear Algebra

    Victoria E. Howle

    2012-01-01

    Full Text Available This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.
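
    Playa's overloaded operators are implemented in C++ with expression templates; as a language-neutral illustration of the user-facing idea (writing solver algebra with natural operator syntax), here is a tiny hypothetical vector class. It is not Playa and makes no attempt at expression templates or performance.

```python
class Vec:
    """Tiny dense vector with overloaded operators (illustrative, not Playa)."""

    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):
        return Vec(a + b for a, b in zip(self.data, other.data))

    def __sub__(self, other):
        return Vec(a - b for a, b in zip(self.data, other.data))

    def __rmul__(self, scalar):
        return Vec(scalar * a for a in self.data)

    def dot(self, other):
        return sum(a * b for a, b in zip(self.data, other.data))

# One line of "solver algebra": the residual r = b - alpha * x.
b, x, alpha = Vec([1.0, 2.0, 3.0]), Vec([0.5, 0.5, 0.5]), 2.0
r = b - alpha * x
print(r.data, b.dot(x))
```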

  11. An integrated high performance fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1992-01-01

    A high performance Fastbus slave interface ASIC is presented. The Fastbus slave integrated circuit (FASIC) is a programmable device, enabling its direct use in many different applications. The FASIC acts as an interface between Fastbus and a 'standard' processor/memory bus. It can work stand-alone or together with a microprocessor. A set of address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/s to Fastbus can be obtained using an internal FIFO buffer in the FASIC. (orig.)

  12. Strategy Guideline. High Performance Residential Lighting

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  13. High-Performance Tiled WMS and KML Web Server

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
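
    A WMS GetMap request is an HTTP query string with standardized parameters, so a client only needs to assemble the URL. The sketch below builds such a request against a hypothetical endpoint; the server URL and layer name are invented, while the parameter names follow the WMS 1.1.1 specification.

```python
from urllib.parse import urlencode

def build_getmap_url(base_url, layer, bbox, width, height,
                     srs="EPSG:4326", fmt="image/jpeg"):
    """Assemble a WMS 1.1.1 GetMap request URL (endpoint and layer are hypothetical)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),   # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return f"{base_url}?{urlencode(params)}"

print(build_getmap_url("https://example.org/wms", "global_mosaic",
                       (-180, -90, 180, 90), 1024, 512))
```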

  14. Power efficient and high performance VLSI architecture for AES algorithm

    K. Kalaiselvi

    2015-09-01

    Full Text Available Advanced encryption standard (AES algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of AES algorithm using key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance than the existing VLSI architectures in terms of power, throughput and critical path delay.
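
    A software reference is a common way to check the outputs of such a hardware datapath. The sketch below exercises 256-bit AES in software; it assumes the PyCryptodome package and uses GCM mode purely as an example, since the abstract does not state which mode of operation the architecture implements.

```python
# Software cross-check for a 256-bit AES design; assumes the PyCryptodome package
# (pip install pycryptodome). The mode of operation and test data are illustrative.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(32)          # 256-bit key, matching the architecture above
plaintext = b"sixteen byte blk"     # arbitrary test data

cipher = AES.new(key, AES.MODE_GCM)
ciphertext, tag = cipher.encrypt_and_digest(plaintext)

# Decrypt and verify the round trip.
decipher = AES.new(key, AES.MODE_GCM, nonce=cipher.nonce)
assert decipher.decrypt_and_verify(ciphertext, tag) == plaintext
print(ciphertext.hex())
```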

  15. Transport in JET high performance plasmas

    2001-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions - the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier is dependent upon isotope and moreover suggest that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the predictions of conventional neo-classical theory, is discussed. (author)
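
    The Bohm and gyroBohm labels above refer to standard scalings of the cross-field thermal diffusivity; in their usual schematic form (quoted for orientation, not taken from the paper):

```latex
\chi_{B} \;\sim\; \frac{T}{16\, e B},
\qquad
\chi_{gB} \;\sim\; \rho_{*}\, \chi_{B} \;=\; \frac{\rho_i}{a}\, \chi_{B},
```

    where $T$ is the temperature, $B$ the magnetic field, $\rho_i$ the ion Larmor radius, $a$ the minor radius, and $\rho_{*}=\rho_i/a$ the normalized gyroradius; gyroBohm transport therefore improves relative to Bohm as the device size grows.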

  16. Advanced high performance solid wall blanket concepts

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability

  17. A High Performance COTS Based Computer Architecture

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long standing idea. Indeed, the difference in processing performance and energy efficiency between radiation hardened components and COTS components is so large that COTS components are very attractive for use in mass and power constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; we then briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  18. High-performance computing for airborne applications

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  19. Development of high performance cladding materials

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    The irradiation test for HANA claddings was conducted at the Halden research reactor, and a series of evaluations of the next-generation HANA claddings, including their in-pile and out-of-pile performance tests, was also carried out there. The 6th irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, such as corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of the HANA claddings in the Halden test has enabled a lead test rod program as the first step of the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than the criteria regulated by the NRC. The manufacturing process for strips was established in order to apply the HANA alloys, which were originally developed for claddings, to spacer grids. 250 kinds of model alloys for the next-generation claddings were designed and manufactured over four rounds and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high-temperature oxidation resistance compared to foreign advanced claddings. We established the manufacturing conditions controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps

  20. Strategy Guideline: Partnering for High Performance Homes

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  1. Management issues for high performance storage systems

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  2. Transport in JET high performance plasmas

    1999-01-01

    Two types of high performance scenarios have been produced in JET during the DTE1 campaign. One of them is the well known ELM-free hot ion H-mode scenario, extensively used in the past, which has two distinct regions - the plasma core and the edge transport barrier. The results obtained during the DTE1 campaign with D, DT and pure T plasmas confirm our previous conclusion that the core transport scales as gyroBohm in the inner half of the plasma volume, recovers its Bohm nature closer to the separatrix and behaves as ion neoclassical in the transport barrier. Measurements on the top of the barrier suggest that the width of the barrier is dependent upon isotope and moreover suggest that fast ions play a key role. The other high performance scenario is the relatively recently developed Optimised Shear Scenario with small or slightly negative magnetic shear in the plasma core. Different mechanisms of Internal Transport Barrier (ITB) formation have been tested by predictive modelling and the results are compared with experimentally observed phenomena. The experimentally observed non-penetration of heavy impurities through the strong ITB, which contradicts the predictions of conventional neo-classical theory, is discussed. (author)

  3. High-performance vertical organic transistors.

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood and VOTFTs often require complex patterning techniques using self-assembly processes which impedes a future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographical patterning directly and strongly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors.

  4. A Linux Workstation for High Performance Graphics

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  5. High performance separation of lanthanides and actinides

    Sivaraman, N.; Vasudeva Rao, P.R.

    2011-01-01

    The major advantage of High Performance Liquid Chromatography (HPLC) is its ability to provide rapid and high performance separations. It is evident from the Van Deemter curve of particle size versus resolution that packing materials with particle sizes less than 2 μm provide better resolution for high speed separations and for resolving complex mixtures than 5 μm based supports. In the recent past, chromatographic support materials based on monoliths have been studied extensively at our laboratory. A monolith column consists of a single piece of porous, rigid material containing mesopores and micropores, which provide fast analyte mass transfer. A monolith support provides significantly higher separation efficiency than particle-packed columns. A clear advantage of the monolith is that it can be operated at higher flow rates but with lower back pressure. A higher operating flow rate results in higher column permeability, which drastically reduces analysis time and provides high separation efficiency. The fast separation methods developed here were applied to assay the lanthanides and actinides in the dissolver solutions of nuclear reactor fuels

  6. Building Trust in High-Performing Teams

    Aki Soudunsaari

    2012-06-01

    Full Text Available Facilitation of growth is more about good, trustworthy contacts than capital. Trust is a driving force for business creation, and to create a global business you need to build a team that is capable of meeting the challenge. Trust is a key factor in team building and a needed enabler for cooperation. In general, trust building is a slow process, but it can be accelerated with open interaction and good communication skills. The fast-growing and ever-changing nature of global business sets demands for cooperation and team building, especially for startup companies. Trust building needs personal knowledge and regular face-to-face interaction, but it also requires empathy, respect, and genuine listening. Trust increases communication, and rich and open communication is essential for the building of high-performing teams. Other building materials are a shared vision, clear roles and responsibilities, willingness for cooperation, and supporting and encouraging leadership. This study focuses on trust in high-performing teams. It asks whether it is possible to manage trust and which tools and operation models should be used to speed up the building of trust. In this article, preliminary results from the authors’ research are presented to highlight the importance of sharing critical information and having a high level of communication through constant interaction.

  7. The path toward HEP High Performance Computing

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  8. Design of High Performance Permanent-Magnet Synchronous Wind Generators

    Chun-Yu Hsiao

    2014-11-01

    Full Text Available This paper is devoted to the analysis and design of high performance permanent-magnet synchronous wind generators (PSWGs. A systematic and sequential methodology for the design of PMSGs is proposed with a high performance wind generator as a design model. Aiming at high induced voltage, low harmonic distortion as well as high generator efficiency, optimal generator parameters such as pole-arc to pole-pitch ratio and stator-slot-shoes dimension, etc. are determined with the proposed technique using Maxwell 2-D, Matlab software and the Taguchi method. The proposed double three-phase and six-phase winding configurations, which consist of six windings in the stator, can provide evenly distributed current for versatile applications regarding the voltage and current demands for practical consideration. Specifically, windings are connected in series to increase the output voltage at low wind speed, and in parallel during high wind speed to generate electricity even when either one winding fails, thereby enhancing the reliability as well. A PMSG is designed and implemented based on the proposed method. When the simulation is performed with a 6 Ω load, the output power for the double three-phase winding and six-phase winding are correspondingly 10.64 and 11.13 kW. In addition, 24 Ω load experiments show that the efficiencies of double three-phase winding and six-phase winding are 96.56% and 98.54%, respectively, verifying the proposed high performance operation.

  9. Accelerating the scientific exploration process with scientific workflows

    Altintas, Ilkay; Barney, Oscar; Cheng, Zhengang; Critchlow, Terence; Ludaescher, Bertram; Parker, Steve; Shoshani, Arie; Vouk, Mladen

    2006-01-01

    Although an increasing amount of middleware has emerged in the last few years to achieve remote data access, distributed job execution, and data management, orchestrating these technologies with minimal overhead still remains a difficult task for scientists. Scientific workflow systems improve this situation by creating interfaces to a variety of technologies and automating the execution and monitoring of the workflows. Workflow systems provide domain-independent customizable interfaces and tools that combine different tools and technologies along with efficient methods for using them. As simulations and experiments move into the petascale regime, the orchestration of long running data and compute intensive tasks is becoming a major requirement for the successful steering and completion of scientific investigations. A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions of a scientific problem. Kepler is a cross-project collaboration, co-founded by the SciDAC Scientific Data Management (SDM) Center, whose purpose is to develop a domain-independent scientific workflow system. It provides a workflow environment in which scientists design and execute scientific workflows by specifying the desired sequence of computational actions and the appropriate data flow, including required data transformations, between these steps. Currently deployed workflows range from local analytical pipelines to distributed, high-performance and high-throughput applications, which can be both data- and compute-intensive. The scientific workflow approach offers a number of advantages over traditional scripting-based approaches, including ease of configuration, improved reusability and maintenance of workflows and components (called actors), automated provenance management, 'smart' re-running of different versions of workflow instances, on-the-fly updateable parameters, monitoring
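
    The definition of a workflow as a configurable, structured set of steps can be made concrete with a minimal pipeline sketch. The example below chains three steps and records simple provenance; it is a generic illustration and not Kepler or its actor model (all step names and the configuration are invented).

```python
# Minimal scientific-workflow sketch: an ordered, configurable set of steps with
# simple provenance recording. Generic illustration only; not the Kepler system.
import time

def fetch_data(cfg):
    return list(range(cfg["n_samples"]))           # stand-in for remote data access

def transform(data, cfg):
    return [x * cfg["scale"] for x in data]        # stand-in for a compute step

def summarize(data, cfg):
    return {"n": len(data), "mean": sum(data) / len(data)}

def run_workflow(cfg):
    provenance = []                                # record which step ran and when
    data = fetch_data(cfg)
    provenance.append(("fetch_data", time.time()))
    data = transform(data, cfg)
    provenance.append(("transform", time.time()))
    result = summarize(data, cfg)
    provenance.append(("summarize", time.time()))
    return result, provenance

result, prov = run_workflow({"n_samples": 100, "scale": 2.5})
print(result, [step for step, _ in prov])
```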

  10. High performance coronagraphy for direct imaging of exoplanets

    Guyon O.

    2011-07-01

    Full Text Available Coronagraphy has recently been an extremely active field of research, with several high performance concepts proposed, and several new coronagraphs tested in laboratories and on telescopes. Coronagraph concepts can be grouped into a few broad categories: Lyot-type coronagraphs, pupil apodization and nulling interferometers. Among existing coronagraph concepts, several approach the fundamental performance limit imposed by the physical nature of light. To achieve their full potential, coronagraphs require exquisite wavefront control and calibration. This has been, and still is, the main bottleneck for the scientifically productive use of coronagraphs on ground-based telescopes. New and promising wavefront sensing techniques suitable for high contrast imaging have, however, been developed in the last few years and are starting to be realized in laboratories. I will review some of these enabling technologies, and show that coronagraphs are now ready for “prime time” on existing and future telescopes.

  11. Using High Performance Computing to Support Water Resource Planning

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis: an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  12. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
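
    The two kernels named above, a sparse matrix times a tall-skinny block of vectors and a block eigensolver, have convenient off-the-shelf counterparts in SciPy. The sketch below only shows what SpMM and LOBPCG compute; it uses the CSR format rather than the CSB format developed in the paper, and the matrix is a random stand-in.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)
n, block = 2000, 8

# Random sparse symmetric matrix in CSR format (CSB is not available in SciPy).
A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
A = (A + A.T) * 0.5 + sp.diags(np.linspace(1.0, 10.0, n))

# SpMM: sparse matrix times a tall-skinny block of vectors.
X = rng.standard_normal((n, block))
Y = A @ X                                    # dense result of shape (n, block)

# LOBPCG: a few extreme eigenpairs, starting from the same block of vectors.
eigvals, eigvecs = lobpcg(A, X, tol=1e-6, maxiter=500, largest=False)
print(Y.shape, np.sort(eigvals)[:3])
```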

  13. Highlighting High Performance: Clearview Elementary School, Hanover, Pennsylvania

    2002-08-01

    Case study on high performance building features of Clearview Elementary School in Hanover, Pennsylvania. Clearview Elementary School in Hanover, Pennsylvania, is filled with natural light, not only in classrooms but also in unexpected, and traditionally dark, places like stairwells and hallways. The result is enhanced learning. Recent scientific studies conducted by the California Board for Energy Efficiency, involving 21,000 students, show test scores were 15% to 26% higher in classrooms with daylighting. Clearview's ventilation system also helps students and teachers stay healthy, alert, and focused on learning. The school's superior learning environment comes with annual average energy savings of about 40% over a conventional school. For example, with so much daylight, the school requires about a third less energy for electric lighting than a typical school. The school's innovative geothermal heating and cooling system uses the constant temperature of the Earth to cool and heat the building. The building and landscape designs work together to enhance solar heating in the winter, summer cooling, and daylighting all year long. Students and teachers have the opportunity to learn about high-performance design by studying their own school. At Clearview, the Hanover Public School District has shown that designing a school to save energy is affordable. Even with its many innovative features, the school's $6.35 million price tag is just $150,000 higher than average for elementary schools in Pennsylvania. Projected annual energy cost savings of approximately $18,000 mean a payback in 9 years. Reasonable construction costs demonstrate that other school districts can build schools that conserve energy, protect natural resources, and provide the educational and health benefits that come with high-performance buildings.

  14. Center for Technology for Advanced Scientific Component Software (TASCS)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit the unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  15. Intel Xeon Phi coprocessor high performance programming

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  16. Robust High Performance Aquaporin based Biomimetic Membranes

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

    Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect-free ABMs ... on top of a support membrane. Control membranes, either without aquaporins or with the inactive AqpZ R189A mutant aquaporin, served as controls. The separation performance of the membranes was evaluated by cross-flow forward osmosis (FO) and reverse osmosis (RO) tests. In RO the ABM achieved a water permeability of ~ 4 L/(m2 h bar) with a NaCl rejection > 97% at an applied hydraulic pressure of 5 bar. The water permeability was ~40% higher compared to a commercial brackish water RO membrane (BW30) and an order of magnitude higher compared to a seawater RO membrane (SW30HR). In FO, the ABMs had > 90...

  17. High performance nano-composite technology development

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from the semiconductor to the medical field. In spite of these merits, nano-composite studies are confined to a few special materials at laboratory scale because several technical difficulties are still unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  18. High Performance OLED Panel and Luminaire

    Spindler, Jeffrey [OLEDWorks LLC, Rochester, NY (United States)

    2017-02-20

    In this project, OLEDWorks developed and demonstrated the technology required to produce OLED lighting panels with high energy efficiency and excellent light quality. OLED panels developed in this program produce high quality warm white light with CRI greater than 85 and efficacy up to 80 lumens per watt (LPW). An OLED luminaire employing 24 of the high performance panels produces practical levels of illumination for general lighting, with a flux of over 2200 lumens at 60 LPW. This is a significant advance in the state of the art for OLED solid-state lighting (SSL), which is expected to be a complementary light source to the more advanced LED SSL technology that is rapidly replacing all other traditional forms of lighting.

  19. How to create high-performing teams.

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects on how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture with suggestions for further reading by Don Miguel Ruiz (The four agreements) and John Maxwell (21 Irrefutable laws of leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element should be with any superior culture.

  1. High performance nano-composite technology development

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D.; Kim, E. K.; Jung, S. Y.; Ryu, H. J.; Hwang, S. S.; Kim, J. K.; Hong, S. M.; Chea, Y. B.; Choi, C. H.; Kim, S. D.; Cho, B. G.; Lee, S. H.

    1999-06-01

    New material development is increasingly driven not only by high performance but also by environmental considerations. Nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great impact on many industrial areas. Depending on the polymer matrix and filler materials, nano-composites have applications ranging from semiconductors to the medical field. Despite these merits, nano-composite research is confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. The purpose of this study is therefore to establish systematic planning for next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by surveying overseas development trends and assessing our present status. (author).

  2. Development of high-performance blended cements

    Wu, Zichao

    2000-10-01

    This thesis presents the development of high-performance blended cements from industrial by-products. To overcome the low early-age strength of blended cements, several chemicals were studied as activators for cement hydration. Sodium sulfate was found to be the best activator. The blending proportions were optimized by Taguchi experimental design. The optimized blended cements, containing up to 80% fly ash, performed better than Type I cement in strength development and durability. At a constant cement content, concrete produced from the optimized blended cements had equal or higher strength and higher durability than that produced from Type I cement alone. The key to the activation mechanism was the reaction between the added SO4^2- ions and the Ca^2+ ions dissolved from cement hydration products.

  3. High performance parallel computers for science

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction

  4. High Performance with Prescriptive Optimization and Debugging

    Jensen, Nicklas Bo

    Automatic parallelization and automatic vectorization are attractive as they transparently optimize programs. The thesis contributes an improved dependence analysis for explicitly parallel programs. These improvements lead to more loops being vectorized; on average we achieve a speedup of 1.46 over the existing dependence analysis and vectorizer in GCC. Automatic optimizations often fail for theoretical and practical reasons. When they fail, we argue that a hybrid approach can be effective. Using compiler feedback, we propose to use the programmer's intuition and insight to achieve high performance. Compiler feedback enlightens the programmer about why a given optimization was not applied and suggests how to change the source code to make it more amenable to optimization. We show how this can yield significant speedups, achieving 2.4 times faster execution on a real industrial use case. To aid in parallel debugging we propose...

  5. Center for Technology for Advanced Scientific Component Software (TASCS)

    Kostadin, Damevski [Virginia State Univ., Petersburg, VA (United States)

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  6. Center for Technology for Advanced Scientific Component Software (TASCS) Consolidated Progress Report July 2006 - March 2009

    Bernholdt, D E; McInnes, L C; Govindaraju, M; Bramley, R; Epperly, T; Kohl, J A; Nieplocha, J; Armstrong, R; Shasharina, S; Sussman, A L; Sottile, M; Damevski, K

    2009-04-14

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  7. High performance anode for advanced Li batteries

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large-volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the fading of, or failure in, capacity that results from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of the silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance have been the key metrics used to validate the high-performance anode material. Under this effort, ASI made strides toward establishing a quality control protocol for the large-volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high-volume, low-cost production of Si-CNF material for anodes in Li-ion batteries.

  8. Design and experimentally measure a high performance metamaterial filter

    Xu, Ya-wen; Xu, Jing-cheng

    2018-03-01

    Metamaterial filters are a promising class of optoelectronic devices. In this paper, a metal/dielectric/metal (M/D/M) metamaterial filter is simulated and measured. Simulated results indicate that a perfect impedance match between the metamaterial filter and free space gives rise to the transmission band. Measured results show that the proposed metamaterial filter achieves high transmission for both TM and TE polarization directions. Moreover, high transmission is maintained when the incident angle reaches 45°. Further measurements show that the transmission band can be broadened, and its central frequency adjusted, by optimizing the structural parameters. The physical mechanism behind the central frequency shift is explained by establishing an equivalent resonant circuit model.

  9. High Performance Systolic Array Core Architecture Design for DNA Sequencer

    Saiful Nurdin Dayana

    2018-01-01

    This paper presents a high performance systolic array (SA) core architecture design for a Deoxyribonucleic Acid (DNA) sequencer. The core implements the affine gap penalty score Smith-Waterman (SW) algorithm. This time-consuming local alignment algorithm guarantees optimal alignment between DNA sequences, but it requires quadratic computation time when performed on standard desktop computers. The use of a linear SA decreases the time complexity from quadratic to linear. In addition, with the exponential growth of DNA databases, the SA architecture is used to overcome the timing issue. In this work, the SW algorithm has been captured using the Verilog Hardware Description Language (HDL) and simulated using the Xilinx ISIM simulator. The proposed design has been implemented on a Xilinx Virtex-6 Field Programmable Gate Array (FPGA), achieving a 90% reduction in core area.
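
    The record targets an FPGA systolic array; purely as a software point of reference, the sketch below implements the same affine-gap Smith-Waterman recurrences (Gotoh's formulation) in plain Python. The scoring parameters and the two test sequences are arbitrary examples, not values from the paper.

      # Minimal affine-gap Smith-Waterman (Gotoh recurrences) in plain Python.
      # Illustrative software reference only; the paper maps the anti-diagonals
      # of this dynamic program onto processing elements of a systolic array.
      def smith_waterman_affine(a, b, match=2, mismatch=-1, gap_open=-3, gap_extend=-1):
          n, m = len(a), len(b)
          NEG = float("-inf")
          H = [[0.0] * (m + 1) for _ in range(n + 1)]   # best local alignment score
          E = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignment ending in a gap in b
          F = [[NEG] * (m + 1) for _ in range(n + 1)]   # alignment ending in a gap in a
          best = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
                  F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  H[i][j] = max(0.0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
                  best = max(best, H[i][j])
          return best

      print(smith_waterman_affine("ACACACTA", "AGCACACA"))   # arbitrary test sequences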

  10. High performance APCS conceptual design and evaluation scoping study

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis verifies that it meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed waste streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies in existing designs, or performing facility design and permitting activities
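
    To make the sensitivity-study idea concrete, here is a deliberately toy mass-balance sketch of the kind of trade-off described above (incremental carbon adsorber beds versus outlet mercury concentration). The per-bed removal efficiency and inlet loading are invented numbers, not outputs of the ASPEN PLUS model referenced in the record.

      # Toy sensitivity sketch: outlet Hg concentration after N adsorber beds in
      # series, assuming a fixed fractional removal per bed. All numbers invented.
      def outlet_concentration(inlet_ug_per_m3, beds, efficiency_per_bed=0.90):
          """Return outlet Hg concentration after `beds` adsorbers in series."""
          c = inlet_ug_per_m3
          for _ in range(beds):
              c *= (1.0 - efficiency_per_bed)   # each bed removes the same fraction
          return c

      for beds in range(0, 4):
          print(beds, "beds ->", round(outlet_concentration(100.0, beds), 3), "ug/m3")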

  11. Applied and numerical partial differential equations scientific computing in simulation, optimization and control in a multidisciplinary context

    Glowinski, R; Kuznetsov, Y A; Periaux, Jacques; Neittaanmaki, Pekka; Pironneau, Olivier

    2010-01-01

    Standing at the intersection of mathematics and scientific computing, this collection of state-of-the-art papers in nonlinear PDEs examines their applications to subjects as diverse as dynamical systems, computational mechanics, and the mathematics of finance.

  12. High-performance computing in accelerating structure design and analysis

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which places stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R&D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed to meet the computational challenges posed by the NLC as well as projects such as PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, whether at the component level (single-cell optimization) or on the scale of an entire structure (beam heating and long-range wakefields)
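
    After finite-element discretization, the eigenmode analysis mentioned above reduces to a generalized eigenvalue problem K x = lambda M x. The following minimal sketch shows that structure with small placeholder matrices; they are not an accelerator cavity model, and the SLAC codes use distributed-memory eigensolvers rather than the dense solver used here.

      # Illustrative only: a tiny generalized eigenproblem of the form that
      # cavity-mode analysis produces after discretization. Matrices are invented.
      import numpy as np
      from scipy.linalg import eigh

      K = np.array([[4.0, -1.0, 0.0],
                    [-1.0, 4.0, -1.0],
                    [0.0, -1.0, 4.0]])      # stand-in "stiffness" (curl-curl) matrix
      M = np.diag([2.0, 2.0, 2.0])          # stand-in "mass" matrix

      eigvals, eigvecs = eigh(K, M)          # symmetric generalized eigenproblem
      frequencies = np.sqrt(eigvals)         # mode frequencies up to a scale factor
      print(frequencies)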

  13. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  14. High-performance mass storage system for workstations

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a bottleneck. The high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload a RISC workstation of I/O-related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capability to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape drive that form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval. The optical disks are used as archive
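
    A toy sketch of the hierarchical placement idea follows: hot data stays on magnetic disk, colder data migrates to the optical jukebox, and the coldest to tape. The tier names echo the record, but the age thresholds and the policy itself are assumptions made purely for illustration, not the Loral design.

      # Toy tiering policy: pick a storage tier from the time since last access.
      import time

      TIERS = ["magnetic_disk", "optical_jukebox", "8mm_tape"]

      def choose_tier(last_access_ts, now=None, hot_days=7, warm_days=90):
          now = now or time.time()
          age_days = (now - last_access_ts) / 86400.0
          if age_days <= hot_days:
              return TIERS[0]          # frequently used -> fast magnetic disk
          if age_days <= warm_days:
              return TIERS[1]          # cooler data -> optical jukebox
          return TIERS[2]              # archive -> helical scan tape

      print(choose_tier(time.time() - 3 * 86400))     # recent access -> magnetic_disk
      print(choose_tier(time.time() - 200 * 86400))   # cold data -> 8mm_tape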

  15. Miniaturized high performance sensors for space plasmas

    Young, D.T.

    1996-01-01

    Operating under ever more constrained budgets, NASA has turned to a new paradigm for instrumentation and mission development in which "smaller, faster, better, cheaper" is a primary consideration for future space plasma investigations. The author presents several examples showing the influence of this new paradigm on sensor development and discusses certain implications for the scientific return from resource-constrained sensors. The author also discusses one way to improve space plasma sensor performance, which is to search out new technologies, measurement techniques and instrument analogs from related fields, including, among others, laboratory plasma physics

  16. Physical modeling and high-performance GPU computing for characterization, interception, and disruption of hazardous near-Earth objects

    Kaplinger, Brian Douglas

    For the past few decades, both the scientific community and the general public have been becoming more aware that the Earth lives in a shooting gallery of small objects. We classify all of these asteroids and comets, known or unknown, that cross Earth's orbit as near-Earth objects (NEOs). A look at our geologic history tells us that NEOs have collided with Earth in the past, and we expect that they will continue to do so. With thousands of known NEOs crossing the orbit of Earth, there has been significant scientific interest in developing the capability to deflect an NEO from an impacting trajectory. This thesis applies the ideas of Smoothed Particle Hydrodynamics (SPH) theory to the NEO disruption problem. A simulation package was designed that allows efficacy simulation to be integrated into the mission planning and design process. This is done by applying ideas in high-performance computing (HPC) on the computer graphics processing unit (GPU). Rather than prove a concept through large standalone simulations on a supercomputer, a highly parallel structure allows for flexible, target dependent questions to be resolved. Built around nonclassified data and analysis, this computer package will allow academic institutions to better tackle the issue of NEO mitigation effectiveness.
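
    As a minimal illustration of the SPH formulation referenced above, the sketch below evaluates the standard cubic-spline kernel and the density summation on the CPU; a GPU implementation of the kind described would parallelize the per-particle loop. The particle data are invented and unrelated to the thesis simulations.

      # Minimal 1D SPH density summation with the standard cubic-spline kernel.
      import numpy as np

      def cubic_spline_1d(r, h):
          """Standard 1D cubic-spline kernel W(r, h)."""
          q = abs(r) / h
          sigma = 2.0 / (3.0 * h)                 # 1D normalization constant
          if q < 1.0:
              return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
          if q < 2.0:
              return sigma * 0.25 * (2.0 - q)**3
          return 0.0

      def density(positions, masses, h):
          """rho_i = sum_j m_j W(x_i - x_j, h); O(N^2) for clarity."""
          rho = np.zeros(len(positions))
          for i, xi in enumerate(positions):
              for j, xj in enumerate(positions):
                  rho[i] += masses[j] * cubic_spline_1d(xi - xj, h)
          return rho

      x = np.linspace(0.0, 1.0, 11)          # evenly spaced particles
      m = np.full_like(x, 0.1)               # unit line density => rho ~ 1 away from edges
      print(density(x, m, h=0.2))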

  17. SISYPHUS: A high performance seismic inversion factory

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high performance computers have become the standard instruments for solving forward and inverse problems in seismology. The software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with
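
    One of the supporting activities listed above, computing misfits and adjoint sources, is easy to illustrate for the classical L2 waveform misfit. The sketch below uses toy traces rather than real seismograms and is not part of the SISYPHUS code.

      # L2 waveform misfit and its adjoint source for a single trace.
      import numpy as np

      def l2_misfit_and_adjoint(synthetic, observed, dt):
          """chi = 1/2 * integral (s - d)^2 dt ; adjoint source is (s - d)."""
          residual = synthetic - observed
          chi = 0.5 * np.sum(residual**2) * dt
          adjoint_source = residual      # injected time-reversed in the adjoint run
          return chi, adjoint_source

      t = np.linspace(0.0, 1.0, 101)
      dt = t[1] - t[0]
      d = np.sin(2 * np.pi * 5 * t)                  # toy "observed" trace
      s = np.sin(2 * np.pi * 5 * (t - 0.01))         # toy "synthetic", slightly shifted
      chi, adj = l2_misfit_and_adjoint(s, d, dt)
      print(chi, adj[:3])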

  18. High performance liquid chromatography in pharmaceutical analyses

    Branko Nikolin

    2004-05-01

    In the testing of pre-sale procedures, the marketing of drugs and their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The use of a liquid mobile phase, with the possibility of changing the mobile phase polarity during chromatography and all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process compared to other methods. The greater choice of stationary phases is the next factor that enables good separation. The separation line is connected to specific and sensitive detector systems (spectrofluorimeter, diode detector, electrochemical detector) and to hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of therapy of a disease.(1) Figure 1 shows a chromatogram obtained from the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations prior to drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or

  19. Tackling some of the most intricate geophysical challenges via high-performance computing

    Khosronejad, A.

    2016-12-01

    Recently, the world has been witnessing significant enhancements in the computing power of supercomputers. Computer clusters, in conjunction with advanced mathematical algorithms, have set the stage for developing and applying powerful numerical tools to tackle some of the most intricate geophysical challenges that today's engineers face. One such challenge is to understand how turbulent flows, in real-world settings, interact with (a) rigid and/or mobile complex bed bathymetry of waterways and sea-beds in coastal areas; (b) objects with complex geometry that are fully or partially immersed; and (c) the free surface of waterways and water surface waves in the coastal area. This understanding is especially important because turbulent flows in real-world environments are often bounded by geometrically complex boundaries which dynamically deform, give rise to multi-scale and multi-physics transport phenomena, and are characterized by multi-lateral interactions among various phases (e.g. air/water/sediment phases). Herein, I present some of the multi-scale and multi-physics geophysical fluid mechanics processes that I have attempted to study using an in-house high-performance computational model, the so-called VFS-Geophysics. More specifically, I will present simulation results of turbulence/sediment/solute/turbine interactions in real-world settings. Some of the simulations I present were performed to gain scientific insights into processes such as sand wave formation (A. Khosronejad and F. Sotiropoulos (2014), Numerical simulation of sand waves in a turbulent open channel flow, Journal of Fluid Mechanics, 753:150-216), while others were carried out to predict the effects of climate change and large flood events on societal infrastructure (A. Khosronejad et al. (2016), Large eddy simulation of turbulence and solute transport in a forested headwater stream, Journal of Geophysical Research, doi: 10.1002/2014JF003423).

  20. Optimizing High Performance Self Compacting Concrete

    Raymond A Yonathan

    2017-01-01

    This paper's objectives are to study the effects of glass powder, silica fume, polycarboxylate ether, and gravel, and to optimize the proportion of each factor in making high performance SCC. The Taguchi method is proposed in this paper as the best solution to minimize the number of specimens, which would otherwise exceed 80 variations. Taguchi data analysis is applied to provide the composition, the optimization, and the contribution of each material for nine specimen variants. The concrete's workability was analyzed using the slump flow test, V-funnel test, and L-box test. Compressive strength and porosity tests were performed for the hardened state. Cylindrical specimens with dimensions of 100×200 mm were cast for compressive testing at ages of 3, 7, 14, 21, and 28 days. The porosity test was conducted at 28 days. It is revealed that silica fume contributes most strongly to slump flow and porosity, while coarse aggregate is the greatest contributing factor to the L-box and compressive tests. However, all factors show unclear results for the V-funnel test.
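
    A minimal sketch of the Taguchi-style analysis implied above: computing larger-is-better signal-to-noise ratios and averaging them per factor level. The response values and the single-factor L9 layout are invented and do not reproduce the paper's experiment.

      # Larger-is-better Taguchi S/N ratios and per-level factor effects (toy data).
      import numpy as np

      def sn_larger_is_better(y):
          """S/N = -10 log10( mean(1/y^2) ), used e.g. for compressive strength."""
          y = np.asarray(y, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / y**2))

      # Nine runs (rows) of an assumed L9 array, showing one factor at three levels.
      levels = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
      strength = np.array([52.0, 55.0, 50.0, 61.0, 63.0, 60.0, 57.0, 58.0, 56.0])

      sn = np.array([sn_larger_is_better([v]) for v in strength])
      for lv in (1, 2, 3):
          print("level", lv, "mean S/N =", round(sn[levels == lv].mean(), 2))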

  1. An integrated high performance Fastbus slave interface

    Christiansen, J.; Ljuslin, C.

    1993-01-01

    A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960 - 1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock synchronous processor/memory bus. It can work stand-alone or together with a 32 bit microprocessor. The FASIC is a programmable device enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy back sub-card interface including level conversion between ECL and TTL signal levels has been implemented using surface mount components and the 208 pin FASIC chip

  2. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  3. A high performance architecture for accelerator controls

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and a total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices, which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating the processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  4. High Performance Graphene Oxide Based Rubber Composites

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

    In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in the prevention of aggregation of GO sheets but also acts as an interface-bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO is comparable with those of the SBR composite reinforced with 13.1 vol.% of carbon black (CB), with a low mass density and a good gas barrier ability to boot. The present work also showed that GO-silica/SBR composite exhibited outstanding wear resistance and low-rolling resistance which make GO-silica/SBR very competitive for the green tire application, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  5. Initial rheological description of high performance concretes

    Alessandra Lorenzetti de Castro

    2006-12-01

    Concrete is defined as a composite material and, in rheological terms, it can be understood as a concentrated suspension of solid particles (aggregates) in a viscous liquid (cement paste). On a macroscopic scale, concrete flows as a liquid. It is known that the rheological behavior of concrete is close to that of a Bingham fluid, and two rheological parameters are needed to describe it: yield stress and plastic viscosity. The aim of this paper is to present an initial rheological description of high performance concretes using the modified slump test. According to the results, an increase of yield stress was observed over time, while only a slight variation in plastic viscosity was noticed. The incorporation of silica fume changed the rheological properties of the fresh concrete. The behavior of these materials also varied with the mixing procedure employed in their production. The addition of superplasticizer produced a large reduction in the mixture's yield stress, while the plastic viscosity remained practically constant.
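
    For reference, the Bingham model invoked above can be written as follows; the notation is standard and is not taken from the paper itself.

      \dot{\gamma} = 0 \quad \text{for } \tau \le \tau_0, \qquad
      \tau = \tau_0 + \mu_p \,\dot{\gamma} \quad \text{for } \tau > \tau_0,

    where tau is the shear stress, tau_0 the yield stress, mu_p the plastic viscosity and gamma-dot the shear rate; the modified slump test mentioned above is used to estimate the two parameters tau_0 and mu_p.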

  6. Development of a High Performance Spacer Grid

    Song, Kee Nam; Song, K. N.; Yoon, K. H. (and others)

    2007-03-15

    A spacer grid in a LWR fuel assembly is a key structural component to support fuel rods and to enhance the heat transfer from the fuel rod to the coolant. In this research, the main research items are the development of inherent and high performance spacer grid shapes, the establishment of mechanical/structural analysis and test technology, and the set-up of basic test facilities for the spacer grid. The main research areas and results are as follows. 1. 18 different spacer grid candidates have been invented and applied for domestic and US patents. Among the candidates 16 are chosen from the patent. 2. Two kinds of spacer grids are finally selected for the advanced LWR fuel after detailed performance tests on the candidates and commercial spacer grids from a mechanical/structural point of view. According to the test results the features of the selected spacer grids are better than those of the commercial spacer grids. 3. Four kinds of basic test facilities are set up and the relevant test technologies are established. 4. Mechanical/structural analysis models and technology for spacer grid performance are developed and the analysis results are compared with the test results to enhance the reliability of the models.

  7. Low-Cost High-Performance MRI

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1M per tesla of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical, ultra-low magnetic field implementations of MRI could set new standards for affordable (<$50,000) and robust portable devices.
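
    As a back-of-the-envelope check (the arithmetic below is not stated in the record), the proton Larmor frequency at the quoted field is

      f = \frac{\gamma}{2\pi} B_0 \approx 42.58\ \mathrm{MHz/T} \times 6.5\ \mathrm{mT} \approx 277\ \mathrm{kHz},

    compared with roughly 64 MHz at a clinical 1.5 T field, which illustrates the sensitivity penalty that the under-sampling and steady-state free precession strategies must recover.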

  8. Energy Efficient Graphene Based High Performance Capacitors.

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic material for various cutting-edge applications. To date, extensive research has been performed on the investigation of the diverse properties of GRP. The incorporation of this elegant material can be very attractive in terms of practical applications in energy storage and conversion systems. Among those systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP to capacitors is described succinctly. In particular, previous research activities on GRP-based capacitors are summarized. It was revealed that many secondary materials, such as polymers and metal oxides, have been introduced to improve performance. Also, diverse devices have been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future study.

  9. Durability of high performance concrete in seawater

    Amjad Hussain Memon; Salihuddin Radin Sumadi; Rabitah Handan

    2000-01-01

    This paper reports on the effects of blended cements on the durability of high performance concrete (HPC) in seawater. The specimens were initially subjected to water curing for seven days inside the laboratory at room temperature, followed by seawater curing exposed to the tidal zone until testing. Three levels of cement replacement (0%, 30% and 70%) were used. The combined use of chemical and mineral admixtures has resulted in a new generation of concrete called HPC. HPC has been identified as one of the most important advanced materials necessary in the effort to build a nation's infrastructure, and it opens new opportunities for the utilization of industrial by-products (mineral admixtures) in the construction industry. Permeability is considered one of the fundamental properties governing the durability of concrete in the marine environment. Results of this investigation indicated that the oxygen permeability values for the blended cement concretes at the age of one year are reduced by a factor of about 2 compared to the OPC control mix concrete. Therefore, both blended cement concretes are expected to withstand exposure to seawater in the tidal zone without serious deterioration. (Author)

  10. Automatic Energy Schemes for High Performance Applications

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as Infiniband, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them in addition to DVFS to maximize energy savings. Experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed with the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
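
    A hedged sketch of the per-phase reasoning described above: choose the lowest CPU frequency whose predicted slowdown of a communication-dominated phase stays within a performance-loss budget. The P-states, power figures and the linear timing model are assumptions for illustration, not measurements from the thesis.

      # Pick the lowest frequency that keeps a phase within a performance-loss budget.
      FREQS_GHZ = [2.5, 2.0, 1.6, 1.2]                         # assumed P-states
      POWER_W   = {2.5: 95.0, 2.0: 72.0, 1.6: 58.0, 1.2: 47.0} # assumed package power

      def phase_time(base_time_s, cpu_bound_frac, f, f_max=2.5):
          """Simple model: only the CPU-bound fraction of the phase slows down."""
          return base_time_s * ((1 - cpu_bound_frac) + cpu_bound_frac * f_max / f)

      def choose_frequency(base_time_s, cpu_bound_frac, loss_budget=0.10):
          best = FREQS_GHZ[0]
          for f in FREQS_GHZ:
              t = phase_time(base_time_s, cpu_bound_frac, f)
              if t <= base_time_s * (1 + loss_budget):
                  best = f              # lower frequency, still within budget
          return best

      # A communication-dominated phase (10% CPU-bound) tolerates a lower P-state.
      f = choose_frequency(base_time_s=1.0, cpu_bound_frac=0.10)
      print(f, "GHz, est. energy:", round(POWER_W[f] * phase_time(1.0, 0.10, f), 1), "J")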

  11. The XXL Survey. I. Scientific motivations - XMM-Newton observing plan - Follow-up observations and simulation programme

    Pierre, M.; Pacaud, F.; Adami, C.; Alis, S.; Altieri, B.; Baran, N.; Benoist, C.; Birkinshaw, M.; Bongiorno, A.; Bremer, M. N.; Brusa, M.; Butler, A.; Ciliegi, P.; Chiappetti, L.; Clerc, N.; Corasaniti, P. S.; Coupon, J.; De Breuck, C.; Democles, J.; Desai, S.; Delhaize, J.; Devriendt, J.; Dubois, Y.; Eckert, D.; Elyiv, A.; Ettori, S.; Evrard, A.; Faccioli, L.; Farahi, A.; Ferrari, C.; Finet, F.; Fotopoulou, S.; Fourmanoit, N.; Gandhi, P.; Gastaldello, F.; Gastaud, R.; Georgantopoulos, I.; Giles, P.; Guennou, L.; Guglielmo, V.; Horellou, C.; Husband, K.; Huynh, M.; Iovino, A.; Kilbinger, M.; Koulouridis, E.; Lavoie, S.; Le Brun, A. M. C.; Le Fevre, J. P.; Lidman, C.; Lieu, M.; Lin, C. A.; Mantz, A.; Maughan, B. J.; Maurogordato, S.; McCarthy, I. G.; McGee, S.; Melin, J. B.; Melnyk, O.; Menanteau, F.; Novak, M.; Paltani, S.; Plionis, M.; Poggianti, B. M.; Pomarede, D.; Pompei, E.; Ponman, T. J.; Ramos-Ceja, M. E.; Ranalli, P.; Rapetti, D.; Raychaudury, S.; Reiprich, T. H.; Rottgering, H.; Rozo, E.; Rykoff, E.; Sadibekova, T.; Santos, J.; Sauvageot, J. L.; Schimd, C.; Sereno, M.; Smith, G. P.; Smolčić, V.; Snowden, S.; Spergel, D.; Stanford, S.; Surdej, J.; Valageas, P.; Valotti, A.; Valtchanov, I.; Vignali, C.; Willis, J.; Ziparo, F.

    2016-06-01

    Context. The quest for the cosmological parameters that describe our universe continues to motivate the scientific community to undertake very large survey initiatives across the electromagnetic spectrum. Over the past two decades, the Chandra and XMM-Newton observatories have supported numerous studies of X-ray-selected clusters of galaxies, active galactic nuclei (AGNs), and the X-ray background. The present paper is the first in a series reporting results of the XXL-XMM survey; it comes at a time when the Planck mission results are being finalised. Aims: We present the XXL Survey, the largest XMM programme totaling some 6.9 Ms to date and involving an international consortium of roughly 100 members. The XXL Survey covers two extragalactic areas of 25 deg2 each at a point-source sensitivity of ~5 × 10-15 erg s-1 cm-2 in the [0.5-2] keV band (completeness limit). The survey's main goals are to provide constraints on the dark energy equation of state from the space-time distribution of clusters of galaxies and to serve as a pathfinder for future, wide-area X-ray missions. We review science objectives, including cluster studies, AGN evolution, and large-scale structure, that are being conducted with the support of approximately 30 follow-up programmes. Methods: We describe the 542 XMM observations along with the associated multi-λ and numerical simulation programmes. We give a detailed account of the X-ray processing steps and describe innovative tools being developed for the cosmological analysis. Results: The paper provides a thorough evaluation of the X-ray data, including quality controls, photon statistics, exposure and background maps, and sky coverage. Source catalogue construction and multi-λ associations are briefly described. This material will be the basis for the calculation of the cluster and AGN selection functions, critical elements of the cosmological and science analyses. Conclusions: The XXL multi-λ data set will have a unique lasting legacy

  12. Ultra high performance concrete dematerialization study

    NONE

    2004-03-01

    Concrete is the most widely used building material in the world and its use is expected to grow. It is well recognized that the production of portland cement results in the release of large amounts of carbon dioxide, a greenhouse gas (GHG). The main challenge facing the industry is to produce concrete in an environmentally sustainable manner. Reclaimed industrial by-products such as fly ash, silica fume and slag can reduce the amount of portland cement needed to make concrete, thereby reducing the amount of GHGs released to the atmosphere. The use of these supplementary cementing materials (SCM) can also enhance the long-term strength and durability of concrete. The intention of the EcoSmart(TM) Concrete Project is to develop sustainable concrete through innovation in supply, design and construction. In particular, the project focuses on finding a way to minimize the GHG signature of concrete by maximizing the replacement of portland cement in the concrete mix with SCM while improving the cost, performance and constructability. This paper describes the use of Ductal(R) Ultra High Performance Concrete (UHPC) for ramps in a condominium. It examined the relationship between the selection of UHPC and the overall environmental performance, cost, constructability, maintenance and operational efficiency as it relates to the EcoSmart Program. The advantages and challenges of using UHPC were outlined. In addition to its very high strength, UHPC has been shown to have very good potential for GHG emission reduction due to reduced material requirements, reduced transport costs and increased SCM content.

  13. High-performance laboratories and cleanrooms

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-01-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to its importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations, primarily safety driven, that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms; in California these include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities

  14. High-performance phase-field modeling

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex, nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results for the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.
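
    For readers unfamiliar with the models named above, a standard form of the non-conserved (Allen-Cahn) and conserved (Cahn-Hilliard) equations derived from a Ginzburg-Landau free energy is sketched below; the double-well density shown is a common textbook choice and is not necessarily the one used in the thesis.

      F[\phi] = \int_\Omega \Big( f(\phi) + \tfrac{\kappa}{2}\,|\nabla\phi|^2 \Big)\, d\Omega,
      \qquad f(\phi) = \tfrac{1}{4}\,(\phi^2 - 1)^2

      \text{Allen-Cahn:} \quad \partial_t \phi = -M\,\big( f'(\phi) - \kappa\,\nabla^2 \phi \big)

      \text{Cahn-Hilliard:} \quad \partial_t \phi = \nabla \cdot \Big[ M\, \nabla\big( f'(\phi) - \kappa\,\nabla^2 \phi \big) \Big]

    Energy-stable schemes of the kind described above guarantee F[\phi^{n+1}] \le F[\phi^n] at the discrete level, mirroring the continuous dissipation property.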

  15. Rapid Prototyping of High Performance Signal Processing Applications

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high
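
    To make the notion of a topological pattern more concrete, the sketch below builds a parameterized N-stage chain of actors instead of enumerating nodes and edges by hand; the graph representation and actor names are invented for illustration and are not the thesis's dataflow framework.

      # Parameterized construction of a repetitive dataflow structure (a chain).
      def chain_pattern(stage_names):
          """Return (actors, edges) for actor_0 -> actor_1 -> ... -> actor_{N-1}."""
          actors = list(stage_names)
          edges = [(actors[i], actors[i + 1]) for i in range(len(actors) - 1)]
          return actors, edges

      # Instantiating the same pattern at two different scales:
      small = chain_pattern([f"fir_{i}" for i in range(4)])
      large = chain_pattern([f"fir_{i}" for i in range(1024)])
      print(len(small[1]), "edges;", len(large[1]), "edges")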

  16. The Convergence of High Performance Computing and Large Scale Data Analytics

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
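
    A small sketch of the spatiotemporal-index idea described above: a table mapping data blocks to their time and space extents lets a query touch only the relevant files. The schema, paths and records are invented; this is not the NCCS/GMU implementation.

      # Toy spatiotemporal index: map (time, bounding box) queries to data blocks.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.execute("""CREATE TABLE blocks
                     (path TEXT, t0 REAL, t1 REAL,
                      lat0 REAL, lat1 REAL, lon0 REAL, lon1 REAL)""")
      con.executemany("INSERT INTO blocks VALUES (?,?,?,?,?,?,?)", [
          ("hdfs:///example/1990_01.nc", 0.0, 31.0, -90, 90, -180, 180),
          ("hdfs:///example/1990_02.nc", 31.0, 59.0, -90, 90, -180, 180),
      ])

      def find_blocks(t0, t1, lat0, lat1, lon0, lon1):
          """Return paths of blocks whose extents overlap the query window."""
          q = """SELECT path FROM blocks
                 WHERE t1 >= ? AND t0 <= ? AND lat1 >= ? AND lat0 <= ?
                   AND lon1 >= ? AND lon0 <= ?"""
          return [r[0] for r in con.execute(q, (t0, t1, lat0, lat1, lon0, lon1))]

      print(find_blocks(40.0, 45.0, 30, 60, -10, 40))   # -> only the second block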

  17. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  18. Requirements for high performance computing for lattice QCD. Report of the ECFA working panel

    Jegerlehner, F.; Kenway, R.D.; Martinelli, G.; Michael, C.; Pene, O.; Petersson, B.; Petronzio, R.; Sachrajda, C.T.; Schilling, K.

    2000-01-01

    This report, prepared at the request of the European Committee for Future Accelerators (ECFA), contains an assessment of the High Performance Computing resources which will be required in coming years by European physicists working in Lattice Field Theory and a review of the scientific opportunities which these resources would open. (orig.)

  19. High performance computations using dynamical nucleation theory

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular-level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model, a model for determining molecular-scale nucleation rate constants, and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described.
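
    The master-slave scheduling pattern referred to above can be sketched in a few lines; the example below uses Python's multiprocessing purely as an illustration and is not the NWChem/DNTMC implementation.

```python
# Minimal sketch of the master-slave idea: a master hands out independent
# chunks of Monte Carlo work and aggregates the results. The actual DNTMC
# code targets NWChem and MPI; this only shows the scheduling pattern.
from multiprocessing import Pool
import random

def mc_chunk(seed):
    """Worker task: a stand-in for one block of Monte Carlo sampling."""
    rng = random.Random(seed)
    accepted = sum(1 for _ in range(100_000) if rng.random() < 0.3)
    return accepted

if __name__ == "__main__":
    # Adding workers (or nodes) scales the sampling without changing the tasks.
    with Pool(processes=4) as pool:
        results = pool.map(mc_chunk, range(16))
    print("total accepted moves:", sum(results))
```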

  20. The need for high performance breeder reactors

    Vaughan, R.D.; Chermanne, J.

    1977-01-01

    It can easily be demonstrated, on the basis of realistic estimates of continued high oil costs, that an increasing portion of the growth in energy demand must be supplied by nuclear power, and that it might account for 20% of all energy production by the end of the century. Such assumptions lead very quickly to the conclusion that the discovery, extraction and processing of uranium will not be able to follow the demand; the bottleneck will essentially be related to the rate at which the ore can be discovered and extracted, and not to the existing quantities nor their grade. Figures as high as 150,000 t/annum and more would quickly be reached, and it is already necessary to ask whether enough capital can be attracted to meet these requirements. There is only one solution to this problem: improve the conversion ratio of the nuclear system and quickly reach breeding; this would reduce natural uranium consumption by a factor of about 50. However, this condition is not sufficient; the commercial breeder must have a breeding gain as high as possible, because the Pu out-of-pile time and the Pu losses in the cycle could lead to an unacceptable doubling time for the system if the breeding gain is too low. That is why it is vital to develop high performance breeder reactors. The present paper indicates how the Gas-cooled Breeder Reactor (GBR) can meet the problems mentioned above, on the basis of recent and realistic studies. It briefly describes the present status of GBR development, drawing on the predecessors in the gas cooled reactor line, particularly the AGR. It shows how the GBR fuel benefits greatly from the LMFBR fuel irradiation experience. It compares the GBR performance on a consistent basis with that of the LMFBR. The GBR capital and fuel cycle costs are compared with those of thermal and fast reactors respectively. The conclusion, based on a cost-benefit study, is that the GBR must be quickly developed in order

  1. Integrating advanced facades into high performance buildings

    Selkowitz, Stephen E.

    2001-01-01

    Glass is a remarkable material but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; and improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  2. JT-60U high performance regimes

    Ishida, S.

    1999-01-01

    High performance regimes of JT-60U plasmas are presented with an emphasis upon the results from the use of a semi-closed pumped divertor with W-shaped geometry. Plasma performance in transient and quasi-steady states has been significantly improved in reversed shear and high-β_p regimes. The reversed shear regime transiently elevated the equivalent Q_DT^eq up to 1.25 (n_D(0)·τ_E·T_i(0) = 8.6×10^20 m^-3·s·keV) in a reactor-relevant, thermonuclear-dominant regime. Long sustainment of enhanced confinement with internal transport barriers (ITBs) with a fully non-inductive current drive in a reversed shear discharge was successfully demonstrated with LH wave injection. Performance sustainment has been extended in the high-β_p regime with a high triangularity, achieving long sustainment of plasma conditions equivalent to Q_DT^eq ∼ 0.16 (n_D(0)·τ_E·T_i(0) ∼ 1.4×10^20 m^-3·s·keV) for ∼4.5 s with a large non-inductive current drive fraction of 60-70% of the plasma current. Thermal and particle transport analyses show significant reduction of thermal and particle diffusivities around the ITB, resulting in a strong E_r shear in the ITB region. The W-shaped divertor is effective for He ash exhaust, demonstrating a steady exhaust capability of τ_He*/τ_E ∼ 3-10 in support of ITER. Suppression of neutral back flow and of the chemical sputtering effect has been observed, while the MARFE onset density is rather decreased. Negative-ion based neutral beam injection (N-NBI) experiments have created a clear H-mode transition. An enhanced ionization cross-section due to multi-step ionization processes was confirmed, as theoretically predicted. The current density profile driven by N-NBI is measured to be in good agreement with theoretical prediction. N-NBI induced TAE modes, characterized as persistent and bursting oscillations, have been observed from a low hot-ion beta of β_h ≳ 0.1-0.2% without a significant loss of fast ions. (author)

  3. High Performance Commercial Fenestration Framing Systems

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and long service life, which is required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform less well as an effective barrier to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost effective and energy efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  4. Learning in a Simulation-OT in Heart Surgery and the Challenges of the Scientification of Work

    Langemeyer, Ines

    2014-01-01

    Enhancing competency and collaboration has become a salient topic of the professional debate on medical safety issues. The advantages of simulation-based training scenarios for team communication, routines and critical work procedures especially in operation theatres have been vigorously discussed. However, the literature on simulation-based…

  5. High performance visual display for HENP detectors

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and a fly-through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real time visual display for events accumulated during simulations.

  6. Thermal interface pastes nanostructured for high performance

    Lin, Chuangang

    Thermal interface materials in the form of pastes are needed to improve thermal contacts, such as that between a microprocessor and a heat sink of a computer. High-performance and low-cost thermal pastes have been developed in this dissertation by using polyol esters as the vehicle and various nanoscale solid components. The proportion of a solid component needs to be optimized, as an excessive amount degrades the performance, due to the increase in the bond line thickness. The optimum solid volume fraction tends to be lower when the mating surfaces are smoother, and higher when the thermal conductivity is higher. Both a low bond line thickness and a high thermal conductivity help the performance. When the surfaces are smooth, a low bond line thickness can be even more important than a high thermal conductivity, as shown by the outstanding performance of the nanoclay paste of low thermal conductivity in the smooth case (0.009 μm), with the bond line thickness less than 1 μm, as enabled by low storage modulus G', low loss modulus G'' and high tan δ. However, for rough surfaces, the thermal conductivity is important. The rheology affects the bond line thickness, but it does not correlate well with the performance. This study found that the structure of carbon black is an important parameter that governs the effectiveness of a carbon black for use in a thermal paste. By using a carbon black with a lower structure (i.e., a lower DBP value), a thermal paste that is more effective than the previously reported carbon black paste was obtained. Graphite nanoplatelet (GNP) was found to be comparable in effectiveness to carbon black (CB) pastes for rough surfaces, but it is less effective for smooth surfaces. At the same filler volume fraction, GNP gives higher thermal conductivity than carbon black paste. At the same pressure, GNP gives higher bond line thickness than CB (Tokai or Cabot). The effectiveness of GNP is limited, due to the high bond line thickness. A

  7. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  8. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  9. Scalability of DL_POLY on High Performance Computing Platform

    Mabule Samuel Mabakane

    2017-12-01

    This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  10. High performance discharges near the operational limit in HT-7

    Li Jiangang; Wan Baonian; Luo Jiarong; Gao Xiang; Zhao Yanping; Kuang Guangli; Zhang Xiaodong; Yang Yu; Yi Bao; Bojiang Ding; Jikang Xie; Yuanxi Wan

    2001-01-01

    Efforts have been made on the HT-7 tokamak to extend the stable operation boundaries. Extensive RF boronization and siliconization have been used and a wider operational Hugill diagram has been obtained. The density transiently reached 1.3 times the Greenwald density limit in ohmic discharges. A stationary high performance discharge with q_a = 2.1 has been obtained after siliconization. Confinement improvement was obtained as a result of a significant reduction of the electron thermal diffusivity χ_e in the outer region of the plasma. An improved confinement phase was also observed with LHCD in the density range of 70-120% of the Greenwald density limit. Off-axis LH wave power deposition was attributed to the weakly hollow current density profile. Code simulations and measurements showed good agreement with the off-axis LH wave deposition. Supersonic molecular beam injection has been successfully used to achieve stable high density operation in the region of the Greenwald density limit. (author)
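
    For context (added here, not part of the original record), the Greenwald density limit referenced in this and several other records is conventionally defined as

```latex
% Greenwald density limit (conventional definition, supplied for context)
n_G = \frac{I_p}{\pi a^2},
\qquad n_G \,[10^{20}\,\mathrm{m}^{-3}],\quad I_p \,[\mathrm{MA}],\quad a \,[\mathrm{m}],
```

    where I_p is the plasma current and a is the plasma minor radius, so "1.3 times the Greenwald density limit" means the density exceeded n_G by 30%.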

  11. High performance visual display for HENP detectors

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactiv...

  12. Intelligent Facades for High Performance Green Buildings

    Dyson, Anna [Rensselaer Polytechnic Inst., Troy, NY (United States)

    2017-03-01

    Progress Towards Net-Zero and Net-Positive-Energy Commercial Buildings and Urban Districts Through Intelligent Building Envelope Strategies. Previous research and development of intelligent facade systems has been limited in its contribution towards national goals for achieving on-site net zero buildings, because this R&D has failed to couple the many qualitative requirements of building envelopes, such as the provision of daylighting, access to exterior views, and satisfying aesthetic and cultural characteristics, with the quantitative metrics of energy harvesting, storage and redistribution. To achieve energy self-sufficiency from on-site solar resources, building envelopes can and must address this gamut of concerns simultaneously. With this project, we have undertaken the development of a high-performance, building-integrated combined heat and power concentrating photovoltaic system with high-temperature thermal capture, storage and transport towards multiple applications (BICPV/T). The critical contribution we are offering, the Integrated Concentrating Solar Façade (ICSF), is conceived to improve daylighting quality for improved occupant health and to mitigate solar heat gain while maximally capturing and transferring on-site solar energy. The ICSF accomplishes this multi-functionality by intercepting only the direct-normal component of solar energy (which is responsible for elevated cooling loads), thereby transforming a previously problematic source of energy into a high quality resource that can be applied to building demands such as heating, cooling, dehumidification, domestic hot water, and possibly further augmentation of electrical generation through organic Rankine cycles. With the ICSF technology, our team is addressing the global challenge of transitioning commercial and residential building stock towards on-site clean energy self-sufficiency, by fully integrating innovative environmental control system strategies within an intelligent and responsively dynamic building

  13. Research initiatives for plug-and-play scientific computing

    McInnes, Lois Curfman; Dahlgren, Tamara; Nieplocha, Jarek; Bernholdt, David; Allan, Ben; Armstrong, Rob; Chavarria, Daniel; Elwasif, Wael; Gorton, Ian; Kenny, Joe; Krishan, Manoj; Malony, Allen; Norris, Boyana; Ray, Jaideep; Shende, Sameer

    2007-01-01

    This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance

  14. Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing

    Amooie, M. A.; Moortgat, J.

    2017-12-01

    We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with a fast quad-core 1.2 GHz ARMv8 64-bit processor, 1 GB of RAM, and a 32 GB microSD card for local storage. Therefore, the cluster has a total of 128 GB of RAM distributed over the individual nodes and a flash capacity of 4 TB with 512 processor cores, while it benefits from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between nodes. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance computing (HPC) and the handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows, with the goal of achieving a massively parallelized scalable code. We present benchmarking results for the computational performance across various numbers of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and feasible learning platform for challenging engineering and scientific problems.
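
    A minimal sketch of the MPI pattern such a cluster relies on is shown below, using mpi4py (assumed to be installed alongside an MPI implementation on each node); it is illustrative only and not the group's in-house flow simulator.

```python
# Minimal MPI sketch (mpi4py assumed installed): each rank computes a partial
# sum and the root gathers the result -- the same pattern a distributed-memory
# simulator uses to combine per-node contributions.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank works on its own slice of a (conceptually) large array.
local = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
local_sum = local.sum()

# Reduce the partial sums onto rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} ranks, total = {total}")
```

    Launched with something like `mpiexec -n 512 python sum.py` and a hostfile listing the nodes, the same script scales from one RPi to the whole cluster.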

  15. Open-Source Integrated Design-Analysis Environment For Nuclear Energy Advanced Modeling & Simulation Final Scientific/Technical Report

    O' Leary, Patrick [Kitware, Inc., Clifton Park, NY (United States)

    2017-01-30

    The framework created through the Open-Source Integrated Design-Analysis Environment (IDAE) for Nuclear Energy Advanced Modeling & Simulation grant has simplified and democratized advanced modeling and simulation in the nuclear energy industry across a range of nuclear engineering applications. It leverages millions of investment dollars from the Department of Energy's Office of Nuclear Energy for modeling and simulation of light water reactors and the Office of Nuclear Energy's research and development. The IDAE framework enhanced Kitware's Computational Model Builder (CMB) while leveraging existing open-source toolkits, creating a graphical end-to-end umbrella that guides end-users and developers through the nuclear energy advanced modeling and simulation lifecycle. In addition, the work delivered strategic advancements in meshing and visualization for ensembles.

  16. High-performance silicon nanowire bipolar phototransistors

    Tan, Siew Li; Zhao, Xingyan; Chen, Kaixiang; Crozier, Kenneth B.; Dan, Yaping

    2016-07-01

    Silicon nanowires (SiNWs) have emerged as sensitive absorbing materials for photodetection at wavelengths ranging from ultraviolet (UV) to the near infrared. Most of the reports on SiNW photodetectors are based on photoconductor, photodiode, or field-effect transistor device structures. These SiNW devices each have their own advantages and trade-offs in optical gain, response time, operating voltage, and dark current noise. Here, we report on the experimental realization of single SiNW bipolar phototransistors on silicon-on-insulator substrates. Our SiNW devices are based on bipolar transistor structures with an optically injected base region and are fabricated using CMOS-compatible processes. The experimentally measured optoelectronic characteristics of the SiNW phototransistors are in good agreement with simulation results. The SiNW phototransistors exhibit significantly enhanced response to UV and visible light, compared with typical Si p-i-n photodiodes. The near infrared responsivities of the SiNW phototransistors are comparable to those of Si avalanche photodiodes but are achieved at much lower operating voltages. Compared with other reported SiNW photodetectors as well as conventional bulk Si photodiodes and phototransistors, the SiNW phototransistors in this work demonstrate the combined advantages of high gain, high photoresponse, low dark current, and low operating voltage.

  17. High-performance commercial building systems

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to

  18. Development of a high performance liquid chromatography method ...

    Development of a high performance liquid chromatography method for simultaneous ... Purpose: To develop and validate a new low-cost high performance liquid chromatography (HPLC) method for ..... Several papers have reported the use of ...

  19. High Performance Home Building Guide for Habitat for Humanity Affiliates

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  20. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  1. Generic algorithms for high performance scalable geocomputing

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g. threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system.
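
    Fern itself is a C++ library; the Python sketch below only illustrates the underlying idea under that caveat: a grid operation (here a 3x3 mean filter) is exposed as a single call, while the decomposition into row blocks with halo rows and the distribution over CPU cores stay hidden from the model developer.

```python
# Sketch (Python, for illustration only; Fern itself is C++) of a grid algorithm
# that hides multi-core distribution behind a simple call: a 3x3 mean filter
# applied to row blocks on separate processes, with one-row halos.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def mean3x3(block):
    """3x3 mean filter on a 2D block, edge-padded."""
    p = np.pad(block, 1, mode="edge")
    out = np.zeros_like(block, dtype=np.float64)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out += p[di:di + block.shape[0], dj:dj + block.shape[1]]
    return out / 9.0

def parallel_mean3x3(grid, workers=4):
    """Distribute row blocks (with halo rows) over worker processes."""
    bounds = np.linspace(0, grid.shape[0], workers + 1, dtype=int)
    jobs = [grid[max(lo - 1, 0):hi + 1] for lo, hi in zip(bounds[:-1], bounds[1:])]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        parts = list(ex.map(mean3x3, jobs))
    out = []
    for (lo, hi), part in zip(zip(bounds[:-1], bounds[1:]), parts):
        top_halo = 1 if lo > 0 else 0            # drop halo rows from each block
        out.append(part[top_halo:top_halo + (hi - lo)])
    return np.vstack(out)

if __name__ == "__main__":
    g = np.random.rand(1000, 1000)
    print(parallel_mean3x3(g).shape)   # same shape as the input grid
```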

  2. Can Knowledge of the Characteristics of "High Performers" Be Generalised?

    McKenna, Stephen

    2002-01-01

    Two managers described as high performing constructed complexity maps of their organization/world. The maps suggested that high performance is socially constructed and negotiated in specific contexts and management competencies associated with it are context specific. Development of high performers thus requires personalized coaching more than…

  3. Development of High-Performance Cast Crankshafts. Final Technical Report

    Bauer, Mark E [General Motors, Detroit, MI (United States)

    2017-03-31

    The objective of this project was to develop technologies that would enable the production of cast crankshafts that can replace high performance forged steel crankshafts. To achieve this, the Ultimate Tensile Strength (UTS) of the new material needs to be 850 MPa with a desired minimum Yield Strength (YS; 0.2% offset) of 615 MPa and at least 10% elongation. Perhaps more challenging, the cast material needs to be able to achieve sufficient local fatigue properties to satisfy the durability requirements in today’s high performance gasoline and diesel engine applications. The project team focused on the development of cast steel alloys for application in crankshafts to take advantage of the higher stiffness over other potential material choices. The material and process developed should be able to produce high-performance crankshafts at no more than 110% of the cost of current production cast units, perhaps the most difficult objective to achieve. To minimize costs, the primary alloy design strategy was to design compositions that can achieve the required properties with minimal alloying and post-casting heat treatments. An Integrated Computational Materials Engineering (ICME) based approach was utilized, rather than relying only on traditional trial-and-error methods, which has been proven to accelerate alloy development time. Prototype melt chemistries designed using ICME were cast as test specimens and characterized iteratively to develop an alloy design within a stage-gate process. Standard characterization and material testing was done to validate the alloy performance against design targets and provide feedback to material design and manufacturing process models. Finally, the project called for Caterpillar and General Motors (GM) to develop optimized crankshaft designs using the final material and manufacturing processing path developed. A multi-disciplinary effort was to integrate finite element analyses by engine designers and geometry-specific casting

  4. Use of Heuristics to Facilitate Scientific Discovery Learning in a Simulation Learning Environment in a Physics Domain

    Veermans, Koen; van Joolingen, Wouter; de Jong, Ton

    2006-01-01

    This article describes a study into the role of heuristic support in facilitating discovery learning through simulation-based learning. The study compares the use of two such learning environments in the physics domain of collisions. In one learning environment (implicit heuristics) heuristics are only used to provide the learner with guidance…

  5. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  6. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Kneringer, G; Roedhammer, P; Wildner, H [eds.

    2001-07-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  7. On the impact of quantum computing technology on future developments in high-performance scientific computing

    Möller, Matthias; Vuik, Cornelis

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to researchers and vendors of future computing technologies, national authorities are showing strong interest in maturing this technology due to its known potential to break many of today’s encryption technique...

  8. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    Downes, Turlough P.; Roller, Sabine P.; Seitsonen, Ari Paavo; Valcke, Sophie; Keyes, David E.; Sawley, Marie Christine; Schulthess, Thomas C.; Shalf, John M.

    2013-01-01

    and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators

  9. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    Abdelfattah, Ahmad

    2015-01-01

    applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved

  10. On the impact of quantum computing technology on future developments in high-performance scientific computing

    Möller, M.; Vuik, C.

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to

  11. Progress Towards High Performance, Steady-state Spherical Torus

    Ono, M.; Bell, M.G.; Bell, R.E.; Bigelow, T.; Bitter, M.; Blanchard, W.; Boedo, J.; Bourdelle, C.; Bush, C.; Choe, W.; Chrzanowski, J.; Darrow, D.S.; Diem, S.J.; Doerner, R.; Efthimion, P.C.; Ferron, J.R.; Fonck, R.J.; Fredrickson, E.D.; Garstka, G.D.; Gates, D.A.; Gray, T.; Grisham, L.R.; Heidbrink, W.; Hill, K.W.; Hoffman, D.; Jarboe, T.R.; Johnson, D.W.; Kaita, R.; Kaye, S.M.; Kessel, C.; Kim, J.H.; Kissick, M.W.; Kubota, S.; Kugel, H.W.; LeBlanc, B.P.; Lee, K.; Lee, S.G.; Lewicki, B.T.; Luckhardt, S.; Maingi, R.; Majeski, R.; Manickam, J.; Maqueda, R.; Mau, T.K.; Mazzucato, E.; Medley, S.S.; Menard, J.; Mueller, D.; Nelson, B.A.; Neumeyer, C.; Nishino, N.; Ostrander, C.N.; Pacella, D.; Paoletti, F.; Park, H.K.; Park, W.; Paul, S.F.; Peng, Y.-K. M.; Phillips, C.K.; Pinsker, R.; Probert, P.H.; Ramakrishnan, S.; Raman, R.; Redi, M.; Roquemore, A.L.; Rosenberg, A.; Ryan, P.M.; Sabbagh, S.A.; Schaffer, M.; Schooff, R.J.; Seraydarian, R.; Skinner, C.H.; Sontag, A.C.; Soukhanovskii, V.; Spaleta, J.; Stevenson, T.; Stutman, D.; Swain, D.W.; Synakowski, E.; Takase, Y.; Tang, X.; Taylor, G.; Timberlake, J.; Tritz, K.L.; Unterberg, E.A.; Von Halle, A.; Wilgen, J.; Williams, M.; Wilson, J.R.; Xu, X.; Zweben, S.J.; Akers, R.; Barry, R.E.; Beiersdorfer, P.; Bialek, J.M.; Blagojevic, B.; Bonoli, P.T.; Carter, M.D.; Davis, W.; Deng, B.; Dudek, L.; Egedal, J.; Ellis, R.; Finkenthal, M.; Foley, J.; Fredd, E.; Glasser, A.; Gibney, T.; Gilmore, M.; Goldston, R.J.; Hatcher, R.E.; Hawryluk, R.J.; Houlberg, W.; Harvey, R.; Jardin, S.C.; Hosea, J.C.; Ji, H.; Kalish, M.; Lowrance, J.; Lao, L.L.; Levinton, F.M.; Luhmann, N.C.; Marsala, R.; Mastravito, D.; Menon, M.M.; Mitarai, O.; Nagata, M.; Oliaro, G.; Parsells, R.; Peebles, T.; Peneflor, B.; Piglowski, D.; Porter, G.D.; Ram, A.K.; Rensink, M.; Rewoldt, G.; Roney, P.; Shaing, K.; Shiraiwa, S.; Sichta, P.; Stotler, D.; Stratton, B.C.; Vero, R.; Wampler, W.R.; Wurden, G.A.

    2003-01-01

    Research on the Spherical Torus (or Spherical Tokamak) is being pursued to explore the scientific benefits of modifying the field line structure from that in more moderate aspect-ratio devices, such as the conventional tokamak. The Spherical Torus (ST) experiments are being conducted in various U.S. research facilities including the MA-class National Spherical Torus Experiment (NSTX) at Princeton, and three medium-size ST research facilities: Pegasus at the University of Wisconsin, HIT-II at the University of Washington, and CDX-U at Princeton. In the context of the fusion energy development path being formulated in the U.S., an ST-based Component Test Facility (CTF) and, ultimately, a Demo device are being discussed. For these, it is essential to develop high-performance, steady-state operational scenarios. The relevant scientific issues are energy confinement, MHD stability at high beta (β), noninductive sustainment, ohmic-solenoid-free start-up, and power and particle handling. In the confinement area, the NSTX experiments have shown that the confinement can be up to 50% better than the ITER-98-pby2 H-mode scaling, consistent with the requirements for an ST-based CTF and Demo. In NSTX, CTF-relevant average toroidal beta values β_T of up to 35%, with near-unity central β_T, have been obtained. NSTX will be exploring advanced regimes where β_T up to 40% can be sustained through active stabilization of resistive wall modes. To date, the most successful technique for noninductive sustainment in NSTX is the high beta-poloidal regime, where discharges with a high noninductive fraction (∼60% bootstrap current + neutral-beam-injected current drive) were sustained over the resistive skin time. Research on radio-frequency-based heating and current drive utilizing HHFW (High Harmonic Fast Wave) and EBW (Electron Bernstein Wave) is also pursued on NSTX, Pegasus, and CDX-U. For noninductive start-up, Coaxial Helicity Injection (CHI), developed in HIT/HIT-II, has been adopted

  12. DOE research in utilization of high-performance computers

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  13. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed, and implemented in software, an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving the resulting system of linear algebraic equations (SLAE). Software implementations of the algorithm using the NVIDIA CUDA technology and computing libraries were made, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed depending on the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
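
    The CPU side of the numerical core described above can be sketched with standard tools; the snippet below factors a stand-in symmetric positive-definite matrix once with a Cholesky decomposition and reuses the factor for several right-hand sides. It does not reproduce the paper's CUDA/GPU implementation.

```python
# CPU sketch of the core numerical step: factor the symmetric positive-definite
# system matrix once and reuse the factor for repeated right-hand sides
# (e.g., different tool positions along the well). The matrix here is a random
# SPD stand-in, not an actual finite-element matrix.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 500
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # guaranteed symmetric positive-definite

c, low = cho_factor(A)                 # O(n^3) factorization, done once
for k in range(3):                     # cheap O(n^2) solves per right-hand side
    b = rng.standard_normal(n)
    x = cho_solve((c, low), b)
    print(k, np.linalg.norm(A @ x - b))  # residual check
```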

  14. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    Vignal, Philippe

    2014-06-06

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation, and the phase-field crystal equation as test cases. These two models allow us to highlight some of the main advantages that we have access to while using PetIGA for scientific computing.
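
    For readers unfamiliar with the test cases, the sketch below integrates a 1D Cahn-Hilliard equation with a simple explicit finite-difference scheme; it only illustrates the model itself and is not PetIGA, which uses isogeometric spatial discretizations and implicit time integration.

```python
# Minimal 1D explicit finite-difference sketch of the Cahn-Hilliard equation
#   dc/dt = laplacian(c**3 - c - gamma * laplacian(c))
# with periodic boundaries. Illustration of the model only, not PetIGA.
import numpy as np

def laplacian(u, dx):
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

n, dx, dt, gamma = 256, 1.0, 0.01, 1.0
rng = np.random.default_rng(1)
c = 0.1 * rng.standard_normal(n)            # small perturbation around c = 0

for step in range(20000):
    mu = c**3 - c - gamma * laplacian(c, dx)   # chemical potential
    c = c + dt * laplacian(mu, dx)             # conservative update

print("mass (conserved):", c.mean(), " range:", c.min(), c.max())
```

    After many steps the field separates towards the two phases c ≈ ±1 while the mean of c stays constant, which is the qualitative behaviour the paper's solver reproduces at much higher fidelity.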

  15. Computer simulation of plasma behavior in open-ended linear theta machines. Scientific report 81-5

    Stover, E. K.

    1981-04-01

    Zero-dimensional and one-dimensional fluid plasma computer models have been developed to study the behavior of linear theta pinch plasmas. Computer simulation results generated from these codes are compared with data obtained from two theta pinch experiments so that significant machine plasma behavior can be identified. The experiments examined are a collisional experiment, T_i ≈ 50 eV, n_e ≈ 10^17 cm^-3, where the plasma mean free path was significantly less than the plasma column length, and a hot ion species experiment, T_i ≈ 3 keV, n_e ≈ 10^16 cm^-3, where the ion mean free path was on the order of the plasma column length.

  16. Computer simulation of plasma behavior in open-ended linear theta machines. Scientific report 81-5

    Stover, E.K.

    1981-04-01

    Zero-dimensional and one-dimensional fluid plasma computer models have been developed to study the behavior of linear theta pinch plasmas. Computer simulation results generated from these codes are compared with data obtained from two theta pinch experiments so that significant machine plasma behavior can be identified. The experiments examined are a collisional experiment, T_i ≈ 50 eV, n_e ≈ 10^17 cm^-3, where the plasma mean free path was significantly less than the plasma column length, and a hot ion species experiment, T_i ≈ 3 keV, n_e ≈ 10^16 cm^-3, where the ion mean free path was on the order of the plasma column length.

  17. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger "European Supercomputer" in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  18. High performance computing in power and energy systems

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need to develop capabilities to handle the large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to get meaningful information in real time to ensure a secure, reliable and stable power system grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy, etc.

  19. High performance data acquisition with InfiniBand

    Adamczewski, Joern; Essel, Hans G.; Kurz, Nikolaus; Linev, Sergey

    2008-01-01

    For the new experiments at FAIR, new concepts for data acquisition systems have to be developed, such as the distribution of self-triggered, time-stamped data streams over high performance networks for event building. In this concept any data filtering is done behind the network. Therefore the network must achieve up to 1 GByte/s bi-directional data transfer per node. Detailed simulations have been done to optimize scheduling mechanisms for such event building networks. For real performance tests InfiniBand has been chosen as one of the fastest available network technologies. The measurements of network event building have been performed on different Linux clusters from four to over one hundred nodes. Several InfiniBand libraries have been tested, such as uDAPL, Verbs, and MPI. The tests have been integrated in the data acquisition backbone core software DABC, a general purpose data acquisition library. Detailed results are presented. In the worst cases (over one hundred nodes) 50% of the required bandwidth can already be achieved. It seems possible to improve these results by further investigations.

  20. High Performance Gigabit Ethernet Switches for DAQ Systems

    Barczyk, Artur

    2005-01-01

    Commercially available high performance Gigabit Ethernet (GbE) switches are optimized mostly for Internet and standard LAN application traffic. DAQ systems, on the other hand, usually make use of very specific traffic patterns, with e.g. deterministic arrival times. Industry's accepted loss-less limit of 99.999% may still imply an unacceptably high loss rate for DAQ purposes, as e.g. in the case of the LHCb readout system. In addition, even switches passing this criterion under random traffic can show significantly higher loss rates if subjected to our traffic pattern, mainly due to buffer memory limitations. We have evaluated the performance of several switches, ranging from "pizza-box" devices with 24 or 48 ports up to chassis-based core switches, in a test-bed capable of emulating realistic traffic patterns as expected in the readout system of our experiment. The results obtained in our tests have been used to refine and parametrize our packet-level simulation of the complete LHCb readout network. In this paper we report on the...

  1. Inclusive vision for high performance computing at the CSIR

    Gazendam, A

    2006-02-01

    and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  2. Modern Trends Of Computation, Simulation, and Communication, And Their Impacts On The Progress Of Scientific And Engineering Research, Development, And Education

    Bunjamin, Muhammad

    2001-01-01

    A short report on the modern trends of computation, simulation, and communication in the 1990s is presented, along with their impacts on the progress of scientific and engineering research, development, and education. A full description of this giant issue is certainly a 'mission impossible' for the author. Nevertheless, it is the author's hope that it will at least give an overall view of what is going on in this very dynamic field in the advanced countries. After 'thinking globally' through reading this report, we should then decide on what and how to act locally to respond to these global trends. The main sources of information reported here were the computational science and engineering journals and books issued during the 1990s, as listed in the references below.

  3. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    Bouchard, Kristofer E.

    2016-01-01

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  4. High Performance Multiphase Combustion Tool Using Level Set-Based Primary Atomization Coupled with Flamelet Models, Phase II

    National Aeronautics and Space Administration — The innovative methodologies proposed in this STTR Phase 2 project will enhance Loci-STREAM, which is a high-performance, high-fidelity simulation tool already being...

  5. High Performance Multiphase Combustion Tool Using Level Set-Based Primary Atomization Coupled with Flamelet Models, Phase I

    National Aeronautics and Space Administration — The innovative methodologies proposed in this STTR Phase 1 project will enhance Loci-STREAM, which is a high-performance, high-fidelity simulation tool already being...

  6. RSYST: From nuclear reactor calculations towards a highly sophisticated scientific software integration environment

    Noack, M.; Seybold, J.; Ruehle, R.

    1996-01-01

    The software environment RSYST was originally used to solve problems of reactor physics. The consideration of advanced scientific simulation requirements and the strict application of modern software design principles led to a system which is perfectly suited to solving problems in various complex scientific problem domains. Starting with a review of the early days of RSYST, we describe the straightforward evolution driven by the need for a software environment which combines the advantages of a high-performance database system with the capability to integrate sophisticated scientific technical applications. The RSYST architecture is presented and the data modelling capabilities are described. To demonstrate the powerful possibilities and flexibility of the RSYST environment, we describe a wide range of RSYST applications, e.g., mechanical simulations of multibody systems, which are used in biomechanical research, civil engineering and robotics. In addition, a hypermedia system which is used for scientific technical training and documentation is presented. (orig.)

  7. Protein Simulation Data in the Relational Model.

    Simms, Andrew M; Daggett, Valerie

    2012-10-01

    High performance computing is leading to unprecedented volumes of data. Relational databases offer a robust and scalable model for storing and analyzing scientific data. However, these features do not come without a cost: significant design effort is required to build a functional and efficient repository. Modeling protein simulation data in a relational database presents several challenges: the data captured from individual simulations are large, multi-dimensional, and must integrate with both simulation software and external data sites. Here we present the dimensional design and relational implementation of a comprehensive data warehouse for storing and analyzing molecular dynamics simulations using SQL Server.
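
    A minimal sketch of what a dimensional (star-schema) layout for trajectory data can look like is given below, with sqlite3 standing in for SQL Server; the table and column names are hypothetical and are not the paper's actual warehouse design.

```python
# Minimal star-schema sketch for molecular dynamics trajectory data, with
# sqlite3 standing in for SQL Server. Table and column names are hypothetical
# illustrations of a dimensional design, not the paper's actual warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_simulation (
        sim_id        INTEGER PRIMARY KEY,
        protein       TEXT,
        temperature_K REAL,
        force_field   TEXT
    );
    CREATE TABLE dim_residue (
        residue_id INTEGER PRIMARY KEY,
        sim_id     INTEGER REFERENCES dim_simulation(sim_id),
        chain      TEXT,
        resname    TEXT,
        resnum     INTEGER
    );
    -- Fact table: one row per residue per stored frame.
    CREATE TABLE fact_frame_residue (
        sim_id     INTEGER REFERENCES dim_simulation(sim_id),
        residue_id INTEGER REFERENCES dim_residue(residue_id),
        time_ps    REAL,
        phi        REAL,
        psi        REAL,
        sasa       REAL
    );
""")

# Typical analysis query: average solvent-accessible surface area per residue
# after a chosen time point in a given simulation.
query = """
    SELECT r.resname, r.resnum, AVG(f.sasa)
    FROM fact_frame_residue f JOIN dim_residue r USING (residue_id)
    WHERE f.sim_id = ? AND f.time_ps >= ?
    GROUP BY r.resname, r.resnum
"""
print(conn.execute(query, (1, 50000.0)).fetchall())   # empty until data is loaded
```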

  8. Control switching in high performance and fault tolerant control

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2010-01-01

    The problem of reliability in high performance control and in fault tolerant control is considered in this paper. A feedback controller architecture for high performance and fault tolerance is considered. The architecture is based on the Youla-Jabr-Bongiorno-Kucera (YJBK) parameterization. By usi...

  9. Mechanical Properties of High Performance Cementitious Grout (II)

    Sørensen, Eigil V.

    The present report is an update of the report “Mechanical Properties of High Performance Cementitious Grout (I)” [1] and describes tests carried out on the high performance grout MASTERFLOW 9500, marked “WMG 7145 FP”, developed by BASF Construction Chemicals A/S and designed for use in grouted...

  10. Development of new high-performance stainless steels

    Park, Yong Soo

    2002-01-01

    This paper focused on high-performance stainless steels and their development status. Effect of nitrogen addition on super-stainless steel was discussed. Research activities at Yonsei University, on austenitic and martensitic high-performance stainless, steels, and the next-generation duplex stainless steels were introduced

  11. High-performance, scalable optical network-on-chip architectures

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and its contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing a GWOR of any size is introduced. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes the advantages of

  12. Improving the high performance concrete (HPC) behaviour in high temperatures

    Cattelan Antocheves De Lima, R.

    2003-12-01

    Full Text Available High performance concrete (HPC) is an interesting material that has long been attracting interest from the scientific and technical community, due to the clear advantages obtained in terms of mechanical strength and durability. Given these better characteristics, HPC, in its various forms, has been gradually replacing normal strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and low permeability typical of HPC can result in explosive spalling under certain thermal and mechanical conditions, such as when concrete is subject to rapid temperature rises during a fire. This behaviour is caused by the build-up of internal water pressure in the pore structure during heating, and by stresses originating from thermal deformation gradients. Although there are still a limited number of experimental programs in this area, some researchers have reported that the addition of polypropylene fibers to HPC is a suitable way to avoid explosive spalling under fire conditions. This change in behavior derives from the fact that polypropylene fibers melt at high temperatures and leave a pathway for heated gas to escape the concrete matrix, therefore allowing the outward migration of water vapor and resulting in the reduction of internal pore pressure. The present research investigates the behavior of high performance concrete at high temperatures, especially when polypropylene fibers are added to the mix.

    High-strength concrete (HSC) is a material of great interest to the scientific and technical community, owing to the clear advantages obtained in terms of mechanical strength and durability. Because of these characteristics, HSC, in its various forms, is in some applications gradually replacing normal-strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and the low permeability t

  13. High performance computing and communications: Advancing the frontiers of information technology

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  14. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of running simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
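
    As a minimal, self-contained illustration of the Monte Carlo class of methods discussed here (not an example taken from the paper), the sketch below draws exponential decay times for a toy particle and recovers its mean lifetime together with the expected 1/sqrt(N) statistical uncertainty.

      import math
      import random

      # Toy Monte Carlo: simulate N exponential decay times with a chosen
      # "true" mean lifetime, then estimate the lifetime from the sample.
      TAU_TRUE = 2.2e-6          # arbitrary mean lifetime [s]
      N = 100_000

      random.seed(1)
      times = [random.expovariate(1.0 / TAU_TRUE) for _ in range(N)]

      tau_hat = sum(times) / N                 # maximum-likelihood estimate
      sigma = tau_hat / math.sqrt(N)           # statistical error ~ tau/sqrt(N)
      print(f"estimated lifetime = {tau_hat:.3e} +/- {sigma:.1e} s")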

  15. Methodology for the preliminary design of high performance schools in hot and humid climates

    Im, Piljae

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge to easily and accurately evaluate energy efficient measures for K-5 schools, which would contribute to the accelerated dissemination of energy efficient design. For the development of the toolkit, first, a survey was performed to identify high performance measures available today that are being implemented in new K-5 school buildings. Then an existing case-study school building in a hot and humid climate was selected and analyzed to understand the energy use pattern in a school building and to be used in developing a calibrated simulation. Based on the information from the previous step, an as-built and calibrated simulation was then developed. To accomplish this, five calibration steps were performed to match the simulation results with the measured energy use. The five steps include: (1) Using an actual 2006 weather file with measured solar radiation, (2) Modifying the lighting & equipment schedules using ASHRAE's RP-1093 methods, (3) Using actual equipment performance curves (i.e., scroll chiller), (4) Using Winkelmann's method for the underground floor heat transfer, and (5) Modifying the HVAC and room setpoint temperatures based on the measured field data. Next, the calibrated simulation of the case-study K-5 school was compared to an ASHRAE Standard 90.1-1999 code-compliant school. In the next step, the energy savings potentials from the application of several high performance measures to an equivalent ASHRAE Standard 90.1-1999 code-compliant school were evaluated. The high performance measures applied included the recommendations from the ASHRAE Advanced Energy Design Guides (AEDG) for K-12 and other high performance measures from the literature review, as well as a daylighting strategy and solar PV and thermal systems. The results show that the net
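
    The abstract does not name the statistical criteria used to judge how well the calibrated simulation matched the measured energy use; NMBE and CV(RMSE) are metrics commonly used for this kind of calibration, and the hypothetical helper below shows how they are computed from paired monthly values.

      import math

      def calibration_metrics(measured, simulated):
          """Return (NMBE %, CV(RMSE) %) for paired measured/simulated values.
          Common calibration metrics; the study above does not state which
          criteria it used, so this is illustrative only."""
          n = len(measured)
          mean_m = sum(measured) / n
          bias = sum(s - m for m, s in zip(measured, simulated))
          rmse = math.sqrt(sum((s - m) ** 2 for m, s in zip(measured, simulated)) / n)
          return 100.0 * bias / (n * mean_m), 100.0 * rmse / mean_m

      # Hypothetical monthly electricity use [MWh], measured vs. simulated.
      measured  = [120, 115, 130, 150, 170, 200, 220, 215, 180, 150, 130, 125]
      simulated = [118, 117, 128, 155, 168, 205, 215, 210, 185, 148, 133, 122]
      nmbe, cv_rmse = calibration_metrics(measured, simulated)
      print(f"NMBE = {nmbe:.2f} %, CV(RMSE) = {cv_rmse:.2f} %")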

  16. High-Performance Secure Database Access Technologies for HEP Grids

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  17. High-Performance Secure Database Access Technologies for HEP Grids

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  18. Using a multi-user virtual simulation to promote science content: Mastery, scientific reasoning, and academic self-efficacy in fifth grade science

    Ronelus, Wednaud J.

    The purpose of this study was to examine the impact of using a role-playing game versus a more traditional text-based instructional method on a cohort of general education fifth grade students' science content mastery, scientific reasoning abilities, and academic self-efficacy. This is an action research study that employs an embedded mixed methods design model, involving both quantitative and qualitative data. The study is guided by the critical design ethnography theoretical lens: an ethnographic process involving participatory design work aimed at transforming a local context while producing an instructional design that can be used in multiple contexts. The impact of an immersive 3D multi-user web-based educational simulation game on a cohort of fifth-grade students was examined at multiple levels of assessment: immediate, close, proximal and distal. A survey instrument was used to assess students' self-efficacy in technology and scientific inquiry. Science content mastery was assessed at the immediate (participation in game play), close (engagement in-game reports) and proximal (understanding of targeted concepts) levels; scientific reasoning was assessed at the distal (domain general critical thinking test) level. This quasi-experimental study used a convenience sampling method. Seven regular fifth-grade classes participated in this study. Three of the classes were the control group and the other four were the intervention group. A cohort of 165 students participated in this study. The treatment group contained 38 boys and 52 girls, and the control group contained 36 boys and 39 girls. Two-tailed t-test, Analysis of Covariance (ANCOVA), and Pearson Correlation were used to analyze data. The data supported the rejection of the null hypothesis for the three research questions. The correlational analyses showed strong relationships among three of the four variables. There were no correlations between gender and the three dependent variables. The findings of this

  19. Interactive Data Exploration for High-Performance Fluid Flow Computations through Porous Media

    Perovic, Nevena

    2014-09-01

    © 2014 IEEE. The advent of huge data in high-performance computing (HPC) applications such as fluid flow simulations usually hinders the interactive processing and exploration of simulation results. Such an interactive data exploration not only allows scientists to 'play' with their data but also to visualise huge (distributed) data sets in both an efficient and easy way. Therefore, we propose an HPC data exploration service based on a sliding window concept that enables researchers to access remote data (available on a supercomputer or cluster) during simulation runtime without exceeding any bandwidth limitations between the HPC back-end and the user front-end.
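
    A minimal sketch of the sliding-window idea, assuming the simulation results are a flat binary array on disk: only the slice currently being inspected is read into memory. The real service streams remote data during simulation runtime and manages bandwidth limits, which this toy omits.

      import numpy as np

      def window(path, start, length, dtype=np.float32):
          """Read only the [start, start+length) window of a flat binary file."""
          data = np.memmap(path, dtype=dtype, mode="r")   # no full load into memory
          return np.array(data[start:start + length])     # copy just the window

      # Demo: write a large array to disk, then fetch a small window of it.
      np.arange(1_000_000, dtype=np.float32).tofile("field.bin")
      print(window("field.bin", start=500_000, length=5))
      # -> [500000. 500001. 500002. 500003. 500004.]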

  20. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    None

    2004-11-01

    Design guidelines outline high performance principles for the new or retrofit design of K-12 schools in tropical island climates. By incorporating energy improvements into construction or renovation plans, schools can reduce energy consumption and costs.

  1. Decal electronics for printed high performance cmos electronic systems

    Hussain, Muhammad Mustafa; Sevilla, Galo Torres; Cordero, Marlon Diaz; Kutbee, Arwa T.

    2017-01-01

    High performance complementary metal oxide semiconductor (CMOS) electronics are critical for any full-fledged electronic system. However, state-of-the-art CMOS electronics are rigid and bulky making them unusable for flexible electronic applications

  2. Brain inspired high performance electronics on flexible silicon

    Sevilla, Galo T.; Rojas, Jhonathan Prieto; Hussain, Muhammad Mustafa

    2014-01-01

    The brain's stunning speed, energy efficiency and massive parallelism make it the role model for upcoming high performance computation systems. Although human brain components are a million times slower than state-of-the-art silicon industry components

  3. Enabling High-Performance Computing as a Service

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as a service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a

  4. Mechanical Properties of High Performance Cementitious Grout Masterflow 9200

    Sørensen, Eigil V.

    The present report describes tests carried out on the high performance grout Masterflow 9200, developed by BASF Construction Chemicals A/S and designed for use in grouted connections of windmill foundations....

  5. Implementation of a high performance parallel finite element micromagnetics package

    Scholz, W.; Suess, D.; Dittrich, R.; Schrefl, T.; Tsiantos, V.; Forster, H.; Fidler, J.

    2004-01-01

    A new high performance scalable parallel finite element micromagnetics package has been implemented. It includes solvers for static energy minimization, time integration of the Landau-Lifshitz-Gilbert equation, and the nudged elastic band method

  6. High Performance Low Mass Nanowire Enabled Heatpipe, Phase I

    National Aeronautics and Space Administration — Illuminex Corporation proposes a NASA Phase I SBIR project to develop high performance, lightweight, low-profile heat pipes with enhanced thermal transfer properties...

  7. High Performance Thin-Film Composite Forward Osmosis Membrane

    Yip, Ngai Yin; Tiraferri, Alberto; Phillip, William A.; Schiffman, Jessica D.; Elimelech, Menachem

    2010-01-01

    obstacle hindering further advancements of this technology. This work presents the development of a high performance thin-film composite membrane for forward osmosis applications. The membrane consists of a selective polyamide active layer formed

  8. High Performance Low Mass Nanowire Enabled Heatpipe, Phase II

    National Aeronautics and Space Administration — Heat pipes are widely used for passive, two-phase electronics cooling. As advanced high power, high performance electronics in space based and terrestrial...

  9. High Performing Greenways Design: A Case Study of Gainesville, GA

    AKPINAR, Abdullah

    2015-01-01

    Greenways play a significant role in structuring and developing our living environment in urban as well as suburban areas. They provide many recreational, environmental, ecological, social, educational, and economical benefits to cities. This article questions what makes high performing greenways by exploring the concept, history, and development of greenways in the United States. The paper illustrates the concept of linked open spaces and high performing urban greenways in residential commun...

  10. High Performing Greenways Design: A Case Study of Gainesville, GA

    AKPINAR, Abdullah

    2014-01-01

    Greenways play a significant role in structuring and developing our living environment in urban as well as suburban areas. They provide many recreational, environmental, ecological, social, educational, and economical benefits to cities. This article questions what makes high performing greenways by exploring the concept, history, and development of greenways in the United States. The paper illustrates the concept of linked open spaces and high performing urban greenways in residential commun...

  11. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  12. High Performance Computing Modernization Program Kerberos Throughput Test Report

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/5524--17-9751: High Performance Computing Modernization Program Kerberos Throughput Test Report. Daniel G. Gdula* and

  13. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    2004-11-01

    The Energy Design Guidelines for High Performance Schools--Tropical Island Climates provides school boards, administrators, and design staff with guidance to help them make informed decisions about energy and environmental issues important to school systems and communities. These design guidelines outline high performance principles for the new or retrofit design of your K-12 school in tropical island climates. By incorporating energy improvements into their construction or renovation plans, schools can significantly reduce energy consumption and costs.

  14. Simulations

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.
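
    To make the notion of numerical simulation concrete in this power-converter context, the sketch below time-steps the averaged model of a buck converter feeding an R-L load with explicit Euler integration; all component values and the fixed duty cycle are arbitrary choices, not taken from the paper.

      # Averaged buck-converter model, L di/dt = D*Vin - R*i, stepped with Euler.
      V_IN, L, R = 48.0, 1e-3, 0.5      # input voltage [V], inductance [H], load [ohm]
      DUTY = 0.5                        # fixed duty cycle
      dt, t_end = 1e-6, 0.02            # time step and horizon [s]

      i, t = 0.0, 0.0                   # inductor current [A], time [s]
      while t < t_end:
          di_dt = (DUTY * V_IN - R * i) / L
          i += dt * di_dt               # explicit (forward) Euler step
          t += dt

      print(f"simulated current ~ {i:.2f} A, analytic steady state {DUTY * V_IN / R:.2f} A")

    Replacing the averaged model with a switched one, or Euler with a stiff integrator, is exactly where dedicated circuit-simulation tools become necessary.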

  15. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  16. Exploring HPCS languages in scientific computing

    Barrett, R F; Alam, S R; Almeida, V F d; Bernholdt, D E; Elwasif, W R; Kuehn, J A; Poole, S W; Shet, A G

    2008-01-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and exhibit increased heterogeneity, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else

  17. Exploring HPCS languages in scientific computing

    Barrett, R. F.; Alam, S. R.; Almeida, V. F. d.; Bernholdt, D. E.; Elwasif, W. R.; Kuehn, J. A.; Poole, S. W.; Shet, A. G.

    2008-07-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and exhibit increased heterogeneity, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  18. Scientific report 1997

    Gosset, J.; Gueneau, C.; Doizi, D.

    1998-01-01

    This book contains technical and scientific papers on the main works of the Direction of the Fuel Cycle (DCC) in France. The study fields are: the front end of the nuclear fuel cycle, with theoretical studies (plasma simulation) and technological developments and instrumentation (laser diodes, carbide plasma projection, carbon-13 enrichment); the back end of the nuclear fuel cycle, with theoretical studies (Eu3+ ion complexation simulation, decay simulation, uranium and plutonium diffusion studies, electrolyser operating simulation), scenario studies (recycling, waste management), and experimental studies; dismantling and cleaning (soil cleaning, surface-active agents for decontamination, fault tree analysis); and analysis with expert systems and mass spectrometry. (A.L.B.)

  19. High performance statistical computing with parallel R: applications to biology and climate modelling

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines
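
    The package described is Parallel R; as a language-neutral sketch of the underlying idea, splitting a data set into chunks and running a statistical routine on each chunk in parallel, here is a plain Python multiprocessing version (not the Parallel R API).

      import random
      import statistics
      from multiprocessing import Pool

      def chunk_stats(chunk):
          # Per-chunk summary statistics computed in a worker process.
          return len(chunk), sum(chunk), statistics.pstdev(chunk)

      if __name__ == "__main__":
          random.seed(0)
          data = [random.gauss(0.0, 1.0) for _ in range(400_000)]
          chunks = [data[i::4] for i in range(4)]        # four interleaved chunks

          with Pool(processes=4) as pool:
              results = pool.map(chunk_stats, chunks)    # run chunks in parallel

          n = sum(r[0] for r in results)
          mean = sum(r[1] for r in results) / n
          print(f"n = {n}, global mean ~ {mean:.4f}")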

  20. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges have been actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on the acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow water equations for inundation prediction is computationally demanding, it has become feasible thanks to recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
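
    For orientation, the fragment below solves the 1-D linear shallow-water equations on a staggered grid with a simple explicit update, the same family of scheme as the staggered leap-frog method mentioned above; the operational model is 2-D, nonlinear, nested and parallel, none of which this sketch attempts.

      import numpy as np

      # 1-D linear shallow water on a staggered grid (eta at centres, u at faces).
      g, h0 = 9.81, 100.0                   # gravity [m/s^2], still-water depth [m]
      L, nx = 100e3, 1000                   # domain length [m], number of cells
      dx = L / nx
      dt = 0.5 * dx / np.sqrt(g * h0)       # CFL-limited time step

      x = np.arange(nx) * dx
      eta = np.exp(-((x - L / 2) / 5e3) ** 2)   # initial free-surface hump [m]
      u = np.zeros(nx + 1)                      # velocities at cell faces [m/s]

      for _ in range(2000):
          u[1:-1] -= g * dt * (eta[1:] - eta[:-1]) / dx      # momentum update
          eta -= h0 * dt * (u[1:] - u[:-1]) / dx             # continuity update

      print("max surface elevation after run:", float(eta.max()))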

  1. High Performance Fiber Reinforced Cement Composites 6 HPFRCC 6

    Reinhardt, Hans; Naaman, A

    2012-01-01

    High Performance Fiber Reinforced Cement Composites (HPFRCC) represent a class of cement composites whose stress-strain response in tension undergoes strain hardening behaviour accompanied by multiple cracking, leading to a high strain prior to failure. The primary objective of this International Workshop was to provide a compendium of up-to-date information on the most recent developments and research advances in the field of High Performance Fiber Reinforced Cement Composites. Approximately 65 contributions from leading world experts are assembled in these proceedings and provide an authoritative perspective on the subject. Special topics include fresh and hardening state properties; self-compacting mixtures; mechanical behavior under compressive, tensile, and shear loading; structural applications; impact, earthquake and fire resistance; durability issues; ultra-high performance fiber reinforced concrete; and textile reinforced concrete. Target readers: graduate students, researchers, fiber producers, desi...

  2. High performance leadership in unusually challenging educational circumstances

    Andy Hargreaves

    2015-04-01

    Full Text Available This paper draws on findings from the results of a study of leadership in high performing organizations in three sectors. Organizations were sampled and included on the basis of high performance in relation to no performance, past performance, performance among similar peers and performance in the face of limited resources or challenging circumstances. The paper concentrates on leadership in four schools that met the sample criteria. It draws connections to explanations of the high performance of Estonia on the OECD PISA tests of educational achievement. The article argues that leadership in these four schools that performed above expectations comprised more than a set of competencies. Instead, leadership took the form of a narrative or quest that pursued an inspiring dream with relentless determination; took improvement pathways that were more innovative than comparable peers; built collaboration and community including with competing schools; and connected short-term success to long-term sustainability.

  3. High Performance Building Facade Solutions - PIER Final Project Report

    Lee, Eleanor; Selkowitz, Stephen

    2009-12-31

    Building facades directly influence heating and cooling loads and indirectly influence lighting loads when daylighting is considered, and are therefore a major determinant of annual energy use and peak electric demand. Facades also significantly influence occupant comfort and satisfaction, making the design optimization challenge more complex than many other building systems. This work focused on addressing significant near-term opportunities to reduce energy use in California commercial building stock by a) targeting voluntary, design-based opportunities derived from the use of better design guidelines and tools, and b) developing and deploying more efficient glazings, shading systems, daylighting systems, facade systems and integrated controls. This two-year project, supported by the California Energy Commission PIER program and the US Department of Energy, initiated a collaborative effort between The Lawrence Berkeley National Laboratory (LBNL) and major stakeholders in the facades industry to develop, evaluate, and accelerate market deployment of emerging, high-performance, integrated facade solutions. The LBNL Windows Testbed Facility acted as the primary catalyst and mediator on both sides of the building industry supply-user business transaction by a) aiding component suppliers to create and optimize cost effective, integrated systems that work, and b) demonstrating and verifying to the owner, designer, and specifier community that these integrated systems reliably deliver required energy performance. An industry consortium was initiated amongst approximately seventy disparate stakeholders in an industry which, unlike the HVAC or lighting industry, has no single representative, multi-disciplinary body or organized means of communicating and collaborating. The consortium provided guidance on the project and, more importantly, began to mutually work out and agree on the goals, criteria, and pathways needed to attain the ambitious net zero energy goals defined by California and

  4. Scientific Misconduct.

    Goodstein, David

    2002-01-01

    Explores scientific fraud, asserting that while few scientists actually falsify results, the field has become so competitive that many are misbehaving in other ways; an example would be unreasonable criticism by anonymous peer reviewers. (EV)

  5. Contemporary high performance computing from petascale toward exascale

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  6. Turbostratic stacked CVD graphene for high-performance devices

    Uemura, Kohei; Ikuta, Takashi; Maehashi, Kenzo

    2018-03-01

    We have fabricated turbostratic stacked graphene with high-transport properties by the repeated transfer of CVD monolayer graphene. The turbostratic stacked CVD graphene exhibited higher carrier mobility and conductivity than CVD monolayer graphene. The electron mobility for the three-layer turbostratic stacked CVD graphene surpassed 10,000 cm^2 V^-1 s^-1 at room temperature, which is five times greater than that for CVD monolayer graphene. The results indicate that the high performance is derived from maintenance of the linear band dispersion, suppression of the carrier scattering, and parallel conduction. Therefore, turbostratic stacked CVD graphene is a superior material for high-performance devices.

  7. High performance computing and communications: FY 1997 implementation plan

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  8. Architectural Principles and Experimentation of Distributed High Performance Virtual Clusters

    Younge, Andrew J.

    2016-01-01

    With the advent of virtualization and Infrastructure-as-a-Service (IaaS), the broader scientific computing community is considering the use of clouds for their scientific computing needs. This is due to the relative scalability, ease of use, advanced user environment customization abilities, and the many novel computing paradigms available for…

  9. High-performance-vehicle technology. [fighter aircraft propulsion

    Povinelli, L. A.

    1979-01-01

    Propulsion needs of high performance military aircraft are discussed. Inlet performance, nozzle performance and cooling, and afterburner performance are covered. It is concluded that nonaxisymmetric nozzles provide cleaner external lines and enhanced maneuverability, but the internal flows are more complex. Swirl afterburners show promise for enhanced performance in the high altitude, low Mach number region.

  10. Optical interconnection networks for high-performance computing systems

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  11. Frictional behaviour of high performance fibrous tows: Friction experiments

    Cornelissen, Bo; Rietman, Bert; Akkerman, Remko

    2013-01-01

    Tow friction is an important mechanism in the production and processing of high performance fibrous tows. The frictional behaviour of these tows is anisotropic due to the texture of the filaments as well as the tows. This work describes capstan experiments that were performed to measure the

  12. Determination of Caffeine in Beverages by High Performance Liquid Chromatography.

    DiNunzio, James E.

    1985-01-01

    Describes the equipment, procedures, and results for the determination of caffeine in beverages by high performance liquid chromatography. The method is simple, fast, accurate, and, because sample preparation is minimal, it is well suited for use in a teaching laboratory. (JN)

  13. Enabling High-Performance Computing as a Service

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as a service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  14. High Performance Skiing. How to Become a Better Alpine Skier.

    Yacenda, John

    This book is intended for people who desire to improve their skiing by exploring high performance techniques leading to: (1) more consistent performance; (2) less fatigue and more endurance; (3) greater strength and flexibility; (4) greater versatility; (5) greater confidence in all skiing conditions; and (6) the knowledge to participate in…

  15. Sensitive high performance liquid chromatographic method for the ...

    A new simple, sensitive, cost-effective and reproducible high performance liquid chromatographic (HPLC) method for the determination of proguanil (PG) and its metabolites, cycloguanil (CG) and 4-chlorophenylbiguanide (4-CPB) in urine and plasma is described. The extraction procedure is a simple three-step process ...

  16. Contemporary high performance computing from petascale toward exascale

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  17. High-Performance Management Practices and Employee Outcomes in Denmark

    Cristini, Annalisa; Eriksson, Tor; Pozzoli, Dario

    High-performance work practices are frequently considered to have positive effects on corporate performance, but what do they do for employees? After showing that organizational innovation is indeed positively associated with firm performance, we investigate whether high-involvement work practices...

  18. Fatigue Behaviour of High Performance Cementitious Grout Masterflow 9500

    Sørensen, Eigil V.

    The present report describes the fatigue behaviour of the high performance grout MASTERFLOW 9500 subjected to cyclic loading, in air as well as submerged in water, at various frequencies and levels of maximum stress. Part of the results were also reported in [1] together with other mechanical...

  19. Two Profiles of the Dutch High Performing Employee

    de Waal, A. A.; Oudshoorn, Michella

    2015-01-01

    Purpose: The purpose of this study is to explore the profile of an ideal employee, to be more precise the behavioral characteristics of the Dutch high-performing employee (HPE). Organizational performance depends for a large part on the commitment of employees. Employees provide their knowledge, skills, experiences and creativity to the…

  20. A high performance electrometer amplifier of hybrid design

    Rao, N.V.; Nazare, C.K.

    1979-01-01

    A high performance, reliable, electrometer amplifier of hybrid design for low current measurements in mass spectrometers has been developed. The short-term instability with a 5 x 10^11 ohm input resistor is less than 1 x 10^-15 A. The drift is better than 1 mV/hour. The design steps are illustrated with typical amplifier performance details. (auth.)
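
    A quick sanity check on these figures (an illustration, not a statement from the report): the smallest quoted current develops

        V = I R = (1\times10^{-15}\,\mathrm{A}) \times (5\times10^{11}\,\Omega) = 5\times10^{-4}\,\mathrm{V} = 0.5\,\mathrm{mV}

    across the input resistor, so the quoted drift of 1 mV/hour corresponds to roughly 2 x 10^-15 A per hour referred to the input.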

  1. Development and validation of a reversed phase High Performance ...

    A simple, rapid, accurate and economical isocratic Reversed Phase High Performance Liquid Chromatography (RPHPLC) method was developed, validated and used for the evaluation of content of different brands of paracetamol tablets. The method was validated according to ICH guidelines and may be adopted for the ...

  2. Dynamic Social Networks in High Performance Football Coaching

    Occhino, Joseph; Mallett, Cliff; Rynne, Steven

    2013-01-01

    Background: Sports coaching is largely a social activity where engagement with athletes and support staff can enhance the experiences for all involved. This paper examines how high performance football coaches develop knowledge through their interactions with others within a social learning theory framework. Purpose: The key purpose of this study…

  3. Resolution of RNA using high-performance liquid chromatography

    Mclaughlin, L.W.; Bischoff, Rainer

    1987-01-01

    High-performance liquid chromatographic techniques can be very effective for the resolution and isolation of nucleic acids. The characteristic ionic (phosphodiesters) and hydrophobic (nucleobases) properties of RNAs can be exploited for their separation. In this respect anion-exchange and

  4. Mallow carotenoids determined by high-performance liquid chromatography

    Mallow (corchorus olitorius) is a green vegetable, which is widely consumed either fresh or dry by Middle East population. This study was carried out to determine the contents of major carotenoids quantitatively in mallow, by using a High Performance Liquid Chromatography (HPLC) equipped with a Bis...

  5. High-Performance Liquid Chromatography-Mass Spectrometry.

    Vestal, Marvin L.

    1984-01-01

    Reviews techniques for online coupling of high-performance liquid chromatography with mass spectrometry, emphasizing those suitable for application to nonvolatile samples. Also summarizes the present status, strengths, and weaknesses of various techniques and discusses potential applications of recently developed techniques for combined liquid…

  6. Developments on HNF based high performance and green solid propellants

    Keizers, H.L.J.; Heijden, A.E.D.M. van der; Vliet, L.D. van; Welland-Veltmans, W.H.M.; Ciucci, A.

    2001-01-01

    Worldwide developments are ongoing to develop new and more energetic composite solid propellant formulations for space transportation and military applications. Since the 90's, the use of HNF as a new high performance oxidiser is being reinvestigated. Within European development programmes,

  7. High-performance carbon nanotube-reinforced bioplastic

    Ramontja, J

    2009-12-01

    Full Text Available High-Performance Carbon Nanotube-Reinforced Bioplastic. Authors: James Ramontja, Suprakas Sinha Ray, Sreejarani K. Pillai, Adriaan S. Luyt. Affiliation: DST/CSIR Nanotechnology Innovation Centre, National Centre for Nano-Structured Materials...

  8. High performance current controller for particle accelerator magnets supply

    Maheshwari, Ram Krishan; Bidoggia, Benoit; Munk-Nielsen, Stig

    2013-01-01

    The electromagnets in modern particle accelerators require a high performance power supply whose output is required to track the current reference with a very high accuracy (down to 50 ppm). This demands a very high bandwidth controller design. A converter based on buck converter topology is used...
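
    A toy discrete-time PI current loop on an averaged buck-converter model gives a feel for the tracking problem described above; the gains, plant values and the absence of anti-windup are arbitrary simplifications, not details from the paper.

      # Discrete PI current controller regulating an averaged buck converter.
      V_IN, L, R = 100.0, 2e-3, 0.1       # supply [V], inductance [H], resistance [ohm]
      dt = 10e-6                          # controller/simulation step [s]
      KP, KI = 0.02, 40.0                 # PI gains, hand-tuned for this toy plant

      i, integ, i_ref = 0.0, 0.0, 50.0    # current, integrator state, reference [A]
      for _ in range(20_000):             # 0.2 s of simulated time
          err = i_ref - i
          integ += err * dt
          duty = min(max(KP * err + KI * integ, 0.0), 1.0)   # saturate duty cycle
          i += dt * (duty * V_IN - R * i) / L                # plant: L di/dt = v - R*i

      print(f"current after 0.2 s: {i:.3f} A (reference {i_ref} A)")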

  9. The design of high performance weak current integrated amplifier

    Chen Guojie; Cao Hui

    2005-01-01

    A design method for a high performance weak-current integrated amplifier using the ICL7650 operational amplifier is introduced. The operating principle of the circuits and the steps taken to improve the amplifier's performance are illustrated. Finally, the experimental results are given. The amplifier has a programmable measurement range of 10^-9 to 10^-12 A, automatic zero-correction, accurate measurement, and good stability. (authors)

  10. Monitoring aged reversed-phase high performance liquid chromatography columns

    Bolck, A; Smilde, AK; Bruins, CHP

    1999-01-01

    In this paper, a new approach for the quality assessment of routinely used reversed-phase high performance liquid chromatography columns is presented. A used column is not directly considered deteriorated when changes in retention occur. If attention is paid to the type and magnitude of the changes,

  11. Ultra high performance liquid chromatography of seized drugs

    Lurie, I.S.

    2010-01-01

    The primary goal of this thesis is to investigate the use of ultra high performance liquid chromatography (UHPLC) for the analysis of seized drugs. This goal was largely achieved and significant progress was made in achieving improved separation and detection of drugs of forensic interest.

  12. Comparative Studies of Some Polypores Using High Performance ...

    ... these polypores in a previous work. The ability of the polypores to produce triterpenoids is affected by their age, period of collection, geographical location and method of drying, which also affected the High Performance Liquid Chromatography characteristics of their secondary metabolites. African Research Review Vol.

  13. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  14. Quantification of Tea Flavonoids by High Performance Liquid Chromatography

    Freeman, Jessica D.; Niemeyer, Emily D.

    2008-01-01

    We have developed a laboratory experiment that uses high performance liquid chromatography (HPLC) to quantify flavonoid levels in a variety of commercial teas. Specifically, this experiment analyzes a group of flavonoids known as catechins, plant-derived polyphenolic compounds commonly found in many foods and beverages, including green and black…

  15. High performance co-polyimide nanofiber reinforced composites

    Yao, Jian; Li, Guang; Bastiaansen, Cees; Peijs, Ton

    2015-01-01

    Electrospun co-polyimide BPDA (3, 3′, 4, 4′-Biphenyltetracarboxylic dianhydride)/PDA (p-Phenylenediamine)/ODA (4, 4′-oxydianiline) nanofiber reinforced flexible composites were manufactured by impregnating these high performance nanofibers with styrene-butadiene-styrene (SBS) triblock copolymer

  16. High performance flexible CMOS SOI FinFETs

    Fahad, Hossain M.; Sevilla, Galo T.; Ghoneim, Mohamed T.; Hussain, Muhammad Mustafa

    2014-01-01

    We demonstrate the first ever CMOS compatible soft etch back based high performance flexible CMOS SOI FinFETs. The move from planar to non-planar FinFETs has enabled continued scaling down to the 14 nm technology node. This has been possible due

  17. Flexible nanoscale high-performance FinFETs

    Sevilla, Galo T.; Ghoneim, Mohamed T.; Fahad, Hossain M.; Rojas, Jhonathan Prieto; Hussain, Aftab M.; Hussain, Muhammad Mustafa

    2014-01-01

    With the emergence of the Internet of Things (IoT), flexible high-performance nanoscale electronics are more desired. At the moment, FinFET is the most advanced transistor architecture used in the state-of-the-art microprocessors. Therefore, we show

  18. Radioactivity monitor for high-performance liquid chromatography

    Reeve, D.R.; Crozier, A.

    1977-01-01

    The coupling of a homogeneous radioactivity monitor to a liquid chromatograph involves compromises between the sensitivity of the monitor and the resolution and speed of analysis of the chromatograph. The theoretical relationships between these parameters are considered and expressions derived which make it possible to calculate suitable monitor operating conditions for most types of high-performance liquid chromatography
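
    The expressions derived in the paper are not reproduced here, but the basic compromise can be sketched with a textbook flow-cell argument (an assumption of this note, not the authors' derivation): the residence time of eluate in a cell of volume V_{cell} at mobile-phase flow rate F is

        t_r = \frac{V_{cell}}{F}, \qquad N \approx \varepsilon\, A\, t_r = \frac{\varepsilon A V_{cell}}{F},

    so the counts N collected from a peak carrying activity A with detection efficiency \varepsilon favour large cells and low flow rates, while the extra-column variance a plug-flow cell adds to the peak grows as \sigma_V^2 \approx V_{cell}^2/12, degrading chromatographic resolution as the cell is enlarged.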

  19. Cobra Strikes! High-Performance Car Inspires Students, Markets Program

    Jenkins, Bonita

    2008-01-01

    Nestled in the Lower Piedmont region of upstate South Carolina, Piedmont Technical College (PTC) is one of 16 technical colleges in the state. Automotive technology is one of its most popular programs. The program features an instructive, motivating activity that the author describes in this article: building a high-performance car. The Cobra…

  20. Neural Correlates of High Performance in Foreign Language Vocabulary Learning

    Macedonia, Manuela; Muller, Karsten; Friederici, Angela D.

    2010-01-01

    Learning vocabulary in a foreign language is a laborious task which people perform with varying levels of success. Here, we investigated the neural underpinning of high performance on this task. In a within-subjects paradigm, participants learned 92 vocabulary items under two multimodal conditions: one condition paired novel words with iconic…