WorldWideScience

Sample records for high-performance scientific applications

  1. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  2. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Teng [Auburn University, Auburn, Alabama; Oral, H Sarp [ORNL; Wang, Yandong [Auburn University, Auburn, Alabama; Settlemyer, Bradley W [ORNL; Atchley, Scott [ORNL; Yu, Weikuan [Auburn University, Auburn, Alabama

    2014-01-01

    The growth of computing power on large-scale systems requires a commensurately high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a system is needed to temporarily buffer bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by a factor of up to 8.5 on leadership computing systems.
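
    The burst-buffer idea described above can be illustrated with a minimal sketch (not BurstMem's actual implementation): the application writes each checkpoint to a fast node-local device and only later drains it to the parallel file system, so compute phases are not stalled by slow long-term storage. The paths and sizes below are hypothetical placeholders.

      /* Minimal burst-buffer write pattern (illustrative sketch only):
       * stage a checkpoint on a fast local device, then drain it to the
       * parallel file system.  Paths are hypothetical placeholders. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      #define BB_PATH  "/local/ssd/ckpt.bin"    /* assumed node-local burst buffer */
      #define PFS_PATH "/lustre/proj/ckpt.bin"  /* assumed long-term parallel FS   */

      static void write_file(const char *path, const void *buf, size_t n)
      {
          FILE *f = fopen(path, "wb");
          if (!f) { perror(path); exit(1); }
          fwrite(buf, 1, n, f);
          fclose(f);
      }

      int main(void)
      {
          size_t n = 1 << 20;                   /* 1 MiB of checkpoint data */
          char *ckpt = malloc(n);
          memset(ckpt, 0xAB, n);

          /* Fast path: absorb the I/O burst on the local device. */
          write_file(BB_PATH, ckpt, n);

          /* Drain phase: copy the staged data to the parallel file system.
           * A real burst buffer would do this asynchronously in the background. */
          write_file(PFS_PATH, ckpt, n);

          free(ckpt);
          return 0;
      }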

  3. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  4. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases, it becomes more challenging to make use of what we already know and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for high-performance applications that can use parallel compute systems effectively, handle data efficiently, and exploit both current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  5. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  6. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  7. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  8. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  9. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2015-01-15

    High performance computing (HPC) platforms are evolving to more heterogeneous configurations to support the workloads of various applications. The current hardware landscape is composed of traditional multicore CPUs equipped with hardware accelerators that can handle high levels of parallelism. Graphical Processing Units (GPUs) are popular high performance hardware accelerators in modern supercomputers. GPU programming has a different model than that for CPUs, which means that many numerical kernels have to be redesigned and optimized specifically for this architecture. GPUs usually outperform multicore CPUs in some compute intensive and massively parallel applications that have regular processing patterns. However, most scientific applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved. This dissertation presents a uniform design strategy for optimizing critical memory-bound kernels on GPUs. Based on hierarchical register blocking, double buffering and latency hiding techniques, this strategy leverages the performance of a wide range of standard numerical kernels found in dense and sparse linear algebra libraries. The work presented here focuses on matrix-vector multiplication kernels (MVM) as representative and most important memory-bound operations in this context. Each kernel inherits the benefits of the proposed strategies. By exposing a proper set of tuning parameters, the strategy is flexible enough to suit different types of matrices, ranging from large dense matrices to sparse matrices with dense block structures, while high performance is maintained. Furthermore, the tuning parameters are used to maintain the relative performance across different GPU architectures. Multi-GPU acceleration is proposed to scale the performance on several devices.
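
    As a plain-C baseline (not the GPU kernels developed in the dissertation), the snippet below illustrates why dense matrix-vector multiplication is memory-bound: each element of A is read once and used for only two floating-point operations, so performance is governed by sustained memory bandwidth rather than peak FLOP rate.

      /* Dense y = A*x in row-major order: roughly 2 flops per 8-byte element of A,
       * i.e. an arithmetic intensity of ~0.25 flop/byte, which makes the kernel
       * memory-bound on most hardware.  Illustrative baseline only, not the
       * optimized GPU implementation described in the abstract. */
      #include <stddef.h>

      void dgemv_rowmajor(size_t m, size_t n,
                          const double *A, const double *x, double *y)
      {
          for (size_t i = 0; i < m; i++) {
              double acc = 0.0;
              for (size_t j = 0; j < n; j++)
                  acc += A[i * n + j] * x[j];   /* A is streamed once from memory */
              y[i] = acc;
          }
      }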

  10. Scientific Applications Performance Evaluation on Burst Buffer

    KAUST Repository

    Markomanolis, George S.

    2017-10-19

    Parallel I/O is an integral component of modern high performance computing, especially in storing and processing very large datasets, as in the case of seismic imaging, CFD, combustion and weather modeling. The storage hierarchy nowadays includes additional layers, the latest being the use of SSD-based storage as a Burst Buffer for I/O acceleration. We present an in-depth analysis of how to use the Burst Buffer for specific cases and how the internal MPI I/O aggregators operate according to the options that the user provides at job submission. We analyze the performance of a range of I/O intensive scientific applications, at various scales, on a large installation of the Lustre parallel file system compared to an SSD-based Burst Buffer. Our results show a performance improvement over Lustre when using the Burst Buffer. Moreover, we show results from a data hierarchy library which indicate that the standard I/O approaches are not enough to get the expected performance from this technology. The performance gain on the total execution time of the studied applications is between 1.16 and 3 times compared to Lustre. One of the test cases achieved an impressive I/O throughput of 900 GB/s on the Burst Buffer.
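
    The MPI-IO aggregator and striping options mentioned above are typically passed as hints through an MPI_Info object, as in the hedged sketch below; the hint names follow common ROMIO/Lustre conventions and the values are placeholders, since the exact knobs and their defaults depend on the MPI implementation and file system.

      /* Passing collective-buffering and striping hints to MPI-IO.
       * Hint names follow common ROMIO/Lustre conventions; values are
       * illustrative and system-dependent. */
      #include <mpi.h>

      void write_with_hints(const char *path, const double *buf, int count)
      {
          MPI_Info info;
          MPI_File fh;
          int rank;

          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          MPI_Info_create(&info);
          MPI_Info_set(info, "romio_cb_write", "enable");  /* use collective buffering  */
          MPI_Info_set(info, "cb_nodes", "8");             /* number of I/O aggregators */
          MPI_Info_set(info, "striping_factor", "16");     /* file system stripe count  */

          MPI_File_open(MPI_COMM_WORLD, path,
                        MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
          MPI_Offset off = (MPI_Offset)rank * count * sizeof(double);
          MPI_File_write_at_all(fh, off, buf, count, MPI_DOUBLE, MPI_STATUS_IGNORE);
          MPI_File_close(&fh);
          MPI_Info_free(&info);
      }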

  11. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    International Nuclear Information System (INIS)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

    This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, the MySQL DBMS, and the Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology, such as ontology-based homology and multiple whole-genome comparisons, which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  12. Language interoperability for high-performance parallel scientific components

    International Nuclear Information System (INIS)

    Elliot, N; Kohn, S; Smolinski, B

    1999-01-01

    With the increasing complexity and interdisciplinary nature of scientific applications, code reuse is becoming increasingly important in scientific computing. One method for facilitating code reuse is the use of component technologies, which have been used widely in industry. However, components have only recently worked their way into scientific computing. Language interoperability is an important underlying technology for these component architectures. In this paper, we present an approach to language interoperability for a high-performance, parallel component architecture being developed by the Common Component Architecture (CCA) group. Our approach is based on Interface Definition Language (IDL) techniques. We have developed a Scientific Interface Definition Language (SIDL), as well as bindings to C and Fortran. We have also developed a SIDL compiler and run-time library support for reference counting, reflection, object management, and exception handling (Babel). Results from using Babel to call a standard numerical solver library (written in C) from C and Fortran show that the cost of using Babel is minimal, whereas the savings in development time and the benefits of object-oriented development support for C and Fortran far outweigh the costs.

  13. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance of such complex workflows, whether from terabytes or petabytes of workflow data or from measurement data of executions over a large number of nodes and multiple parallel tasks. To help identify performance bottlenecks or debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply sophisticated statistical tools and data mining methods to the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and on the job logs from a genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.

  14. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

    The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne and Bonn, and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also to the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development, parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for a formalized education of students in the application of HPSC technologies in the future.

  15. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  16. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  17. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
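
    A hedged sketch of the kind of measurement underlying such a model: a hybrid MPI+OpenMP STREAM-triad loop that estimates the sustained memory bandwidth each process can deliver when all cores are active. The array size and constants are arbitrary placeholders, and this is not the authors' modeling code.

      /* Hybrid MPI+OpenMP STREAM-triad sketch for estimating sustained memory
       * bandwidth per process; illustrative only, not the paper's framework. */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define N (1 << 24)   /* ~16M doubles per array, placeholder size */

      int main(int argc, char **argv)
      {
          int provided, rank;
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          double *a = malloc(N * sizeof *a), *b = malloc(N * sizeof *b),
                 *c = malloc(N * sizeof *c);
          #pragma omp parallel for
          for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

          double t0 = MPI_Wtime();
          #pragma omp parallel for
          for (long i = 0; i < N; i++)
              a[i] = b[i] + 3.0 * c[i];      /* triad: 24 bytes moved per element */
          double t = MPI_Wtime() - t0;

          double gbs = 3.0 * N * sizeof(double) / t / 1e9;
          if (rank == 0)
              printf("sustained triad bandwidth per process: %.1f GB/s\n", gbs);

          free(a); free(b); free(c);
          MPI_Finalize();
          return 0;
      }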

  18. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  19. High Performance Fortran for Aerospace Applications

    National Research Council Canada - National Science Library

    Mehrotra, Piyush

    2000-01-01

    .... HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications while delegating to the compiler/runtime system the task...

  20. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyro kinetic Toroidal Code in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  21. Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

    KAUST Repository

    Wu, Xingfu

    2011-08-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyro kinetic Toroidal Code in magnetic fusion to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.

  22. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  23. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.

  24. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use the performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that beyond a certain point adding threads saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid application GTC, the performance trend (relative speedup) with an increasing number of threads per node is very similar regardless of how many nodes (32, 128, 512) are used. © 2013 IEEE.

  25. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use the performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that beyond a certain point adding threads saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid application GTC, the performance trend (relative speedup) with an increasing number of threads per node is very similar regardless of how many nodes (32, 128, 512) are used. © 2013 IEEE.

  26. HPCToolkit: performance tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M [Department of Computer Science, Rice University, Houston, TX 77005 (United States)

    2008-07-15

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei.

  27. HPCToolkit: performance tools for scientific computing

    International Nuclear Information System (INIS)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M

    2008-01-01

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei

  28. The application of cloud computing to scientific workflows: a study of cost and performance.

    Science.gov (United States)

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  29. Results of data base management system parameterized performance testing related to GSFC scientific applications

    Science.gov (United States)

    Carchedi, C. H.; Gough, T. L.; Huston, H. A.

    1983-01-01

    The results of a variety of tests designed to demonstrate and evaluate the performance of several commercially available data base management system (DBMS) products compatible with the Digital Equipment Corporation VAX 11/780 computer system are summarized. The tests were performed on the INGRES, ORACLE, and SEED DBMS products, employing applications similar to scientific applications under development by NASA. The objectives of this testing included determining the strengths and weaknesses of the candidate systems, the performance trade-offs of various design alternatives, and the impact of installation and environmental (computer-related) influences.

  30. Implementation of Scientific Computing Applications on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Guochun Shi

    2009-01-01

    The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.

  31. Communication Requirements and Interconnect Optimization for High-End Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, Shoaib; Oliker, Leonid; Pinar, Ali; Shalf, John

    2007-11-12

    The path towards realizing peta-scale computing is increasingly dependent on building supercomputers with unprecedented numbers of processors. To prevent the interconnect from dominating the overall cost of these ultra-scale systems, there is a critical need for high-performance network solutions whose costs scale linearly with system size. This work makes several unique contributions towards attaining that goal. First, we conduct one of the broadest studies to date of high-end application communication requirements, whose computational methods include: finite-difference, lattice-Boltzmann, particle-in-cell, sparse linear algebra, particle-mesh Ewald, and FFT-based solvers. To efficiently collect this data, we use the IPM (Integrated Performance Monitoring) profiling layer to gather detailed messaging statistics with minimal impact to code performance. Using the derived communication characterizations, we next present fit-tree interconnects, a novel approach for designing network infrastructure at a fraction of the component cost of traditional fat-tree solutions. Finally, we propose the Hybrid Flexibly Assignable Switch Topology (HFAST) infrastructure, which uses both passive (circuit) and active (packet) commodity switch components to dynamically reconfigure interconnects to suit the topological requirements of scientific applications. Overall, our exploration leads to promising directions for practically addressing the interconnect requirements of future peta-scale systems.
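
    Profiling layers such as IPM typically build on the standard MPI profiling interface (PMPI), in which each MPI call is intercepted by a wrapper that records statistics and then forwards the call to the real implementation. The sketch below is a generic PMPI interposer, not IPM's actual code; the counters it keeps are purely illustrative.

      /* Generic PMPI interposition sketch (illustrative; not IPM itself).
       * Linking this wrapper ahead of the MPI library lets a tool count
       * messages and bytes sent with minimal impact on the application. */
      #include <mpi.h>
      #include <stdio.h>

      static long long msg_count = 0;
      static long long byte_count = 0;

      int MPI_Send(const void *buf, int count, MPI_Datatype type,
                   int dest, int tag, MPI_Comm comm)
      {
          int size;
          MPI_Type_size(type, &size);
          msg_count++;
          byte_count += (long long)count * size;
          return PMPI_Send(buf, count, type, dest, tag, comm);  /* forward to MPI */
      }

      int MPI_Finalize(void)
      {
          printf("intercepted sends: %lld messages, %lld bytes\n",
                 msg_count, byte_count);
          return PMPI_Finalize();
      }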

  32. Techniques and tools for measuring energy efficiency of scientific software applications

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Knight, Robert; Niemi, Tapio; Nurminen, Jukka K.; Nyback, Filip; Pestana, Goncalo; Ou, Zhonghong; Khan, Kashif

    2014-01-01

    The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy-efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing alternate architectures, such as low power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been successfully used in mobile applications with low I/O and memory demands, it is unclear if they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running o...

  33. Techniques and tools for measuring energy efficiency of scientific software applications

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Niemi, Tapio; Pestana, Gonçalo; Khan, Kashif; Nurminen, Jukka K; Nyback, Filip; Ou, Zhonghong

    2015-01-01

    The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy-efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing alternate architectures, such as low power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been successfully used in mobile applications with low I/O and memory demands, it is unclear if they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running on ARM and Intel architectures, and compare their power consumption and performance. We leverage several profiling tools (both in hardware and software) to extract different characteristics of the power use. We report the results of these measurements and the experience gained in developing a set of measurement techniques and profiling tools to accurately assess the power consumption for scientific workloads. (paper)
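
    One common software-based measurement method on Intel platforms (an assumption here; the paper combines several hardware and software approaches and also covers ARM) is to read the RAPL energy counter exposed by the Linux powercap interface before and after the workload:

      /* Reading the Linux powercap/RAPL energy counter around a workload.
       * The sysfs path below is a typical Intel location and may differ by
       * platform and kernel; this is a sketch of one measurement method only. */
      #include <stdio.h>

      static long long read_energy_uj(void)
      {
          const char *path = "/sys/class/powercap/intel-rapl:0/energy_uj";
          FILE *f = fopen(path, "r");
          long long uj = -1;
          if (f) { fscanf(f, "%lld", &uj); fclose(f); }
          return uj;   /* cumulative package energy in microjoules */
      }

      int main(void)
      {
          long long before = read_energy_uj();

          /* ... run the scientific workload here ... */

          long long after = read_energy_uj();
          /* Note: the counter wraps around; a robust tool would handle that. */
          if (before >= 0 && after >= 0)
              printf("energy used: %.3f J\n", (after - before) / 1e6);
          return 0;
      }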

  34. Are Cloud Environments Ready for Scientific Applications?

    Science.gov (United States)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments-evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to

  35. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  36. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  37. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  38. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  39. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  40. High Performance Data Distribution for Scientific Community

    Science.gov (United States)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA or JAXA need solutions to distribute data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that addresses this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on the data server and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform one file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide these features. HIDDRA was recognized by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype, evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), shows high scalability and performance, opening a wide spectrum of opportunities. Some preliminary results have been published in the Journal of Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009.
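
    A multiprotocol download engine of this kind can be built on a client library that treats HTTP, HTTPS and FTP URLs uniformly; the libcurl sketch below illustrates that idea only and is not HIDDRA's implementation (the URL is a placeholder).

      /* Minimal multiprotocol fetch with libcurl: the same call path handles
       * http://, https:// or ftp:// URLs.  Illustrative sketch, not HIDDRA. */
      #include <curl/curl.h>
      #include <stdio.h>

      int fetch(const char *url, const char *outfile)
      {
          FILE *out = fopen(outfile, "wb");
          if (!out) return 1;

          CURL *h = curl_easy_init();
          if (!h) { fclose(out); return 1; }
          curl_easy_setopt(h, CURLOPT_URL, url);
          curl_easy_setopt(h, CURLOPT_WRITEDATA, out);      /* default callback fwrites here */
          curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);  /* follow HTTP redirects */
          CURLcode rc = curl_easy_perform(h);

          curl_easy_cleanup(h);
          fclose(out);
          return rc == CURLE_OK ? 0 : 1;
      }

      int main(void)
      {
          curl_global_init(CURL_GLOBAL_DEFAULT);
          int status = fetch("https://example.org/data/product.fits", "product.fits");
          curl_global_cleanup();
          return status;
      }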

  41. Optimal Design of Fixed-Point and Floating-Point Arithmetic Units for Scientific Applications

    OpenAIRE

    Pongyupinpanich, Surapong

    2012-01-01

    The challenge in designing a floating-point arithmetic co-processor/processor for scientific and engineering applications is to improve the performance, efficiency, and computational accuracy of the arithmetic unit. The arithmetic unit should efficiently support several mathematical functions corresponding to scientific and engineering computation demands. Moreover, the computations should be performed as fast as possible with a high degree of accuracy. Thus, this thesis proposes algorithm, d...

  42. Performance Engineering Technology for Scientific Component Software

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D.

    2007-05-08

    Large-scale, complex scientific applications are beginning to benefit from the use of component software design methodology and technology for software development. Integral to the success of component-based applications is the ability to achieve high-performing code solutions through the use of performance engineering tools for both intra-component and inter-component analysis and optimization. Our work on this project aimed to develop performance engineering technology for scientific component software in association with the DOE CCTTSS SciDAC project (active during the contract period) and the broader Common Component Architecture (CCA) community. Our specific implementation objectives were to extend the TAU performance system and Program Database Toolkit (PDT) to support performance instrumentation, measurement, and analysis of CCA components and frameworks, and to develop performance measurement and monitoring infrastructure that could be integrated in CCA applications. These objectives have been met in the completion of all project milestones and in the transfer of the technology into the continuing CCA activities as part of the DOE TASCS SciDAC2 effort. In addition to these achievements, over the past three years, we have been an active member of the CCA Forum, attending all meetings and serving in several working groups, such as the CCA Toolkit working group, the CQoS working group, and the Tutorial working group. We have contributed significantly to CCA tutorials since SC'04, hosted two CCA meetings, participated in the annual ACTS workshops, and were co-authors on the recent CCA journal paper [24]. There are four main areas where our project has delivered results: component performance instrumentation and measurement, component performance modeling and optimization, performance database and data mining, and online performance monitoring. This final report outlines the achievements in these areas for the entire project period. The submitted progress

  3. Scalability of Parallel Scientific Applications on the Cloud

    Directory of Open Access Journals (Sweden)

    Satish Narayana Srirama

    2011-01-01

    Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel scientific applications onto the cloud, we deployed several benchmark applications, such as matrix–vector operations and the NAS parallel benchmarks, as well as DOUG (Domain decomposition On Unstructured Grids), on the cloud. DOUG is an open source software package for the parallel iterative solution of very large sparse systems of linear equations. The detailed analysis of DOUG on the cloud showed that parallel applications benefit considerably and scale reasonably well on the cloud. We could also observe the limitations of the cloud and compare it with a cluster in terms of performance. However, to run scientific applications efficiently on cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, such as the MapReduce framework. Several iterative and embarrassingly parallel algorithms are reduced to the MapReduce model and their performance is measured and analyzed. The analysis showed that Hadoop MapReduce has significant problems with iterative methods, while it suits embarrassingly parallel algorithms well. Scientific computing often uses iterative methods to solve large problems. Thus, for scientific computing on the cloud, this paper raises the need for better frameworks or optimizations for MapReduce.
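    As a toy illustration of the iterative-versus-embarrassingly-parallel distinction discussed above (pure Python, not Hadoop; the example computations are made up), an iterative solver needs one complete map/reduce pass, and hence one job launch and data reload, per iteration, whereas an embarrassingly parallel workload needs only one pass:

        # Toy map/reduce harness: each call to map_reduce stands for one MapReduce job.
        from functools import reduce

        def map_reduce(data, mapper, reducer):
            return reduce(reducer, map(mapper, data))

        # Embarrassingly parallel: a single pass over independent tasks.
        total = map_reduce(range(1000), lambda t: t * t, lambda a, b: a + b)

        # Iterative: a Newton iteration for sqrt(2) over partitioned data needs a
        # full pass (i.e. a separate job) for every step until convergence.
        x, residual = [1.0] * 4, 1.0
        while residual > 1e-12:
            x = [0.5 * (xi + 2.0 / xi) for xi in x]
            residual = map_reduce(x, lambda xi: abs(xi * xi - 2.0), max)
        print(total, x[0], residual)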

  4. Designing scientific applications on GPUs

    CERN Document Server

    Couturier, Raphael

    2013-01-01

    Many of today's complex scientific applications now require a vast amount of computational power. General purpose graphics processing units (GPGPUs) enable researchers in a variety of fields to benefit from the computational power of all the cores available inside graphics cards. Designing Scientific Applications on GPUs shows you how to use GPUs for applications in diverse scientific fields, from physics and mathematics to computer science. The book explains the methods necessary for designing or porting your scientific appl

  5. Numerical research on the thermal performance of high altitude scientific balloons

    International Nuclear Information System (INIS)

    Dai, Qiumin; Xing, Daoming; Fang, Xiande; Zhao, Yingjie

    2017-01-01

    Highlights: • A model is presented to evaluate the IR radiation between translucent surfaces. • Comprehensive ascent and thermal models of balloons are established. • The effect of IR transmissivity on film temperature distribution is non-negligible. • Atmospheric IR radiation is the primary thermal factor for balloons at night. • Solar radiation is the primary thermal factor for balloons during the day. Abstract: Internal infrared (IR) radiation is an important factor that affects the thermal performance of high altitude balloons. The internal IR radiation is commonly neglected or treated as the IR radiation between opaque gray bodies. In this paper, a mathematical model which considers the IR transmissivity of the film is proposed to estimate the internal IR radiation. Comprehensive ascent and thermal models for high altitude scientific balloons are established. Based on the models, thermal characteristics of a NASA super pressure balloon are simulated. The effects of the film's IR properties on the thermal behavior of the balloon are discussed in detail. The results are helpful for the design and operation of high altitude scientific balloons.

  6. On the Performance of the Python Programming Language for Serial and Parallel Scientific Computations

    Directory of Open Access Journals (Sweden)

    Xing Cai

    2005-01-01

    This article addresses the performance of scientific applications that use the Python programming language. First, we investigate several techniques for improving the computational efficiency of serial Python codes. Then, we discuss the basic programming techniques in Python for parallelizing serial scientific applications. It is shown that an efficient implementation of the array-related operations is essential for achieving good parallel performance, as in the serial case. Once the array-related operations are efficiently implemented, possibly using a mixed-language implementation, good serial and parallel performance becomes achievable. This is confirmed by a set of numerical experiments. Python is also shown to be well suited for writing high-level parallel programs.
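    The point about array-related operations can be made concrete with a small sketch (assuming NumPy; the update formula is arbitrary): the vectorized form delegates the inner loop to compiled code, which is what makes both serial and parallel Python codes efficient.

        # Sketch: the same element-wise update as an interpreted loop and as a
        # single NumPy array expression backed by compiled (mixed-language) code.
        import numpy as np

        u = np.random.rand(1_000_000)

        def loop_update(u, dt=1e-3):
            out = np.empty_like(u)
            for i in range(len(u)):              # slow: per-element Python loop
                out[i] = u[i] + dt * u[i] * (1.0 - u[i])
            return out

        def vector_update(u, dt=1e-3):
            return u + dt * u * (1.0 - u)        # fast: one array operation

        assert np.allclose(loop_update(u[:1000]), vector_update(u[:1000]))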

  7. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  8. Scientific Programming with High Performance Fortran: A Case Study Using the xHPF Compiler

    Directory of Open Access Journals (Sweden)

    Eric De Sturler

    1997-01-01

    Recently, the first commercial High Performance Fortran (HPF) subset compilers have appeared. This article reports on our experiences with the xHPF compiler of Applied Parallel Research, version 1.2, for the Intel Paragon. At this stage, we do not expect very high performance from our HPF programs, even though performance will eventually be of paramount importance for the acceptance of HPF. Instead, our primary objective is to study how to convert large Fortran 77 (F77) programs to HPF such that the compiler generates reasonably efficient parallel code. We report on a case study that identifies several problems when parallelizing code with HPF; most of these problems affect current HPF compiler technology in general, although some are specific to the xHPF compiler. We discuss our solutions from the perspective of the scientific programmer, and present timing results on the Intel Paragon. The case study comprises three programs of different complexity with respect to parallelization. We use the dense matrix-matrix product to show that the distribution of arrays and the order of nested loops significantly influence the performance of the parallel program. We use Gaussian elimination with partial pivoting to study the parallelization strategy of the compiler. There are various ways to structure this algorithm for a particular data distribution. This example shows how much effort may be demanded from the programmer to support the compiler in generating an efficient parallel implementation. Finally, we use a small application to show that the more complicated structure of a larger program may introduce problems for the parallelization, even though all subroutines of the application are easy to parallelize by themselves. The application consists of a finite volume discretization on a structured grid and a nested iterative solver. Our case study shows that it is possible to obtain reasonably efficient parallel programs with xHPF, although the compiler

  9. High performance cloud auditing and applications

    CERN Document Server

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  10. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  11. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

    Human performance can degrade due to boredom, distraction and fatigue in vision-related tasks such as measurement and counting. Vision-based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in sensors and related technologies, together with advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, and with greater consistency over time, than humans. The Electronics and Instrumentation Services Division has developed vision-based systems for several applications to perform tasks such as precision alignment, biometric access control, measurement and counting. This paper briefly describes four such applications. (author)

  12. RSYST: From nuclear reactor calculations towards a highly sophisticated scientific software integration environment

    International Nuclear Information System (INIS)

    Noack, M.; Seybold, J.; Ruehle, R.

    1996-01-01

    The software environment RSYST was originally used to solve problems in reactor physics. The consideration of advanced scientific simulation requirements and the strict application of modern software design principles led to a system which is well suited to solving problems in various complex scientific problem domains. Starting with a review of the early days of RSYST, we describe its steady evolution, driven by the need for a software environment which combines the advantages of a high-performance database system with the capability to integrate sophisticated scientific and technical applications. The RSYST architecture is presented and the data modelling capabilities are described. To demonstrate the powerful possibilities and flexibility of the RSYST environment, we describe a wide range of RSYST applications, e.g., mechanical simulations of multibody systems, which are used in biomechanical research, civil engineering and robotics. In addition, a hypermedia system which is used for scientific and technical training and documentation is presented. (orig.)

  13. High-brightness electron beams for production of high intensity, coherent radiation for scientific and industrial applications

    International Nuclear Information System (INIS)

    Kim, K.-J.

    1999-01-01

    Relativistic electron beams with high six-dimensional phase space densities, i.e., high-brightness beams, are the basis for efficient generation of intense and coherent radiation beams for advanced scientific and industrial applications. The remarkable progress in synchrotron radiation facilities from the first generation to the current, third-generation capability illustrates this point. With the recent development of the high-brightness electron gun based on laser-driven rf photocathodes, linacs have become another important option for high-brightness electron beams. With linacs of about 100 MeV, megawatt-class infrared free-electron lasers can be designed for industrial applications such as power beaming. With linacs of about 10 GeV, 1-angstrom x-ray beams with brightness and time resolution exceeding those of current synchrotron radiation sources by several orders of magnitude can be generated based on self-amplified spontaneous emission. Scattering of a high-brightness electron beam by high power laser beams is emerging as a compact method of generating short-pulse, bright x-rays. At the high-energy frontier, photons of TeV quantum energy could be generated by scattering laser beams with TeV electron beams in future linear colliders.

  14. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    OpenAIRE

    Netto, Marco A. S.; Calheiros, Rodrigo N.; Rodrigues, Eduardo R.; Cunha, Renato L. F.; Buyya, Rajkumar

    2017-01-01

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources---steady (and sensitive) workloads can run on on-pr...

  15. 3D graphene nanomaterials for binder-free supercapacitors: scientific design for enhanced performance

    Science.gov (United States)

    He, Shuijian; Chen, Wei

    2015-04-01

    Because of its excellent intrinsic properties, especially its strong mechanical strength, extraordinarily high surface area and extremely high conductivity, graphene is deemed a versatile building block for fabricating functional materials for energy production and storage applications. In this article, the recent progress in the assembly of binder-free and self-standing graphene-based materials, as well as their application in supercapacitors, is reviewed, including electrical double layer capacitors, pseudocapacitors, and asymmetric supercapacitors. Various fabrication strategies and the influence of structure on the capacitance performance of 3D graphene-based materials are discussed. We finally give concluding remarks and an outlook on the scientific design of binder-free and self-standing graphene materials for achieving better capacitance performance.

  16. Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2008-01-01

    Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset
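    An illustrative sketch of the kind of multi-dimensional thresholding and histogram computation described above (NumPy rather than FastBit; the particle quantities and cut values are hypothetical):

        # Sketch: apply combined thresholds to select "beam" particles and build a
        # 2D histogram of the selection for display (NumPy stand-in for FastBit).
        import numpy as np

        rng = np.random.default_rng(0)
        px = rng.normal(0.0, 1.0, 1_000_000)     # hypothetical longitudinal momentum
        x = rng.normal(0.0, 5e-6, 1_000_000)     # hypothetical transverse position

        beam = (px > 2.5) & (np.abs(x) < 1e-5)   # multi-dimensional threshold query
        hist, px_edges, x_edges = np.histogram2d(px[beam], x[beam], bins=(64, 64))
        print(beam.sum(), "particles selected; max bin count", int(hist.max()))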

  17. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  18. Predictive Performance Tuning of OpenACC Accelerated Applications

    KAUST Repository

    Siddiqui, Shahzeb; Feki, Saber

    2014-01-01

    , with the introduction of high level programming models such as OpenACC [1] and OpenMP 4.0 [2], these devices are becoming more accessible and practical to use by a larger scientific community. However, performance optimization of OpenACC accelerated applications usually

  19. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in these codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  20. Optical Thermal Characterization Enables High-Performance Electronics Applications

    Energy Technology Data Exchange (ETDEWEB)

    2016-02-01

    NREL developed a modeling and experimental strategy to characterize thermal performance of materials. The technique provides critical data on thermal properties with relevance for electronics packaging applications. Thermal contact resistance and bulk thermal conductivity were characterized for new high-performance materials such as thermoplastics, boron-nitride nanosheets, copper nanowires, and atomically bonded layers. The technique is an important tool for developing designs and materials that enable power electronics packaging with small footprint, high power density, and low cost for numerous applications.

  1. Performance Issues in High Performance Fortran Implementations of Sensor-Based Applications

    Directory of Open Access Journals (Sweden)

    David R. O'Hallaron

    1997-01-01

    Applications that get their inputs from sensors are an important and often overlooked application domain for High Performance Fortran (HPF). Such sensor-based applications typically perform regular operations on dense arrays, and often have latency and throughput requirements that can only be achieved with parallel machines. This article describes a study of sensor-based applications, including the fast Fourier transform, synthetic aperture radar imaging, narrowband tracking radar processing, multibaseline stereo imaging, and medical magnetic resonance imaging. The applications are written in a dialect of HPF developed at Carnegie Mellon, and are compiled by the Fx compiler for the Intel Paragon. The main results of the study are that (1) it is possible to realize good performance for realistic sensor-based applications written in HPF and (2) the performance of the applications is determined by the performance of three core operations: independent loops (i.e., loops with no dependences between iterations), reductions, and index permutations. The article discusses the implications for HPF implementations and introduces some simple tests that implementers and users can use to measure the efficiency of the loops, reductions, and index permutations generated by an HPF compiler.
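    A small sketch of the three core operations named above, using NumPy as a stand-in for HPF (the arrays are arbitrary):

        # Sketch: an independent (element-wise) loop, a reduction, and an index
        # permutation, the three operations the study identifies as dominant.
        import numpy as np

        signal = np.random.rand(8, 1024)              # e.g. 8 sensor channels

        power = np.abs(signal) ** 2                   # independent loop: no cross-iteration dependence
        channel_energy = power.sum(axis=1)            # reduction: one value per channel
        order = np.argsort(channel_energy)[::-1]      # index permutation: reorder channels
        ranked = signal[order, :]                     #   by decreasing energy
        print(channel_energy[order][:3])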

  2. Efficient Use of Distributed Systems for Scientific Applications

    Science.gov (United States)

    Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques

    2000-01-01

    Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with number of elements ranging from 30,269 elements for the Barth5 mesh to 11,451 elements for the Barth4 mesh. Future work with PART entails using the tool with an integrated application requiring
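    A highly simplified sketch of the heterogeneity-aware idea behind PART (not the tool itself; the element counts and relative processor speeds are invented): a partition is scored by each part's element count normalized by its processor speed, so faster processors receive proportionally more elements.

        # Sketch: load-imbalance metric for a mesh partition on heterogeneous processors.
        def imbalance(part_sizes, proc_speeds):
            scaled = [n / s for n, s in zip(part_sizes, proc_speeds)]
            return max(scaled) / (sum(scaled) / len(scaled))   # 1.0 == perfect balance

        speeds = [1.2, 1.2, 0.8, 0.8]                          # two fast, two slow processors
        even_split = imbalance([7568, 7567, 7567, 7567], speeds)   # ignores speeds
        speed_aware = imbalance([9082, 9082, 6053, 6052], speeds)  # weights by speeds
        print(round(even_split, 3), round(speed_aware, 3))         # ~1.20 vs ~1.00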

  3. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called, the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access to this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as its future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  4. CCD developed for scientific application by Hamamatsu

    CERN Document Server

    Miyaguchi, K; Dezaki, J; Yamamoto, K

    1999-01-01

    We have developed CCDs for scientific applications that feature a low readout noise of less than 5 e- rms and a low dark current of 10-25 pA/cm² at room temperature. CCDs with these characteristics will prove extremely useful in applications such as spectroscopic measurement and dental radiography. In addition, a large-area CCD of 2k×4k pixels with a 15 μm square pixel size has recently been completed for optical use in astronomical observations. Applications to X-ray astronomy require the most challenging device performance in terms of deep depletion, high CTE, and focal plane size, among others. An abuttable X-ray CCD, having 1024×1024 pixels and a 24 μm square pixel size, is to be installed on the International Space Station (ISS). We are now striving to achieve the lowest usable cooling temperature by means of a built-in TEC with limited power consumption. Details on the development status are described in this report. We would also like to present our future plans for a large active area and deep depleti...

  5. Performance Validity Testing in Neuropsychology: Scientific Basis and Clinical Application-A Brief Review.

    Science.gov (United States)

    Greher, Michael R; Wodushek, Thomas R

    2017-03-01

    Performance validity testing refers to neuropsychologists' methodology for determining whether neuropsychological test performances completed in the course of an evaluation are valid (ie, the results of true neurocognitive function) or invalid (ie, overly impacted by the patient's effort/engagement in testing). This determination relies upon the use of either standalone tests designed for this sole purpose, or specific scores/indicators embedded within traditional neuropsychological measures that have demonstrated this utility. In response to a greater appreciation for the critical role that performance validity issues play in neuropsychological testing and the need to measure this variable to the best of our ability, the scientific base for performance validity testing has expanded greatly over the last 20 to 30 years. As such, the majority of current day neuropsychologists in the United States use a variety of measures for the purpose of performance validity testing as part of everyday forensic and clinical practice and address this issue directly in their evaluations. The following is the first article of a 2-part series that will address the evolution of performance validity testing in the field of neuropsychology, both in terms of the science as well as the clinical application of this measurement technique. The second article of this series will review performance validity tests in terms of methods for development of these measures, and maximizing of diagnostic accuracy.

  6. Age and Scientific Performance.

    Science.gov (United States)

    Cole, Stephen

    1979-01-01

    The long-standing belief that age is negatively associated with scientific productivity and creativity is shown to be based upon incorrect analysis of data. Studies reported in this article suggest that the relationship between age and scientific performance is influenced by the operation of the reward system. (Author)

  7. Predictive Performance Tuning of OpenACC Accelerated Applications

    KAUST Repository

    Siddiqui, Shahzeb

    2014-05-04

    Graphics Processing Units (GPUs) are gradually becoming mainstream in supercomputing as their capabilities to significantly accelerate a large spectrum of scientific applications have been clearly identified and proven. Moreover, with the introduction of high level programming models such as OpenACC [1] and OpenMP 4.0 [2], these devices are becoming more accessible and practical to use by a larger scientific community. However, performance optimization of OpenACC accelerated applications usually requires an in-depth knowledge of the hardware and software specifications. We suggest a prediction-based performance tuning mechanism [3] to quickly tune OpenACC parameters for a given application to dynamically adapt to the execution environment on a given system. This approach is applied to a finite difference kernel to tune the OpenACC gang and vector clauses for mapping the compute kernels into the underlying accelerator architecture. Our experiments show a significant performance improvement against the default compiler parameters and a faster tuning by an order of magnitude compared to the brute force search tuning.
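    A schematic sketch of the prediction-based idea (generic Python, not the authors' tool): time only a handful of gang/vector configurations, fit a cheap surrogate model, and query the model for the rest of the search space instead of brute-forcing it. The candidate values and the stand-in runtime function are hypothetical.

        # Sketch: surrogate-model tuning of OpenACC-style (gang, vector) parameters.
        import itertools, random

        GANGS, VECTORS = [64, 128, 256, 512, 1024], [32, 64, 128, 256]

        def measured_runtime(gang, vector):
            """Stand-in for actually compiling and running the kernel once."""
            return 1.0 / (gang * vector) + 1e-5 * vector + random.uniform(0.0, 1e-6)

        sampled = random.sample(list(itertools.product(GANGS, VECTORS)), 6)
        observed = {cfg: measured_runtime(*cfg) for cfg in sampled}

        def predict(gang, vector):
            """Toy nearest-neighbour surrogate over the sampled configurations."""
            nearest = min(observed, key=lambda c: abs(c[0] - gang) / gang
                                                   + abs(c[1] - vector) / vector)
            return observed[nearest]

        best = min(itertools.product(GANGS, VECTORS), key=lambda cfg: predict(*cfg))
        print("predicted best (gang, vector):", best)   # only 6 real runs were needed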

  8. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of...

  9. Cyberinfrastructure and Scientific Collaboration: Application of a Virtual Team Performance Framework with Potential Relevance to Education. WCER Working Paper No. 2010-12

    Science.gov (United States)

    Kraemer, Sara; Thorn, Christopher A.

    2010-01-01

    The purpose of this exploratory study was to identify and describe some of the dimensions of scientific collaborations using high throughput computing (HTC) through the lens of a virtual team performance framework. A secondary purpose was to assess the viability of using a virtual team performance framework to study scientific collaborations using…

  10. High-performance silicon photonics technology for telecommunications applications.

    Science.gov (United States)

    Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Ishikawa, Yasuhiko; Wada, Kazumi; Yamamoto, Tsuyoshi

    2014-04-01

    By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge-based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge-based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications.

  11. High-performance silicon photonics technology for telecommunications applications

    International Nuclear Information System (INIS)

    Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Yamamoto, Tsuyoshi; Ishikawa, Yasuhiko; Wada, Kazumi

    2014-01-01

    By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge–based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge–based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications. (review)

  12. High-performance silicon photonics technology for telecommunications applications

    Science.gov (United States)

    Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Ishikawa, Yasuhiko; Wada, Kazumi; Yamamoto, Tsuyoshi

    2014-04-01

    By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge-based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge-based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications.

  13. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  14. A framework for integration of scientific applications into the OpenTopography workflow

    Science.gov (United States)

    Nandigam, V.; Crosby, C.; Baru, C.

    2012-12-01

    The NSF-funded OpenTopography facility provides online access to Earth science-oriented high-resolution LIDAR topography data, online processing tools, and derivative products. The underlying cyberinfrastructure employs a multi-tier service-oriented architecture composed of an infrastructure tier, a processing services tier, and an application tier. The infrastructure tier consists of storage and compute resources as well as supporting databases. The services tier consists of the set of processing routines, each deployed as a Web service. The applications tier provides client interfaces to the system (e.g., a portal). We propose a "pluggable" infrastructure design that will allow new scientific algorithms and processing routines developed and maintained by the community to be integrated into the OpenTopography system so that the wider earth science community can benefit from their availability. All core components in OpenTopography are available as Web services using a customized open-source Opal toolkit. The Opal toolkit provides mechanisms to manage and track job submissions with the help of a back-end database, and it allows monitoring of job and system status by providing charting tools. All core components in OpenTopography have been developed, maintained and wrapped as Web services using Opal by OpenTopography developers. However, as the scientific community develops new processing and analysis approaches, this integration approach does not scale efficiently. Most of the new scientific applications will have their own active development teams performing regular updates, maintenance and other improvements. It would be optimal to have each application co-located where its developers can continue to actively work on it while still making it accessible within the OpenTopography workflow for processing capabilities. We will utilize a software framework for remote integration of these scientific applications into the OpenTopography system. This will be accomplished by

  15. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed.

  16. Scientific applications of frequency-stabilized laser technology in space

    Science.gov (United States)

    Schumaker, Bonny L.

    1990-01-01

    A synoptic investigation of the uses of frequency-stabilized lasers for scientific applications in space is presented. It begins by summarizing properties of lasers, characterizing their frequency stability, and describing limitations and techniques to achieve certain levels of frequency stability. Limits to precision set by laser frequency stability for various kinds of measurements are investigated and compared with other sources of error. These other sources include photon-counting statistics, scattered laser light, fluctuations in laser power, and intensity distribution across the beam, propagation effects, mechanical and thermal noise, and radiation pressure. Methods are explored to improve the sensitivity of laser-based interferometric and range-rate measurements. Several specific types of science experiments that rely on highly precise measurements made with lasers are analyzed, and anticipated errors and overall performance are discussed. Qualitative descriptions are given of a number of other possible science applications involving frequency-stabilized lasers and related laser technology in space. These applications will warrant more careful analysis as technology develops.

  17. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    Cordes Ben

    2009-01-01

    High-performance reconfigurable computing (HPRC) is a novel approach to providing large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.
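    For reference, the core of the backprojection algorithm itself is short; the sketch below is a generic, naive NumPy version with synthetic geometry and echoes (it illustrates the algorithm discussed above, not the FPGA implementation).

        # Sketch: naive SAR-style backprojection. For each pulse, compute every
        # pixel's range to the sensor and accumulate the matching echo sample.
        import numpy as np

        c, fs = 3e8, 1e8                                   # wave speed, range sampling rate
        n_pulses, n_samples, n_pix = 64, 512, 128
        echoes = np.random.rand(n_pulses, n_samples)       # synthetic range profiles
        sensor_x = np.linspace(-50.0, 50.0, n_pulses)      # sensor track (stand-off y = 500 m)

        xs = np.linspace(-20.0, 20.0, n_pix)
        ys = np.linspace(180.0, 220.0, n_pix)
        image = np.zeros((n_pix, n_pix))

        for p in range(n_pulses):                          # pulses are independent: easy to parallelize
            dx = xs[None, :] - sensor_x[p]
            dy = ys[:, None] - 500.0
            rng_bin = (2.0 * np.hypot(dx, dy) / c * fs).astype(int)
            np.clip(rng_bin, 0, n_samples - 1, out=rng_bin)
            image += echoes[p, rng_bin]                    # gather + accumulate per pixel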

  18. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    2009-03-01

    High-performance reconfigurable computing (HPRC) is a novel approach to providing large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  19. ERAST: Scientific Applications and Technology Commercialization

    Science.gov (United States)

    Hunley, John D. (Compiler); Kellogg, Yvonne (Compiler)

    2000-01-01

    This is a conference publication for an event designed to inform potential contractors and appropriate personnel in various scientific disciplines that the ERAST (Environmental Research Aircraft and Sensor Technology) vehicles have reached a certain level of maturity and are available to perform a variety of missions ranging from data gathering to telecommunications. There are multiple applications of the technology and a great many potential commercial and governmental markets. As high altitude platforms, the ERAST vehicles can gather data at higher resolution than satellites and can do so continuously, whereas satellites pass over a particular area only once each orbit. Formal addresses are given by Rich Christiansen (Director of Programs, NASA Aerospace Technology Ent.), Larry Roeder (Senior Policy Advisor, U.S. Dept. of State), and Dr. Marianne McCarthy (DFRC Education Dept.). The Commercialization Workshop is chaired by Dale Tietz (President, New Vista International) and the Science Workshop is chaired by Steve Wegener (Deputy Manager of NASA ERAST, NASA Ames Research Center).

  20. Applications of industrial computed tomography at Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Kruger, R.P.; Morris, R.A.; Wecksung, G.W.

    1980-01-01

    A research and development program was begun three years ago at the Los Alamos Scientific Laboratory (LASL) to study nonmedical applications of computed tomography. This program had several goals. The first goal was to develop the necessary reconstruction algorithms to accurately reconstruct cross sections of nonmedical industrial objects. The second goal was to be able to perform extensive tomographic simulations to determine the efficacy of tomographic reconstruction with a variety of hardware configurations. The final goal was to construct an inexpensive industrial prototype scanner with a high degree of design flexibility. The implementation of these program goals is described

  1. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5% accuracy, and of Overflow to within 10%.

  2. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  3. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
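    A condensed sketch of the hybrid message-passing/multi-threading model mentioned above, applied to a block-row matrix multiply (this assumes mpi4py and NumPy and illustrates the programming model, not the article's implementation):

        # Sketch: MPI ranks own block-rows of A and broadcast B; within each rank a
        # thread pool works on column tiles. Run with e.g.: mpiexec -n 4 python matmul.py
        from concurrent.futures import ThreadPoolExecutor
        import numpy as np
        from mpi4py import MPI   # assumed available

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n, tile = 1024, 256
        A_local = np.random.rand(n // size, n)             # this rank's block-row of A
        B = np.random.rand(n, n) if rank == 0 else np.empty((n, n))
        comm.Bcast(B, root=0)                              # message passing between processes

        def tile_product(j):                               # one thread handles one column tile
            cols = slice(j * tile, (j + 1) * tile)
            return cols, A_local @ B[:, cols]

        C_local = np.empty_like(A_local)
        with ThreadPoolExecutor() as pool:                 # multi-threading within the process
            for cols, block in pool.map(tile_product, range(n // tile)):
                C_local[:, cols] = block

        C_blocks = comm.gather(C_local, root=0)            # rank 0 can assemble the full product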

  4. Assessing Scientific Performance.

    Science.gov (United States)

    Weiner, John M.; And Others

    1984-01-01

    A method for assessing scientific performance based on relationships displayed numerically in published documents is proposed and illustrated using published documents in pediatric oncology for the period 1979-1982. Contributions of a major clinical investigations group, the Childrens Cancer Study Group, are analyzed. Twenty-nine references are…

  5. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  6. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  7. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  8. Autonomy vs. dependency of scientific collaboration in scientific performance

    Energy Technology Data Exchange (ETDEWEB)

    Chinchilla-Rodriguez, Z.; Miguel, S.; Perianes-Rodriguez, A.; Ovalle-Perandones, M.A.; Olmeda-Gomez, C.

    2016-07-01

    This article explores the capacity of Latin America in the generation of scientific knowledge and its visibility at the global level. The novelty of the contribution lies in the decomposition of leadership, plus its combination with the results of performance indicators. We compare the normalized citation of all output against the leading output, as well as scientific excellence (Chinchilla, et al. 2016a; 2016b), technological impact and the trends in collaboration types and normalized citation. The main goal is to determine to what extent the main Latin American producers of scientific output depend on collaboration to heighten research performance in terms of citation; or to the contrary, whether there is enough autonomy and capacity to leverage its competitiveness through the design of research and development agendas. To the best of our knowledge this is the first study adopting this approach at the country level within the field of N&N. (Author)

  9. Scientific Applications Performance Evaluation on Burst Buffer

    KAUST Repository

    Markomanolis, George S.; Hadri, Bilel; Khurram, Rooh Ul Amin; Feki, Saber

    2017-01-01

    Parallel I/O is an integral component of modern high performance computing, especially for storing and processing very large datasets, as in the case of seismic imaging, CFD, combustion and weather modeling. The storage hierarchy nowadays includes

  10. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing size through the network shows no sign of abating. Hence, we need high-bandwidth, high-speed networks such as ESnet (Energy Sciences Network). Network reservation systems such as ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System) establish guaranteed-bandwidth secure virtual circuits for a certain bandwidth, start time, and length of time. OSCARS checks network availability and capacity for the specified period of time, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no alternative suggestion is returned to the user, so there is no way for the user to make an optimal choice. We report a new algorithm in which the user specifies the total volume that needs to be transferred, a maximum bandwidth that he/she can use, and a desired time period within which the transfer should be done. The algorithm can find alternative allocation possibilities, including the earliest time for completion or the shortest transfer duration, leaving the choice to the user. We present a novel approach for path finding in time-dependent networks and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high-performance data transfers over storage and network resources.
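    A simplified sketch of the flavour of such a search (not the published algorithm): given a time-slotted profile of residual bandwidth, enumerate the windows that can carry the requested volume under the bandwidth cap, then report the earliest-completion and shortest-duration options. The slot values, volume, and cap are invented.

        # Sketch: find reservation windows over hour-long slots of residual bandwidth (Gbps).
        def feasible_windows(avail, volume_gb, cap_gbps):
            options = []
            for start in range(len(avail)):
                moved, t = 0.0, start
                while t < len(avail) and moved < volume_gb:
                    moved += min(avail[t], cap_gbps) * 3600      # gigabits moved in one slot
                    t += 1
                if moved >= volume_gb:
                    options.append((start, t, t - start))        # (start slot, end slot, duration)
            return options

        avail = [2, 2, 10, 10, 10, 4, 4, 1]                      # residual Gbps per slot
        windows = feasible_windows(avail, volume_gb=100_000, cap_gbps=8)
        print("earliest completion:", min(windows, key=lambda w: w[1]))
        print("shortest duration:  ", min(windows, key=lambda w: w[2]))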

  11. Rapid Prototyping of High Performance Signal Processing Applications

    Science.gov (United States)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high

  12. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both the theory and the applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have become available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed for use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments.

  13. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    Science.gov (United States)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing, given its potential benefits. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work presents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  14. Research initiatives for plug-and-play scientific computing

    International Nuclear Information System (INIS)

    McInnes, Lois Curfman; Dahlgren, Tamara; Nieplocha, Jarek; Bernholdt, David; Allan, Ben; Armstrong, Rob; Chavarria, Daniel; Elwasif, Wael; Gorton, Ian; Kenny, Joe; Krishan, Manoj; Malony, Allen; Norris, Boyana; Ray, Jaideep; Shende, Sameer

    2007-01-01

    This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance

  15. Managing Scientific Software Complexity with Bocca and CCA

    Directory of Open Access Journals (Sweden)

    Benjamin A. Allan

    2008-01-01

    Full Text Available In high-performance scientific software development, the emphasis is often on short time to first solution. Even when the development of new components mostly reuses existing components or libraries and only small amounts of new code must be created, dealing with the component glue code and software build processes to obtain complete applications is still tedious and error-prone. Component-based software meant to reduce complexity at the application level increases complexity to the extent that the user must learn and remember the interfaces and conventions of the component model itself. To address these needs, we introduce Bocca, the first tool to enable application developers to perform rapid component prototyping while maintaining robust software-engineering practices suitable to HPC environments. Bocca provides project management and a comprehensive build environment for creating and managing applications composed of Common Component Architecture components. Of critical importance for high-performance computing (HPC applications, Bocca is designed to operate in a language-agnostic way, simultaneously handling components written in any of the languages commonly used in scientific applications: C, C++, Fortran, Python and Java. Bocca automates the tasks related to the component glue code, freeing the user to focus on the scientific aspects of the application. Bocca embraces the philosophy pioneered by Ruby on Rails for web applications: start with something that works, and evolve it to the user's purpose.

  16. Cray XT4: An Early Evaluation for Petascale Scientific Simulation

    International Nuclear Information System (INIS)

    Alam, Sadaf R.; Barrett, Richard F.; Fahey, Mark R.; Kuehn, Jeffery A.; Sankaran, Ramanan; Worley, Patrick H.; Larkin, Jeffrey M.

    2007-01-01

    The scientific simulation capabilities of next generation high-end computing technology will depend on striking a balance among memory, processor, I/O, and local and global network performance across the breadth of the scientific simulation space. The Cray XT4 combines commodity AMD dual core Opteron processor technology with the second generation of Cray's custom communication accelerator in a system design whose balance is claimed to be driven by the demands of scientific simulation. This paper presents an evaluation of the Cray XT4 using microbenchmarks to develop a controlled understanding of individual system components, providing the context for analyzing and comprehending the performance of several petascale-ready applications. Results gathered from several strategic application domains are compared with observations on the previous generation Cray XT3 and other high-end computing systems, demonstrating performance improvements across a wide variety of application benchmark problems.

  17. Impulse: Memory System Support for Scientific Applications

    Directory of Open Access Journals (Sweden)

    John B. Carter

    1999-01-01

    Full Text Available Impulse is a new memory system architecture that adds two important features to a traditional memory controller. First, Impulse supports application‐specific optimizations through configurable physical address remapping. By remapping physical addresses, applications control how their data is accessed and cached, improving their cache and bus utilization. Second, Impulse supports prefetching at the memory controller, which can hide much of the latency of DRAM accesses. Because it requires no modification to processor, cache, or bus designs, Impulse can be adopted in conventional systems. In this paper we describe the design of the Impulse architecture, and show how an Impulse memory system can improve the performance of memory‐bound scientific applications. For instance, Impulse decreases the running time of the NAS conjugate gradient benchmark by 67%. We expect that Impulse will also benefit regularly strided, memory‐bound applications of commercial importance, such as database and multimedia programs.

  18. AMRZone: A Runtime AMR Data Sharing Framework For Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenzhao; Tang, Houjun; Harenberg, Steven; Byna, Suren; Zou, Xiaocheng; Devendran, Dharshi; Martin, Daniel; Wu, Kesheng; Dong, Bin; Klasky, Scott; Samatova, Nagiza

    2017-08-31

    Frameworks that facilitate runtime data sharing across multiple applications are of great importance for scientific data analytics. Although existing frameworks work well over uniform mesh data, they cannot effectively handle adaptive mesh refinement (AMR) data. The challenges in constructing an AMR-capable framework include: (1) designing an architecture that facilitates online AMR data management; (2) achieving a load-balanced AMR data distribution for the data staging space at runtime; and (3) building an effective online index to support the unique spatial data retrieval requirements for AMR data. Towards addressing these challenges to support runtime AMR data sharing across scientific applications, we present the AMRZone framework. Experiments over real-world AMR datasets demonstrate AMRZone's effectiveness at achieving a balanced workload distribution, reading/writing large-scale datasets with thousands of parallel processes, and satisfying queries with spatial constraints. Moreover, AMRZone's performance and scalability are even comparable with existing state-of-the-art work when tested over uniform mesh data with up to 16384 cores; in the best case, our framework achieves a 46% performance improvement.

  19. High-performance insulator structures for accelerator applications

    International Nuclear Information System (INIS)

    Sampayan, S.E.; Caporaso, G.J.; Sanders, D.M.; Stoddard, R.D.; Trimble, D.O.; Elizondo, J.; Krogh, M.L.; Wieskamp, T.F.

    1997-05-01

    A new, high gradient insulator technology has been developed for accelerator systems. The concept involves the use of alternating layers of conductors and insulators with periods of order 1 mm or less. These structures perform many times better (about 1.5 to 4 times higher breakdown electric field) than conventional insulators in long pulse, short pulse, and alternating polarity applications. We describe our ongoing studies investigating the degradation of the breakdown electric field resulting from alternate fabrication techniques, the effect of gas pressure, the effect of the insulator-to-electrode interface gap spacing, and the performance of the insulator structure under bi-polar stress

  20. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  1. Basic research in the East and West: a comparison of the scientific performance of high-energy physics accelerators

    International Nuclear Information System (INIS)

    Irvine, J.; Martin, B.R.

    1985-01-01

    This paper presents the results of a study comparing the past scientific performance of high-energy physics accelerators in the Eastern bloc with that of their main Western counterparts. Output-evaluation indicators are used. After carefully examining the extent to which the output indicators used may be biased against science in the Eastern bloc, various conclusions are drawn about the relative contributions to science made by these accelerators. Where significant differences in performance are apparent, an attempt is made to identify the main factors responsible. (author)

  2. Automatic Energy Schemes for High Performance Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications to group them into phases and apply frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them in addition to DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for the realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
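
    As an illustration of the phase-based strategy described above, the sketch below wraps a communication phase in a context manager that lowers the CPU frequency for its duration. The frequency-control helper is a hypothetical placeholder (a real runtime would use a platform-specific DVFS or clock-modulation interface), and the all-to-all call uses the mpi4py bindings; this is not the paper's runtime system.

        from contextlib import contextmanager
        import numpy as np
        from mpi4py import MPI

        LOW_FREQ_KHZ = 1_200_000     # assumed low P-state
        HIGH_FREQ_KHZ = 2_400_000    # assumed nominal P-state

        def set_cpu_frequency_khz(freq_khz):
            """Hypothetical hook: a real system would use a platform-specific
            DVFS interface (e.g. the OS cpufreq governor) or clock modulation."""
            pass

        @contextmanager
        def low_frequency_phase():
            # Communication-bound phases leave cores waiting on the network, so
            # running them at reduced frequency costs little time but saves energy.
            set_cpu_frequency_khz(LOW_FREQ_KHZ)
            try:
                yield
            finally:
                set_cpu_frequency_khz(HIGH_FREQ_KHZ)

        comm = MPI.COMM_WORLD
        n = comm.Get_size()
        send = np.arange(n, dtype='d')
        recv = np.empty(n, dtype='d')

        with low_frequency_phase():   # the all-to-all treated as one communication phase
            comm.Alltoall(send, recv)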

  3. Determination of performance characteristics of scientific applications on IBM Blue Gene/Q

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, C. [IBM Research Division, Cambridge, MA (United States); Walkup, R. E. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center; Sachdeva, V. [IBM Research Division, Cambridge, MA (United States); Jordan, K. E. [IBM Research Division, Cambridge, MA (United States); Gahvari, H. [Univ. of Illinois, Urbana-Champaign, IL (United States). Computer Science Dept.; Chung, I. -H. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center; Perrone, M. P. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center; Lu, L. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center; Liu, L. -K. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center; Magerlein, K. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center

    2013-02-13

    The IBM Blue Gene®/Q platform presents scientists and engineers with a rich set of hardware features such as 16 cores per chip sharing a Level 2 cache, a wide SIMD (single-instruction, multiple-data) unit, a five-dimensional torus network, and hardware support for collective operations. Especially important is the feature related to cores that have four “hardware threads,” which makes it possible to hide latencies and obtain a high fraction of the peak issue rate from each core. All of these hardware resources present unique performance-tuning opportunities on Blue Gene/Q. We provide an overview of several important applications and solvers and study them on Blue Gene/Q using performance counters and Message Passing Interface profiles. We also discuss how Blue Gene/Q tools help us understand the interaction of the application with the hardware and software layers and provide guidance for optimization. Furthermore, on the basis of our analysis, we discuss code improvement strategies targeting Blue Gene/Q. Information about how these algorithms map to the Blue Gene® architecture is expected to have an impact on future system design as we move to the exascale era.

  4. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable high performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines.
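
    Parallel R exposes R's statistical routines; purely as an illustration of the underlying idea (split the data, compute partial statistics in parallel workers, combine), here is a small Python sketch using the standard multiprocessing module. The function names and the chosen statistics are our own, not the Parallel R API.

        from multiprocessing import Pool
        import numpy as np

        def partial_stats(chunk):
            # Each worker returns sufficient statistics for its block of rows.
            return chunk.shape[0], chunk.sum(axis=0), (chunk ** 2).sum(axis=0)

        def parallel_mean_std(data, workers=4):
            chunks = np.array_split(data, workers)
            with Pool(workers) as pool:
                parts = pool.map(partial_stats, chunks)
            n = sum(p[0] for p in parts)
            s = sum(p[1] for p in parts)
            sq = sum(p[2] for p in parts)
            mean = s / n
            std = np.sqrt(sq / n - mean ** 2)
            return mean, std

        if __name__ == "__main__":
            x = np.random.rand(1_000_000, 8)
            col_mean, col_std = parallel_mean_std(x)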

  5. Final Report for 'Center for Technology for Advanced Scientific Component Software'

    International Nuclear Information System (INIS)

    Shasharina, Svetlana

    2010-01-01

    The goal of the Center for Technology for Advanced Scientific Component Software is to fundamentally change the way scientific software is developed and used, by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into those applications, testing the tools in the applications, and modifying them to be more usable.

  6. High performance hybrid magnetic structure for biotechnology applications

    Science.gov (United States)

    Humphries, David E [El Cerrito, CA; Pollard, Martin J [El Cerrito, CA; Elkin, Christopher J [San Ramon, CA

    2009-02-03

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are further improvements to aspects of the hybrid magnetic structure, including additional elements and for adapting the use of the hybrid magnetic structure for use in biotechnology and high throughput processes.

  7. Solving Enterprise Applications Performance Puzzles Queuing Models to the Rescue

    CERN Document Server

    Grinshpan, Leonid

    2012-01-01

    A groundbreaking scientific approach to solving enterprise applications performance problems. Enterprise applications are the information backbone of today's corporations, supporting vital business functions such as operational management, supply chain maintenance, customer relationship administration, business intelligence, accounting, procurement logistics, and more. Acceptable performance of enterprise applications is critical for a company's day-to-day operations as well as for its profitability. Unfortunately, troubleshooting poorly performing enterprise applications has traditionally

  8. 3D printed high performance strain sensors for high temperature applications

    Science.gov (United States)

    Rahman, Md Taibur; Moser, Russell; Zbib, Hussein M.; Ramana, C. V.; Panat, Rahul

    2018-01-01

    Realization of high temperature physical measurement sensors, which are needed in many of the current and emerging technologies, is challenging due to the degradation of their electrical stability by drift currents, material oxidation, thermal strain, and creep. In this paper, for the first time, we demonstrate that 3D printed sensors show a metamaterial-like behavior, resulting in superior performance such as high sensitivity, low thermal strain, and enhanced thermal stability. The sensors were fabricated using silver (Ag) nanoparticles (NPs), using an advanced Aerosol Jet based additive printing method followed by thermal sintering. The sensors were tested under cyclic strain up to a temperature of 500 °C and showed a gauge factor of 3.15 ± 0.086, which is about 57% higher than that of commercially available gauges. The sensor thermal strain was also an order of magnitude lower than that of commercial gauges for operation up to a temperature of 500 °C. An analytical model was developed to account for the enhanced performance of such printed sensors based on enhanced lateral contraction of the NP films due to the porosity, a behavior akin to cellular metamaterials. The results demonstrate the potential of 3D printing technology as a pathway to realize highly stable and high-performance sensors for high temperature applications.
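
    For reference, the gauge factor quoted above relates relative resistance change to applied strain. A minimal sketch of that standard definition follows; the numeric values are made up only to reproduce a factor of about 3.15 and are not data from the paper.

        # Gauge factor GF = (delta_R / R0) / strain, the standard definition for
        # strain sensors.  The numbers below are illustrative only.
        def gauge_factor(delta_r, r0, strain):
            return (delta_r / r0) / strain

        gf = gauge_factor(delta_r=0.315, r0=100.0, strain=0.001)   # -> 3.15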

  9. High-performance floating-point image computing workstation for medical applications

    Science.gov (United States)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1: 1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e

  10. High performance protection circuit for power electronics applications

    Energy Technology Data Exchange (ETDEWEB)

    Tudoran, Cristian D., E-mail: cristian.tudoran@itim-cj.ro; Dădârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan [National Institute for Research and Development of Isotopic and Molecular Technologies, 67-103 Donat, PO 5 Box 700, 400293 Cluj-Napoca (Romania)

    2015-12-23

    In this paper we present a high performance protection circuit designed for power electronics applications where the load currents can increase rapidly and exceed the maximum allowed values, as in high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, having the role of a “sensor”, or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.

  11. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. It requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of these applications employ irregular algorithms which exhibit data-dependent control flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between different processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control flow during a single step of the application independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps. In this dissertation, we present novel machine learning based optimization techniques to address
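
    The dissertation's machine-learning models are not reproduced here; the sketch below only illustrates the core observation that one step's measured behavior predicts the next step's, by reusing per-item costs from the previous time step to rebalance work greedily for the following one. All names are illustrative.

        from typing import Dict, List

        def rebalance(prev_cost: Dict[int, float], n_workers: int) -> List[List[int]]:
            """Greedy longest-processing-time partition of work items, using each
            item's measured cost from the previous step as its predicted cost."""
            bins: List[List[int]] = [[] for _ in range(n_workers)]
            load = [0.0] * n_workers
            for item, cost in sorted(prev_cost.items(), key=lambda kv: -kv[1]):
                w = load.index(min(load))      # least-loaded worker so far
                bins[w].append(item)
                load[w] += cost
            return bins

        # Costs measured at step t drive the partition used at step t+1.
        measured = {0: 4.0, 1: 1.0, 2: 3.5, 3: 0.5, 4: 2.0}
        partition = rebalance(measured, n_workers=2)
        # -> [[0, 1, 3], [2, 4]], predicted loads 5.5 and 5.5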

  12. Scientific applications of symbolic computation

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1976-02-01

    The use of symbolic computation systems for problem solving in scientific research is reviewed. The nature of the field is described, and particular examples are considered from celestial mechanics, quantum electrodynamics and general relativity. Symbolic integration and some more recent applications of algebra systems are also discussed.

  13. Accelerating the scientific exploration process with scientific workflows

    International Nuclear Information System (INIS)

    Altintas, Ilkay; Barney, Oscar; Cheng, Zhengang; Critchlow, Terence; Ludaescher, Bertram; Parker, Steve; Shoshani, Arie; Vouk, Mladen

    2006-01-01

    Although an increasing amount of middleware has emerged in the last few years to achieve remote data access, distributed job execution, and data management, orchestrating these technologies with minimal overhead still remains a difficult task for scientists. Scientific workflow systems improve this situation by creating interfaces to a variety of technologies and automating the execution and monitoring of the workflows. Workflow systems provide domain-independent customizable interfaces and tools that combine different tools and technologies along with efficient methods for using them. As simulations and experiments move into the petascale regime, the orchestration of long running data and compute intensive tasks is becoming a major requirement for the successful steering and completion of scientific investigations. A scientific workflow is the process of combining data and processes into a configurable, structured set of steps that implement semi-automated computational solutions of a scientific problem. Kepler is a cross-project collaboration, co-founded by the SciDAC Scientific Data Management (SDM) Center, whose purpose is to develop a domain-independent scientific workflow system. It provides a workflow environment in which scientists design and execute scientific workflows by specifying the desired sequence of computational actions and the appropriate data flow, including required data transformations, between these steps. Currently deployed workflows range from local analytical pipelines to distributed, high-performance and high-throughput applications, which can be both data- and compute-intensive. The scientific workflow approach offers a number of advantages over traditional scripting-based approaches, including ease of configuration, improved reusability and maintenance of workflows and components (called actors), automated provenance management, 'smart' re-running of different versions of workflow instances, on-the-fly updateable parameters, monitoring
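
    As a generic illustration of the workflow idea described above (not Kepler's actor model or its provenance machinery), the sketch below wires a few steps into a dependency graph and executes them in topological order, passing each step's outputs to its successors. Step names and data are illustrative.

        from graphlib import TopologicalSorter

        def run_workflow(steps, deps, initial):
            """steps: name -> callable(dict) -> dict of outputs
               deps:  name -> set of upstream step names
               initial: inputs available before any step runs."""
            data = dict(initial)
            for name in TopologicalSorter(deps).static_order():
                data.update(steps[name](data))   # keep all intermediate outputs
            return data

        steps = {
            "fetch":   lambda d: {"raw": list(range(d["n"]))},
            "clean":   lambda d: {"clean": [x for x in d["raw"] if x % 2 == 0]},
            "analyze": lambda d: {"mean": sum(d["clean"]) / len(d["clean"])},
        }
        deps = {"fetch": set(), "clean": {"fetch"}, "analyze": {"clean"}}
        result = run_workflow(steps, deps, {"n": 10})   # result["mean"] == 4.0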

  14. Research and Application of New Type of High Performance Titanium Alloy

    Directory of Open Access Journals (Sweden)

    ZHU Zhishou

    2016-06-01

    Full Text Available With the continuous expansion of the application range and volume of titanium alloys in fields such as aviation, space, weaponry, marine and chemical industry, ever more demanding requirements have been placed on their comprehensive mechanical properties, cost and processing characteristics. Through alloy design based on microstructure parameters, combined with strengthening and toughening technologies such as fine-grain strengthening, phase-transformation control and process control, a new type of high performance titanium alloy with a good combination of high strength and toughness, fatigue resistance, failure resistance and impact resistance has been researched and manufactured. The new titanium alloy has extended the scope and level of titanium applications in high-end fields, supported industrial upgrading, and met the application requirements of next-generation equipment.

  15. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and personal computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept in the magnetic disk for fast retrieval. The optical disks are used as archive

  16. Workshop on scientific applications of short wavelength coherent light sources

    International Nuclear Information System (INIS)

    Spicer, W.; Arthur, J.; Winick, H.

    1993-02-01

    This report contains papers on the following topics: A 2 to 4nm High Power FEL On the SLAC Linac; Atomic Physics with an X-ray Laser; High Resolution, Three Dimensional Soft X-ray Imaging; The Role of X-ray Induced Damage in Biological Micro-imaging; Prospects for X-ray Microscopy in Biology; Femtosecond Optical Pulses?; Research in Chemical Physics Surface Science, and Materials Science, with a Linear Accelerator Coherent Light Source; Application of 10 GeV Electron Driven X-ray Laser in Gamma-ray Laser Research; Non-Linear Optics, Fluorescence, Spectromicroscopy, Stimulated Desorption: We Need LCLS' Brightness and Time Scale; Application of High Intensity X-rays to Materials Synthesis and Processing; LCLS Optics: Selected Technological Issues and Scientific Opportunities; Possible Applications of an FEL for Materials Studies in the 60 eV to 200 eV Spectral Region

  17. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Maxine D. [Acting Director, EVL; Leigh, Jason [PI

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as the Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system, advancing scientific research and education in the U.S. and globally and helping to train the next-generation workforce.

  18. Scientific Services on the Cloud

    Science.gov (United States)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications of parallel and distributed computation. To this day, scientific applications remain some of the most compute intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administered by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and are by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  19. Methods for Specifying Scientific Data Standards and Modeling Relationships with Applications to Neuroscience

    Science.gov (United States)

    Rübel, Oliver; Dougherty, Max; Prabhat; Denes, Peter; Conant, David; Chang, Edward F.; Bouchard, Kristofer

    2016-01-01

    Neuroscience continues to experience a tremendous growth in data; in terms of the volume and variety of data, the velocity at which data is acquired, and in turn the veracity of data. These challenges are a serious impediment to sharing of data, analyses, and tools within and across labs. Here, we introduce BRAINformat, a novel data standardization framework for the design and management of scientific data formats. The BRAINformat library defines application-independent design concepts and modules that together create a general framework for standardization of scientific data. We describe the formal specification of scientific data standards, which facilitates sharing and verification of data and formats. We introduce the concept of Managed Objects, enabling semantic components of data formats to be specified as self-contained units, supporting modular and reusable design of data format components and file storage. We also introduce the novel concept of Relationship Attributes for modeling and use of semantic relationships between data objects. Based on these concepts we demonstrate the application of our framework to design and implement a standard format for electrophysiology data and show how data standardization and relationship-modeling facilitate data analysis and sharing. The format uses HDF5, enabling portable, scalable, and self-describing data storage and integration with modern high-performance computing for data-driven discovery. The BRAINformat library is open source, easy-to-use, and provides detailed user and developer documentation and is freely available at: https://bitbucket.org/oruebel/brainformat. PMID:27867355
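
    BRAINformat defines its own managed-object API; the following h5py sketch only illustrates, under assumed group and attribute names, the kind of self-describing HDF5 layout with an explicit relationship attribute that the abstract describes. It is not the BRAINformat specification.

        import numpy as np
        import h5py

        with h5py.File("ephys_example.h5", "w") as f:
            # A managed-object-style group holding raw electrophysiology traces.
            raw = f.create_group("/data/ephys_raw")
            raw.attrs["object_type"] = "ManagedObject"       # assumed convention
            raw.create_dataset("voltage", data=np.random.randn(4, 1000),
                               compression="gzip")
            raw["voltage"].attrs["units"] = "mV"
            raw["voltage"].attrs["sampling_rate_hz"] = 30000.0

            # Electrode metadata stored as its own self-contained unit.
            electrodes = f.create_group("/metadata/electrodes")
            electrodes.create_dataset("positions_um", data=np.random.rand(4, 3))

            # Relationship attribute: record which electrode produced each channel
            # by pointing at the related object and storing an index map.
            raw["voltage"].attrs["relationship:recorded_by"] = "/metadata/electrodes"
            raw["voltage"].attrs["channel_to_electrode"] = np.arange(4)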

  20. Undergraduate medical academic performance is improved by scientific training.

    Science.gov (United States)

    Zhang, Lili; Zhang, Wei; Wu, Chong; Liu, Zhongming; Cai, Yunfei; Cao, Xingguo; He, Yushan; Liu, Guoxiang; Miao, Hongming

    2017-09-01

    The effect of scientific training on course learning in undergraduates is still controversial. In this study, we investigated the academic performance of undergraduate students with and without scientific training. The results show that scientific training improves students' test scores in general medical courses, such as biochemistry and molecular biology, cell biology, physiology, and even English. We classified scientific training into four levels. We found that literature reading could significantly improve students' test scores in general courses. Students who received scientific training, carried out experiments, and published articles performed better than their untrained counterparts in biochemistry and molecular biology examinations. The questionnaire survey demonstrated that the trained students were more confident of their course learning, and displayed more interest, motivation and capability in course learning. In summary, undergraduate academic performance is improved by scientific training. Our findings shed light on novel strategies in the management of undergraduate education in the medical school. © 2017 The International Union of Biochemistry and Molecular Biology, 45(5):379-384, 2017.

  1. DEVICE TECHNOLOGY. Nanomaterials in transistors: From high-performance to thin-film applications.

    Science.gov (United States)

    Franklin, Aaron D

    2015-08-14

    For more than 50 years, silicon transistors have been continuously shrunk to meet the projections of Moore's law but are now reaching fundamental limits on speed and power use. With these limits at hand, nanomaterials offer great promise for improving transistor performance and adding new applications through the coming decades. With different transistors needed in everything from high-performance servers to thin-film display backplanes, it is important to understand the targeted application needs when considering new material options. Here the distinction between high-performance and thin-film transistors is reviewed, along with the benefits and challenges to using nanomaterials in such transistors. In particular, progress on carbon nanotubes, as well as graphene and related materials (including transition metal dichalcogenides and X-enes), outlines the advances and further research needed to enable their use in transistors for high-performance computing, thin films, or completely new technologies such as flexible and transparent devices. Copyright © 2015, American Association for the Advancement of Science.

  2. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In terms of high-performance computation capabilities, GPUs deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  3. Leveraging Transcultural Enrollments to Enhance Application of the Scientific Method

    Science.gov (United States)

    Loudin, M.

    2013-12-01

    Continued growth of transcultural academic programs presents an opportunity for all of the students involved to improve utilization of the scientific method. Our own business success depends on how effectively we apply the scientific method, and so it is unsurprising that our hiring programs focus on three broad areas of capability among applicants which are strongly related to the scientific method. These are 1) ability to continually learn up-to-date earth science concepts, 2) ability to effectively and succinctly communicate in the English language, both oral and written, and 3) ability to employ behaviors that are advantageous with respect to the various phases of the scientific method. This third area is often the most difficult to develop, because neither so-called Western nor Eastern cultures encourage a suite of behaviors that are ideally suited. Generally, the acceptance of candidates into academic programs, together with subsequent high performance evidenced by grades, is a highly valid measure of continuous learning capability. Certainly, students for whom English is not a native language face additional challenges, but succinct and effective communication is an art which requires practice and development, regardless of native language. The ability to communicate in English is crucial, since it is today's lingua franca for both science and commerce globally. Therefore, we strongly support the use of frequent English written assignments and oral presentations as an integral part of all scientific academic programs. There is no question but that this poses additional work for faculty; nevertheless it is a key ingredient to the optimal development of students. No one culture has a monopoly with respect to behaviors that promote effective leveraging of the scientific method. For instance, the growing complexity of experimental protocols argues for a high degree of interdependent effort, which is more often associated with so-called Eastern than Western

  4. Undergraduate Medical Academic Performance is Improved by Scientific Training

    Science.gov (United States)

    Zhang, Lili; Zhang, Wei; Wu, Chong; Liu, Zhongming; Cai, Yunfei; Cao, Xingguo; He, Yushan; Liu, Guoxiang; Miao, Hongming

    2017-01-01

    The effect of scientific training on course learning in undergraduates is still controversial. In this study, we investigated the academic performance of undergraduate students with and without scientific training. The results show that scientific training improves students' test scores in general medical courses, such as biochemistry and…

  5. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Science.gov (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.
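
    The GBLCA itself is a tournament-inspired metaheuristic that the abstract does not specify in detail; the sketch below only shows the makespan objective such a scheduler minimizes, with a simple random-search baseline standing in for the metaheuristic. Task lengths and VM speeds are illustrative values, not data from the paper.

        import random

        def makespan(assignment, task_len_mi, vm_mips):
            """assignment[i] = VM index for task i; lengths in MI, speeds in MIPS."""
            finish = [0.0] * len(vm_mips)
            for task, vm in enumerate(assignment):
                finish[vm] += task_len_mi[task] / vm_mips[vm]
            return max(finish)

        def random_search(task_len_mi, vm_mips, iters=10_000, seed=0):
            rng = random.Random(seed)
            n, m = len(task_len_mi), len(vm_mips)
            best = [rng.randrange(m) for _ in range(n)]
            best_ms = makespan(best, task_len_mi, vm_mips)
            for _ in range(iters):
                cand = best[:]
                cand[rng.randrange(n)] = rng.randrange(m)    # move one task
                ms = makespan(cand, task_len_mi, vm_mips)
                if ms < best_ms:
                    best, best_ms = cand, ms
            return best, best_ms

        rng = random.Random(1)
        tasks = [rng.randint(1_000, 20_000) for _ in range(50)]   # task lengths (MI)
        vms = [500, 1000, 2000, 4000]                             # VM speeds (MIPS)
        schedule, ms = random_search(tasks, vms)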

  6. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produces a remarkable performance improvement rate on the makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  7. Precision ring rolling technique and application in high-performance bearing manufacturing

    Directory of Open Access Journals (Sweden)

    Hua Lin

    2015-01-01

    Full Text Available High-performance bearings have significant applications in many important industry fields, such as automobiles, precision machine tools, wind power, etc. Precision ring rolling is an advanced rotary forming technique for manufacturing high-performance seamless bearing rings, and it can thus improve the working life of bearings. In this paper, three kinds of precision ring rolling techniques adapted to different dimensional ranges of bearings are introduced: cold ring rolling for small-scale bearings, hot radial ring rolling for medium-scale bearings, and hot radial-axial ring rolling for large-scale bearings. The forming principles, technological features and forming equipment for the three kinds of precision ring rolling techniques are summarized, the technological development and industrial application in China are introduced, and the main technological development trends are described.

  8. Porting of Scientific Applications to Grid Computing on GridWay

    Directory of Open Access Journals (Sweden)

    J. Herrera

    2005-01-01

    Full Text Available The expansion and adoption of Grid technologies is hindered by the lack of a standard programming paradigm for porting existing applications among different environments. The Distributed Resource Management Application API (DRMAA) has been proposed to aid the rapid development and distribution of these applications across different Distributed Resource Management Systems. In this paper we describe an implementation of the DRMAA standard on a Globus-based testbed, and show its suitability for expressing typical scientific applications, like High-Throughput and Master-Worker applications. The DRMAA routines are supported by the functionality offered by the GridWay2 framework, which provides the runtime mechanisms needed for transparently executing jobs on a dynamic Grid environment based on Globus. As case studies, we consider the implementation with DRMAA of a bioinformatics application, a genetic algorithm and the NAS Grid Benchmarks.
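
    DRMAA bindings exist for several languages; as an illustration of the submit-and-wait pattern the abstract describes, the sketch below uses the Python drmaa module rather than the paper's implementation on the Globus/GridWay testbed. The command and arguments are placeholders.

        import drmaa

        def submit_and_wait(command, args):
            s = drmaa.Session()
            s.initialize()
            try:
                jt = s.createJobTemplate()
                jt.remoteCommand = command
                jt.args = args
                job_id = s.runJob(jt)           # hand the job to the underlying DRMS
                info = s.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
                s.deleteJobTemplate(jt)
                return info.exitStatus
            finally:
                s.exit()

        if __name__ == "__main__":
            status = submit_and_wait("/bin/sleep", ["5"])   # placeholder job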

  9. Historic Learning Approach for Auto-tuning OpenACC Accelerated Scientific Applications

    KAUST Repository

    Siddiqui, Shahzeb

    2015-04-17

    The performance optimization of scientific applications usually requires an in-depth knowledge of the hardware and software. A performance tuning mechanism is suggested to automatically tune OpenACC parameters to adapt to the execution environment on a given system. A historic-learning-based methodology is used to prune the parameter search space for a more efficient auto-tuning process. This approach is applied to tune the OpenACC gang and vector clauses for a better mapping of the compute kernels onto the underlying architecture. Our experiments show a significant performance improvement over the default compiler parameters and a drastic reduction in tuning time compared to a brute-force search-based approach.
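
    The paper's tuner targets OpenACC compilation; the Python sketch below only illustrates the historic-learning idea of seeding and pruning the gang/vector search space from a previously tuned, similar kernel. The run_kernel callable (compile, launch and time the kernel) is a placeholder assumption, as are the candidate values.

        import itertools
        import time

        GANGS = [64, 128, 256, 512, 1024]
        VECTORS = [32, 64, 128, 256]

        def neighbourhood(best, values, radius=1):
            i = values.index(best) if best in values else 0
            return values[max(0, i - radius): i + radius + 1]

        def autotune(run_kernel, history=None):
            """history: best (gang, vector) pair recorded for a similar kernel."""
            if history:
                # Historic learning: explore only around the previously good point.
                space = itertools.product(neighbourhood(history[0], GANGS),
                                          neighbourhood(history[1], VECTORS))
            else:
                space = itertools.product(GANGS, VECTORS)   # brute-force fallback
            best_cfg, best_t = None, float("inf")
            for gang, vector in space:
                t0 = time.perf_counter()
                run_kernel(gang, vector)        # placeholder: build, launch and time
                t = time.perf_counter() - t0
                if t < best_t:
                    best_cfg, best_t = (gang, vector), t
            return best_cfg, best_t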

  10. Scientific production and technological production: transforming a scientific paper into patent applications.

    Science.gov (United States)

    Dias, Cleber Gustavo; Almeida, Roberto Barbosa de

    2013-01-01

    In recent years Brazil has achieved a scientific output that is well recognized in the international scenario, in several areas of knowledge, as shown by the impact of its publications in important events and especially in indexed journals of wide circulation. On the other hand, the country does not seem to be moving in the same direction with regard to technological production and wealth creation from the established scientific development, and particularly from applied research. The present paper addresses this issue and discusses the main similarities and differences between a scientific paper and a patent application, in order to contribute to a better understanding of both types of documents and to help researchers choose and select the results with technological potential, decide what is appropriate for industrial protection, and foster new business opportunities for each technology that has been created.

  11. Application of secondary ion mass spectrometry for the characterization of commercial high performance materials

    International Nuclear Information System (INIS)

    Gritsch, M.

    2000-09-01

    Industry today offers a vast number of high-performance materials that have to meet the highest standards. Commercial high-performance materials, though often sold in large quantities, still require ongoing research and development to keep up with increasing needs and decreasing tolerances. Furthermore, a variety of materials on the market are not fully understood in their microstructure, in the way they react under application conditions, and in the mechanisms responsible for their degradation. Secondary Ion Mass Spectrometry (SIMS) is an analytical method that has been in commercial use for over 30 years. Its main advantages are its very high detection sensitivity (down to ppb), the ability to measure all elements with isotopic sensitivity, the ability to acquire laterally resolved images, and an inherent capability for depth profiling. These features make it an ideal tool for a wide field of applications within advanced materials science. The present work gives an introduction to the principles of SIMS and shows its successful application to the characterization of commercially used high-performance materials. Finally, a selected collection of my publications in reviewed journals illustrates the state of the art in applied materials research and development with dynamic SIMS. All publications focus on the application of dynamic SIMS to analytical questions arising during the production and improvement of high-performance materials. (author)

  12. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  13. Scientific production on the applicability of phenytoin in wound healing

    Directory of Open Access Journals (Sweden)

    Flávia Firmino

    2014-02-01

    Full Text Available Phenytoin is an anticonvulsant that has been used in wound healing. The objectives of this study were to describe how the scientific production presents the use of phenytoin as a healing agent and to discuss its applicability in wounds. A literature review and hierarchy analysis of evidence-based practices was performed. Eighteen articles were analyzed that tested the intervention in wounds such as leprosy ulcers, leg ulcers, diabetic foot ulcers, pressure ulcers, trophic ulcers, war wounds, burns, preparation of recipient graft areas, radiodermatitis and post-extraction of melanocytic nevi. Systemic use of phenytoin in the treatment of fistulas and the hypothesis of topical use in the treatment of vitiligo were also found. In conclusion, the topical use of phenytoin is scientifically evidenced. However, robust research is needed to support a protocol for the use of phenytoin as another option of healing agent in clinical practice.

  14. Getting high utilization of peak GFLOPS in real applications in the Cray X1

    Energy Technology Data Exchange (ETDEWEB)

    Levesque, J.M. [Cray Research, Inc., Los Alamos, NM (United States)

    2003-07-01

    This paper shows the advanced characteristics of the Cray X1 and discusses how they are used to achieve highly utilized GFLOPS on many real-world applications. On most MPP systems other than the Earth Simulator and the Cray Inc. X1, advanced scientific applications do not obtain a high percentage of peak performance, in some cases less than 2%. When only this small percentage of peak is attained on the processor, more individual processors are needed to achieve a TFLOP of sustained performance. Larger numbers of processors place a tremendous burden on the interconnect. Here again, MPPs other than the Earth Simulator and the X1 do not have the interconnect to support the increased number of processors. Combining low processor performance with insufficient scaling results in less than desired performance for many applications. (author)

  15. Getting high utilization of peak GFLOPS in real applications in the Cray X1

    International Nuclear Information System (INIS)

    Levesque, J.M.

    2003-01-01

    This paper shows the advanced characteristics of the Cray X1 and discusses how they are used to achieve highly utilized GFLOPS on many real-world applications. On most MPP systems other than the Earth Simulator and the Cray Inc. X1, advanced scientific applications do not obtain a high percentage of peak performance, in some cases less than 2%. When only this small percentage of peak is attained on the processor, more individual processors are needed to achieve a TFLOP of sustained performance. Larger numbers of processors place a tremendous burden on the interconnect. Here again, MPPs other than the Earth Simulator and the X1 do not have the interconnect to support the increased number of processors. Combining low processor performance with insufficient scaling results in less than desired performance for many applications. (author)

  16. Brazilian academic search filter: application to the scientific literature on physical activity.

    Science.gov (United States)

    Sanz-Valero, Javier; Ferreira, Marcos Santos; Castiel, Luis David; Wanden-Berghe, Carmina; Guilam, Maria Cristina Rodrigues

    2010-10-01

    To develop a search filter to retrieve scientific publications on physical activity from Brazilian academic institutions. The academic search filter consisted of the descriptor "exercise" combined, through the Boolean operator AND, with the names of the respective academic institutions, which were themselves connected by the operator OR. The MEDLINE search was performed with PubMed on 11/16/2008. The institutions were selected according to the classification from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for interuniversity agreements. A total of 407 references were retrieved, corresponding to about 0.9% of all articles about physical activity and 0.5% of the Brazilian academic publications indexed in MEDLINE on the search date. When compared with the manual search undertaken, the search filter (descriptor + institutional filter) showed a sensitivity of 99% and a specificity of 100%. The institutional search filter showed high sensitivity and specificity, and is applicable to other areas of knowledge in the health sciences. It is desirable that every Brazilian academic institution establish its "standard name/brand" in order to efficiently retrieve its scientific literature.
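
    Assembling such a filter is plain Boolean string construction; a rough sketch follows, with made-up institution names rather than the CAPES list used in the study.

```python
# Sketch of assembling an institutional search filter for PubMed:
# descriptor AND (institution-1 OR institution-2 OR ...).
# Institution strings here are illustrative, not the CAPES list from the study.
def build_filter(descriptor, institutions, field="Affiliation"):
    institution_block = " OR ".join(
        f'"{name}"[{field}]' for name in institutions
    )
    return f'"{descriptor}"[MeSH Terms] AND ({institution_block})'

if __name__ == "__main__":
    institutions = [
        "Universidade de Sao Paulo",
        "Universidade Federal do Rio de Janeiro",
        "Fundacao Oswaldo Cruz",
    ]
    query = build_filter("exercise", institutions)
    print(query)
    # The resulting string can be pasted into PubMed or passed to an
    # E-utilities client; retrieved counts then feed the sensitivity and
    # specificity comparison against a manual search.
```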

  17. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of the weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)

  18. High-performance heat pipes for heat recovery applications

    Science.gov (United States)

    Saaski, E. W.; Hartl, J. H.

    1980-01-01

    Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.

  19. Scientific Literacy of High School Students.

    Science.gov (United States)

    Lucas, Keith B.; Tulip, David F.

    This investigation was undertaken in order to establish the status of scientific literacy among three groups of secondary school students in four Brisbane, Australia high schools, and to reduce the apparent reticence of science teachers to evaluate students' achievement in the various dimensions of scientific literacy by demonstrating appropriate…

  20. Advanced scientific computational methods and their applications to nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of the weft connecting each realm of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the first issue, giving their overview and an introduction to continuum simulation methods. The finite element method is also reviewed as one of their applications. (T. Tanaka)

  1. First scientific application of the membrane cryostat technology

    Energy Technology Data Exchange (ETDEWEB)

    Montanari, David; Adamowski, Mark; Baller, Bruce R.; Barger, Robert K.; Chi, Edward C.; Davis, Ronald P.; Johnson, Bryan D.; Kubinski, Bob M.; Najdzion, John J.; Rucinski, Russel A.; Schmitt, Rich L.; Tope, Terry E. [Particle Physics Division, Fermilab, P.O. Box 500, Batavia, IL 60510 (United States); Mahoney, Ryan; Norris, Barry L.; Watkins, Daniel J. [Technical Division, Fermilab, P.O. Box 500, Batavia, IL 60510 (United States); McCluskey, Elaine G. [LBNE Project, Fermilab, P.O. Box 500, Batavia, IL 60510 (United States); Stewart, James [Physics Department, Brookhaven National Laboratory, P.O. Box 5000, Upton, NY 11973 (United States)

    2014-01-29

    We report on the design, fabrication, performance and commissioning of the first membrane cryostat to be used for a scientific application. The Long Baseline Neutrino Experiment (LBNE) has designed and fabricated a membrane cryostat prototype in collaboration with IHI Corporation (IHI). The original goals of the prototype were: to demonstrate the membrane cryostat technology in terms of thermal performance, feasibility for liquid argon, and leak tightness; to demonstrate that all the impurities can be removed from the vessel and the purity requirements achieved in a membrane cryostat without evacuation, using only a controlled gaseous argon purge; and to demonstrate that the purity requirements of the liquid argon can be achieved and maintained during filling, purification, and maintenance mode using the molecular sieve and copper filters from the Liquid Argon Purity Demonstrator (LAPD) R and D project. The purity requirement of a large liquid argon detector such as LBNE is contaminants below 200 parts per trillion oxygen equivalent. This paper gives the requirements, design, construction, and performance of the LBNE membrane cryostat prototype, with experience and results important to the development of the LBNE detector.

  2. Characterization of high performance silicon-based VMJ PV cells for laser power transmission applications

    Science.gov (United States)

    Perales, Mico; Yang, Mei-huan; Wu, Cheng-liang; Hsu, Chin-wei; Chao, Wei-sheng; Chen, Kun-hsien; Zahuranec, Terry

    2016-03-01

    Continuing improvements in the cost and power of laser diodes have been critical in launching the emerging fields of power over fiber (PoF) and laser power beaming. Laser power is transmitted either over fiber (for PoF) or through free space (power beaming), and is converted to electricity by photovoltaic cells designed to efficiently convert the laser light. MH GoPower's vertical multi-junction (VMJ) PV cell, designed for high-intensity photovoltaic applications, is fueling the emergence of this market by enabling unparalleled photovoltaic receiver flexibility in voltage, cell size, and power output. Our research examined the use of the VMJ PV cell for laser power transmission applications. We fully characterized the performance of the VMJ PV cell under various laser conditions, including multiple near-IR wavelengths and light intensities up to tens of watts per cm². Results indicated VMJ PV cell efficiency over 40% for 9xx nm wavelengths, at laser power densities near 30 W/cm². We also investigated the impact of the physical dimensions (length, width, and height) of the VMJ PV cell on its performance, showing similarly high performance across a wide range of cell dimensions. We then evaluated the VMJ PV cell performance within the power-over-fiber application, examining the cell's effectiveness in receiver packages that deliver target voltage, intensity, and power levels. By designing and characterizing multiple receivers, we illustrated techniques for packaging the VMJ PV cell to achieve high performance (> 30%), high power (> 185 W), and target voltages for power-over-fiber applications.

  3. Highly-reliable laser diodes and modules for spaceborne applications

    Science.gov (United States)

    Deichsel, E.

    2017-11-01

    Laser applications are becoming more and more interesting for contemporary missions such as Earth observation and optical communication in space. One of these applications is light detection and ranging (LIDAR), which offers huge scientific potential in future missions. The Nd:YAG solid-state laser of such a LIDAR system is optically pumped using 808 nm emitting pump sources based on semiconductor laser diodes in quasi-continuous wave (qcw) operation. Reliable and efficient laser diodes with increased output powers are therefore an important requirement for a spaceborne LIDAR system. In the past, many tests were performed regarding the performance and lifetime of such laser diodes. There have also been studies for spaceborne applications, but a test with long operation times at high powers and statistical relevance is pending. Other applications, such as science packages (e.g. Raman spectroscopy) on planetary rovers, also require reliable high-power light sources. Typically, fiber-coupled laser diode modules are used for such applications. Besides high reliability and lifetime, designs compatible with the harsh environmental conditions must be taken into account. Mechanical loads, such as shock or strong vibration, are expected due to take-off or landing procedures. Many temperature cycles with high change rates and large temperature differences must be taken into account due to sun-shadow effects in planetary orbits. Cosmic radiation has a strong impact on optical components and must also be taken into account. Lastly, hermetic sealing must be considered, since vacuum can have disadvantageous effects on optoelectronic components.

  4. Performance analysis of InSb based QWFET for ultra high speed applications

    International Nuclear Information System (INIS)

    Subash, T. D.; Gnanasekaran, T.; Divya, C.

    2015-01-01

    An indium antimonide based QWFET (quantum well field effect transistor) with a gate length down to 50 nm has been designed and investigated for the first time for L-band radar applications at 230 GHz. The QWFETs are designed at the high-performance node of the International Technology Roadmap for Semiconductors (ITRS) drive current requirements (Semiconductor Industry Association 2010). The performance of the device is investigated using the SYNOPSYS TCAD software. The InSb based QWFET could be a promising device technology for very low power and ultra-high speed performance with 5–10 times lower DC power dissipation. (semiconductor devices)

  5. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The implemented solution was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  6. Accelerating scientific discovery : 2007 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, P.; Dave, P.; Drugan, C.

    2008-11-14

    As a gateway for scientific discovery, the Argonne Leadership Computing Facility (ALCF) works hand in hand with the world's best computational scientists to advance research in a diverse span of scientific domains, ranging from chemistry, applied mathematics, and materials science to engineering physics and life sciences. Sponsored by the U.S. Department of Energy's (DOE) Office of Science, researchers are using the IBM Blue Gene/L supercomputer at the ALCF to study and explore key scientific problems that underlie important challenges facing our society. For instance, a research team at the University of California-San Diego/SDSC is studying the molecular basis of Parkinson's disease. The researchers plan to use the knowledge they gain to discover new drugs to treat the disease and to identify risk factors for other diseases that are equally prevalent. Likewise, scientists from Pratt & Whitney are using the Blue Gene to understand the complex processes within aircraft engines. Expanding our understanding of jet engine combustors is the secret to improved fuel efficiency and reduced emissions. Lessons learned from the scientific simulations of jet engine combustors have already led Pratt & Whitney to newer designs with unprecedented reductions in emissions, noise, and cost of ownership. ALCF staff members provide in-depth expertise and assistance to those using the Blue Gene/L and optimizing user applications. Both the Catalyst and the Applications Performance Engineering and Data Analytics (APEDA) teams support the users' projects. In addition to working with scientists running experiments on the Blue Gene/L, we have become a nexus for the broader global community. In partnership with the Mathematics and Computer Science Division at Argonne National Laboratory, we have created an environment where the world's most challenging computational science problems can be addressed. Our expertise in high-end scientific computing enables us to provide

  7. High Performance Multi-GPU SpMV for Multi-component PDE-Based Applications

    KAUST Repository

    Abdelfattah, Ahmad; Ltaief, Hatem; Keyes, David E.

    2015-01-01

    -block structure. While these optimizations are important for high performance dense kernel executions, they are even more critical when dealing with sparse linear algebra operations. The most time-consuming phase of many multicomponent applications, such as models

  8. High performance parallel computing of flows in complex geometries: II. Applications

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F; Poinsot, T

    2009-01-01

    Present regulations in terms of pollutant emissions and noise, together with economic constraints, require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than only isolated components. However, these aspects are still not well taken into account by the numerical approaches, nor well understood, whatever the design stage considered. The main challenge is essentially the computational requirements incurred by such complex systems if they are to be simulated using supercomputers. This paper shows how these new challenges can be addressed by using parallel computing platforms for distinct elements of the more complex systems encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flow in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented for these complex industrial applications.

  9. Expert opinions and scientific evidence for colonoscopy key performance indicators.

    Science.gov (United States)

    Rees, Colin J; Bevan, Roisin; Zimmermann-Fraedrich, Katharina; Rutter, Matthew D; Rex, Douglas; Dekker, Evelien; Ponchon, Thierry; Bretthauer, Michael; Regula, Jaroslaw; Saunders, Brian; Hassan, Cesare; Bourke, Michael J; Rösch, Thomas

    2016-12-01

    Colonoscopy is a widely performed procedure with procedural volumes increasing annually throughout the world. Many procedures are now performed as part of colorectal cancer screening programmes. Colonoscopy should be of high quality and measures of this quality should be evidence based. New UK key performance indicators and quality assurance standards have been developed by a working group with consensus agreement on each standard reached. This paper reviews the scientific basis for each of the quality measures published in the UK standards.

  10. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  11. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.
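
    Among the combinatorial kernels named in the two CSCAPES records above, graph coloring is the easiest to illustrate; the sketch below shows the sequential greedy baseline, assuming a simple adjacency-dict graph (the institute's distributed-memory parallel algorithms are considerably more involved).

```python
# Minimal sequential greedy graph coloring, the baseline that parallel
# distance-1 coloring algorithms aim to match in quality while running
# on distributed-memory machines.
def greedy_coloring(adjacency):
    """adjacency: dict mapping vertex -> iterable of neighbour vertices."""
    colors = {}
    for v in adjacency:                          # visit order strongly affects the color count
        forbidden = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in forbidden:                    # smallest color not used by any neighbour
            c += 1
        colors[v] = c
    return colors

if __name__ == "__main__":
    # A 5-cycle needs 3 colors; greedy finds a valid (not necessarily optimal) assignment.
    graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
    print(greedy_coloring(graph))
```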

  12. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2015-01-01

    applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved

  13. DEVELOPMENT OF NEW VALVE STEELS FOR APPLICATION IN HIGH PERFORMANCE ENGINES

    Directory of Open Access Journals (Sweden)

    Alexandre Bellegard Farina

    2013-12-01

    Full Text Available UNS N07751 and UNS N07080 alloys are commonly applied to automotive valve production for high-performance internal combustion engines. These alloys present high hot mechanical strength, resistance to oxidation, corrosion and creep, and microstructural stability. However, they present low wear resistance and a high cost due to their high nickel contents. In this work we present the development of two new Ni-based alloys for application in high-performance automotive valves as an alternative to the UNS N07751 and UNS N07080 alloys. The newly developed alloys are based on a high nickel-chromium austenitic matrix with a dispersion of γ’ and γ’’ phases and containing different NbC contents. Due to the reduced nickel content of the developed alloys in comparison with the alloys currently in use, the new alloys present an economical advantage for the substitution of the UNS N07751 and UNS N07080 alloys.

  14. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  15. Energy Smart Management of Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow; Rotem, Dron; Tsao, Shih-Chiang

    2009-04-12

    Scientific data centers comprised of high-powered computing equipment and large-capacity disk storage systems consume a considerable amount of energy. Dynamic power management (DPM) techniques are commonly used for saving energy in disk systems. These involve powering down disks that exhibit long idle periods and placing them in standby mode. A file request from a disk in standby mode will incur both energy and performance penalties, as it takes energy (and time) to spin up the disk before it can serve a file. For this reason, DPM has to decide when to transition the disk into standby mode such that the energy saved is greater than the energy needed to spin it up again and the performance penalty is tolerable. The length of the idle period after which the DPM decides to power down a disk is called the idleness threshold. In this paper, we study both analytically and experimentally dynamic power management techniques that save energy subject to performance constraints on file access costs. Based on observed workloads of scientific applications and disk characteristics, we provide a methodology for determining file assignment to disks and computing idleness thresholds that result in significant improvements to the energy saved by existing DPM solutions while meeting response time constraints. We validate our methods with simulations that use traces taken from scientific applications.
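
    The spin-down decision described here reduces to a break-even calculation; the sketch below uses placeholder power and energy figures (not the disk characteristics from the paper) to show the arithmetic.

```python
# Break-even idleness threshold for disk power management (illustrative numbers):
# powering down pays off only if the expected idle period exceeds the time at
# which standby savings equal the spin-down plus spin-up energy cost.
def break_even_threshold(p_idle_w, p_standby_w, e_down_j, e_up_j):
    """Return the idle time (s) beyond which standby saves energy."""
    assert p_idle_w > p_standby_w
    return (e_down_j + e_up_j) / (p_idle_w - p_standby_w)

def should_spin_down(expected_idle_s, threshold_s):
    return expected_idle_s > threshold_s

if __name__ == "__main__":
    # Placeholder disk characteristics, not values from the paper.
    t_be = break_even_threshold(p_idle_w=8.0, p_standby_w=1.0, e_down_j=35.0, e_up_j=135.0)
    print(f"break-even idleness threshold: {t_be:.1f} s")
    print("spin down for a 60 s idle period?", should_spin_down(60.0, t_be))
```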

  16. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware – in the form of Field Programmable Gate Arrays (FPGAs) – in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  17. High-Performance MIM Capacitors for a Secondary Power Supply Application

    Directory of Open Access Journals (Sweden)

    Jiliang Mu

    2018-02-01

    Full Text Available Microstructure is important to the development of energy devices with high performance. In this work, a three-dimensional Si-based metal-insulator-metal (MIM) capacitor has been reported, which is fabricated by microelectromechanical systems (MEMS) technology. Area enlargement is achieved by forming deep trenches in a silicon substrate using the deep reactive ion etching method. The results indicate that an area of 2.45 × 10³ mm² can be realized in the deep trench structure with a high aspect ratio of 30:1. Subsequently, a dielectric Al2O3 layer and electrode W/TiN layers are deposited by atomic layer deposition. The obtained capacitor has superior performance, such as a high breakdown voltage (34.1 V), a moderate energy density (≥1.23 mJ/cm² per unit planar area), a high breakdown electric field (6.1 ± 0.1 MV/cm), a low leakage current (10⁻⁷ A/cm² at 22.5 V), and a low quadratic voltage coefficient of capacitance (VCC) (≤63.1 ppm/V²). In addition, the device’s performance has been theoretically examined. The results show that the high energy supply and small leakage current can be attributed to the Poole–Frenkel emission in the high-field region and the trap-assisted tunneling in the low-field region. The reported capacitor has potential application as a secondary power supply.

  18. High performance graphics processors for medical imaging applications

    International Nuclear Information System (INIS)

    Goldwasser, S.M.; Reynolds, R.A.; Talton, D.A.; Walsh, E.S.

    1989-01-01

    This paper describes a family of high-performance graphics processors with special hardware for interactive visualization of 3D human anatomy. The basic architecture expands to multiple parallel processors, each processor using pipelined arithmetic and logical units for high-speed rendering of Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) data. User-selectable display alternatives include multiple 2D axial slices, reformatted images in sagittal or coronal planes and shaded 3D views. Special facilities support applications requiring color-coded display of multiple datasets (such as radiation therapy planning), or dynamic replay of time-varying volumetric data (such as cine-CT or gated MR studies of the beating heart). The current implementation is a single processor system which generates reformatted images in true real time (30 frames per second), and shaded 3D views in a few seconds per frame. It accepts full-scale medical datasets in their native formats, so that minimal preprocessing delay exists between data acquisition and display.

  19. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  20. High speed global shutter image sensors for professional applications

    Science.gov (United States)

    Wu, Xu; Meynants, Guy

    2015-04-01

    Global shutter imagers eliminate the motion artifacts of rolling shutter imagers and thereby extend their use to miscellaneous applications such as machine vision, 3D imaging, medical imaging and space. A low-noise global shutter pixel requires more than one non-light-sensitive memory element to reduce the read noise, but a larger memory area reduces the fill factor of the pixels. Modern micro-lens technology can compensate for this fill-factor loss. Backside illumination (BSI) is another popular technique to improve the pixel fill factor, but some pixel architectures may not reach sufficient shutter efficiency with backside illumination. Non-light-sensitive memory elements make fabrication with BSI possible. Machine vision applications such as fast inspection systems, medical imaging such as 3D medical imaging, and scientific applications always ask for high-frame-rate global shutter image sensors. Thanks to CMOS technology, fast analog-to-digital converters (ADCs) can be integrated on chip. Dual correlated double sampling (CDS) with on-chip ADCs and a high-rate digital data interface reduces the read noise and allows more on-chip operation control. As a result, a global shutter imager with a digital interface is a very popular solution for applications with high performance and high frame rate requirements. In this paper we review the global shutter architectures developed at CMOSIS, discuss their optimization process and compare their performances after fabrication.

  1. [Performance analysis of scientific researchers in biomedicine].

    Science.gov (United States)

    Gamba, Gerardo

    2013-01-01

    There are no data about the performance of scientific researchers in biomedicine in our environment that can be used by individual subjects to compare their performance with that of their peers. Using the Scopus browser, the following data on 115 scientific researchers in biomedicine were obtained: current institution, number of articles published, place of the author within the author list (first, last or sole author) for each article, total number of citations, percentage of citations due to the most cited paper, and h-index. Results were analyzed with descriptive statistics and simple linear regressions. Most of the scientific researchers in the sample are from the National Institutes of the Health Ministry or from research institutes or faculties of the Universidad Nacional Autónoma de México. The resulting figures characterize the productivity of scientific researchers in biomedicine in Mexico City and can be used to compare the productivity of individual subjects with that of their peers.
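
    Of the indicators listed in this record, the h-index is the easiest to recompute; a short sketch of the standard definition, with hypothetical citation counts, is given below.

```python
# h-index: the largest h such that the author has h papers with >= h citations each.
def h_index(citations):
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cited, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    # Hypothetical citation counts for one researcher's papers.
    print(h_index([42, 17, 9, 6, 6, 3, 1, 0]))   # -> 5
```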

  2. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  3. Training Elementary Teachers to Prepare Students for High School Authentic Scientific Research

    Science.gov (United States)

    Danch, J. M.

    2017-12-01

    The Woodbridge Township, New Jersey school district has a 4-year high school Science Research program that depends on the enrollment of students with the prerequisite skills to conduct authentic scientific research at the high school level. A multifaceted approach to training elementary teachers in the methods of scientific investigation, data collection and analysis, and communication of results was undertaken in 2017. Teachers of predominantly grades 4 and 5 participated in hands-on workshops at a Summer Tech Academy, an EdCamp, a District Inservice Day and a series of in-class workshops for teachers and students together. Aspects of the instruction for each of these activities were facilitated by high school students currently enrolled in the High School Science Research Program. Much of the training centered on a "Learning With Students" model in which teachers and their students simultaneously learn to perform inquiry activities and conduct scientific research, fostering inquiry as it is meant to be: participants produce original data and are not merely working to obtain previously determined results.

  4. Development of high performance scientific components for interoperability of computing packages

    Energy Technology Data Exchange (ETDEWEB)

    Gulabani, Teena Pratap [Iowa State Univ., Ames, IA (United States)

    2008-01-01

    Three major high-performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each of these packages. A chemistry algorithm is hard and time-consuming to develop; integration of large quantum chemistry packages will allow resource sharing and thus avoid reinvention of the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.
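
    The plug-and-play component idea can be gestured at with a toy example: two "packages" implement one agreed energy-evaluation port and a driver composes them additively in QM/MM style. This is a conceptual Python sketch, not the CCA machinery or the NWChem, GAMESS and MPQC codes themselves.

```python
# Toy illustration of component interoperability: two "packages" implement the
# same energy-evaluation port, and a driver combines them additively (QM region
# plus MM environment). Conceptual sketch only, not the CCA framework.
from abc import ABC, abstractmethod

class EnergyPort(ABC):
    @abstractmethod
    def energy(self, coords):            # coords: list of (x, y, z) tuples
        ...

class ToyQMComponent(EnergyPort):
    def energy(self, coords):
        # stand-in for a quantum-mechanical energy of the active region
        return -10.0 * len(coords)

class ToyMMComponent(EnergyPort):
    def energy(self, coords):
        # stand-in for a classical force-field energy of the environment
        return -0.5 * len(coords)

def qmmm_energy(qm: EnergyPort, mm: EnergyPort, qm_region, mm_region):
    """Additive QM/MM coupling: E = E_QM(active) + E_MM(environment)."""
    return qm.energy(qm_region) + mm.energy(mm_region)

if __name__ == "__main__":
    active = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
    environment = [(5.0, 0.0, 0.0)] * 100
    print(qmmm_energy(ToyQMComponent(), ToyMMComponent(), active, environment))
```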

  5. Applications of artificial intelligence to scientific research

    Science.gov (United States)

    Prince, Mary Ellen

    1986-01-01

    Artificial intelligence (AI) is a growing field which is just beginning to make an impact on disciplines other than computer science. While a number of military and commercial applications have been undertaken in recent years, few attempts have been made to apply AI techniques to basic scientific research. There is no inherent reason for the discrepancy: the characteristics of a problem, rather than its domain, determine whether or not it is suitable for an AI approach. Expert systems, intelligent tutoring systems, and learning programs are examples of theoretical topics which can be applied to certain areas of scientific research. Further research and experimentation should eventually make it possible for computers to act as intelligent assistants to scientists.

  6. Load Balancing Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Pearce, Olga Tkachyshyn [Texas A & M Univ., College Station, TX (United States)

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
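
    The waiting cost described here is commonly summarized by a single imbalance ratio; the generic sketch below computes it and applies a simple rebalance trigger, and is not the evaluation model developed in the dissertation.

```python
# Load imbalance is commonly quantified as max_load / mean_load: under SPMD
# synchronization every processor waits for the most loaded one, so runtime
# follows the maximum rather than the mean. Generic illustration only.
def imbalance_factor(loads):
    mean = sum(loads) / len(loads)
    return max(loads) / mean if mean > 0 else 1.0

def should_rebalance(loads, tolerance=1.10, rebalance_cost_fraction=0.02):
    """Rebalance when the projected waiting time exceeds the (amortized) cost
    of running the load balancer itself."""
    excess = imbalance_factor(loads) - 1.0       # fraction of time wasted waiting
    return excess > max(tolerance - 1.0, rebalance_cost_fraction)

if __name__ == "__main__":
    per_rank_work = [98, 101, 97, 140, 99, 102]   # hypothetical work units per processor
    print("imbalance factor:", round(imbalance_factor(per_rank_work), 3))
    print("rebalance now?", should_rebalance(per_rank_work))
```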

  7. Measurement of Scientific Productivity in R&D Sector: Changing paradigm.

    Science.gov (United States)

    Kumar, Abhishek; Srivastava, Alpana; Kumar, R P Jeevan; Tiwari, Rajesh K

    2017-01-01

    The measurement of scientific productivity is demanded by policy makers to ensure judicious utilization of the massive R&D budgets allocated. A huge mass of intellectual assets is employed which, after investment in manpower, infrastructure and laboratory consumables, is expected to deliver a major outcome that contributes towards building the nation's economy. Scientific productivity has traditionally been measured only through publications or patents. Patents are earmarked as a strong parameter for innovation generation; the World Intellectual Property Organization compiled data on applications for the top 20 patent offices, in which Australia, Brazil and Canada occupied the top 3 positions. India ranked 9th, with total patent applications rising from 39762 (2010) to 42854 (2014), i.e. 15%, whereas it contributes around 2% of patents (innovative productivity) on a global scale. Many studies within the scientific and academic domains have addressed the measurement of scientific performance; however, the development of productivity indicators and the calculation of Scientific Productivity (SP) as a holistic evaluation system remains a significant demand. SP analysis is a herculean task; this review identifies significant factors towards fabricating an effective measurement engine, in a holistic manner, viable for both the individual and the organization, which are supplementary to each other. This review projects the significance of a performance measurement system in R&D through the identification and standardization of key parameters. It also emphasizes the inclusion of standardized parameters, effective for performance measurement, applicable to scientists, technical staff, as well as the laboratory as a facility. This review aims at providing insight to evaluators, policy makers and high-level scientific panels, to stimulate scientific intellects on the identified indicators so that their work proceeds to generate productive outcomes contributing to economic growth.

  8. Performance of large-scale scientific applications on the IBM ASCI Blue-Pacific system

    International Nuclear Information System (INIS)

    Mirin, A.

    1998-01-01

    The IBM ASCI Blue-Pacific System is a scalable, distributed/shared memory architecture designed to reach multi-teraflop performance. The IBM SP pieces together a large number of nodes, each having a modest number of processors. The system is designed to accommodate a mixed programming model as well as a pure message-passing paradigm. We examine a number of applications on this architecture and evaluate their performance and scalability

  9. Application of Text Analytics to Extract and Analyze Material–Application Pairs from a Large Scientific Corpus

    Directory of Open Access Journals (Sweden)

    Nikhil Kalathil

    2018-01-01

    Full Text Available When assessing the importance of materials (or other components) to a given set of applications, machine analysis of a very large corpus of scientific abstracts can provide an analyst a base of insights to develop further. The use of text analytics reduces the time required to conduct an evaluation, while allowing analysts to experiment with a multitude of different hypotheses. Because the scope and quantity of metadata analyzed can, and should, be large, any divergence between what a human analyst determines and what the text analysis shows provides a prompt for the human analyst to reassess any preliminary findings. In this work, we have successfully extracted material–application pairs and ranked them on their importance. This method provides a novel way to map scientific advances in a particular material to the application for which it is used. Approximately 438,000 titles and abstracts of scientific papers published from 1992 to 2011 were used to examine 16 materials. This analysis used coclustering text analysis to associate individual materials with specific clean energy applications, evaluate the importance of materials to specific applications, and assess their importance to clean energy overall. Our analysis reproduced the judgments of experts in assigning material importance to applications. The validated methods were then used to map the replacement of one material with another material in a specific application (batteries).
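
    The coclustering step can be sketched with off-the-shelf tools; the snippet below runs scikit-learn's SpectralCoclustering on a four-abstract toy corpus (the study used roughly 438,000 real abstracts and a more elaborate pipeline), merely to show materials and applications falling into the same bicluster.

```python
# Sketch of the coclustering step: build a document-term matrix from abstracts
# and bicluster it so that documents and terms group together. Toy corpus only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import SpectralCoclustering

abstracts = [
    "lithium cathode improves battery energy density",
    "silicon anode cycling stability in battery cells",
    "perovskite absorber boosts solar cell efficiency",
    "thin film solar cell with improved absorber layer",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)            # documents x terms

model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(X)

terms = vectorizer.get_feature_names_out()
for k in range(2):
    docs = model.rows_[k].nonzero()[0]
    words = [terms[i] for i in model.columns_[k].nonzero()[0]]
    print(f"bicluster {k}: docs {list(docs)} terms {words}")
```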

  10. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    International Nuclear Information System (INIS)

    Ruebel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat

    2008-01-01

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system

  11. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.
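
    The combination of histogram-based display and query-driven subsetting described in the two records above can be imitated crudely with NumPy on toy particle data: bin each variable for the coordinate-plot view, then apply a compound range query as a boolean mask (standing in for a bitmap-index query) to extract the particles of interest.

```python
# Crude imitation of histogram-based parallel coordinates plus query-driven
# subsetting: histogram each variable for display, and use a compound range
# query (a boolean mask instead of a bitmap index) to pull the subset.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
particles = {
    "x":  rng.uniform(0.0, 1.0, n),
    "px": rng.normal(0.0, 1.0, n),           # toy momentum component
    "energy": rng.exponential(5.0, n),        # toy energy in arbitrary units
}

# Per-variable histograms, the summaries a histogram-based parallel
# coordinates view would draw instead of 100k individual polylines.
histograms = {name: np.histogram(values, bins=32) for name, values in particles.items()}

# Compound range query, e.g. "high-energy particles in the front of the beam".
mask = (particles["energy"] > 20.0) & (particles["x"] > 0.8)
subset = {name: values[mask] for name, values in particles.items()}

print("selected", int(mask.sum()), "of", n, "particles")
print("energy bins (first 5 counts):", histograms["energy"][0][:5])
```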

  12. Scientific applications and numerical algorithms on the midas multiprocessor system

    International Nuclear Information System (INIS)

    Logan, D.; Maples, C.

    1986-01-01

    The MIDAS multiprocessor system is a multi-level, hierarchical structure designed at the Advanced Computer Architecture Laboratory of the University of California's Lawrence Berkeley Laboratory. A two-stage, 11-processor system has been operational for over a year and is currently undergoing expansion. It has been employed to investigate the performance of different methods of decomposing various problems and algorithms into a multiprocessor environment. The results of such tests on a variety of applications, such as scientific data analysis, Monte Carlo calculations, and image processing, are discussed. Often such decompositions involve investigating the parallel structure of fundamental algorithms. Several basic algorithms dealing with random number generation, matrix diagonalization, fast Fourier transforms, and finite element methods for solving partial differential equations are also discussed. The performance and projected extensibility of these decompositions on the MIDAS system are reported.

  13. Improving Performances in the Public Sector: The Scientific ...

    African Journals Online (AJOL)

    Improving Performances in the Public Sector: The Scientific Management Theory ... adopts the principles for enhanced productivity, efficiency and the attainment of ... of the public sector, as observed and reported by several scholars over time.

  14. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany, a forum to discuss the latest advancements in parallel tools.

  15. High Performance Object-Oriented Scientific Programming in Fortran 90

    Science.gov (United States)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

    We illustrate how Fortran 90 supports object-oriented concepts by the example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modelling capabilities used for scientific programming are comparably powerful.

  16. An ontology model for execution records of Grid scientific applications

    NARCIS (Netherlands)

    Baliś, B.; Bubak, M.

    2008-01-01

    Records of past application executions are particularly important in the case of loosely-coupled, workflow driven scientific applications which are used to conduct in silico experiments, often on top of Grid infrastructures. In this paper, we propose an ontology-based model for storing and querying

  17. 78 FR 52760 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2013-08-26

    ... invite comments on the question of whether instruments of equivalent scientific value, for the purposes... platforms based on self-assembled DNA nanostructures for studying cell biology. DNA nanostructures will be... 23, 2013. Docket Number: 13-033. Applicant: University of Pittsburgh School of Medicine, 3500 Terrace...

  18. Scientific Approach for Optimising Performance, Health and Safety in High-Altitude Observatories

    Science.gov (United States)

    Böcker, Michael; Vogy, Joachim; Nolle-Gösser, Tanja

    2008-09-01

    The ESO coordinated study “Optimising Performance, Health and Safety in High-Altitude Observatories” is based on a psychological approach using a questionnaire for data collection and assessment of high-altitude effects. During 2007 and 2008, data from 28 staff and visitors involved in APEX and ALMA were collected and analysed, and the first results of the study are summarised. While there is a lot of information about biomedical changes at high altitude, relatively few studies have focussed on psychological changes, for example with respect to the performance of mental tasks, safety consciousness and emotions. Both biomedical and psychological changes are relevant factors in occupational safety and health. The results of the questionnaire on safety, health and performance issues demonstrate that the working conditions at high altitude are less detrimental than expected.

  19. Strategy Guideline: High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  20. Core-Shell Columns in High-Performance Liquid Chromatography: Food Analysis Applications

    OpenAIRE

    Preti, Raffaella

    2016-01-01

    The increased separation efficiency provided by the new technology of columns packed with core-shell particles in high-performance liquid chromatography (HPLC) has resulted in their widespread diffusion in several analytical fields: pharmaceutical, biological, environmental, and toxicological. The present paper presents their most recent applications in food analysis. Their use has proved to be particularly advantageous for the determination of compounds at trace levels or when a large am...

  1. Lifetime laser damage performance of β-Ga2O3 for high power applications

    Directory of Open Access Journals (Sweden)

    Jae-Hyuck Yoo

    2018-03-01

    Gallium oxide (Ga2O3) is an emerging wide bandgap semiconductor with potential applications in power electronics and high power optical systems where gallium nitride and silicon carbide have already demonstrated unique advantages compared to gallium arsenide and silicon-based devices. Establishing the stability and breakdown conditions of these next-generation materials is critical to assessing their potential performance in devices subjected to large electric fields. Here, using systematic laser damage performance tests, we establish that β-Ga2O3 has the highest lifetime optical damage performance of any conductive material measured to date, above 10 J/cm2 (1.4 GW/cm2). This has direct implications for its use as an active component in high power laser systems and may give insight into its utility for high-power switching applications. Both heteroepitaxial and bulk β-Ga2O3 samples were benchmarked against a heteroepitaxial gallium nitride sample, revealing an order of magnitude higher optical lifetime damage threshold for β-Ga2O3. Photoluminescence and Raman spectroscopy results suggest that the exceptional damage performance of β-Ga2O3 is due to lower absorptive defect concentrations and reduced epitaxial stress.

  2. Lifetime laser damage performance of β -Ga2O3 for high power applications

    Science.gov (United States)

    Yoo, Jae-Hyuck; Rafique, Subrina; Lange, Andrew; Zhao, Hongping; Elhadj, Selim

    2018-03-01

    Gallium oxide (Ga2O3) is an emerging wide bandgap semiconductor with potential applications in power electronics and high power optical systems where gallium nitride and silicon carbide have already demonstrated unique advantages compared to gallium arsenide and silicon-based devices. Establishing the stability and breakdown conditions of these next-generation materials is critical to assessing their potential performance in devices subjected to large electric fields. Here, using systematic laser damage performance tests, we establish that β-Ga2O3 has the highest lifetime optical damage performance of any conductive material measured to date, above 10 J/cm2 (1.4 GW/cm2). This has direct implications for its use as an active component in high power laser systems and may give insight into its utility for high-power switching applications. Both heteroepitaxial and bulk β-Ga2O3 samples were benchmarked against a heteroepitaxial gallium nitride sample, revealing an order of magnitude higher optical lifetime damage threshold for β-Ga2O3. Photoluminescence and Raman spectroscopy results suggest that the exceptional damage performance of β-Ga2O3 is due to lower absorptive defect concentrations and reduced epitaxial stress.

  3. I/O Performance Characterization of Lustre and NASA Applications on Pleiades

    Science.gov (United States)

    Saini, Subhash; Rappleye, Jason; Chang, Johnny; Barker, David Peter; Biswas, Rupak; Mehrotra, Piyush

    2012-01-01

    In this paper we study the performance of the Lustre file system using five scientific and engineering applications representative of the NASA workload on large-scale supercomputing systems such as NASA's Pleiades. In order to facilitate the collection of Lustre performance metrics, we have developed a software tool that exports a wide variety of client and server-side metrics using SGI's Performance Co-Pilot (PCP), and generates a human-readable report on key metrics at the end of a batch job. These performance metrics are (a) amount of data read and written, (b) number of files opened and closed, and (c) remote procedure call (RPC) size distribution (4 KB to 1024 KB, in powers of 2) for I/O operations. RPC size distribution measures the efficiency of the Lustre client and can pinpoint problems such as small write sizes, disk fragmentation, etc. These extracted statistics are useful in determining the I/O pattern of the application and can assist in identifying possible improvements for users' applications. Information on the number of file operations enables a scientist to optimize the I/O performance of their applications. The amount of I/O data helps users choose the optimal stripe size and stripe count to enhance I/O performance. In this paper, we demonstrate the usefulness of this tool on Pleiades for five production-quality NASA scientific and engineering applications. We compare the latency of read and write operations under Lustre to that with NFS by tracing system calls and signals. We also investigate the read and write policies and study the effect of page cache size on I/O operations. We examine the performance impact of Lustre stripe size and stripe count along with a performance evaluation of file-per-process and single-shared-file access by all the processes for the NASA workload using the parameterized IOR benchmark.
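
    The power-of-two RPC size distribution mentioned above is easy to picture with a short sketch. The Python fragment below (an illustration only, not the PCP-based tool from the paper) bins a list of I/O request sizes into the 4 KB to 1024 KB buckets.

      # Illustrative only: bin I/O request sizes into power-of-two buckets
      # (4 KB .. 1024 KB), mirroring the RPC size distribution described above.
      from collections import Counter

      def rpc_size_histogram(request_sizes_bytes):
          bins = [2 ** k * 1024 for k in range(2, 11)]   # 4 KB, 8 KB, ..., 1024 KB
          hist = Counter()
          for size in request_sizes_bytes:
              # Each request is counted in the smallest bucket that can hold it.
              bucket = next((b for b in bins if size <= b), bins[-1])
              hist[bucket] += 1
          return {f"{b // 1024} KB": hist[b] for b in bins}

      print(rpc_size_histogram([4096, 65536, 1048576, 300000]))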

  4. Predicting environmental aspects of CCSR leachates through the application of scientifically valid leaching protocols

    International Nuclear Information System (INIS)

    Hassett, D.J.

    1993-01-01

    The disposal of solid wastes from energy production, particularly solid wastes from coal conversion processes, requires a thorough understanding of the waste material as well as the disposal environment. Many coal conversion solid residues (CCSRs) have chemical, mineralogical, and physical properties advantageous for use as engineering construction materials and in other industrial applications. If disposal is to be the final disposition of CCSRs from any source, the very properties that can make ash useful also contribute to behavior that must be understood for scientifically logical and environmentally responsible disposal. This paper describes the application of scientifically valid leaching and characterization tests designed to predict field phenomena. The key to proper characterization of these unique materials is the recognition of and compensation for the hydration reactions that can occur during long-term leaching. Many of these reactions, such as the formation of the mineral ettringite, can have a profound effect on the concentration of potentially problematic trace elements such as boron, chromium, and selenium. The mobility of these elements, which may be concentrated in CCSRs due to the conversion process, must be properly evaluated for the formation of informed and scientifically sound decisions regarding safe disposal. Groundwater is an extremely important and relatively scarce resource. Contamination of this resource is a threat to life, which is highly dependent on it, so management of materials that can impact groundwater must be carefully planned and executed. The application of scientifically valid leaching protocols and complete testing are critical to proper waste management

  5. The profile of high school students’ scientific literacy on fluid dynamics

    Science.gov (United States)

    Parno; Yuliati, L.; Munfaridah, N.

    2018-05-01

    This study aims to describe the profile of scientific literacy of high school students on Fluid Dynamics materials. Scientific literacy is one of the ability to solve daily problems in accordance with the context of materials related to science and technology. The study was conducted on 90 high school students in Sumbawa using survey design. Data were collected using an instrument of scientific literacy for high school students on dynamic fluid materials. Data analysis was conducted descriptively to determine the students’ profile of scientific literacy. The results showed that high school students’ scientific literacy on Fluid Dynamics materials was in the low category. The highest average is obtained on indicators of scientific literacy i.e. the ability to interpret data and scientific evidence. The ability of scientific literacy is related to the mastery of concepts and learning experienced by students, therefore it is necessary to use learning that can trace this ability such as Science, Technology, Engineering, and Mathematics (STEM).

  6. Application of High Temperature Superconductors to Accelerators

    CERN Document Server

    Ballarino, A

    2000-01-01

    Since the discovery of high temperature superconductivity, a large effort has been made by the scientific community to investigate this field towards a possible application of the new oxide superconductors to different devices like SMES, magnetic bearings, flywheels energy storage, magnetic shielding, transmission cables, fault current limiters, etc. However, all present day large scale applications using superconductivity in accelerator technology are based on conventional materials operating at liquid helium temperatures. Poor mechanical properties, low critical current density and sensitivity to the magnetic field at high temperature are the key parameters whose improvement is essential for a large scale application of high temperature superconductors to such devices. Current leads, used for transferring currents from the power converters, working at room temperature, into the liquid helium environment, where the magnets are operating, represent an immediate application of the emerging technology of high t...

  7. Application of Ionic Liquids in High Performance Reversed-Phase Chromatography

    Directory of Open Access Journals (Sweden)

    Wentao Bi

    2009-06-01

    Ionic liquids, considered “green” chemicals, are widely used in many areas of analytical chemistry due to their unique properties. Recently, ionic liquids have been used as a novel kind of additive in separation and have been combined with silica to synthesize new stationary phases as separation media. This review will focus on the properties and mechanisms of ionic liquids and their potential applications as mobile phase modifiers and surface-bonded stationary phases in reversed-phase high performance liquid chromatography (RP-HPLC). Ionic liquids demonstrate advantages and potential in the chromatographic field.

  8. In search of novel, high performance and intelligent materials for applications in severe and unconditioned environments

    International Nuclear Information System (INIS)

    Gyeabour Ayensu, A. I.; Normeshie, C. M. K.

    2007-01-01

    For extreme operating conditions in aerospace, nuclear power plants and medical applications, novel materials have become more competitive than traditional materials because of their unique characteristics. Extensive research programmes are being undertaken to develop high performance and knowledge-intensive new materials, since existing materials cannot meet the stringent technological requirements of advanced materials for emerging industries. The technologies of intermetallic compounds, nanostructural materials, advanced composites, and photonics materials are presented. In addition, medical biomaterial implants of high functional performance, based on biocompatibility and resistance against corrosion and degradation, for applications in the hostile environment of the human body are discussed. The opportunities for African researchers to collaborate in international research programmes to develop local raw materials into high performance materials are also highlighted. (au)

  9. Cross-language Babel structs—making scientific interfaces more efficient

    International Nuclear Information System (INIS)

    Prantl, Adrian; Epperly, Thomas G W; Ebner, Dietmar

    2013-01-01

    Babel is an open-source language interoperability framework tailored to the needs of high-performance scientific computing. As an integral element of the Common Component Architecture, it is employed in a wide range of scientific applications where it is used to connect components written in different programming languages. In this paper we describe how we extended Babel to support interoperable tuple data types (structs). Structs are a common idiom in (mono-lingual) scientific application programming interfaces (APIs); they are an efficient way to pass tuples of nonuniform data between functions, and are supported natively by most programming languages. Using our extended version of Babel, developers of scientific codes can now pass structs as arguments between functions implemented in any of the supported languages. In C, C++, Fortran 2003/2008 and Chapel, structs can be passed without the overhead of data marshaling or copying, providing language interoperability at minimal cost. Other supported languages are Fortran 77, Fortran 90/95, Java and Python. We will show how we designed a struct implementation that is interoperable with all of the supported languages and present benchmark data to compare the performance of all language bindings, highlighting the differences between languages that offer native struct support and an object-oriented interface with getter/setter methods. A case study shows how structs can help simplify the interfaces of scientific codes significantly. (paper)
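
    The struct idiom discussed above can be pictured with a small cross-language example. The sketch below uses Python's standard ctypes module to declare a C-compatible struct; it illustrates the general idea only and is not Babel's API, and the field names and the commented-out C library are assumptions.

      # Hypothetical sketch of a "tuple of nonuniform data" passed across a
      # language boundary via a C-compatible struct. Not Babel's API.
      import ctypes

      class GridPoint(ctypes.Structure):
          _fields_ = [("i", ctypes.c_int),
                      ("j", ctypes.c_int),
                      ("value", ctypes.c_double)]

      p = GridPoint(3, 7, 2.5)
      # A C function taking `struct GridPoint` could then be called without
      # per-field marshaling, e.g. (assumed library and entry point):
      #   lib = ctypes.CDLL("./libgrid.so")
      #   lib.accumulate(p)
      print(p.i, p.j, p.value)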

  10. Applications of field-programmable gate arrays in scientific research

    CERN Document Server

    Sadrozinski, Hartmut F W

    2011-01-01

    Focusing on resource awareness in field-programmable gate array (FPGA) design, Applications of Field-Programmable Gate Arrays in Scientific Research covers the principle of FPGAs and their functionality. It explores a host of applications, ranging from small one-chip laboratory systems to large-scale applications in "big science." The book first describes various FPGA resources, including logic elements, RAM, multipliers, microprocessors, and content-addressable memory. It then presents principles and methods for controlling resources, such as process sequencing, location constraints, and in

  11. Mechanical Performance of Ferritic Martensitic Steels for High Dose Applications in Advanced Nuclear Reactors

    Science.gov (United States)

    Anderoglu, Osman; Byun, Thak Sang; Toloczko, Mychailo; Maloy, Stuart A.

    2013-01-01

    Ferritic/martensitic (F/M) steels are considered for core applications and pressure vessels in Generation IV reactors as well as first walls and blankets for fusion reactors. There are significant scientific data on testing and industrial experience in making this class of alloys worldwide. This experience makes F/M steels an attractive candidate. In this article, the tensile behavior, fracture toughness and impact properties, and creep behavior of F/M steels under neutron irradiation to high doses, with a focus on high Cr content (8 to 12), are reviewed. Tensile properties are very sensitive to irradiation temperature. An increase in yield and tensile strength (hardening) is accompanied by a loss of ductility and starts at very low doses under irradiation; the degradation of mechanical properties is most pronounced at lower irradiation temperatures. Martensitic steels exhibit a high fracture toughness after irradiation at all temperatures, even below 673 K (400 °C), except when tested at room temperature after irradiations below 673 K (400 °C), which show a significant reduction in fracture toughness. Creep studies showed that, for the range of expected stresses in a reactor environment, the stress exponent is expected to be approximately one and, in the absence of swelling, F/M steels usually perform better than austenitic stainless steels both in terms of the steady-state creep rate and the temperature sensitivity of creep. In short, F/M steels show excellent promise for high dose applications in nuclear reactors.

  12. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amount of data with fast response, which is faced with two major challenges: the "big data" challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
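
    The partition-merge strategy described above reduces, in miniature, to hashing objects into coarse tiles and joining only within a tile. The Python fragment below is a simplified stand-in for the MapReduce pipeline; the tile size, the distance test, and the neglect of tile boundaries are illustrative assumptions.

      # Simplified stand-in for a partitioned spatial join: objects that share a
      # grid tile are compared, everything else is skipped. Boundary-crossing
      # pairs are ignored here for brevity; a real pipeline would replicate
      # objects into neighboring tiles or merge partition edges.
      from collections import defaultdict

      def tile(point, size=10.0):
          return (int(point[0] // size), int(point[1] // size))

      def spatial_join(points_a, points_b, radius=1.0):
          buckets = defaultdict(list)
          for p in points_b:                      # "map": partition set B by tile
              buckets[tile(p)].append(p)
          pairs = []
          for q in points_a:                      # "reduce": join within each tile
              for r in buckets.get(tile(q), []):
                  if (q[0] - r[0]) ** 2 + (q[1] - r[1]) ** 2 <= radius ** 2:
                      pairs.append((q, r))
          return pairs

      print(spatial_join([(1.0, 1.0)], [(1.5, 1.2), (9.0, 9.0)]))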

  13. Manuscript Architect: a Web application for scientific writing in virtual interdisciplinary groups

    Directory of Open Access Journals (Sweden)

    Menezes Andreia P

    2005-06-01

    Background: Although scientific writing plays a central role in the communication of clinical research findings and consumes a significant amount of time from clinical researchers, few Web applications have been designed to systematically improve the writing process. This application had as its main objective the separation of the multiple tasks associated with scientific writing into smaller components. It was also aimed at providing a mechanism where sections of the manuscript (text blocks) could be assigned to different specialists. Manuscript Architect was built using the Java language in conjunction with the classic lifecycle development method. The interface was designed for simplicity and economy of movements. Manuscripts are divided into multiple text blocks that can be assigned to different co-authors by the first author. Each text block contains notes to guide co-authors regarding the central focus of each text block, previous examples, and an additional field for translation when the initial text is written in a language different from the one used by the target journal. Usability was evaluated using formal usability tests and field observations. Results: The application presented excellent usability and integration with the regular writing habits of experienced researchers. Workshops were developed to train novice researchers, presenting an accelerated learning curve. The application has been used in over 20 different scientific articles and grant proposals. Conclusion: The current version of Manuscript Architect has proven to be very useful in the writing of multiple scientific texts, suggesting that virtual writing by interdisciplinary groups is an effective manner of scientific writing when interdisciplinary work is required.

  14. Strategies for application of scientific findings in prevention.

    Science.gov (United States)

    Wei, S H

    1995-07-01

    Dental research in the last 50 years has accomplished numerous significant advances in preventive dentistry, particularly in the area of research in fluorides, periodontal diseases, restorative dentistry, and dental materials, as well as craniofacial development and molecular biology. The transfer of scientific knowledge to clinical practitioners requires additional effort. It is the responsibility of the scientific communities to transfer the fruits of their findings to society through publications, conferences, media, and the press. Specific programs that the International Association for Dental Research (IADR) has developed to transmit science to the profession and the public have included science transfer seminars, the Visiting Lecture Program, and hands-on workshops. The IADR Strategic Plan also has a major outreach goal. In addition, the Federation Dentaire Internationale (FDI) and the World Health Organization (WHO) have initiated plans to celebrate World Health Day and the Year of Oral Health in 1994. These are important strategies for the application of scientific findings in prevention.

  15. Reading, Writing, and Presenting Original Scientific Research: A Nine-Week Course in Scientific Communication for High School Students†

    Science.gov (United States)

    Danka, Elizabeth S.; Malpede, Brian M.

    2015-01-01

    High school students are not often given opportunities to communicate scientific findings to their peers, the general public, and/or people in the scientific community, and therefore they do not develop scientific communication skills. We present a nine-week course that can be used to teach high school students, who may have no previous experience, how to read and write primary scientific articles and how to discuss scientific findings with a broad audience. Various forms of this course have been taught for the past 10 years as part of an intensive summer research program for rising high school seniors that is coordinated by the Young Scientist Program at Washington University in St. Louis. The format presented here includes assessments for efficacy through both rubric-based methods and student self-assessment surveys. PMID:26753027

  16. Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.

    Science.gov (United States)

    Zhang, C; Wijnen, B; Pearce, J M

    2016-08-01

    The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing the cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further. © 2016 Society for Laboratory Automation and Screening.

  17. US QCD computational performance studies with PERI

    International Nuclear Information System (INIS)

    Zhang, Y; Fowler, R; Huck, K; Malony, A; Porterfield, A; Reed, D; Shende, S; Taylor, V; Wu, X

    2007-01-01

    We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of the resources, it is crucial to understand the characteristics of relevant scientific applications and the systems these applications are running on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods are driving the evolution of performance tools

  18. Scientific impact: opportunity and necessity.

    Science.gov (United States)

    Cohen, Marlene Z; Alexander, Gregory L; Wyman, Jean F; Fahrenwald, Nancy L; Porock, Davina; Wurzbach, Mary E; Rawl, Susan M; Conn, Vicki S

    2010-08-01

    Recent National Institutes of Health changes have focused attention on the potential scientific impact of research projects. Research with the excellent potential to change subsequent science or health care practice may have high scientific impact. Only rigorous studies that address highly significant problems can generate change. Studies with high impact may stimulate new research approaches by changing understanding of a phenomenon, informing theory development, or creating new research methods that allow a field of science to move forward. Research with high impact can transition health care to more effective and efficient approaches. Studies with high impact may propel new policy developments. Research with high scientific impact typically has both immediate and sustained influence on the field of study. The article includes ideas to articulate potential scientific impact in grant applications as well as possible dissemination strategies to enlarge the impact of completed projects.

  19. A framework for distributed mixed-language scientific applications

    International Nuclear Information System (INIS)

    Quarrie, D.R.

    1996-01-01

    The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed-language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed-language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently under way to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL. (author)

  20. Science on Stage: Engaging and teaching scientific content through performance art

    Science.gov (United States)

    Posner, Esther

    2016-04-01

    Engaging teaching material through performance art and music can improve the long-term retention of scientific content. Additionally, the development of effective performance skills are a powerful tool to communicate scientific concepts and information to a broader audience that can have many positive benefits in terms of career development and the delivery of professional presentations. While arts integration has been shown to increase student engagement and achievement, relevant artistic materials are still required for use as supplemental activities in STEM (science, technology, engineering, mathematics) courses. I will present an original performance poem, "Tectonic Petrameter: A Journey Through Earth History," with instructions for its implementation as a play in pre-university and undergraduate geoscience classrooms. "Tectonic Petrameter" uses a dynamic combination of rhythm and rhyme to teach the geological time scale, fundamental concepts in geology and important events in Earth history. I propose that using performance arts, such as "Tectonic Petrameter" and other creative art forms, may be an avenue for breaking down barriers related to teaching students and the broader non-scientific community about Earth's long and complex history.

  1. Tokamaks with high-performance resistive magnets: advanced test reactors and prospects for commercial applications

    International Nuclear Information System (INIS)

    Bromberg, L.; Cohn, D.R.; Williams, J.E.C.; Becker, H.; Leclaire, R.; Yang, T.

    1981-10-01

    Scoping studies have been made of tokamak reactors with high performance resistive magnets which maximize advantages gained from high field operation and reduced shielding requirements, and minimize resistive power requirements. High field operation can provide very high values of fusion power density and nτe while the resistive power losses can be kept relatively small. Relatively high values of Q' = Fusion Power/Magnet Resistive Power can be obtained. The use of high field also facilitates operation in the DD-DT advanced fuel mode. The general engineering and operational features of machines with high performance magnets are discussed. Illustrative parameters are given for advanced test reactors and for possible commercial reactors. Commercial applications that are discussed are the production of fissile fuel, electricity generation with and without fissioning blankets, and synthetic fuel production.

  2. Numerical Platon: A unified linear equation solver interface by CEA for solving open foe scientific applications

    International Nuclear Information System (INIS)

    Secher, Bernard; Belliard, Michel; Calvin, Christophe

    2009-01-01

    This paper describes a tool called 'Numerical Platon' developed by the French Atomic Energy Commission (CEA). It provides a freely available (GNU LGPL license) interface for coupling scientific computing applications to various freeware linear solver libraries (essentially PETSc, SuperLU and HyPre), together with some proprietary CEA solvers, for high-performance computers that may be used in industrial software written in various programming languages. This tool was developed as part of considerable efforts by the CEA Nuclear Energy Division in the past years to promote massively parallel software and on-shelf parallel tools to help develop new generation simulation codes. After the presentation of the package architecture and the available algorithms, we show examples of how Numerical Platon is used in sequential and parallel CEA codes. Comparing with in-house solvers, the gain in terms of increases in computation capacities or in terms of parallel performances is notable, without considerable extra development cost

  3. Numerical Platon: A unified linear equation solver interface by CEA for solving open foe scientific applications

    Energy Technology Data Exchange (ETDEWEB)

    Secher, Bernard [French Atomic Energy Commission (CEA), Nuclear Energy Division (DEN) (France); CEA Saclay DM2S/SFME/LGLS, Bat. 454, F-91191 Gif-sur-Yvette Cedex (France)], E-mail: bsecher@cea.fr; Belliard, Michel [French Atomic Energy Commission (CEA), Nuclear Energy Division (DEN) (France); CEA Cadarache DER/SSTH/LMDL, Bat. 238, F-13108 Saint-Paul-lez-Durance Cedex (France); Calvin, Christophe [French Atomic Energy Commission (CEA), Nuclear Energy Division (DEN) (France); CEA Saclay DM2S/SERMA/LLPR, Bat. 470, F-91191 Gif-sur-Yvette Cedex (France)

    2009-01-15

    This paper describes a tool called 'Numerical Platon' developed by the French Atomic Energy Commission (CEA). It provides a freely available (GNU LGPL license) interface for coupling scientific computing applications to various freeware linear solver libraries (essentially PETSc, SuperLU and HyPre), together with some proprietary CEA solvers, for high-performance computers that may be used in industrial software written in various programming languages. This tool was developed as part of considerable efforts by the CEA Nuclear Energy Division in the past years to promote massively parallel software and on-shelf parallel tools to help develop new generation simulation codes. After the presentation of the package architecture and the available algorithms, we show examples of how Numerical Platon is used in sequential and parallel CEA codes. Comparing with in-house solvers, the gain in terms of increases in computation capacities or in terms of parallel performances is notable, without considerable extra development cost.
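
    The idea of a unified solver interface that hides the back-end library can be sketched in a few lines. The Python fragment below dispatches a single solve() call to either a direct or an iterative SciPy solver; it mimics the concept only and is not Numerical Platon's API, which targets PETSc, SuperLU and HyPre from compiled codes.

      # Conceptual sketch of a unified linear-solver front end; not Numerical
      # Platon's interface. SciPy's direct and iterative solvers stand in for
      # the PETSc/SuperLU/HyPre back ends mentioned in the abstract.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import spsolve, cg

      def solve(A, b, backend="direct"):
          if backend == "direct":        # stand-in for a SuperLU-style solver
              return spsolve(A.tocsc(), b)
          if backend == "iterative":     # stand-in for a Krylov solver
              x, info = cg(A, b)
              if info != 0:
                  raise RuntimeError(f"CG did not converge (info={info})")
              return x
          raise ValueError(f"unknown backend {backend!r}")

      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(100, 100), format="csr")
      b = np.ones(100)
      x = solve(A, b, backend="direct")
      print(np.linalg.norm(A @ x - b))   # residual of the direct solve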

  4. Scientific computer simulation review

    International Nuclear Information System (INIS)

    Kaizer, Joshua S.; Heller, A. Kevin; Oberkampf, William L.

    2015-01-01

    Before the results of a scientific computer simulation are used for any purpose, it should be determined if those results can be trusted. Answering that question of trust is the domain of scientific computer simulation review. There is limited literature that focuses on simulation review, and most is specific to the review of a particular type of simulation. This work is intended to provide a foundation for a common understanding of simulation review. This is accomplished through three contributions. First, scientific computer simulation review is formally defined. This definition identifies the scope of simulation review and provides the boundaries of the review process. Second, maturity assessment theory is developed. This development clarifies the concepts of maturity criteria, maturity assessment sets, and maturity assessment frameworks, which are essential for performing simulation review. Finally, simulation review is described as the application of a maturity assessment framework. This is illustrated through evaluating a simulation review performed by the U.S. Nuclear Regulatory Commission. In making these contributions, this work provides a means for a more objective assessment of a simulation’s trustworthiness and takes the next step in establishing scientific computer simulation review as its own field. - Highlights: • We define scientific computer simulation review. • We develop maturity assessment theory. • We formally define a maturity assessment framework. • We describe simulation review as the application of a maturity framework. • We provide an example of a simulation review using a maturity framework

  5. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    International Nuclear Information System (INIS)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
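
    The LOBPCG-style block iteration named above is available off the shelf in SciPy, which makes the algorithmic pattern easy to demonstrate. The sketch below computes a few extreme eigenpairs of a small sparse symmetric matrix; it shows the method class only and does not reproduce the paper's CSB/SpMM kernels or tuned solver.

      # Minimal LOBPCG demonstration on a sparse symmetric matrix (a 1-D
      # Laplacian). Illustrates the block-eigensolver pattern only; the paper's
      # optimized SpMM/CSB kernels are not reproduced here.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lobpcg

      n, k = 400, 4                          # matrix size, block width
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      X = np.random.default_rng(0).standard_normal((n, k))   # starting block

      eigvals, eigvecs = lobpcg(A, X, largest=False, tol=1e-6, maxiter=500)
      print(eigvals)                         # approximations to the smallest eigenvalues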

  6. APPLICATION OF ULTRA-HIGH PERFORMANCE CONCRETE TO PEDESTRIAN CABLE-STAYED BRIDGES

    Directory of Open Access Journals (Sweden)

    CHI-DONG LEE

    2013-06-01

    The use of ultra-high performance concrete (UHPC), which enables reducing the cross sectional dimension of the structures due to its high strength, is expected in the construction of the super-long span bridges. Unlike conventional concrete, UHPC experiences less variation of material properties such as creep and drying shrinkage and can reduce uncertainties in predicting time-dependent behavior over the long term. This study describes UHPC’s material characteristics and benefits when applied to super-long span bridges. A UHPC girder pedestrian cable-stayed bridge was designed and successfully constructed. The UHPC reduced the deflections in both the short and long term. The cost analysis demonstrates a highly competitive price for UHPC. This study indicates that UHPC has a strong potential for application in the super-long span bridges.

  7. Mode-locked thin-disk lasers and their potential application for high-power terahertz generation

    Science.gov (United States)

    Saraceno, Clara J.

    2018-04-01

    The progress achieved in the last few decades in the performance of ultrafast laser systems with high average power has been tremendous, and continues to provide momentum to new exciting applications, both in scientific research and technology. Among the various technological advances that have shaped this progress, mode-locked thin-disk oscillators have attracted significant attention as a unique technology capable of providing ultrashort pulses with high energy (tens to hundreds of microjoules) and at very high repetition rates (in the megahertz regime) from a single table-top oscillator. This technology opens the door to compact high repetition rate ultrafast sources spanning the entire electromagnetic spectrum from the XUV to the terahertz regime, opening various new application fields. In this article, we focus on their unexplored potential as compact driving sources for high average power terahertz generation.

  8. Designing a High Performance Parallel Personal Cluster

    OpenAIRE

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, competition for resources have been some of the reasons why the scientifi...

  9. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    Science.gov (United States)

    Alameda, J. C.

    2011-12-01

    Development and optimization of computational science models, particularly on high performance computers, and with the advent of ubiquitous multicore processor systems, practically on every system, has been accomplished with basic software tools, typically, command-line based compilers, debuggers, performance tools that have not changed substantially from the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as openMP and MPI) to be able to take full advantage of high performance computers with an increasing core count per shared memory node, has made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC) seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project to improve Eclipse PTP takes an application-centric view to improve PTP. We are using a set of scientific applications, each with a variety of challenges, and using PTP to drive further improvements to both the scientific application, as well as to understand shortcomings in Eclipse PTP from an application developer perspective, to drive our list of improvements we seek to make. We are also partnering with performance tool providers, to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into

  10. SCEAPI: A unified Restful Web API for High-Performance Computing

    Science.gov (United States)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
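
    A REST-style job submission of the kind described above might look roughly as follows from a Python client. The base URL, header, and JSON fields below are assumptions made for illustration; they are not SCEAPI's documented interface.

      # Hypothetical client-side sketch of a RESTful job submission; endpoint,
      # authentication header and payload fields are invented for illustration.
      import requests

      BASE_URL = "https://hpc.example.org/api/v1"    # assumed service endpoint
      TOKEN = "..."                                  # token from a prior auth step

      def submit_job(script_path, cores):
          payload = {"script": script_path, "cores": cores}
          resp = requests.post(f"{BASE_URL}/jobs",
                               json=payload,
                               headers={"Authorization": f"Bearer {TOKEN}"},
                               timeout=30)
          resp.raise_for_status()
          return resp.json()["job_id"]               # assumed response field

      # job_id = submit_job("/home/user/run.sh", cores=64)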

  11. Core-Shell Columns in High-Performance Liquid Chromatography: Food Analysis Applications

    Science.gov (United States)

    Preti, Raffaella

    2016-01-01

    The increased separation efficiency provided by the new technology of columns packed with core-shell particles in high-performance liquid chromatography (HPLC) has resulted in their widespread diffusion in several analytical fields: pharmaceutical, biological, environmental, and toxicological. The present paper presents their most recent applications in food analysis. Their use has proved to be particularly advantageous for the determination of compounds at trace levels or when a large number of samples must be analyzed quickly using reliable and solvent-saving apparatus. The literature described here shows how the outstanding performance provided by core-shell particle columns on traditional HPLC instruments is comparable to that obtained with costly UHPLC instrumentation, making this novel column a promising key tool in food analysis. PMID:27143972

  12. Initial explorations of ARM processors for scientific computing

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad

    2014-01-01

    Power efficiency is becoming an ever more important metric for both high performance and high throughput computing. Over the course of next decade it is expected that flops/watt will be a major driver for the evolution of computer architecture. Servers with large numbers of ARM processors, already ubiquitous in mobile computing, are a promising alternative to traditional x86-64 computing. We present the results of our initial investigations into the use of ARM processors for scientific computing applications. In particular we report the results from our work with a current generation ARMv7 development board to explore ARM-specific issues regarding the software development environment, operating system, performance benchmarks and issues for porting High Energy Physics software

  13. High Performance Grinding and Advanced Cutting Tools

    CERN Document Server

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  14. High Performance Relaxor-Based Ferroelectric Single Crystals for Ultrasonic Transducer Applications

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2014-07-01

    Relaxor-based ferroelectric single crystals Pb(Mg1/3Nb2/3)O3-PbTiO3 (PMN-PT) have drawn much attention in the ferroelectric field because of their excellent piezoelectric properties and high electromechanical coupling coefficients (d33~2000 pC/N, kt~60%) near the morphotropic phase boundary (MPB). Ternary Pb(In1/2Nb1/2)O3-Pb(Mg1/3Nb2/3)O3-PbTiO3 (PIN-PMN-PT) single crystals also possess outstanding performance comparable with PMN-PT single crystals, but have higher phase transition temperatures (rhombohedral to tetragonal, Trt, and tetragonal to cubic, Tc) and a larger coercive field Ec. Therefore, these relaxor-based single crystals have been extensively employed for ultrasonic transducer applications. In this paper, an overview of our work and perspectives on using PMN-PT and PIN-PMN-PT single crystals for ultrasonic transducer applications is presented. Various types of single-element ultrasonic transducers, including endoscopic transducers, intravascular transducers, high-frequency and high-temperature transducers fabricated using the PMN-PT and PIN-PMN-PT crystals and their 2-2 and 1-3 composites, are reported. In addition, the fabrication and characterization of array transducers, such as phased array, cylindrical shaped linear array, high-temperature linear array, radial endoscopic array, and annular array, are also addressed.

  15. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  16. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  17. Superconductivity of high Tc: Scientific revolution?

    International Nuclear Information System (INIS)

    Marquina, J.E.; Ridaura, R.; Gomez, R.; Marquina, V.; Alvarez, J.L.

    1997-01-01

    A short history of superconductivity, from its discovery by Bednorz and Muller to the development of new materials with high transition temperatures, is presented. Further developments are analyzed in terms of T. S. Kuhn's conceptions as expressed in his book The Structure of Scientific Revolutions. (Author) 4 refs

  18. Incorporating Primary Scientific Literature in Middle and High School Education

    Directory of Open Access Journals (Sweden)

    Sarah C. Fankhauser

    2015-11-01

    Primary literature is the most reliable and direct source of scientific information, but most middle school and high school science is taught using secondary and tertiary sources. One reason for this is that primary science articles can be difficult to access and interpret for young students and for their teachers, who may lack exposure to this type of writing. The Journal of Emerging Investigators (JEI) was created to fill this gap and provide primary research articles that can be accessed and read by students and their teachers. JEI is a non-profit, online, open-access, peer-reviewed science journal dedicated to mentoring and publishing the scientific research of middle and high school students. JEI articles provide reliable scientific information that is written by students and therefore at a level that their peers can understand. For student-authors who publish in JEI, the review process and the interaction with scientists provide invaluable insight into the scientific process. Moreover, the resulting repository of free, student-written articles allows teachers to incorporate age-appropriate primary literature into the middle and high school science classroom. JEI articles can be used for teaching specific scientific content or for teaching the process of the scientific method itself. The critical thinking skills that students learn by engaging with the primary literature will be invaluable for the development of a scientifically-literate public.

  19. Incorporating Primary Scientific Literature in Middle and High School Education.

    Science.gov (United States)

    Fankhauser, Sarah C; Lijek, Rebeccah S

    2016-03-01

    Primary literature is the most reliable and direct source of scientific information, but most middle school and high school science is taught using secondary and tertiary sources. One reason for this is that primary science articles can be difficult to access and interpret for young students and for their teachers, who may lack exposure to this type of writing. The Journal of Emerging Investigators (JEI) was created to fill this gap and provide primary research articles that can be accessed and read by students and their teachers. JEI is a non-profit, online, open-access, peer-reviewed science journal dedicated to mentoring and publishing the scientific research of middle and high school students. JEI articles provide reliable scientific information that is written by students and therefore at a level that their peers can understand. For student-authors who publish in JEI, the review process and the interaction with scientists provide invaluable insight into the scientific process. Moreover, the resulting repository of free, student-written articles allows teachers to incorporate age-appropriate primary literature into the middle and high school science classroom. JEI articles can be used for teaching specific scientific content or for teaching the process of the scientific method itself. The critical thinking skills that students learn by engaging with the primary literature will be invaluable for the development of a scientifically-literate public.

  20. High Performance Wideband CMOS CCI and its Application in Inductance Simulator Design

    Directory of Open Access Journals (Sweden)

    ARSLAN, E.

    2012-08-01

    In this paper, a new, differential pair based, low-voltage, high performance and wideband CMOS first generation current conveyor (CCI) is proposed. The proposed CCI has high voltage swings on ports X and Y and very low equivalent impedance on port X due to super source follower configuration. It also has high voltage swings (close to supply voltages) on input and output ports and wideband current and voltage transfer ratios. Furthermore, two novel grounded inductance simulator circuits are proposed as application examples. Using HSpice, it is shown that the simulation results of the proposed CCI and also of the presented inductance simulators are in very good agreement with the expected ones.

  1. Application of software quality assurance to a specific scientific code development task

    International Nuclear Information System (INIS)

    Dronkers, J.J.

    1986-03-01

    This paper describes an application of software quality assurance to a specific scientific code development program. The software quality assurance program consists of three major components: administrative control, configuration management, and user documentation. The program attempts to be consistent with existing local traditions of scientific code development while at the same time providing a controlled process of development

  2. FY01 Supplemental Science and Performance Analysis: Volume 1, Scientific Bases and Analyses

    International Nuclear Information System (INIS)

    Bodvarsson, G.S.; Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  3. FY01 Supplemental Science and Performance Analysis: Volume 1,Scientific Bases and Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Bodvarsson, G.S.; Dobson, David

    2001-05-30

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S&ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S&ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S&ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054 [DIRS 124754

  4. Advanced I/O for large-scale scientific applications

    International Nuclear Information System (INIS)

    Klasky, Scott; Schwan, Karsten; Oldfield, Ron A.; Lofstead, Gerald F. II

    2010-01-01

    As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of IO routines becomes more and more important to keep the time spent in IO acceptable. It is not uncommon, for instance, to have 20% of an application's runtime spent performing IO in a 'tuned' system. Careful management of the IO routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the IO system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for a closer parity to the IO subsystem, and autonomic IO routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets, staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand output data for the simulation run as a whole, to select data and data features without concern for what files or other storage technologies were employed. All of these features should be attained while
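
    A minimal Python sketch of the staging idea described above, assuming a toy simulation loop: completed arrays are handed to a background writer thread so the compute loop is not blocked by synchronous disk writes. The file names and array sizes are placeholders, not part of the work summarized here.

      # Minimal illustration of asynchronous output staging: the compute loop hands
      # each completed array to a background writer instead of blocking on disk I/O.
      import queue
      import threading

      import numpy as np

      io_queue = queue.Queue(maxsize=4)       # bounded: applies back-pressure if I/O lags

      def writer():
          while True:
              item = io_queue.get()
              if item is None:                # sentinel: shut the writer down
                  break
              step, data = item
              np.save(f"output_step{step:05d}.npy", data)   # stand-in for the real I/O layer
              io_queue.task_done()

      thread = threading.Thread(target=writer, daemon=True)
      thread.start()

      for step in range(10):                  # stand-in for the simulation time loop
          field = np.random.rand(256, 256)    # "computed" data for this step
          io_queue.put((step, field))         # returns quickly unless the queue is full

      io_queue.join()                         # final synchronous flush before exit
      io_queue.put(None)
      thread.join()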

  5. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  6. High performance ultraviolet photodetectors based on ZnO nanoflakes/PVK heterojunction

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Yuhua; Xiang, Jinzhong, E-mail: jzhxiang@ynu.edu.cn [School of Physical and Astronomy, Yunnan University, Kunming 650091 (China); Tang, Libin, E-mail: scitang@163.com; Ji, Rongbin, E-mail: jirongbin@gmail.com; Zhao, Jun; Kong, Jincheng [Kunming Institute of Physics, Kunming 650223 (China); Lai, Sin Ki; Lau, Shu Ping [Department of Applied Physics, The Hong Kong Polytechnic University, Hung Hom, Kowloon (Hong Kong); Zhang, Kai [Suzhou Institute of Nano-Tech and Nano-Bionics (SINANO), Chinese Academy of Science, Suzhou 215123 (China)

    2016-08-15

    High performance ultraviolet (UV) photodetectors are receiving increasing attention due to their significant applications in fire warning, environmental monitoring, scientific research, astronomical observation, etc. The enhancement in performance of UV photodetectors has been impeded by the lack of a high-efficiency heterojunction in which UV photons can be efficiently converted into charges. In this work, high performance UV photodetectors have been realized by utilizing organic/inorganic heterojunctions based on a ZnO nanoflakes/poly(N-vinylcarbazole) hybrid. A transparent conducting polymer poly(3,4-ethylene-dioxythiophene):poly(styrenesulfonate)-coated quartz substrate is employed as the anode in place of the commonly used ITO-coated glass in order to harvest shorter-wavelength UV light. The devices show a lower dark current density, with a high responsivity (R) of 7.27 × 10^3 A/W and a specific detectivity (D*) of 6.20 × 10^13 cm Hz^1/2 W^-1 at 2 V bias voltage in an ambient environment (1.30 mW/cm^2 at λ = 365 nm), corresponding to enhancements in R and D* of 49% and one order of magnitude, respectively. The study sheds light on developing high-performance, large-scale-array, flexible UV detectors using a solution-processable method.

  7. A new TDRSS Compatible Transceiver for Long Duration High Altitude Scientific Balloon Missions

    Science.gov (United States)

    Stilwell, B.; Siemon, M.

    High altitude scientific balloons have been used for many years to provide scientists with access to near space at a fraction of the cost of satellite based or sounding rocket experiments. In recent years, these balloons have been successfully used for long duration missions of up to several weeks. Longer missions with durations of up to 100 days (Ultra-Long) are on the drawing board. An enabling technology for the growth of scientific balloon missions is the use of the NASA Tracking and Data Relay Satellite System (TDRSS) for telemetering the health, status, position and payload science data to mission operations personnel. The TDRSS system provides global coverage by relaying the data through geostationary relay satellites to a single ground station in White Sands, New Mexico. Data passes from the White Sands station to the user via commercial telecommunications services including the Internet. A forward command link can also be established to the balloon for real-time command and control. Early TDRSS communications equipment used by the National Scientific Balloon Facility was either unreliable or too expensive. The equipment must be able to endure the rigors of space flight including radiation exposure, high temperature extremes and the shock of landing and recovery. Since a payload may occasionally be lost, the cost of the TDRSS communications gear is a limiting factor in the number of missions that can be supported. Under sponsorship of the NSBF, General Dynamics Decision Systems has developed a new TDRSS compatible transceiver that reduces the size, weight and cost to approximately one half that of the prior generation of hardware. This paper describes the long and ultra-long balloon missions and the role that TDRSS communications plays in mission success. The new transceiver design is described, along with its interfaces, performance characteristics, qualification and production status. The transceiver can also be used in other space, avionics or

  8. 77 FR 39682 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2012-07-05

    ... invite comments on the question of whether instruments of equivalent scientific value, for the purposes... components with increased reliability, performance, reduction of cost, and improved safety, using technology... reliability investigations on the nanometer scale, to identify porosity, fracture surface features, fiber...

  9. Emerging Nanophotonic Applications Explored with Advanced Scientific Parallel Computing

    Science.gov (United States)

    Meng, Xiang

    The domain of nanoscale optical science and technology is a combination of the classical world of electromagnetics and the quantum mechanical regime of atoms and molecules. Recent advancements in fabrication technology allow optical structures to be scaled down to nanoscale size or even to the atomic level, far smaller than the wavelength they are designed for. These nanostructures can have unique, controllable, and tunable optical properties, and their interactions with quantum materials can have important near-field and far-field optical responses. Undoubtedly, these optical properties can have many important applications, ranging from efficient and tunable light sources, detectors, filters, modulators, and high-speed all-optical switches to next-generation classical and quantum computation and biophotonic medical sensors. This emerging branch of nanoscience, known as nanophotonics, is a highly interdisciplinary field requiring expertise in materials science, physics, electrical engineering, and scientific computing, modeling and simulation. It has also become an important research field for investigating the science and engineering of light-matter interactions that take place on wavelength and subwavelength scales where the nature of the nanostructured matter controls the interactions. In addition, fast advancements in computing capabilities, such as parallel computing, have also become a critical element for investigating advanced nanophotonic devices. This role has taken on even greater urgency with the scale-down of device dimensions, and the design of these devices requires extensive memory and extremely long core hours. Thus distributed computing platforms associated with parallel computing are required for faster design processes. Scientific parallel computing constructs mathematical models and quantitative analysis techniques, and uses the computing machines to analyze and solve otherwise intractable scientific challenges. In

  10. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    Science.gov (United States)

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
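
    The partition-then-join pattern described above can be illustrated with a small, self-contained Python sketch; it stands in for the global partition index and local per-tile processing that Hadoop-GIS parallelizes on MapReduce. The tile size, data and distance predicate are invented for the example, and boundary objects, which the system handles explicitly, are ignored here.

      # Toy grid-based spatial partitioning followed by a local join inside each tile.
      from collections import defaultdict
      import random

      TILE = 10.0                              # tile size of the global partition index

      def tile_of(x, y):
          return (int(x // TILE), int(y // TILE))

      points_a = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(1000)]
      points_b = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(1000)]

      # "Map" phase: route each record to its tile.
      tiles_a, tiles_b = defaultdict(list), defaultdict(list)
      for p in points_a:
          tiles_a[tile_of(*p)].append(p)
      for q in points_b:
          tiles_b[tile_of(*q)].append(q)

      def within(p, q, r=1.0):                 # simple distance predicate
          return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= r * r

      # "Reduce" phase: independent joins per tile, trivially parallelizable.
      pairs = [(p, q)
               for key, bucket in tiles_a.items()
               for p in bucket
               for q in tiles_b.get(key, [])
               if within(p, q)]
      print(len(pairs), "pairs found (tile-boundary cases ignored)")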

  11. Development of high performance Schottky barrier diode and its application to plasma diagnostics

    International Nuclear Information System (INIS)

    Fujita, Junji; Kawahata, Kazuo; Okajima, Shigeki

    1993-10-01

    At the conclusion of the Supporting Collaboration Research on 'Development of High Performance Detectors in the Far Infrared Range' carried out from FY1990 to FY1992, the results of developing Schottky barrier diode and its application to plasma diagnostics are summarized. Some remarks as well as technical know-how for the correct use of diodes are also described. (author)

  12. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  13. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Maynard, Robert [Kitware, Inc., Clifton Park, NY (United States)

    2017-10-27

    The XVis project brings together the key elements of research to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from the predominant DOE projects for visualization on accelerators and combined their respective features into a new visualization toolkit called VTK-m.

  14. High-performance parallel processors based on star-coupled wavelength division multiplexing optical interconnects

    Science.gov (United States)

    Deri, Robert J.; DeGroot, Anthony J.; Haigh, Ronald E.

    2002-01-01

    As the performance of individual elements within parallel processing systems increases, increased communication capability between distributed processor and memory elements is required. There is great interest in using fiber optics to improve interconnect communication beyond that attainable using electronic technology. Several groups have considered WDM, star-coupled optical interconnects. The invention uses a fiber optic transceiver to provide low latency, high bandwidth channels for such interconnects using a robust multimode fiber technology. Instruction-level simulation is used to quantify the bandwidth, latency, and concurrency required for such interconnects to scale to 256 nodes, each operating at 1 GFLOPS performance. Performance has been shown to scale to approximately 100 GFLOPS for scientific application kernels using a small number of wavelengths (8 to 32), only one wavelength received per node, and achievable optoelectronic bandwidth and latency.

  15. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  16. The IEEE 1355 Standard. Developments, performance and application in high energy physics

    International Nuclear Information System (INIS)

    Haas, S.

    1998-12-01

    The data acquisition systems of the next generation High Energy Physics experiments at the Large Hadron Collider (LHC) at CERN will rely on high-speed point-to-point links and switching networks for their higher level trigger and event building systems. This thesis provides a detailed evaluation of the DS-Link and switch technology, which is based on the IEEE 1355 standard for Heterogeneous Interconnect (HIC). The DS-Link is a bidirectional point-to-point serial interconnect, operating at speeds up to 200 MBaud. The objective of this thesis was to study the performance of the IEEE 1355 link and switch technology and to demonstrate that switching networks using this technology would scale to meet the requirements of the High Energy Physics applications

  17. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  18. CONCERT A high power proton accelerator driven multi-application facility concept

    CERN Document Server

    Laclare, J L

    2000-01-01

    A new generation of High Power Proton Accelerators (HPPAs) is being made available. It opens new avenues to a long series of scientific applications in fundamental and applied research that can make use of the boosted flux of secondary particles. Presently, in Europe, several disciplines are preparing projects for dedicated facilities based on the upgraded performance of HPPAs. Given the potential synergies between these different projects, and for reasons of cost effectiveness, it was considered appropriate to look into the possibility of grouping a certain number of these applications around a single HPPA: the CONCERT project [1]. The ensuing 2-year feasibility study, organized in collaboration between the European Spallation Source and the CEA, has just started. The EURISOL project [2] and CERN participate in the steering committee.

  19. High Performance Computing - Power Application Programming Interface Specification Version 2.0.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ward, H. Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
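
    As a rough illustration of the lowest layer such an API abstracts over, the following Python snippet samples a package energy counter and reports average power. It assumes a Linux system exposing Intel RAPL through the powercap interface (reading the counter may require elevated privileges) and is not part of the Power API specification itself.

      # Average package power over an interval, derived from a RAPL energy counter.
      import time

      RAPL_ENERGY = "/sys/class/powercap/intel-rapl:0/energy_uj"   # microjoules, may wrap around

      def read_energy_uj():
          with open(RAPL_ENERGY) as f:
              return int(f.read())

      def average_power_watts(interval_s=1.0):
          e0 = read_energy_uj()
          time.sleep(interval_s)
          e1 = read_energy_uj()
          return (e1 - e0) / 1e6 / interval_s   # microjoules -> joules -> watts

      if __name__ == "__main__":
          print(f"package power ~ {average_power_watts():.1f} W")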

  20. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  1. [Reversed-phase high-performance liquid chromatograph--application to serum aluminium monitoring].

    Science.gov (United States)

    Hoshino, H; Kaneko, E

    1996-01-01

    High-Performance Liquid Chromatography (HPLC) with reversed-phase partition mode separation (including the ion-pair mode) of metal chelate compounds prepared in an off-line fashion (precolumn chelation) is most versatile in terms of high sensitivity with base-line flatness, unique selectivity and cost effectiveness. Its extraordinary robustness towards the complicated matrices encountered in clinical testing is exemplified by the successful application to aluminium monitoring of human serum samples. The Al chelate with 2,2'-dihydroxyazobenzene is efficiently chromatographed on a LiChroCART RP-18 column using an aqueous methanol eluent (63.6 wt%) containing tetrabutylammonium bromide as an ion-pair agent. Serum concentration levels of Al down to 6 micrograms dm-3 are readily monitored without interference from iron, chyle and haemolysis.

  2. A Lightweight I/O Scheme to Facilitate Spatial and Temporal Queries of Scientific Data Analytics

    Science.gov (United States)

    Tian, Yuan; Liu, Zhuo; Klasky, Scott; Wang, Bin; Abbasi, Hasan; Zhou, Shujia; Podhorszki, Norbert; Clune, Tom; Logan, Jeremy; Yu, Weikuan

    2013-01-01

    In the era of petascale computing, more scientific applications are being deployed on leadership scale computing platforms to enhance scientific productivity. Many I/O techniques have been designed to address the growing I/O bottleneck on large-scale systems by handling massive scientific data in a holistic manner. While such techniques have been leveraged in a wide range of applications, they have not proven adequate for many mission critical applications, particularly in the data post-processing stage. One example is that some scientific applications generate datasets composed of a vast number of small data elements that are organized along many spatial and temporal dimensions but require sophisticated data analytics on one or more dimensions. Incorporating such dimensional knowledge into the data organization can be beneficial to the efficiency of data post-processing, but it is often missing from existing I/O techniques. In this study, we propose a novel I/O scheme named STAR (Spatial and Temporal AggRegation) to enable high performance data queries for scientific analytics. STAR is able to dive into the massive data, identify the spatial and temporal relationships among data variables, and accordingly organize them into an optimized multi-dimensional data structure before writing them to storage. This technique not only facilitates the common access patterns of data analytics, but also further reduces the application turnaround time. In particular, STAR is able to enable efficient data queries along the time dimension, a practice common in scientific analytics but not yet supported by existing I/O techniques. In our case study with the critical climate modeling application GEOS-5, experimental results on the Jaguar supercomputer demonstrate an improvement of up to 73 times in read performance compared to the original I/O method.
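
    The reorganization STAR performs can be pictured with a small numpy sketch: many per-timestep fields are aggregated into a single array with an explicit time dimension, after which a time-series query becomes one array slice instead of one tiny read per step. The shapes and layouts below are invented for illustration and do not reflect GEOS-5 output.

      import numpy as np

      ny, nx, nsteps = 180, 360, 120

      # As produced step by step by the simulation: one small 2-D field per output.
      per_step = [np.random.rand(ny, nx).astype(np.float32) for _ in range(nsteps)]

      # Aggregated layout with an explicit time dimension.
      time_major = np.stack(per_step)                          # shape (nsteps, ny, nx)

      # For time-series analytics, making time the fastest-varying dimension keeps
      # each per-point series contiguous in memory (and on disk, if written out).
      point_major = np.ascontiguousarray(time_major.transpose(1, 2, 0))   # (ny, nx, nsteps)

      j, i = 90, 180
      series = point_major[j, i, :]                            # full time series at one grid point
      print(series.shape, series.dtype)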

  3. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This is the third issue, introducing continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  4. Video performance for high security applications

    International Nuclear Information System (INIS)

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the Probability of Assessment (PAs) can be determined as a function of a variety of conditions or assumptions. PAs used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.

  5. Workshop on scientific and industrial applications of free electron lasers

    International Nuclear Information System (INIS)

    Difilippo, F.C.; Perez, R.B.

    1990-05-01

    A Workshop on Scientific and Industrial Applications of Free Electron Lasers was organized to address potential uses of a Free Electron Laser in the infrared wavelength region. A total of 13 speakers from national laboratories, universities, and the industry gave seminars to an average audience of 30 persons during June 12 and 13, 1989. The areas covered were: Free Electron Laser Technology, Chemistry and Surface Science, Atomic and Molecular Physics, Condensed Matter, and Biomedical Applications, Optical Damage, and Optoelectronics

  6. Award for Distinguished Scientific Applications of Psychology: Nancy E. Adler

    Science.gov (United States)

    American Psychologist, 2009

    2009-01-01

    Nancy E. Adler, winner of the Award for Distinguished Scientific Applications of Psychology, is cited for her research on reproductive health examining adolescent decision making with regard to contraception, conscious and preconscious motivations for pregnancy, and perception of risk for sexually transmitted diseases, and for her groundbreaking…

  7. Enhancing GIS Capabilities for High Resolution Earth Science Grids

    Science.gov (United States)

    Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.

    2017-12-01

    Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. An overview of the data processing workflow, why a chunked approach is required, and how the application could be adapted to meet operational requirements will be discussed here. In addition, we'll provide a general overview of OCGIS
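
    A rough Python sketch of the chunked idea, assuming a regular source grid and simple linear interpolation rather than the conservative regridding and GIS geometries used by ESMF/OCGIS: the destination grid is processed in latitude bands, so only a small slice of the source field is needed at any one time and the full problem never has to fit in memory at once. All grid sizes below are invented.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Hypothetical high-resolution source field on a regular latitude/longitude grid.
      src_lat = np.linspace(-90.0, 90.0, 901)
      src_lon = np.linspace(-180.0, 179.8, 1800)
      src_field = np.random.rand(src_lat.size, src_lon.size)

      # Destination grid, filled band by band ("chunks").
      dst_lat = np.linspace(-89.9, 89.9, 1200)
      dst_lon = np.linspace(-179.9, 179.7, 2400)
      out = np.empty((dst_lat.size, dst_lon.size))

      band = 200                                    # destination rows handled per chunk
      for start in range(0, dst_lat.size, band):
          lat_chunk = dst_lat[start:start + band]

          # Only the source rows covering this band (plus a one-row halo) are needed.
          lo = max(np.searchsorted(src_lat, lat_chunk[0]) - 1, 0)
          hi = min(np.searchsorted(src_lat, lat_chunk[-1]) + 2, src_lat.size)
          interp = RegularGridInterpolator((src_lat[lo:hi], src_lon),
                                           src_field[lo:hi, :],
                                           bounds_error=False, fill_value=None)

          gy, gx = np.meshgrid(lat_chunk, dst_lon, indexing="ij")
          pts = np.column_stack([gy.ravel(), gx.ravel()])
          out[start:start + len(lat_chunk), :] = interp(pts).reshape(gy.shape)

      print(out.shape)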

  8. Scientific Letter: High-intent suicide and the Beck's Suicide Intent ...

    African Journals Online (AJOL)

    Scientific Letter: High-intent suicide and the Beck's Suicide Intent scale: a case report. African Journal of Psychiatry. Scientific Letter - No Abstract Available.

  9. High-Performance Matrix-Vector Multiplication on the GPU

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...

  10. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  11. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that 'Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications'. There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  12. High temperature superconductivity: Concept, preparation and testing of high Tc superconductor compounds, and applications

    International Nuclear Information System (INIS)

    Harara, Wafik

    1992-06-01

    Many studies have been carried out on high temperature superconductors with transition temperatures above that of liquid nitrogen. In this scientific study the concept and mechanism of this phenomenon are discussed; in addition, examples of the preparation and testing of high temperature superconductor compounds are presented. The most important industrial applications are also explained. (author). 15 refs., 2 tabs., 18 figs

  13. Scientific computing and algorithms in industrial simulations projects and products of Fraunhofer SCAI

    CERN Document Server

    Schüller, Anton; Schweitzer, Marc

    2017-01-01

    The contributions gathered here provide an overview of current research projects and selected software products of the Fraunhofer Institute for Algorithms and Scientific Computing SCAI. They show the wide range of challenges that scientific computing currently faces, the solutions it offers, and its important role in developing applications for industry. Given the exciting field of applied collaborative research and development it discusses, the book will appeal to scientists, practitioners, and students alike. The Fraunhofer Institute for Algorithms and Scientific Computing SCAI combines excellent research and application-oriented development to provide added value for our partners. SCAI develops numerical techniques, parallel algorithms and specialized software tools to support and optimize industrial simulations. Moreover, it implements custom software solutions for production and logistics, and offers calculations on high-performance computers. Its services and products are based on state-of-the-art metho...

  14. The impact of the degree of application of e-commerce on operational performance among Taiwan's high-tech manufacturers

    Directory of Open Access Journals (Sweden)

    Yi-Chan Chung

    2013-11-01

    This study probes the correlation of types of operational strategy, degrees of organisational learning, types of organisational culture, the degree of the application of e-commerce, and operational performance among high-tech firms in Taiwan. The data was collected by questionnaires distributed via mail to senior supervisors at high-tech firms in six industries at three Taiwanese science parks. The results showed that a higher degree of e-commerce application leads to a significant and positive effect on operational performance. This study suggests that, in order to upgrade operational performance, firms should enhance their organisational learning and e-commerce, along with their rational, hierarchical, consensual, and developmental cultures, and the execution of prospector and defender strategies.

  15. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSL's). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSL's which are appropriate for material processing applications, low and intermediate average power DPSSL's are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  16. Efficient, High-Power Mid-Infrared Laser for National Security and Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Kiani, Leily S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-11-02

    The LLNL fiber laser group developed a unique short-wave-infrared, high-pulse-energy, high-average-power fiber-based laser. This unique laser source has been used in combination with a nonlinear frequency converter to generate wavelengths useful for remote sensing and other applications in the mid-wave infrared (MWIR). Sources with high average power and high efficiency in this MWIR wavelength region are not yet available with the size, weight, and power requirements or energy efficiency necessary for future deployment. The LLNL-developed Fiber Laser Pulsed Source (FiLPS) design was adapted to erbium-doped silica fibers for 1.55 μm pumping of cadmium silicon phosphide (CSP). We have demonstrated, for the first time, optical parametric amplification of 2.4 μm light via difference frequency generation using CSP with an erbium-doped fiber source. In addition, for efficiency comparison purposes, we also demonstrated direct optical parametric generation (OPG) as well as optical parametric oscillation (OPO).

  17. The graphics future in scientific applications

    International Nuclear Information System (INIS)

    Enderle, G.

    1982-01-01

    Computer graphics methods and tools are being used to a great extent in scientific research. The future development in this area will be influenced both by new hardware developments and by software advances. On the hardware sector, the development of the raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D-workstations appear on the marketplace. One of the main activities on the software sector is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education. (orig.)

  18. Educational and Scientific Applications of the Time Navigator

    Science.gov (United States)

    Cole, M.; Snow, J. T.; Slatt, R. M.

    2001-05-01

    Several recent conferences have noted the need to focus on the evolving interface between research and education at all levels of science, mathematics, engineering, and technology education. This interface, which is a distinguishing feature of graduate education in the U.S., is increasingly in demand at the undergraduate and K-12 levels, particularly in the earth sciences. In this talk, we present a new database for earth systems science and will explore applications to K-12 and undergraduate education, as well as the scientific and graduate role. The University of Oklahoma, College of Geosciences is in the process of acquiring the Time Navigator, a multi-disciplinary, multimedia database, which will form the core asset of the Center for Earth Systems Science. The Center, whose mission is to further the understanding of the dynamic Earth within both the academic and the general public communities, will serve as a portal for research, information, and education for scientists and educators. The Time Navigator was developed over a period of some twenty years by the noted British geoscience author, Ron Redfern, in connection with the recently published "Origins, the evolution of continents, oceans and life", the third in a series of books for the educated layperson. Over the years the Time Navigator has evolved into an interactive, multimedia database displaying much of the significant geological, paleontological, climatological, and tectonic events from the latest Proterozoic (750 MYA) through to the present. The focus is mainly on the Western Hemisphere and events associated with the coalescence and breakup of Pangea and the evolution of the earth into its present form. "Origins" will be available as early as Fall 2001 as an interactive electronic book for the general, scientifically-literate public. While electronic books are unlikely to replace traditional print books, the format does allow non-linear exploration of content. We believe that the

  19. [Investigation methodology and application on scientific and technological personnel of traditional Chinese medical resources based on data from Chinese scientific research paper].

    Science.gov (United States)

    Li, Hai-yan; Li, Yuan-hai; Yang, Yang; Liu, Fang-zhou; Wang, Jing; Tian, Ye; Yang, Ce; Liu, Yang; Li, Meng; Sun Li-ying

    2015-12-01

    The aim of this study is to identify the present status of scientific and technological personnel in the field of traditional Chinese medicine (TCM) resource science. Based on data from Chinese scientific research papers, an investigation was conducted regarding the number of personnel, their distribution, their paper output, their scientific research teams, and high-yield and highly cited authors. The study covers seven subfields (traditional Chinese medicine identification, quality standards, Chinese medicine cultivation, harvest processing of TCM, market development, resource protection and resource management) as well as 82 widely used Chinese medicine species, such as Ginseng and Radix Astragali. One hundred and fifteen domain authority experts were selected based on the data for high-yield and highly cited authors. The database system platform "Skilled Scientific and Technological Personnel in the field of Traditional Chinese Medicine Resource Science-Chinese papers" was established. This platform provides, for a given study field, year and Chinese medicine species, the corresponding personnel, their paper output and their core research teams. The investigation provides basic data on scientific and technological personnel in the field of traditional Chinese medicine resource science for administrative agencies, as well as evidence for the selection of scientific and technological personnel and the construction of scientific research teams.

  20. Introducing high availability to non high available designed applications

    Energy Technology Data Exchange (ETDEWEB)

    Zelnicek, Pierre; Kebschull, Udo [Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg (Germany); Haaland, Oystein Senneset [Physic Institut, University of Bergen, Bergen (Norway); Lindenstruth, Volker [Frankfurt Institut fuer Advanced Studies, University Frankfurt (Germany)

    2010-07-01

    A common problem in scientific computing environments and compute clusters today is how to apply high availability to legacy applications. These applications are becoming more and more of a problem in increasingly complex environments and under business-grade availability constraints that require 24 x 7 x 365 operation. For the majority of applications, redesign is not an option, either because they are closed source or because the effort involved would be just as great as rewriting the application from scratch. Nor is letting normal operators restart and reconfigure the applications on backup nodes a solution. In addition to the possibility of mistakes by non-experts and the cost of keeping personnel at work 24/7, these kinds of operations would require administrator privileges within the compute environment and would therefore be a security risk. Therefore, these legacy applications have to be monitored and, if a failure occurs, autonomously migrated to a working node. The Pacemaker framework is designed for both tasks and ensures the availability of the legacy applications. Distributed redundant block devices are used for fault-tolerant distributed data storage. The result is an Availability Environment Classification 2 (AEC-2).
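
    The core monitor-and-restart behaviour that such a framework automates can be sketched in a few lines of Python. This is only an illustration of the idea applied to an application that knows nothing about high availability, not the Pacemaker framework itself; the command, check interval and "migration" step are placeholders.

      # Bare-bones supervisor: poll a legacy process and restart it when it dies.
      import subprocess
      import time

      LEGACY_CMD = ["sleep", "3600"]        # stand-in for the legacy binary
      CHECK_INTERVAL = 5                    # seconds between health checks

      def start():
          return subprocess.Popen(LEGACY_CMD)

      proc = start()
      try:
          while True:                       # runs until interrupted
              time.sleep(CHECK_INTERVAL)
              if proc.poll() is not None:   # process died -> "fail over"
                  print("legacy application exited; restarting (a real cluster would migrate it)")
                  proc = start()
      finally:
          proc.terminate()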

  1. ForistomApp, a Web application for scientific and technological information management of the Foristom Foundation

    Science.gov (United States)

    Saavedra-Duarte, L. A.; Angarita-Jerardino, A.; Ruiz, P. A.; Dulce-Moreno, H. J.; Vera-Rivera, F. H.; V-Niño, E. D.

    2017-12-01

    Information and Communication Technologies (ICT) are essential in the transfer of knowledge, and Web tools, as part of ICT, are important for institutions seeking greater visibility for the products developed by their researchers. For this reason, we implemented an application that supports the information management of the FORISTOM Foundation (Foundation of Researchers in Science and Technology of Materials). The application shows a detailed description not only of all its members but also of all the scientific production they carry out, such as technological developments, research projects, articles and presentations, among others. This application can also be adopted by other entities committed to scientific dissemination and the transfer of technology and knowledge.

  2. A novel ToF-SIMS operation mode for sub 100 nm lateral resolution: Application and performance

    International Nuclear Information System (INIS)

    Kubicek, Markus; Holzlechner, Gerald; Opitz, Alexander K.; Larisegger, Silvia; Hutter, Herbert; Fleig, Jürgen

    2014-01-01

    A novel operation mode for time of flight-secondary ion mass spectrometry (ToF-SIMS) is described for a TOF.SIMS 5 instrument with a Bi-ion gun. It features sub 100 nm lateral resolution, adjustable primary ion currents and the possibility to measure with high lateral resolution as well as high mass resolution. The adjustment and performance of the novel operation mode are described and compared to established ToF-SIMS operation modes. Several examples of application featuring novel scientific results show the capabilities of the operation mode in terms of lateral resolution, accuracy of isotope analysis of oxygen, and combination of high lateral and mass resolution. The relationship between high lateral resolution and operation of SIMS in static mode is discussed.

  3. High Performance Multi-GPU SpMV for Multi-component PDE-Based Applications

    KAUST Repository

    Abdelfattah, Ahmad

    2015-07-25

    Leveraging optimization techniques (e.g., register blocking and double buffering) introduced in the context of KBLAS, a Level 2 BLAS high performance library on GPUs, the authors implement dense matrix-vector multiplications within a sparse-block structure. While these optimizations are important for high performance dense kernel executions, they are even more critical when dealing with sparse linear algebra operations. The most time-consuming phase of many multicomponent applications, such as models of reacting flows or petroleum reservoirs, is the solution at each implicit time step of large, sparse spatially structured or unstructured linear systems. The standard method is a preconditioned Krylov solver. The Sparse Matrix-Vector multiplication (SpMV) is, in turn, one of the most time-consuming operations in such solvers. Because there is no data reuse of the elements of the matrix within a single SpMV, kernel performance is limited by the speed at which data can be transferred from memory to registers, making the bus bandwidth the major bottleneck. On the other hand, in case of a multi-species model, the resulting Jacobian has a dense block structure. For contemporary petroleum reservoir simulations, the block size typically ranges from three to a few dozen among different models, and still larger blocks are relevant within adaptively model-refined regions of the domain, though generally the size of the blocks, related to the number of conserved species, is constant over large regions within a given model. This structure can be exploited beyond the convenience of a block compressed row data format, because it offers opportunities to hide the data motion with useful computations. The new SpMV kernel outperforms existing state-of-the-art implementations on single and multi-GPUs using matrices with dense block structure representative of porous media applications with both structured and unstructured multi-component grids.
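
    The dense-block structure described above maps naturally onto a block compressed sparse row (BSR) layout. The short SciPy example below only shows that layout and the SpMV call on the CPU, with an invented block size and connectivity; it is not the GPU kernel developed in this work.

      import numpy as np
      from scipy.sparse import bsr_matrix

      s = 8            # block size: number of conserved species per cell
      n = 200          # number of grid cells, so the Jacobian is (n*s) x (n*s)

      # Assemble a banded block structure (diagonal plus neighbours), dense within blocks.
      J = np.zeros((n * s, n * s))
      for i in range(n):
          for j in (i - 1, i, i + 1):
              if 0 <= j < n:
                  J[i*s:(i+1)*s, j*s:(j+1)*s] = np.random.rand(s, s)

      A = bsr_matrix(J, blocksize=(s, s))   # block compressed sparse row storage
      x = np.random.rand(n * s)

      y = A.dot(x)                          # the SpMV that dominates each Krylov iteration
      print(A.blocksize, A.nnz, y.shape)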

  4. Kelly D. Brownell: Award for Distinguished Scientific Applications of Psychology

    Science.gov (United States)

    American Psychologist, 2012

    2012-01-01

    Presents a short biography of Kelly D. Brownell, winner of the American Psychological Association's Award for Distinguished Scientific Applications of Psychology (2012). He won the award for outstanding contributions to our understanding of the etiology and management of obesity and the crisis it poses for the modern world. A seminal thinker in…

  5. High performance liquid chromatographic determination of ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-02-08

    … high performance liquid chromatography (HPLC) grade … applications. These are important requirements if the reagent is to be applicable to on-line pre- or post-column derivatisation in a possible automation of the analytical …

  6. Performance-Driven Interface Contract Enforcement for Scientific Components

    Energy Technology Data Exchange (ETDEWEB)

    Dahlgren, Tamara Lynn [Univ. of California, Davis, CA (United States)

    2008-01-01

    Performance-driven interface contract enforcement research aims to improve the quality of programs built from plug-and-play scientific components. Interface contracts make the obligations on the caller and all implementations of the specified methods explicit. Runtime contract enforcement is a well-known technique for enhancing testing and debugging. However, checking all of the associated constraints during deployment is generally considered too costly from a performance standpoint. Previous solutions enforced subsets of constraints without explicit consideration of their performance implications. Hence, this research measures the impacts of different interface contract sampling strategies and compares results with new techniques driven by execution time estimates. Results from three studies indicate that automatically adjusting the level of checking based on performance constraints improves the likelihood of detecting contract violations under certain circumstances. Specifically, performance-driven enforcement is better suited to programs exercising constraints whose costs are at most moderately expensive relative to normal program execution.
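    The core idea—check contracts only while the checking overhead stays within a performance budget—can be sketched in a few lines. The decorator, budget fraction, and timing model below are illustrative stand-ins, not the enforcement machinery described in the thesis.

    ```python
    # Toy sketch of performance-driven contract enforcement: a precondition is checked
    # only while cumulative checking time stays under a fraction of total run time.
    # Decorator name, budget, and timing model are invented for illustration.
    import time
    from functools import wraps

    def enforced(precondition, overhead_limit=0.05):
        """Check `precondition(*args)` only if doing so keeps overhead under the limit."""
        state = {"check_time": 0.0, "run_time": 0.0}

        def decorator(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                if state["check_time"] <= overhead_limit * max(state["run_time"], 1e-9):
                    t0 = time.perf_counter()
                    assert precondition(*args, **kwargs), f"contract violated in {fn.__name__}"
                    state["check_time"] += time.perf_counter() - t0
                t0 = time.perf_counter()
                result = fn(*args, **kwargs)
                state["run_time"] += time.perf_counter() - t0
                return result
            return wrapper
        return decorator

    @enforced(lambda xs: all(x >= 0 for x in xs))
    def mean_sqrt(xs):
        return sum(x ** 0.5 for x in xs) / len(xs)

    print(mean_sqrt([1.0, 4.0, 9.0]))
    ```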

  7. High Performance Computing - Power Application Programming Interface Specification Version 1.4

    Energy Technology Data Exchange (ETDEWEB)

    Laros III, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeBonis, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
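    To make the notion of a layered, portable measurement-and-control interface concrete, here is a hypothetical sketch in Python. The class and method names are invented for illustration only and do not reproduce the actual Power API specification; a real backend would read hardware counters instead of the stand-in values used here.

    ```python
    # Hypothetical sketch of a portable power-measurement/control interface; the names
    # below are invented for illustration and are not the published API.
    from abc import ABC, abstractmethod

    class PowerInterface(ABC):
        @abstractmethod
        def read_power_watts(self, component: str) -> float: ...
        @abstractmethod
        def set_power_cap_watts(self, component: str, cap: float) -> None: ...

    class FakeNodePower(PowerInterface):
        """Stand-in backend so the sketch runs without vendor counters."""
        def __init__(self):
            self._caps = {}
        def read_power_watts(self, component):
            return min(self._caps.get(component, float("inf")), 215.0)  # pretend reading
        def set_power_cap_watts(self, component, cap):
            self._caps[component] = cap

    node = FakeNodePower()
    node.set_power_cap_watts("cpu0", 150.0)
    print(node.read_power_watts("cpu0"))
    ```

    The point of such an abstraction is that a scheduler, a runtime, or an application can all program against the same interface while vendors supply the component-specific backends.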

  8. Applications of plasma spectrometry and high performance liquid chromatography in environmental and food science

    International Nuclear Information System (INIS)

    Iordache, Andreea-Maria; Biraruti, Elisabeta-Irina; Ionete, Roxana-Elena

    2008-01-01

    Full text: Plasma spectrometry has many applications in food science in the analysis of a wide range of samples in the food chain. Food science in the broadest sense can be extended to include soil chemistry, plant uptake and, at the other end of the food chain, studies into the metabolic fate of particular elements or elemental species when the foods are consumed by humans or animals. Inductively Coupled Plasma Mass Spectrometry allows multi-element measurements of most elements in the periodic table. Very sensitive trace analysis can be performed with an inductively coupled plasma mass spectrometer with a quadrupole detector using ultrasonic nebulization. High Performance Liquid Chromatography (HPLC) is an analytical technique for the separation and determination of organic and inorganic solutes in many samples, especially biological, pharmaceutical, food and environmental ones. The present paper emphasizes that, as a future tendency, HPLC-ICP-MS is often the preferred analytical technique for these applications due to the simplicity of the coupling between the HPLC and the ICP-MS Varian 820 using ultrasonic nebulization, the potential for on-line separations with high species specificity, and the capability of reaching optimum limits of detection without resorting to complex hydride generation mechanisms. (authors)

  9. Teaching Scientific Communication Skills in Science Studies: Does It Make a Difference?

    Science.gov (United States)

    Spektor-Levy, Ornit; Eylon, Bat-Sheva; Scherz, Zahava

    2009-01-01

    This study explores the impact of "Scientific Communication" (SC) skills instruction on students' performances in scientific literacy assessment tasks. We present a general model for skills instruction, characterized by explicit and spiral instruction, integration into content learning, practice in several scientific topics, and application of…

  10. Performance of a Boron-Coated-Straw-Based HLNCC for International Safeguards Applications

    Energy Technology Data Exchange (ETDEWEB)

    Simone, Angela T. [ORNL; Croft, Stephen [ORNL; McElroy, Robert Dennis [ORNL; Sun, Liang [Proportional Technologies Inc.; Hayward, Jason P. [ORNL

    2017-08-01

    3He gas has been used in various scientific and security applications for decades, but it is now in short supply. Alternatives to 3He detectors are currently being integrated and tested in neutron coincidence counter designs of a type widely used in nuclear safeguards for nuclear materials assay. A boron-coated-straw-based design, similar to the High-Level Neutron Coincidence Counter-II, was built by Proportional Technologies Inc., and has been tested by the Oak Ridge National Laboratory (ORNL) at both the JRC in Ispra and ORNL. Characterization measurements, along with nondestructive assays of various plutonium samples, have been conducted to determine the performance of this coincidence counter replacement in comparison with other similar counters. This paper presents results of these measurements.

  11. Strategy Guideline. High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  12. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gruchalla, Kenny [National Renewable Energy Lab. (NREL), Golden, CO (United States); Phillips, Caleb [National Renewable Energy Lab. (NREL), Golden, CO (United States); Purkayastha, Avi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wunder, Nick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-05

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprints, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage these data to understand the practical limits on predicting key power use metrics at the time of submission.
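    A toy version of the reordering idea is shown below: jobs are grouped into time slots so that the sum of their average power draws stays under a facility cap. The job names, power figures, cap, and one-slot-at-a-time model are invented for the sketch and are not the paper's method.

    ```python
    # Toy illustration of power-aware schedule reordering: start jobs in an order that
    # keeps the sum of their average power draws under a facility cap per time slot.
    # Job names, power figures, and the slot model are made up for this sketch.
    jobs = [("cfd_run", 950.0), ("bio_seq", 300.0), ("climate", 800.0),
            ("post_proc", 150.0), ("ml_train", 600.0)]
    POWER_CAP_W = 1500.0

    schedule, slot, slot_power = [], [], 0.0
    for name, watts in sorted(jobs, key=lambda j: -j[1]):   # place hungriest jobs first
        if slot_power + watts > POWER_CAP_W:
            schedule.append(slot)                           # close the slot, open a new one
            slot, slot_power = [], 0.0
        slot.append(name)
        slot_power += watts
    if slot:
        schedule.append(slot)

    for i, s in enumerate(schedule):
        print(f"slot {i}: {s}")
    ```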

  13. Understanding the Impact of an Apprenticeship-Based Scientific Research Program on High School Students' Understanding of Scientific Inquiry

    Science.gov (United States)

    Aydeniz, Mehmet; Baksa, Kristen; Skinner, Jane

    2011-01-01

    The purpose of this study was to understand the impact of an apprenticeship program on high school students' understanding of the nature of scientific inquiry. Data related to seventeen students' understanding of science and scientific inquiry were collected through open-ended questionnaires. Findings suggest that although engagement in authentic…

  14. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while less calculation-intensive components usually involved in building the user interface are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
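    The design principle—keep orchestration in the high-level language and push the calculation-intensive kernel into compiled code—can be illustrated independently of Java and JNI. The sketch below is an analogy in Python, with NumPy's compiled routines standing in for the Fortran subroutines; the PIC-style charge-deposition kernel and particle data are invented for illustration.

    ```python
    # Analogous sketch (not the paper's Java/Fortran code): orchestration stays in the
    # high-level language while the hot kernel runs in compiled code, here NumPy's
    # compiled routines standing in for Fortran subroutines called through JNI.
    import numpy as np

    def deposit_charge_python(positions, ngrid):
        """Pure high-level-language inner loop (slow for large particle counts)."""
        rho = [0.0] * ngrid
        for x in positions:
            rho[int(x) % ngrid] += 1.0
        return rho

    def deposit_charge_compiled(positions, ngrid):
        """Same kernel pushed into compiled code via vectorized histogramming."""
        cells = positions.astype(int) % ngrid
        return np.bincount(cells, minlength=ngrid).astype(float)

    positions = np.random.default_rng(1).uniform(0, 64, size=100_000)
    assert np.allclose(deposit_charge_python(positions, 64),
                       deposit_charge_compiled(positions, 64))
    print(deposit_charge_compiled(positions, 64)[:5])
    ```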

  15. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  16. Scientific Applications of Optical Instruments to Materials Research

    Science.gov (United States)

    Witherow, William K.

    1997-01-01

    Microgravity is a unique environment for materials and biotechnology processing. Microgravity minimizes or eliminates some of the effects that occur in one g. This can lead to the production of new materials or crystal structures. It is important to understand the processes that create these new materials. Thus, experiments are designed so that optical data collection can take place during the formation of the material. This presentation will discuss scientific application of optical instruments at MSFC. These instruments include a near-field scanning optical microscope, a miniaturized holographic system, and a phase-shifting interferometer.

  17. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  18. A novel ToF-SIMS operation mode for sub 100 nm lateral resolution: Application and performance.

    Science.gov (United States)

    Kubicek, Markus; Holzlechner, Gerald; Opitz, Alexander K; Larisegger, Silvia; Hutter, Herbert; Fleig, Jürgen

    2014-01-15

    A novel operation mode for time of flight-secondary ion mass spectrometry (ToF-SIMS) is described for a TOF.SIMS 5 instrument with a Bi-ion gun. It features sub 100 nm lateral resolution, adjustable primary ion currents and the possibility to measure with high lateral resolution as well as high mass resolution. The adjustment and performance of the novel operation mode are described and compared to established ToF-SIMS operation modes. Several examples of application featuring novel scientific results show the capabilities of the operation mode in terms of lateral resolution, accuracy of isotope analysis of oxygen, and combination of high lateral and mass resolution. The relationship between high lateral resolution and operation of SIMS in static mode is discussed.

  19. Mapping the research on scientific collaboration

    Institute of Scientific and Technical Information of China (English)

    HOU Jianhua; CHEN Chaomei; YAN Jianxin

    2010-01-01

    The aim of this paper was to identify the trends and hot topics in the study of scientific collaboration via scientometric analysis. Information visualization and knowledge domain visualization techniques were adopted to determine how the study of scientific collaboration has evolved. A total of 1,455 articles on scientific cooperation published between 1993 and 2007 were retrieved from the SCI, SSCI and A&HCI databases with a topic search of scientific collaboration or scientific cooperation for the analysis. By using CiteSpace, the knowledge bases, research foci, and research fronts in the field of scientific collaboration were studied. The results indicated that research fronts and research foci are highly consistent in terms of the concept, origin, measurement, and theory of scientific collaboration. It also revealed that research fronts included scientific collaboration networks, international scientific collaboration, social network analysis and techniques, and applications of bibliometrical indicators, webmetrics, and health care related areas.

  20. 77 FR 9896 - Proposed Information Collection; Comment Request; Application and Reports for Scientific Research...

    Science.gov (United States)

    2012-02-21

    ... Collection; Comment Request; Application and Reports for Scientific Research and Enhancement Permits Under... allows permits authorizing the taking of endangered species for research/enhancement purposes. The... sets of information collections: (1) Applications for research/enhancement permits, and (2) reporting...

  1. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  2. Customizable scientific web portal for fusion research

    International Nuclear Information System (INIS)

    Abla, G.; Kim, E.N.; Schissel, D.P.; Flanagan, S.M.

    2010-01-01

    Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software
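    A generic illustration of the responsiveness concern raised above is caching: repeated requests for the same experiment summary are served from a short-lived cache rather than recomputed on every page view. The portal described here uses Memcached within a Django stack; the in-process cache below is only a stand-in sketch of the idea, and the function and field names are invented.

    ```python
    # Generic sketch of the caching idea behind a responsive portal: repeated requests
    # for the same summary are served from a short-lived in-process cache instead of
    # being recomputed. This stand-in is illustrative, not the portal's Memcached code.
    import time
    from functools import wraps

    def ttl_cache(seconds=30.0):
        def decorator(fn):
            store = {}
            @wraps(fn)
            def wrapper(*args):
                now = time.monotonic()
                if args in store and now - store[args][0] < seconds:
                    return store[args][1]
                value = fn(*args)
                store[args] = (now, value)
                return value
            return wrapper
        return decorator

    @ttl_cache(seconds=10.0)
    def shot_summary(shot_number):
        time.sleep(0.2)                      # stand-in for an expensive database query
        return {"shot": shot_number, "status": "complete"}

    print(shot_summary(171234))              # slow: computed
    print(shot_summary(171234))              # fast: served from cache
    ```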

  3. Customizable scientific web portal for fusion research

    Energy Technology Data Exchange (ETDEWEB)

    Abla, G., E-mail: abla@fusion.gat.co [General Atomics, P.O. Box 85608, San Diego, CA (United States); Kim, E.N.; Schissel, D.P.; Flanagan, S.M. [General Atomics, P.O. Box 85608, San Diego, CA (United States)

    2010-07-15

    Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software

  4. How does the entrepreneurial orientation of scientists affect their scientific performance? Evidence from the Quadrant Model

    OpenAIRE

    Naohiro Shichijo; Silvia Rita Sedita; Yasunori Baba

    2013-01-01

    Using Stokes's (1997) "quadrant model of scientific research", this paper deals with how the entrepreneurial orientation of scientists affects their scientific performance by considering its impact on scientific production (number of publications), scientific prestige (number of forward citations), and breadth of research activities (interdisciplinarity). The results of a quantitative analysis applied to a sample of 1,957 scientific papers published by 66 scientists active in advanced materia...

  5. The 1 MV multi-element AMS system for biomedical applications at the Netherlands Organization for Applied Scientific Research (TNO)

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Matthias, E-mail: mklein@highvolteng.com [High Voltage Engineering Europa B.V., P.O. Box 99, 3800 AB Amersfoort (Netherlands); Vaes, W.H.J.; Fabriek, B.; Sandman, H. [TNO, P.O. Box 360, 3700 AJ Zeist (Netherlands); Mous, D.J.W.; Gottdang, A. [High Voltage Engineering Europa B.V., P.O. Box 99, 3800 AB Amersfoort (Netherlands)

    2013-01-15

    The Netherlands Organization for Applied Scientific Research (TNO) has installed a compact 1 MV multi-element AMS system manufactured by High Voltage Engineering Europa B.V., The Netherlands. TNO performs clinical research programs for the pharmaceutical and innovative foods industry to obtain early pharmacokinetic data and to provide anti-osteoporotic efficacy data of new treatments. The AMS system will analyze carbon, iodine and calcium samples for this purpose. The first measurements on blank samples indicate background levels in the low 10^-12 for calcium and iodine, making the system well suited for these biomedical applications. Carbon blanks have been measured at low 10^-16. For unattended, around-the-clock analysis, the system features the 200 sample version of the SO110 hybrid ion source and user friendly control software.

  6. Applications of high power microwaves

    International Nuclear Information System (INIS)

    Benford, J.; Swegle, J.

    1993-01-01

    The authors address a number of applications for HPM technology. There is a strong symbiotic relationship between a developing technology and its emerging applications. New technologies can generate new applications. Conversely, applications can demand development of new technological capability. High-power microwave generating systems come with size and weight penalties and problems associated with the x-radiation and collection of the electron beam. Acceptance of these difficulties requires the identification of a set of applications for which high-power operation is either demanded or results in significant improvements in performance. The authors identify the following applications, and discuss their requirements and operational issues: (1) High-energy RF acceleration; (2) Atmospheric modification (both to produce artificial ionospheric mirrors for radio waves and to save the ozone layer); (3) Radar; (4) Electronic warfare; and (5) Laser pumping. In addition, they discuss several applications requiring high average power that border on HPM: power beaming and plasma heating.

  7. File-System Workload on a Scientific Multiprocessor

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.
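    The kind of characterization described—folding individual read/write records into per-operation counts and request-size statistics—can be sketched in a few lines. The trace tuples below are invented; real traces in such studies also carry node, file, offset, and timing fields.

    ```python
    # Toy sketch of workload characterization: individual read/write records are folded
    # into per-operation counts and request-size statistics. The trace data are invented.
    from collections import defaultdict
    from statistics import mean, median

    trace = [("read", 4096), ("write", 512), ("read", 4096), ("write", 1048576),
             ("read", 65536), ("write", 512), ("read", 4096)]

    sizes = defaultdict(list)
    for op, nbytes in trace:
        sizes[op].append(nbytes)

    for op, vals in sizes.items():
        print(f"{op}: n={len(vals)}, mean={mean(vals):.0f} B, median={median(vals):.0f} B")
    ```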

  8. High-Performance Liquid Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Vestal, Marvin L.

    1984-01-01

    Reviews techniques for online coupling of high-performance liquid chromatography with mass spectrometry, emphasizing those suitable for application to nonvolatile samples. Also summarizes the present status, strengths, and weaknesses of various techniques and discusses potential applications of recently developed techniques for combined liquid…

  9. Contributing to the design of run-time systems dedicated to high performance computing

    International Nuclear Information System (INIS)

    Perache, M.

    2006-10-01

    In the field of intensive scientific computing, the quest for performance has to face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multi-processors. It features a programming model centered around collective communications and synchronizations, and provides load balancing facilities. The programming interface, named MPC, provides high level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)
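    The collective-communication-centred programming model mentioned above can be illustrated generically. The sketch below uses mpi4py purely as a stand-in to show that style (each rank reduces a local partial result into a global value); it is not MPC's own interface, and the data decomposition is invented.

    ```python
    # Illustration of a collective-communication-centred style using mpi4py; this is a
    # generic sketch of the programming model, not the MPC interface described above.
    # Run with e.g.: mpiexec -n 4 python collective_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank owns a slice of the problem and computes a local partial result.
    local = np.arange(rank * 1000, (rank + 1) * 1000, dtype=np.float64)
    partial = float(np.sum(local * local))

    total = comm.allreduce(partial, op=MPI.SUM)   # collective: every rank gets the global sum
    if rank == 0:
        print(f"global sum of squares over {size} ranks: {total:.3e}")
    ```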

  10. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  11. Radiation protection. Scientific fundamentals, legal regulations, practical applications. Compendium

    International Nuclear Information System (INIS)

    Buchert, Guido; Gay, Juergen; Kirchner, Gerald; Michel, Rolf; Niggemann, Guenter; Schumann, Joerg; Wust, Peter; Jaehnert, Susanne; Strilek, Ralf; Martini, Ekkehard

    2011-06-01

    The compendium on radiation protection, scientific fundamentals, legal regulations and practical applications includes contributions to the following issues: (1) Effects and risk of ionizing radiation: fundamentals on effects and risk of ionizing radiation, news in radiation biology, advantages and disadvantages of screening investigations; (2) trends and legal regulations concerning radiation protection: development of European and national radiation protection laws, new regulations concerning X-rays, culture and ethics of radiation protection; (3) dosimetry and radiation measuring techniques: personal scanning using GHz radiation, new "dose characteristics" in practice, measuring techniques for nuclear danger prevention and emergency hazard control; (4) radiation exposure in medicine: radiation exposure of modern medical techniques, heavy ion radiotherapy, deterministic and stochastic risks of high-conformal photon radiotherapy, STEMO project - mobile CT for apoplectic stroke patients; (5) radiation exposure in technology: legal control of high-level radioactive sources, technical and public safety using enclosed radioactive sources for materials testing, radiation exposure in aviation, radon in Bavaria, NPP Fukushima-Daiichi - a status report; (6) radiation exposure in nuclear engineering: the Chernobyl accident - historical experience or lasting problem?, European standards for radioactive waste disposal, radioactive material disposal in Germany, risk assessment of ionizing and non-ionizing radiation; (7) case studies.

  12. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    Science.gov (United States)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data
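    The cost trade-off noted at the end of the abstract—providers charge separately for compute, transfer, and storage, so I/O-bound and CPU-bound workflows accumulate cost very differently—can be shown with a back-of-envelope model. All rates and workload figures below are invented for illustration and are not the paper's measurements.

    ```python
    # Back-of-envelope sketch of the cloud cost trade-off: an I/O-bound workflow (like an
    # image mosaic) and a CPU-bound one (like a periodogram run) accumulate cost differently
    # under per-operation pricing. All rates and workload figures are invented.
    RATE_CPU_HOUR = 0.10       # $ per core-hour
    RATE_TRANSFER_GB = 0.09    # $ per GB moved out
    RATE_STORAGE_GB_MO = 0.02  # $ per GB-month

    def monthly_cost(core_hours, gb_transferred, gb_stored):
        return (core_hours * RATE_CPU_HOUR
                + gb_transferred * RATE_TRANSFER_GB
                + gb_stored * RATE_STORAGE_GB_MO)

    io_bound = monthly_cost(core_hours=2_000, gb_transferred=15_000, gb_stored=8_000)
    cpu_bound = monthly_cost(core_hours=40_000, gb_transferred=200, gb_stored=300)
    print(f"I/O-bound workflow:  ${io_bound:,.0f}/month (dominated by transfer/storage)")
    print(f"CPU-bound workflow: ${cpu_bound:,.0f}/month (dominated by compute)")
    ```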

  13. EFSA NDA Panel (EFSA Panel on Dietetic Products, Nutrition and Allergies), 2014. Scientific Opinion on the substantiation of a health claim related to beta-alanine and increase in physical performance during short-duration, high-intensity exercise pursuant to Article 13(5) of Regulation (EC

    DEFF Research Database (Denmark)

    Tetens, Inge

    2014-01-01

    Following an application from Natural Alternative International, Inc. (NAI), submitted pursuant to Article 13(5) of Regulation (EC) No 1924/2006 via the Competent Authority of the United Kingdom, the Panel on Dietetic Products, Nutrition and Allergies (NDA) was asked to deliver an opinion on the scientific substantiation of a health claim related to beta-alanine and increase in physical performance during short-duration, high-intensity exercise. The food constituent that is the subject of the claim is beta-alanine, which is sufficiently characterised. The Panel considers that an increase in physical performance during short-duration, high-intensity exercise is a beneficial physiological effect. In weighing the evidence the Panel took into account that only one out of 11 pertinent human intervention studies (including 14 pertinent outcomes) from which conclusions could be drawn showed an effect of beta…

  14. Nanoporous CuS nano-hollow spheres as advanced material for high-performance supercapacitors

    Energy Technology Data Exchange (ETDEWEB)

    Heydari, Hamid [Faculty of Sciences, Razi University, Kermanshah (Iran, Islamic Republic of); Moosavifard, Seyyed Ebrahim, E-mail: info_seyyed@yahoo.com [Young Researchers and Elite Club, Central Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Elyasi, Saeed [Department of Chemical Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Shahraki, Mohammad [Department of Chemistry, University of Sistan and Baluchestan, Zahedan (Iran, Islamic Republic of)

    2017-02-01

    Highlights: • Nanoporous CuS nano-hollow spheres were synthesized by a facile method. • The nano-hollow spheres have a large specific surface area (97 m^2 g^-1) and nanoscale shell thickness (<20 nm). • Such unique structures exhibit excellent electrochemical properties for high-performance SCs. - Abstract: Due to unique advantages, the development of high-performance supercapacitors has stimulated a great deal of scientific research over the past decade. The electrochemical performance of a supercapacitor is strongly affected by the surface and structural properties of its electrode materials. Herein, we report a facile synthesis of a high-performance supercapacitor electrode material based on CuS nano-hollow spheres with nanoporous structures, a large specific surface area (97 m^2 g^-1) and nanoscale shell thickness (<20 nm). This interesting electrode structure plays a key role in providing more active sites for electrochemical reactions, short ion and electron diffusion pathways and facilitated ion transport. The CuS nano-hollow spheres electrode exhibits excellent electrochemical performance, including a maximum specific capacitance of 948 F g^-1 at 1 A g^-1, significant rate capability of 46% capacitance retention at a high current density of 50 A g^-1, and outstanding long-term cycling stability at various current densities. This work not only demonstrates the promising potential of the CuS-NHS electrodes for application in high-performance supercapacitors, but also sheds new light on the design philosophy of metal sulfide electrodes.

  15. High-Performance Networking

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    The series will start with an historical introduction about what people saw as high performance message communication in their time and how that developed to the now to day known "standard computer network communication". It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems as introduction for an in depth explanation of the three new 10 Gbit/s network and interconnect technology standards that exist already or emerge. If necessary for a good understanding some sidesteps will be included to explain important protocols as well as some necessary details of concerned Wide Area Network (WAN) standards details including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapid expanding applications of networked storage.

  16. Exploring performance and energy tradeoffs for irregular applications: A case study on the Tilera many-core architecture

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.; Tumeo, Antonino; Halappanavar, Mahantesh

    2017-06-01

    High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, memory hierarchy and on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
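    The three tuning dimensions named above (memory layout, compiler flags, OpenMP schedule) define a small design space that can be searched automatically. The study used the OpenTuner framework; the generic random-search sketch below, with its fake measurement function, is only an illustration of the approach, not OpenTuner's API.

    ```python
    # Generic random-search sketch over the three tuning dimensions named above. The real
    # study used OpenTuner; this stand-in and its fake measurement function are illustrative.
    import random

    SPACE = {
        "layout":   ["csr", "blocked", "struct-of-arrays"],
        "cflags":   ["-O2", "-O3", "-O3 -funroll-loops"],
        "schedule": ["static", "dynamic,64", "guided"],
    }

    def measure(config):
        """Stand-in for compiling, running, and reading energy counters."""
        rng = random.Random(str(sorted(config.items())))
        runtime = rng.uniform(10.0, 60.0)            # seconds (fake)
        energy = runtime * rng.uniform(80.0, 200.0)  # joules, fake node power
        return runtime, energy

    best = None
    for _ in range(50):
        cfg = {k: random.choice(v) for k, v in SPACE.items()}
        runtime, energy = measure(cfg)
        if best is None or energy < best[0]:
            best = (energy, runtime, cfg)

    print(f"best energy {best[0]:.0f} J, runtime {best[1]:.1f} s, config {best[2]}")
    ```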

  17. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large scale scientific computing have always been ahead of what state of the art hardware could supply in the form of the supercomputers of the day. And for any single processor system, the limit to increases in computing power was recognized a few years back. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize future large scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance. The author concludes this paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  18. The present status of scientific applications of nuclear explosions

    International Nuclear Information System (INIS)

    Cowan, G.A.; Diven, B.C.

    1970-01-01

    This is the fourth in a series of symposia which started in 1957 at Livermore with the purpose of examining the peaceful uses of nuclear explosives. Although principal emphasis has been placed on technological applications, the discussions have, from the outset, included the fascinating question of scientific uses. Of the possible scientific applications which were mentioned at the 1957 meeting, the proposals which attracted most attention involved uses of nuclear explosions for research in seismology. It is interesting to note that since then a very large and stimulating body of data in the field of seismology has been collected from nuclear tests. Ideas for scientific applications of nuclear explosions go back considerably further than 1957. During the war days Otto Frisch at Los Alamos suggested that a fission bomb would provide an excellent source of fast neutrons which could be led down a vacuum pipe and used for experiments in a relatively unscattered state. This idea, reinvented, modified, and elaborated upon in the ensuing twenty-five years, provides the basis for much of the research discussed in this morning's program. In 1952 a somewhat different property of nuclear explosions, their ability to produce intense neutron exposures on internal targets and to synthesize large quantities of multiple neutron capture products, was dramatically brought to our attention by analysis of debris from the first large thermonuclear explosion (Mike) in which the elements einsteinium and fermium were observed for the first time. The reports of the next two Plowshare symposia in 1959 and 1964 help record the fascinating development of the scientific uses of neutrons in nuclear explosions. Starting with two 'wheel' experiments in 1958 to measure symmetry of fission in 235-U resonances, the use of external beams of energy-resolved neutrons was expanded on the 'Gnome' experiment in 1961 to include the measurement of neutron capture excitation functions for 238-U, 232-Th …

  19. Development of student performance assessment based on scientific approach for a basic physics practicum in simple harmonic motion materials

    Science.gov (United States)

    Serevina, V.; Muliyati, D.

    2018-05-01

    This research aims to develop a student performance assessment instrument, based on a scientific approach, that is valid and reliable for assessing student performance in a basic physics laboratory on Simple Harmonic Motion (SHM). The study uses the ADDIE model, consisting of the stages Analyze, Design, Development, Implementation, and Evaluation. The student performance assessment developed can be used to measure students' skills in observing, asking, conducting experiments, associating, and communicating experimental results, which are the '5M' stages in a scientific approach. Each assessment item in the instrument was validated by an instrument expert, with the result that all items are eligible for use (100% eligibility). The instrument was then evaluated for the quality of its construction, material, and language by a panel (lecturers), with the results: construction aspect 85%, or very good; material aspect 87.5%, or very good; and language aspect 83%, or very good. The small-group trial yielded an instrument reliability of 0.878, in the high category (r-table = 0.707). The large-group trial yielded an instrument reliability of 0.889, also in the high category (r-table = 0.320). The instrument was declared valid and reliable at the 5% significance level. Based on the results of this research, it can be concluded that the student performance assessment instrument based on the scientific approach developed here is valid and reliable for assessing student skills in SHM experimental activities.

  20. Development of CSS-42L{trademark}, a high performance carburizing stainless steel for high temperature aerospace applications

    Energy Technology Data Exchange (ETDEWEB)

    Burrier, H.I.; Milam, L. [Timken Co., Canton, OH (United States); Tomasello, C.M.; Balliett, S.A.; Maloney, J.L. [Latrobe Steel Co., Latrobe, PA (United States); Ogden, W.P. [MPB Corp., Lebanon, NH (United States)

    1998-12-31

    Today's aerospace engineering challenges demand materials which can operate under conditions of temperature extremes, high loads and harsh, corrosive environments. This paper presents a technical overview of the on-going development of CSS-42L (US Patent No. 5,424,028). This alloy is a case-carburizable stainless steel suitable for use in applications up to 427 °C, particularly suited to high performance rolling element bearings, gears, shafts and fasteners. The nominal chemistry of CSS-42L includes (by weight): 0.12% carbon, 14.0% chromium, 0.60% vanadium, 2.0% nickel, 4.75% molybdenum and 12.5% cobalt. Careful balancing of these components combined with VIM-VAR melting produces an alloy that can be carburized and heat treated to achieve a high surface hardness (>58 HRC at 1 mm (0.040 in) depth) with excellent corrosion resistance. The hot hardness of the carburized case is equal to or better than all competitive grades, exceeding 60 HRC at 427 °C. The fracture toughness and impact resistance of the heat treated core material have likewise been evaluated in detail and found to be better than M50-NiL steel. The corrosion resistance has been shown to be equivalent to that of 440C steel in tests performed to date.

  1. Physics through the 1990s: Scientific interfaces and technological applications

    International Nuclear Information System (INIS)

    1986-01-01

    Physics traditionally serves mankind through its fundamental discoveries, which enrich our understanding of nature and the cosmos. While the basic driving force for physics research is intellectual curiosity and the search for understanding, the nation's support for physics is also motivated by strategic national goals, by the pride of world scientific leadership, by societal impact through symbiosis with other natural sciences, and through the stimulus of advanced technology provided by applications of physics. This Physics Survey volume looks outward from physics to report its profound impact on society and the economy through interactions at the interfaces with other natural sciences and through applications of physics to technology, medicine, and national defense

  2. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  3. Physics through the 1990s: scientific interfaces and technological applications

    International Nuclear Information System (INIS)

    1986-01-01

    The volume examines the scientific interfaces and technological applications of physics. Twelve areas are dealt with: biological physics--biophysics, the brain, and theoretical biology; the physics-chemistry interface--instrumentation, surfaces, neutron and synchrotron radiation, polymers, organic electronic materials; materials science; geophysics--tectonics, the atmosphere and oceans, planets, drilling and seismic exploration, and remote sensing; computational physics--complex systems and applications in basic research; mathematics--field theory and chaos; microelectronics--integrated circuits, miniaturization, future trends; optical information technologies--fiber optics and photonics; instrumentation; physics applications to energy needs and the environment; national security--devices, weapons, and arms control; medical physics--radiology, ultrasonics, NMR, and photonics. An executive summary and many chapters contain recommendations regarding funding, education, industry participation, small-group university research and large facility programs, government agency programs, and computer database needs

  4. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    International Nuclear Information System (INIS)

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  5. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Energy Technology Data Exchange (ETDEWEB)

    Kneringer, G.; Roedhammer, P.; Wildner, H. [eds.]

    2001-07-01

    The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. There the ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy is documented in more than 2000 contributions covering some 30000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high temperature applications. (author)

  6. High-Performance Energy Applications and Systems

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton [Univ. of Wisconsin, Madison, WI (United States)

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  7. Visualization of scientific data for high energy physics: PAW, a general-purpose portable software tool for data analysis and presentation

    International Nuclear Information System (INIS)

    Brun, R.; Couet, O.; Vandoni, C.E.; Zanarini, P.

    1990-01-01

    Visualization of scientific data, although a fashionable term in the world of computer graphics, is not a new invention; it is hundreds of years old. With the advent of computer graphics, the visualization of scientific data has now become a well understood and widely used technology, with hundreds of applications in very different fields, ranging from media applications to real scientific ones. In the present paper, we discuss the design concepts of visualization-of-scientific-data systems, in particular in the specific field of High Energy Physics. During the last twenty years, CERN has played a leading role as the focus for development of packages and software libraries to solve problems related to High Energy Physics (HEP). The results of the integration of resources from many different laboratories can be expressed in several million lines of code written at CERN during this period of time, used at CERN and distributed to collaborating laboratories. Nowadays, this role of software developer is considered very important by the entire HEP community. In this paper a large software package, in which man-machine interaction and graphics play a key role (PAW, Physics Analysis Workstation), is described. PAW is essentially an interactive system which includes many different software tools, strongly oriented towards data analysis and data presentation. Some of these tools have been available in different forms and with different human interfaces for several years. 6 figs

  8. Perspectives for high-performance permanent magnets: applications, coercivity, and new materials

    Science.gov (United States)

    Hirosawa, Satoshi; Nishino, Masamichi; Miyashita, Seiji

    2017-03-01

    High-performance permanent magnets are indispensable in the production of high-efficiency motors and generators and ultimately for sustaining the green earth. The central issue of modern permanent magnetism is to realize high coercivity near and above room temperature in marginally hard magnetic materials, without relying upon critical elements such as heavy rare earths, by means of nanostructure engineering. Recent investigations based on advanced nanostructure analysis and large-scale first principles calculations have led to significant paradigm shifts in the understanding of the coercivity mechanism in Nd-Fe-B permanent magnets, including the discovery of the ferromagnetism of the thin (2 nm) intergranular phase surrounding the Nd2Fe14B grains, the occurrence of negative (in-plane) magnetocrystalline anisotropy of Nd ions and some Fe atoms at the interface which degrades coercivity, and visualization of the stochastic behavior of magnetization in the magnetization reversal process at high temperatures. A major change may also occur in motor topologies, which are currently dominated by the flux-weakening interior permanent magnet motor type, toward other types such as variable-flux permanent magnet motors in some applications, opening up a niche for new permanent magnet materials. Keynote talk at 8th International Workshop on Advanced Materials Science and Nanotechnology (IWAMSN2016), 8-12 November 2016, Ha Long City, Vietnam.

  9. Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes

    Science.gov (United States)

    Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James

    2017-01-01

    A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
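
    The linearity claim above lends itself to a simple check: average dark-subtracted raw frames at several exposure settings and fit a straight line to mean signal versus exposure. The sketch below illustrates that kind of fit in Python with NumPy; the exposure times and signal values are hypothetical stand-ins, not measurements from the paper.

    ```python
    import numpy as np

    # Hypothetical illustration of a linearity check like the one described:
    # mean dark-subtracted raw signal (DN) measured at several exposure times.
    exposure_ms = np.array([5.0, 10.0, 20.0, 40.0, 80.0])          # assumed exposure settings
    mean_signal = np.array([102.0, 205.0, 409.0, 815.0, 1630.0])   # assumed dark-corrected DN

    # Least-squares linear fit: signal = slope * exposure + offset
    slope, offset = np.polyfit(exposure_ms, mean_signal, 1)

    # Coefficient of determination as a simple linearity figure of merit
    predicted = slope * exposure_ms + offset
    ss_res = np.sum((mean_signal - predicted) ** 2)
    ss_tot = np.sum((mean_signal - mean_signal.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot

    print(f"slope = {slope:.2f} DN/ms, offset = {offset:.2f} DN, R^2 = {r_squared:.5f}")
    ```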

  10. 15 CFR 301.3 - Application for duty-free entry of scientific instruments.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Application for duty-free entry of scientific instruments. 301.3 Section 301.3 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE MISCELLANEOUS...

  11. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  12. Engineering and Scientific Applications: Using MatLab(Registered Trademark) for Data Processing and Visualization

    Science.gov (United States)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

    MatLab(TradeMark) (MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. While MatLab can serve as a glorified calculator or an interpreted programming language for purely numerical calculations, its real strength is in matrix manipulations. Computer algebra functionality is available within the MatLab environment through the "symbolic" toolbox; this feature is similar to computer algebra programs such as Maple or Mathematica, which manipulate mathematical equations using symbolic operations. As an interpreted programming language (command interface), MatLab is similar to well-known programming languages such as C/C++ and supports data structures and cell arrays as well as class definitions for object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher-level programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods that ensure sound solutions are incorporated in the design and analysis of data processing and visualization can benefit engineers and scientists in gaining wider insight into the actual implementation of their experiments. This presentation focuses on the data processing and visualization aspects of engineering and scientific applications. Specifically, it discusses methods and techniques for intermediate-level data processing covering engineering and scientific problems. MatLab programming techniques are discussed, including reading various data file formats, producing customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data for use by other software programs such as Microsoft Excel, and data presentation and visualization.
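
    The workflow described above (importing data files, organizing data in tabular form, exporting to Excel-readable formats, and producing publication-quality plots) is easiest to see in a few lines of code. Since the presentation itself is MATLAB-based and its code is not reproduced here, the sketch below shows an analogous workflow in Python with pandas and matplotlib purely as an illustration; the file name and column names are hypothetical.

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical sensor log: a CSV with 'time_s' and 'temperature_c' columns.
    df = pd.read_csv("sensor_log.csv")

    # Organize the data in tabular form: summary statistics per 10-second bin.
    df["time_bin"] = (df["time_s"] // 10) * 10
    summary = df.groupby("time_bin")["temperature_c"].agg(["mean", "std", "count"])

    # Export to a format other programs (e.g., Microsoft Excel) can read.
    summary.to_csv("summary.csv")

    # Publication-quality plot of the binned means with error bars.
    fig, ax = plt.subplots(figsize=(5, 3), dpi=150)
    ax.errorbar(summary.index, summary["mean"], yerr=summary["std"], fmt="o-")
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Temperature (°C)")
    fig.tight_layout()
    fig.savefig("temperature.png")
    ```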

  13. Python in the NERSC Exascale Science Applications Program for Data

    Energy Technology Data Exchange (ETDEWEB)

    Ronaghi, Zahra; Thomas, Rollin; Deslippe, Jack; Bailey, Stephen; Gursoy, Doga; Kisner, Theodore; Keskitalo, Reijo; Borrill, Julian

    2017-11-12

    We describe a new effort at the National Energy Research Scientific Computing Center (NERSC) in performance analysis and optimization of scientific Python applications targeting the Intel Xeon Phi (Knights Landing, KNL) many-core architecture. The Python-centered work outlined here is part of a larger effort called the NERSC Exascale Science Applications Program (NESAP) for Data. NESAP for Data focuses on applications that process and analyze high-volume, high-velocity data sets from experimental/observational science (EOS) facilities supported by the US Department of Energy Office of Science. We present three case study applications from NESAP for Data that use Python. These codes vary in terms of “Python purity” from applications developed in pure Python to ones that use Python mainly as a convenience layer for scientists without expertise in lower-level programming languages like C, C++ or Fortran. The science case, requirements, constraints, algorithms, and initial performance optimizations for each code are discussed. Our goal with this paper is to contribute to the larger conversation around the role of Python in high-performance computing today and tomorrow, highlighting areas for future work and emerging best practices.
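
    A recurring first step in this kind of Python optimization work is moving hot loops out of the interpreter and into vectorized NumPy kernels. The sketch below is a generic illustration of that pattern, not code from the NESAP applications; the array sizes and timing harness are arbitrary.

    ```python
    import time
    import numpy as np

    rng = np.random.default_rng(0)
    flux = rng.random(2_000_000)
    ivar = rng.random(2_000_000)

    # Pure-Python loop: interpreter overhead dominates on many-core CPUs such as KNL.
    def weighted_sum_loop(flux, ivar):
        total = 0.0
        for f, w in zip(flux, ivar):
            total += f * w
        return total

    # Vectorized NumPy version: the same reduction runs in a compiled kernel.
    def weighted_sum_numpy(flux, ivar):
        return np.dot(flux, ivar)

    t0 = time.perf_counter(); s1 = weighted_sum_loop(flux, ivar); t1 = time.perf_counter()
    s2 = weighted_sum_numpy(flux, ivar); t2 = time.perf_counter()
    assert np.isclose(s1, s2)
    print(f"loop: {t1 - t0:.3f} s, numpy: {t2 - t1:.6f} s")
    ```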

  14. A primer on scientific programming with Python

    CERN Document Server

    Langtangen, Hans Petter

    2014-01-01

    The book serves as a first introduction to computer programming of scientific applications, using the high-level Python language. The exposition is example and problem-oriented, where the applications are taken from mathematics, numerical calculus, statistics, physics, biology and finance. The book teaches "Matlab-style" and procedural programming as well as object-oriented programming. High school mathematics is a required background and it is advantageous to study classical and numerical one-variable calculus in parallel with reading this book. Besides learning how to program computers, the reader will also learn how to solve mathematical problems, arising in various branches of science and engineering, with the aid of numerical methods and programming. By blending programming, mathematics and scientific applications, the book lays a solid foundation for practicing computational science. From the reviews: Langtangen … does an excellent job of introducing programming as a set of skills in problem solving. ...
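
    As a flavor of the problem-oriented exercises the book describes (solving a mathematical problem with a numerical method in plain Python), here is a short, self-contained example; it is illustrative only and not taken from the book.

    ```python
    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule for the integral of f over [a, b] with n subintervals."""
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            total += f(a + i * h)
        return h * total

    # Approximate the integral of sin(x) on [0, pi]; the exact value is 2.
    approx = trapezoid(math.sin, 0.0, math.pi, 1000)
    print(approx, abs(approx - 2.0))
    ```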

  15. A primer on scientific programming with Python

    CERN Document Server

    Langtangen, Hans Petter

    2016-01-01

    The book serves as a first introduction to computer programming of scientific applications, using the high-level Python language. The exposition is example and problem-oriented, where the applications are taken from mathematics, numerical calculus, statistics, physics, biology and finance. The book teaches "Matlab-style" and procedural programming as well as object-oriented programming. High school mathematics is a required background and it is advantageous to study classical and numerical one-variable calculus in parallel with reading this book. Besides learning how to program computers, the reader will also learn how to solve mathematical problems, arising in various branches of science and engineering, with the aid of numerical methods and programming. By blending programming, mathematics and scientific applications, the book lays a solid foundation for practicing computational science. From the reviews: Langtangen … does an excellent job of introducing programming as a set of skills in problem solving. ...

  16. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    Science.gov (United States)

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  17. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    Science.gov (United States)

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
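
    In a pipeline like the one described, a Jenkins build step typically shells out to CellProfiler in headless mode. The sketch below shows one plausible way to wrap such a call from Python; the -c/-r/-p/-i/-o options are the commonly documented headless flags, but the exact invocation and all paths should be treated as assumptions to be checked against the CellProfiler version installed on the build node.

    ```python
    import subprocess
    from pathlib import Path

    def run_cellprofiler(pipeline: Path, image_dir: Path, output_dir: Path) -> None:
        """Run a CellProfiler pipeline headlessly, as a Jenkins build step might.

        The -c/-r/-p/-i/-o flags are the commonly documented headless options;
        adjust to the CellProfiler version actually installed on the build node.
        """
        output_dir.mkdir(parents=True, exist_ok=True)
        cmd = [
            "cellprofiler",
            "-c",              # run without the GUI
            "-r",              # run the pipeline immediately
            "-p", str(pipeline),
            "-i", str(image_dir),
            "-o", str(output_dir),
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # Hypothetical paths supplied by the Jenkins job's workspace.
        run_cellprofiler(Path("pipelines/nuclei_count.cppipe"),
                         Path("images/plate_001"),
                         Path("results/plate_001"))
    ```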

  18. MaMR: High-performance MapReduce programming model for material cloud applications

    Science.gov (United States)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With increasing data sizes in materials science, existing programming models no longer satisfy application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data sets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined MaMR, a programming model for material cloud applications that supports multiple different Map and Reduce functions running concurrently, based on a hybrid shared-memory BSP model. An optimized data sharing strategy to supply shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework deliver effective performance improvements compared to previous work.
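
    The abstract stays at a high level; as a plain illustration of the ideas it names (several map/reduce computations over related data sets followed by a merge step that combines their outputs), a toy single-process sketch in Python follows. It is not the MaMR framework, and the material data are invented.

    ```python
    from collections import defaultdict
    from typing import Callable, Iterable

    def map_reduce(records: Iterable, mapper: Callable, reducer: Callable) -> dict:
        """Toy single-process MapReduce: group mapper output by key, then reduce per key."""
        groups = defaultdict(list)
        for record in records:
            for key, value in mapper(record):
                groups[key].append(value)
        return {key: reducer(values) for key, values in groups.items()}

    # Two related (hypothetical) material datasets processed by separate map/reduce pairs.
    hardness = [("steel", 150), ("steel", 160), ("al", 40)]
    density = [("steel", 7.85), ("al", 2.70), ("al", 2.72)]

    mean_hardness = map_reduce(hardness, lambda r: [(r[0], r[1])], lambda v: sum(v) / len(v))
    mean_density = map_reduce(density, lambda r: [(r[0], r[1])], lambda v: sum(v) / len(v))

    # "Merge" phase: combine the two reduce outputs into one record per material.
    merged = {m: {"hardness": mean_hardness.get(m), "density": mean_density.get(m)}
              for m in set(mean_hardness) | set(mean_density)}
    print(merged)
    ```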

  19. Proceeding on the Scientific Meeting and Presentation on Accelerator Technology and Its Applications

    International Nuclear Information System (INIS)

    Susilo Widodo; Darsono; Slamet Santosa; Sudjatmoko; Tjipto Sujitno; Pramudita Anggraita; Wahini Nurhayati

    2015-11-01

    The scientific meeting and presentation on accelerator technology and its applications was held by PSTA BATAN on 30 November 2015. The meeting aims to promote accelerator technology and its applications to accelerator scientists, academics, researchers and technology users, and to present accelerator-based research conducted by researchers in and outside BATAN. This proceeding contains 20 papers on physics and nuclear reactors. (PPIKSN)

  20. Application of BIM technology in green scientific research office building

    Science.gov (United States)

    Ni, Xin; Sun, Jianhua; Wang, Bo

    2017-05-01

    BIM technology, as a kind of information technology, has gradually been applied in the domestic building industry along with the advancement of building industrialization. Based on a soundly constructed BIM model and a BIM technology platform, collaborative design tools can effectively improve design efficiency and design quality. The scientific research office building project of Vanda Northwest Engineering Design and Research Institute Co., Ltd. applied BIM technology in light of the actual engineering conditions: a building energy model (BEM) was derived from the BIM model combined with related information, the application of BIM technology in the construction management stage was explored, and the direct experience and achievements gained in the architectural design phase were summarized.

  1. The present status of scientific applications of nuclear explosions

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, G A; Diven, B C [Los Alamos Scientific Laboratory, University of California, Los Alamos, NM (United States)

    1970-05-15

    This is the fourth in a series of symposia which started in 1957 at Livermore with the purpose of examining the peaceful uses of nuclear explosives. Although principal emphasis has been placed on technological applications, the discussions have, from the outset, included the fascinating question of scientific uses. Of the possible scientific applications which were mentioned at the 1957 meeting, the proposals which attracted most attention involved uses of nuclear explosions for research in seismology. It is interesting to note that since then a very large and stimulating body of data in the field of seismology has been collected from nuclear tests. Ideas for scientific applications of nuclear explosions go back considerably further than 1957. During the war days Otto Frisch at Los Alamos suggested that a fission bomb would provide an excellent source of fast neutrons which could be led down a vacuum pipe and used for experiments in a relatively unscattered state. This idea, reinvented, modified, and elaborated upon in the ensuing twenty-five years, provides the basis for much of the research discussed in this morning's program. In 1952 a somewhat different property of nuclear explosions, their ability to produce intense neutron exposures on internal targets and to synthesize large quantities of multiple neutron capture products, was dramatically brought to our attention by analysis of debris from the first large thermonuclear explosion (Mike), in which the elements einsteinium and fermium were observed for the first time. The reports of the next two Plowshare symposia in 1959 and 1964 help record the fascinating development of the scientific uses of neutrons in nuclear explosions. Starting with two 'wheel' experiments in 1958 to measure symmetry of fission in 235-U resonances, the use of external beams of energy-resolved neutrons was expanded on the 'Gnome' experiment in 1961 to include the measurement of neutron capture excitation functions for 238-U, 232

  2. Graphene/CuS/ZnO hybrid nanocomposites for high performance photocatalytic applications

    International Nuclear Information System (INIS)

    Varghese, Jini; Varghese, K.T.

    2015-01-01

    We herein report a novel, high performance ternary nanocomposite composed of Graphene doped with nano Copper Sulphide and Zinc Oxide nanotubes (GCZ) for photodegradation of organic pollutants. Investigations were made to estimate and compare the Methyl Orange (MO) dye degradation using GCZ, synthesized pristine Graphene (Gr) and a Graphene–ZnO hybrid nanocomposite (GZ) under UV light irradiation. The synthesis of the nanocomposites involves simple ultra-sonication and mixing methods. The nanocomposites were characterized using transmission electron microscopy (TEM), high resolution transmission electron microscopy (HR-TEM), X-ray diffraction (XRD), Raman spectroscopy, UV–vis absorption spectroscopy and the Brunauer–Emmett–Teller (BET) surface area method. The as-synthesized GCZ shows better surface area, porosity and band gap energy than the as-synthesized Gr and GZ. The photocatalytic degradation of methyl orange dye follows the order GCZ > GZ > Gr, owing to the stronger adsorbability, larger number of photo-induced electrons and strongest inhibition of charge carrier recombination in GCZ. The kinetic investigation demonstrates that dye degradation follows a pseudo-first-order kinetic model with rate constants of 0.1322, 0.049 and 0.0109 min⁻¹ for GCZ, GZ and Gr, respectively. The mechanism of dye degradation in the presence of the photocatalyst is also discussed. This study confirms that GCZ is a promising material for high performance catalytic applications, especially in dye wastewater purification. - Highlights: • Graphene–CuS–ZnO hybrid composites show better surface area, porosity and adsorbability. • CuS–ZnO hybrid nanostructure highly enhanced the photocatalytic activity of Graphene. • Graphene–CuS–ZnO hybrid composites show superior photocatalytic efficiency, rate constant and quantum yield.
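
    The quoted rate constants translate directly into degradation curves through the pseudo-first-order relation C(t) = C0·exp(-kt), i.e. ln(C0/C) = kt. The short sketch below evaluates the fraction of dye removed and the half-life for each catalyst using the constants from the abstract; the chosen irradiation time is arbitrary.

    ```python
    import numpy as np

    # Pseudo-first-order model: C(t) = C0 * exp(-k t), i.e. ln(C0 / C) = k t
    rate_constants = {"GCZ": 0.1322, "GZ": 0.049, "Gr": 0.0109}  # min^-1, from the abstract

    t = 30.0  # minutes of UV irradiation (illustrative choice)
    for catalyst, k in rate_constants.items():
        remaining = np.exp(-k * t)          # fraction of methyl orange left
        half_life = np.log(2.0) / k         # time to degrade half of the dye
        print(f"{catalyst}: {100 * (1 - remaining):5.1f}% degraded after {t:.0f} min, "
              f"t_1/2 = {half_life:5.1f} min")
    ```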

  3. Graphene/CuS/ZnO hybrid nanocomposites for high performance photocatalytic applications

    Energy Technology Data Exchange (ETDEWEB)

    Varghese, Jini, E-mail: jini.nano@gmail.com; Varghese, K.T., E-mail: ktvscs@gmail.com

    2015-11-01

    We herein report a novel, high performance ternary nanocomposite composed of Graphene doped with nano Copper Sulphide and Zinc Oxide nanotubes (GCZ) for photodegradation of organic pollutants. Investigations were made to estimate and compare the Methyl Orange (MO) dye degradation using GCZ, synthesized pristine Graphene (Gr) and a Graphene–ZnO hybrid nanocomposite (GZ) under UV light irradiation. The synthesis of the nanocomposites involves simple ultra-sonication and mixing methods. The nanocomposites were characterized using transmission electron microscopy (TEM), high resolution transmission electron microscopy (HR-TEM), X-ray diffraction (XRD), Raman spectroscopy, UV–vis absorption spectroscopy and the Brunauer–Emmett–Teller (BET) surface area method. The as-synthesized GCZ shows better surface area, porosity and band gap energy than the as-synthesized Gr and GZ. The photocatalytic degradation of methyl orange dye follows the order GCZ > GZ > Gr, owing to the stronger adsorbability, larger number of photo-induced electrons and strongest inhibition of charge carrier recombination in GCZ. The kinetic investigation demonstrates that dye degradation follows a pseudo-first-order kinetic model with rate constants of 0.1322, 0.049 and 0.0109 min⁻¹ for GCZ, GZ and Gr, respectively. The mechanism of dye degradation in the presence of the photocatalyst is also discussed. This study confirms that GCZ is a promising material for high performance catalytic applications, especially in dye wastewater purification. - Highlights: • Graphene–CuS–ZnO hybrid composites show better surface area, porosity and adsorbability. • CuS–ZnO hybrid nanostructure highly enhanced the photocatalytic activity of Graphene. • Graphene–CuS–ZnO hybrid composites show superior photocatalytic efficiency, rate constant and quantum yield.

  4. Scientific and technical guidance for the preparation and presentation of a health claim application (Revision 2)

    DEFF Research Database (Denmark)

    Sjödin, Anders Mikael

    2017-01-01

    EFSA asked the Panel on Dietetic Products, Nutrition and Allergies (NDA) to update the scientific and technical guidance for the preparation and presentation of an application for authorisation of a health claim published in 2011. Since then, the NDA Panel has gained considerable experience...... developments in this area. This guidance document presents a common format for the organisation of information for the preparation of a well-structured application for authorisation of health claims which fall under Articles 13(5), 14 and 19 of Regulation (EC) No 1924/2006. This guidance outlines...... the information and scientific data which must be included in the application, the hierarchy of different types of data and study designs, and the key issues which should be addressed in the application to substantiate the health claim....

  5. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  6. Proceeding of the Scientific Meeting and Presentation on Accelerator Technology and its Application

    International Nuclear Information System (INIS)

    Sudjatmoko; Anggraita, P.; Darsono; Sudiyanto; Kusminarto; Karyono

    1999-07-01

    The proceeding contains papers presented at the Scientific Meeting and Presentation on Accelerator Technology and Its Application, held in Yogyakarta, 16 January 1996. This proceeding contains papers on accelerator technology, especially electron beam machines. There are 11 papers indexed individually. (ID)

  7. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  8. Key scientific challenges in geological disposal of high level radioactive waste

    International Nuclear Information System (INIS)

    Wang Ju

    2007-01-01

    The geological disposal of high-level radioactive waste is a challenging task facing the scientific and technical world. This paper introduces the latest progress of high-level radioactive waste disposal programs in the world, and discusses the following key scientific challenges: (1) precise prediction of the evolution of a repository site; (2) characteristics of the deep geological environment; (3) behaviour of deep rock mass, groundwater and engineering material under coupled conditions (intermediate to high temperature, geostress, hydraulic, chemical, biological and radiation processes, etc.); (4) geochemical behaviour of transuranic radionuclides at low concentration and their migration with groundwater; and (5) safety assessment of the disposal system. Several large-scale research projects and several hot topics related to high-level waste disposal are also introduced. (authors)

  9. Performance evaluation of a high power DC-DC boost converter for PV applications using SiC power devices

    Science.gov (United States)

    Almasoudi, Fahad M.; Alatawi, Khaled S.; Matin, Mohammad

    2016-09-01

    The development of wide band gap (WBG) power devices has attracted many commercial companies to bring them to market because of their enormous advantages over traditional Si power devices. An example of a WBG material is SiC, which offers a number of advantages over Si. For example, SiC devices can block higher voltages, reduce switching and conduction losses, and support higher switching frequencies. Consequently, SiC power devices have become an affordable choice for high-frequency, high-power applications. The goal of this paper is to study the performance of a 4.5 kW, 200 kHz, 600 V DC-DC boost converter operating in continuous conduction mode (CCM) for PV applications. The switching behavior and turn-on and turn-off losses of different switching power devices such as SiC MOSFETs, SiC normally-on JFETs and Si MOSFETs are investigated and analyzed. Moreover, a detailed comparison is provided to show the overall efficiency of the DC-DC boost converter with the different switching power devices. It is found that the efficiency with SiC power switching devices is higher than with Si-based devices due to lower switching and conduction losses when operating at high frequencies. According to the results, SiC switching power devices dominate conventional Si power devices in terms of low losses, high efficiency and high power density. Accordingly, SiC power switching devices are more appropriate for PV applications, where a smaller, highly efficient and cost-effective converter is required.
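
    For orientation, the standard textbook relations for a CCM boost stage can be evaluated at the quoted ratings (4.5 kW, 600 V, 200 kHz). The input voltage and ripple target in the sketch below are assumptions, since the abstract does not state them; the formulas are the usual ideal-converter approximations, not the authors' design values.

    ```python
    # Textbook CCM boost-converter sizing at the ratings quoted in the abstract.
    # Input voltage and ripple target are assumptions for illustration only.
    P_out = 4500.0        # W   (from the abstract)
    V_out = 600.0         # V   (from the abstract)
    f_sw = 200e3          # Hz  (from the abstract)
    V_in = 300.0          # V   (assumed PV-string voltage)
    ripple_pct = 0.01     # assumed 1% output-voltage ripple target

    R_load = V_out**2 / P_out                           # equivalent load resistance
    D = 1.0 - V_in / V_out                              # ideal CCM duty cycle: Vout = Vin / (1 - D)
    L_min = D * (1.0 - D)**2 * R_load / (2.0 * f_sw)    # inductance at the CCM boundary
    C_min = D / (R_load * f_sw * ripple_pct)            # capacitance for the ripple target

    print(f"D = {D:.2f}, R = {R_load:.1f} ohm, "
          f"L_min = {L_min * 1e6:.1f} uH, C_min = {C_min * 1e6:.2f} uF")
    ```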

  10. Exploring the Performance of Spark for a Scientific Use Case

    Energy Technology Data Exchange (ETDEWEB)

    Sehrish, Saba [Fermilab; Kowalkowski, Jim [Fermilab; Paterno, Marc [Fermilab

    2016-01-01

    We present an evaluation of the performance of a Spark implementation of a classification algorithm in the domain of High Energy Physics (HEP). Spark is a general engine for in-memory, large-scale data processing, and is designed for applications where similar repeated analysis is performed on the same large data sets. Classification problems are among the most common and critical data processing tasks across many domains. Many of these data processing tasks are both computation- and data-intensive, involving complex numerical computations on extremely large data sets. We evaluated the performance of the Spark implementation on Cori, a NERSC resource, and compared the results to an untuned MPI implementation of the same algorithm. While the Spark implementation scaled well, it is not competitive in speed with our MPI implementation, even when using significantly greater computational resources.
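
    As a minimal illustration of the Spark programming style referred to above (a data-parallel map over events followed by a reduction), here is a toy PySpark nearest-centroid classifier. The data and centroids are invented and the code is unrelated to the Fermilab implementation.

    ```python
    import math
    from pyspark import SparkContext

    sc = SparkContext(appName="toy-classification")

    # Hypothetical events: (feature1, feature2, true_label)
    events = sc.parallelize([(0.1, 0.2, "bkg"), (0.9, 1.1, "sig"),
                             (1.0, 0.8, "sig"), (0.2, 0.1, "bkg")] * 1000)

    # Fixed class centroids (would normally be fit on a training sample).
    centroids = {"sig": (1.0, 1.0), "bkg": (0.0, 0.0)}

    def classify(event):
        x, y, label = event
        pred = min(centroids,
                   key=lambda c: math.hypot(x - centroids[c][0], y - centroids[c][1]))
        return (pred == label, 1)

    # Count correct vs. incorrect predictions across the whole data set.
    counts = events.map(classify).reduceByKey(lambda a, b: a + b).collectAsMap()
    total = sum(counts.values())
    print(f"accuracy = {counts.get(True, 0) / total:.3f}")
    sc.stop()
    ```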

  11. The use of physics practicum to train science process skills and its effect on scientific attitude of vocational high school students

    Science.gov (United States)

    Wiwin, E.; Kustijono, R.

    2018-03-01

    The purpose of this study is to describe the use of a physics practicum to train the science process skills of vocational high school students and its effect on their scientific attitudes. The components of science process skills are observing, classifying, inferring, predicting, and communicating. The targeted scientific attitudes are curiosity, honesty, collaboration, responsibility, and open-mindedness. This is an experimental study with a one-shot case study design. The subjects are 30 Multimedia Program students of SMK Negeri 12 Surabaya. The data collection techniques used are observation and performance tests. The scores for science process skills and scientific attitudes are taken from observational and performance instruments. The data analyses used are descriptive statistics and correlation. The results show that: 1) the physics practicum can train science process skills and scientific attitudes in the good category; 2) the relationship between science process skills and the students' scientific attitudes is in the good category; 3) student responses to the learning process using the practicum are in the good category. The study concludes that the physics practicum can train science process skills and has a significant effect on the scientific attitudes of vocational high school students.

  12. EPISTEMOLOGICAL PERCEPTION AND SCIENTIFIC LITERACY IN LEVEL HIGH SCHOOL TEACHERS

    Directory of Open Access Journals (Sweden)

    Ramiro Álvarez-Valenzuela

    2016-07-01

    Research in science education has helped to identify difficulties that hinder the teaching-learning process. These problems include the conceptual content of school subjects, the influence of students' prior knowledge, and the fact that teachers have not been trained epistemologically during their university education. This research presents the epistemological conceptions of a sample of 114 high school teachers in the science area, concerning their ideas about the role of observation in the development of scientific knowledge and the work of scientists in the process of knowledge generation. It also includes the level of scientific literacy inferred from the literature used as a source of information in teaching. The results also identify the level of scientific literacy of students and its influence on learning.

  13. 78 FR 13864 - Atlantic Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering...

    Science.gov (United States)

    2013-03-01

    ... Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering Permits; Letters... Permits (EFPs), Scientific Research Permits (SRPs), Display Permits, Letters of Acknowledgment (LOAs), and... scientific research, the acquisition of information and data, the enhancement of safety at sea, the purpose...

  14. 77 FR 69593 - Atlantic Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering...

    Science.gov (United States)

    2012-11-20

    ... Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering Permits; Letters... intent to issue Exempted Fishing Permits (EFPs), Scientific Research Permits (SRPs), Display Permits... public display and scientific research that is exempt from regulations (e.g., fishing seasons, prohibited...

  15. 75 FR 75458 - Atlantic Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering...

    Science.gov (United States)

    2010-12-03

    ... Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering Permits; Letters... intent to issue Exempted Fishing Permits (EFPs), Scientific Research Permits (SRPs), Display Permits... of HMS for public display and scientific research that is exempt from regulations (e.g., seasons...

  16. High performance bio-integrated devices

    Science.gov (United States)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications in particular have attracted much attention with the rise of smartphones, because the coupling of such devices with smartphones enables continuous health monitoring in patients' daily lives. It is expected that high performance biomedical electronics integrated with the human body can open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  17. Educational and Scientific Applications of Climate Model Diagnostic Analyzer

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Zhang, J.; Bao, Q.

    2016-12-01

    Climate Model Diagnostic Analyzer (CMDA) is a web-based information system designed for the climate modeling and model analysis community to analyze climate data from models and observations. CMDA provides tools to diagnostically analyze climate data for model validation and improvement, and to systematically manage analysis provenance for sharing results with other investigators. CMDA utilizes cloud computing resources, multi-threading computing, machine-learning algorithms, web service technologies, and provenance-supporting technologies to address technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. As CMDA infrastructure and technology have matured, we have developed educational and scientific applications of CMDA. Educationally, CMDA has supported the summer school of the JPL Center for Climate Sciences for three years since 2014. In the summer school, the students work on group research projects where CMDA provides datasets and analysis tools. Each student is assigned to a virtual machine with CMDA installed in Amazon Web Services. A provenance management system for CMDA was developed to keep track of students' usage of CMDA, and to recommend datasets and analysis tools for their research topic. The provenance system also allows students to revisit their analysis results and share them with their group. Scientifically, we have developed several science use cases of CMDA covering various topics, datasets, and analysis types. Each use case developed is described and listed in terms of a scientific goal, datasets used, the analysis tools used, scientific results discovered from the use case, an analysis result such as output plots and data files, and a link to the exact analysis service call with all the input arguments filled. For example, one science use case is the evaluation of the NCAR CAM5 model with MODIS total cloud fraction. The analysis service used is Difference Plot Service of

  18. Capacitor performance limitations in high power converter applications

    DEFF Research Database (Denmark)

    El-Khatib, Walid Ziad; Holbøll, Joachim; Rasmussen, Tonny Wederberg

    2013-01-01

    High voltage, low inductance capacitors are used in converters as HVDC links, snubber circuits and sub-module (MMC) capacitances. They facilitate the possibility of large peak currents in high-frequency or transient voltage applications. On the other hand, using capacitors with larger equivalent...... series inductances includes the risk of transient overvoltages, with a negative effect on the lifetime and reliability of the capacitors. The allowable limits of such current and voltage peaks are decided by the ability of the converter components, including the capacitors, to withstand them over...... the expected lifetime. In this paper, results are described from investigations of the electrical environment of these capacitors, including all the conditions they would be exposed to, thereby trying to find the tradeoffs needed to select a suitable capacitor. Different types of capacitors with the same voltage...

  19. Synchrotron Applications of High Magnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-07-01

    This workshop aims at discussing the scientific potential of X-ray diffraction and spectroscopy in magnetic fields above 30 T. Pulsed magnetic fields in the range of 30 to 40 T have recently become available at SPring-8 and the ESRF (European synchrotron radiation facility). This document gathers the transparencies of the 6 following presentations: 1) pulsed magnetic fields at ESRF: first results; 2) X-ray spectroscopy and diffraction experiments by using mini-coils: applications to valence state transition and frustrated magnet; 3) R₅(SiₓGe₁₋ₓ)₄: an ideal system to be studied in X-ray under high magnetic field?; 4) high field studies at the Advanced Photon Source: present status and future plans; 5) synchrotron X-ray diffraction studies under extreme conditions; and 6) projects for pulsed and steady high magnetic fields at the ESRF.

  20. Web-based visualization of very large scientific astronomy imagery

    Science.gov (United States)

    Bertin, E.; Pillay, R.; Marmo, C.

    2015-04-01

    Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating point data at terabyte scales, with the ability to precisely adjust image settings in real-time. The proposed clients are light-weight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch and mouse-based devices. We put the system to the test, assess its performance, and show that a single server can comfortably handle more than a hundred simultaneous users accessing full-precision 32-bit astronomy data.

  1. Potential applications of high temperature helium

    International Nuclear Information System (INIS)

    Schleicher, R.W. Jr.; Kennedy, A.J.

    1992-09-01

    This paper discusses the DOE MHTGR-SC program's recent activity to improve the economics of the MHTGR without sacrificing safety performance, and two potential applications of high temperature helium: the MHTGR gas turbine plant and a process heat application for methanol production from coal.

  2. Model My Watershed: A high-performance cloud application for public engagement, watershed modeling and conservation decision support

    Science.gov (United States)

    Aufdenkampe, A. K.; Tarboton, D. G.; Horsburgh, J. S.; Mayorga, E.; McFarland, M.; Robbins, A.; Haag, S.; Shokoufandeh, A.; Evans, B. M.; Arscott, D. B.

    2017-12-01

    The Model My Watershed Web app (https://app.wikiwatershed.org/) and the BiG-CZ Data Portal (http://portal.bigcz.org/) are web applications that share a common codebase and a common goal: to deliver high-performance discovery, visualization and analysis of geospatial data through an intuitive user interface in the web browser. Model My Watershed (MMW) was designed as a decision support system for watershed conservation implementation. The BiG CZ Data Portal was designed to provide context and background data for research sites. Users begin by creating an Area of Interest via an automated watershed delineation tool, a free-draw tool, selection of a predefined area such as a county or USGS Hydrological Unit (HUC), or uploading a custom polygon. Both web apps visualize and provide summary statistics of land use, soil groups, streams, climate and other geospatial information. MMW then allows users to run a watershed model to simulate different scenarios of human impacts on stormwater runoff and water quality. The BiG CZ Data Portal allows users to search for scientific and monitoring data within the Area of Interest, and also serves as a prototype for the upcoming Monitor My Watershed web app. Both systems integrate with CUAHSI cyberinfrastructure, including visualizing observational data from the CUAHSI Water Data Center and storing user data via CUAHSI HydroShare. Both systems also integrate with the new EnviroDIY Water Quality Data Portal (http://data.envirodiy.org/), a system for crowd-sourcing environmental monitoring data using open-source sensor stations (http://envirodiy.org/mayfly/) and based on the Observations Data Model v2.

  3. Design and implementation of bitmap indices for scientific data

    CERN Document Server

    Stockinger, K

    2001-01-01

    Bitmap indices are efficient multi-dimensional index data structures for handling complex ad hoc queries in read-mostly environments. They have been implemented in several commercial database systems but are only well suited for discrete attribute values, which are very common in typical business applications. However, many scientific applications operate on floating point numbers and cannot take advantage of the optimisation techniques offered by current database solutions. We thus present a novel algorithm called Generic RangeEval for processing one-sided range queries over floating point values. In addition, we present a cost model for predicting the performance of bitmap indices for high-dimensional search spaces. We verify our analytical results by a detailed experimental study, and show that the presented bitmap evaluation algorithm scales well even for high-dimensional search spaces while requiring only a fairly small index. Because of its simple arithmetic structure, the cost model could easily be int...
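
    The core idea of a bitmap index over binned floating-point values, and of answering a one-sided range query by OR-ing the bitmaps of all qualifying bins, can be sketched in a few lines. The Python toy below illustrates that idea only; it is not the Generic RangeEval algorithm from the paper, and it omits the candidate check on the bin that straddles the query threshold.

    ```python
    import numpy as np

    def build_bitmap_index(values: np.ndarray, bin_edges: np.ndarray) -> np.ndarray:
        """Equality-encoded bitmap index: one boolean bitmap (row) per value bin."""
        bins = np.digitize(values, bin_edges)           # bin id of every record
        n_bins = len(bin_edges) + 1
        return np.stack([(bins == b) for b in range(n_bins)])

    def one_sided_query(bitmaps: np.ndarray, bin_edges: np.ndarray, threshold: float) -> np.ndarray:
        """Candidate rows with value < threshold: OR the bitmaps of all bins below it.

        Records in the bin that straddles the threshold would still need a check
        against the raw data (the "candidate check"), which is omitted here.
        """
        last_bin = int(np.digitize([threshold], bin_edges)[0])
        return np.any(bitmaps[: last_bin + 1], axis=0)

    values = np.array([0.12, 3.4, 7.9, 2.2, 5.5, 9.1])
    edges = np.array([2.0, 4.0, 6.0, 8.0])               # 5 bins over the value range
    index = build_bitmap_index(values, edges)
    print(one_sided_query(index, edges, threshold=5.0))  # rows possibly satisfying value < 5.0
    ```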

  4. Back-End of the web application for the scientific journal Studia Kinanthropologica

    OpenAIRE

    ŠIMÁK, Lubomír

    2017-01-01

    The bachelor thesis deals with the creation of the server part of the web application of the peer-reviewed scientific journal Studia Kinanthropologica, which will support the review of articles prior to printing. The thesis describes how the system was developed and how the problems that arose during the work were solved.

  5. High-Performance Linear Algebra Processor using FPGA

    National Research Council Canada - National Science Library

    Johnson, J

    2004-01-01

    With recent advances in FPGA (Field Programmable Gate Array) technology it is now feasible to use these devices to build special purpose processors for floating point intensive applications that arise in scientific computing...

  6. Development and applications of high energy industrial computed tomography in China

    International Nuclear Information System (INIS)

    Xiao, YongShun; Chen, Zhiqiang

    2016-01-01

    In recent years, the rapid development of China's high-end equipment manufacturing industry, in areas such as high-speed railways, aircraft and carrier rockets, has brought growing requirements for high quality assurance of products. Accelerator-based high-energy X-ray industrial CT has the advantages of strong penetrating power, high-sensitivity defect detection and quantitative measurement with image visualization, and can meet the inspection demands of large, complicated structures. This paper introduces the current research and development status of high-energy industrial CT systems in China. Research achievements by Tsinghua University and the Granpect company are discussed, including ICT system design, development of high-power LINAC X-ray sources and high detection efficiency detectors, and research on fast and accurate reconstruction algorithms. This paper also introduces NDT applications of the dozens of industrial CT systems made by Granpect in China, including nondestructive testing of welded structures, assembly quality inspection, reverse engineering, scientific research and other applications. Finally, the future development and application of high-energy industrial CT are outlined.

  7. Impact of research investment on scientific productivity of junior researchers.

    Science.gov (United States)

    Farrokhyar, Forough; Bianco, Daniela; Dao, Dyda; Ghert, Michelle; Andruszkiewicz, Nicole; Sussman, Jonathan; Ginsberg, Jeffrey S

    2016-12-01

    There is a demand for providing evidence on the effectiveness of research investments on the promotion of novice researchers' scientific productivity and production of research with new initiatives and innovations. We used a mixed method approach to evaluate the funding effect of the New Investigator Fund (NIF) by comparing scientific productivity between award recipients and non-recipients. We reviewed NIF grant applications submitted from 2004 to 2013. Scientific productivity was assessed by confirming the publication of the NIF-submitted application. Online databases were searched, independently and in duplicate, to locate the publications. Applicants' perceptions and experiences were collected through a short survey and categorized into specified themes. Multivariable logistic regression was performed. Odds ratios (OR) with 95 % confidence intervals (CI) are reported. Of 296 applicants, 163 (55 %) were awarded. Gender, affiliation, and field of expertise did not affect funding decisions. More physicians with graduate education (32.0 %) and applicants with a doctorate degree (21.5 %) were awarded than applicants without postgraduate education (9.8 %). Basic science research (28.8 %), randomized controlled trials (24.5 %), and feasibility/pilot trials (13.3 %) were awarded more than observational designs (p   scientific productivity and professional growth of novice investigators and production of research with new initiatives and innovations. Further efforts are recommended to enhance the support of small grant funding programs.

  8. 75 FR 51439 - Proposed Information Collection; Comment Request; Application and Reports for Scientific Research...

    Science.gov (United States)

    2010-08-20

    ... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Proposed Information Collection; Comment Request; Application and Reports for Scientific Research and Enhancement Permits Under the Endangered Species Act AGENCY: National Oceanic and Atmospheric Administration (NOAA), Commerce...

  9. Dimensional characteristics of welds performed on AISI 1045 steel by means of the application of high power diode laser

    International Nuclear Information System (INIS)

    Sanchez-Castillo, A.; Pou, J.; Lusquinos, F.; Quintero, F.; Soto, R.; Boutinguiza, M.; Saavedra, M.; Perez-Amor, M.

    2004-01-01

    The High Power Diode Laser (HPDL) emits a beam of optical energy generated by diode stimulation and offers the capability of supplying power levels up to 6 kW. The objective of this research work was to study the main welding variables and their effects on the dimensional characteristics of the weld beads produced by applying this novel laser. The results obtained show that the HPDL is an energy source able to perform welds on AISI 1045 steel plates in conduction mode, without any kind of mechanical preparation, preheating or post-weld treatment, and without the application of filler metal. (Author) 16 refs

  10. Active learning-based information structure analysis of full scientific articles and two applications for biomedical literature review.

    Science.gov (United States)

    Guo, Yufan; Silins, Ilona; Stenius, Ulla; Korhonen, Anna

    2013-06-01

    Techniques that are capable of automatically analyzing the information structure of scientific articles could be highly useful for improving information access to biomedical literature. However, most existing approaches rely on supervised machine learning (ML) and substantial labeled data that are expensive to develop and apply to different sub-fields of biomedicine. Recent research shows that minimal supervision is sufficient for fairly accurate information structure analysis of biomedical abstracts. However, is it realistic for full articles given their high linguistic and informational complexity? We introduce and release a novel corpus of 50 biomedical articles annotated according to the Argumentative Zoning (AZ) scheme, and investigate active learning with one of the most widely used ML models-Support Vector Machines (SVM)-on this corpus. Additionally, we introduce two novel applications that use AZ to support real-life literature review in biomedicine via question answering and summarization. We show that active learning with SVM trained on 500 labeled sentences (6% of the corpus) performs surprisingly well with the accuracy of 82%, just 2% lower than fully supervised learning. In our question answering task, biomedical researchers find relevant information significantly faster from AZ-annotated than unannotated articles. In the summarization task, sentences extracted from particular zones are significantly more similar to gold standard summaries than those extracted from particular sections of full articles. These results demonstrate that active learning of full articles' information structure is indeed realistic and the accuracy is high enough to support real-life literature review in biomedicine. The annotated corpus, our AZ classifier and the two novel applications are available at http://www.cl.cam.ac.uk/yg244/12bioinfo.html
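
    The training loop implied above (fit an SVM on a small labeled seed set, then repeatedly query the annotator for the sentences the model is least certain about) follows a standard uncertainty-sampling pattern. The scikit-learn sketch below illustrates that pattern on synthetic data; the features, labels and batch sizes are hypothetical and the code is not the released AZ classifier.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Hypothetical stand-in for sentence feature vectors and Argumentative-Zoning labels.
    X_pool = rng.normal(size=(2000, 20))
    y_pool = (X_pool[:, 0] + 0.5 * X_pool[:, 1] > 0).astype(int)

    labeled = list(rng.choice(len(X_pool), size=50, replace=False))   # small seed set
    unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

    for round_ in range(9):                       # 50 seed + 9 x 50 queried = 500 labeled
        clf = SVC(kernel="linear").fit(X_pool[labeled], y_pool[labeled])
        # Uncertainty sampling: smallest distance to the separating hyperplane.
        margins = np.abs(clf.decision_function(X_pool[unlabeled]))
        query = np.argsort(margins)[:50]          # least confident sentences
        newly_labeled = [unlabeled[i] for i in query]
        labeled.extend(newly_labeled)             # "ask the annotator" for these labels
        unlabeled = [i for i in unlabeled if i not in set(newly_labeled)]

    clf = SVC(kernel="linear").fit(X_pool[labeled], y_pool[labeled])
    print("labeled sentences:", len(labeled),
          "accuracy on full pool:", round(clf.score(X_pool, y_pool), 3))
    ```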

  11. Return on Scientific Investment - RoSI: a PMO dynamical index proposal for scientific projects performance evaluation and management.

    Science.gov (United States)

    Caous, Cristofer André; Machado, Birajara; Hors, Cora; Zeh, Andrea Kaufmann; Dias, Cleber Gustavo; Amaro Junior, Edson

    2012-01-01

    To propose a measure (index) of expected risk for evaluating and following up the performance of research projects, taking into account financial parameters and the structure needed for their development. A ranking of acceptable results for research projects with complex variables was used as an index to gauge project performance. To implement this method, the ulcer index was applied as the basic model to accommodate the following variables: costs, high-impact publications, fund raising, and patent registration. The proposed structured analysis, named here RoSI (Return on Scientific Investment), comprises an analysis pipeline that characterizes risk using a modeling tool in which multiple variables interact in semi-quantitative environments. The method was tested with data from three different projects in our institution (projects A, B and C). The different curves reflected the ulcer indexes and identified the project with the lowest risk (project C) with respect to development and expected results according to initial or full investment. The results showed that this model contributes significantly to risk analysis and planning, as well as to defining the necessary investments, considering contingency actions, with benefits for the different stakeholders: the investor or donor, the project manager and the researchers.
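
    The ulcer index the authors adapt is, in its standard financial form, the root-mean-square percentage drawdown of a series from its running peak. The sketch below computes that quantity for two invented project-performance series; how RoSI combines costs, publications, fund raising and patents into such a series is described only qualitatively in the abstract, so the inputs here are purely illustrative.

    ```python
    import numpy as np

    def ulcer_index(series: np.ndarray) -> float:
        """Root-mean-square percentage drawdown from the running peak of a series.

        This is the standard financial ulcer index that the RoSI proposal builds on;
        in RoSI the series would combine project variables (costs, publications,
        fund raising, patents) rather than a price.
        """
        running_peak = np.maximum.accumulate(series)
        drawdown_pct = 100.0 * (series - running_peak) / running_peak
        return float(np.sqrt(np.mean(drawdown_pct ** 2)))

    # Hypothetical monthly "project performance" scores for two projects.
    project_a = np.array([100, 104, 98, 95, 102, 110, 108, 115], dtype=float)
    project_c = np.array([100, 101, 103, 104, 106, 108, 111, 113], dtype=float)

    print(f"Ulcer index A: {ulcer_index(project_a):.2f}")   # larger -> riskier trajectory
    print(f"Ulcer index C: {ulcer_index(project_c):.2f}")   # monotone growth -> zero drawdown
    ```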

  12. A Hybrid Metaheuristic for Multi-Objective Scientific Workflow Scheduling in a Cloud Environment

    Directory of Open Access Journals (Sweden)

    Nazia Anwar

    2018-03-01

    Cloud computing has emerged as a high-performance computing environment with a large pool of abstracted, virtualized, flexible, and on-demand resources and services. Scheduling of scientific workflows in a distributed environment is a well-known NP-complete problem and therefore intractable with exact solutions. It becomes even more challenging in the cloud computing platform due to its dynamic and heterogeneous nature. The aim of this study is to optimize multi-objective scheduling of scientific workflows in a cloud computing environment based on the proposed metaheuristic-based algorithm, Hybrid Bio-inspired Metaheuristic for Multi-objective Optimization (HBMMO). The strong global exploration ability of the nature-inspired metaheuristic Symbiotic Organisms Search (SOS) is enhanced by involving an efficient list-scheduling heuristic, Predict Earliest Finish Time (PEFT), in the proposed algorithm to obtain better convergence and diversity of the approximate Pareto front in terms of reduced makespan, minimized cost, and efficient load balance of the Virtual Machines (VMs). The experiments using different scientific workflow applications highlight the effectiveness, practicality, and better performance of the proposed algorithm.

  13. Teleconference versus face-to-face scientific peer review of grant application: effects on review outcomes.

    Directory of Open Access Journals (Sweden)

    Stephen A Gallo

    Teleconferencing as a setting for scientific peer review is an attractive option for funding agencies, given the substantial environmental and cost savings. Despite this, there is a paucity of published data validating teleconference-based peer review compared to the face-to-face process. Our aim was to conduct a retrospective analysis of scientific peer review data to investigate whether review setting has an effect on review process and outcome measures. We analyzed reviewer scoring data from a research program that had recently modified the review setting from face-to-face to a teleconference format with minimal changes to the overall review procedures. This analysis included approximately 1600 applications over a 4-year period: two years of face-to-face panel meetings compared to two years of teleconference meetings. The average overall scientific merit scores, score distribution, standard deviations and reviewer inter-rater reliability statistics were measured, as well as reviewer demographics and length of time discussing applications. The data indicate that few differences are evident between face-to-face and teleconference settings with regard to average overall scientific merit score, scoring distribution, standard deviation, reviewer demographics or inter-rater reliability. However, some difference was found in the discussion time. These findings suggest that most review outcome measures are unaffected by review setting, which would support the trend of using teleconference reviews rather than face-to-face meetings. However, further studies are needed to assess any correlations among discussion time, application funding and the productivity of funded research projects.

  14. Teleconference versus Face-to-Face Scientific Peer Review of Grant Application: Effects on Review Outcomes

    Science.gov (United States)

    Gallo, Stephen A.; Carpenter, Afton S.; Glisson, Scott R.

    2013-01-01

    Teleconferencing as a setting for scientific peer review is an attractive option for funding agencies, given the substantial environmental and cost savings. Despite this, there is a paucity of published data validating teleconference-based peer review compared to the face-to-face process. Our aim was to conduct a retrospective analysis of scientific peer review data to investigate whether review setting has an effect on review process and outcome measures. We analyzed reviewer scoring data from a research program that had recently modified the review setting from face-to-face to a teleconference format with minimal changes to the overall review procedures. This analysis included approximately 1600 applications over a 4-year period: two years of face-to-face panel meetings compared to two years of teleconference meetings. The average overall scientific merit scores, score distribution, standard deviations and reviewer inter-rater reliability statistics were measured, as well as reviewer demographics and length of time discussing applications. The data indicate that few differences are evident between face-to-face and teleconference settings with regard to average overall scientific merit score, scoring distribution, standard deviation, reviewer demographics or inter-rater reliability. However, some difference was found in the discussion time. These findings suggest that most review outcome measures are unaffected by review setting, which would support the trend of using teleconference reviews rather than face-to-face meetings. However, further studies are needed to assess any correlations among discussion time, application funding and the productivity of funded research projects. PMID:23951223

  15. Research on high-performance mass storage system

    International Nuclear Information System (INIS)

    Cheng Yaodong; Wang Lu; Huang Qiulan; Zheng Wei

    2010-01-01

    With the enlargement of scientific experiments, more and more data are being produced, which brings great challenges to the storage system. Large storage capacity and high data-access performance are both important for a mass storage system. This paper first reviews several popular storage systems, including network storage systems, SAN-based sharing systems, WAN file systems, object-based parallel file systems, hierarchical storage systems and cloud storage systems. Then some key technologies are presented. Finally, this paper takes the BES storage system as an example and introduces its requirements, architecture and operation results. (authors)

  16. The sixth Nordic conference on the application of scientific methods in archaeology

    International Nuclear Information System (INIS)

    1993-01-01

    The Sixth Nordic Conference on the Application of Scientific Methods in Archaeology with 73 participants was convened in Esbjerg (Denmark), 19-23 September 1993. Isotope dating of archaeological, paleoecological and geochronological objects, neutron activation and XRF analytical methods, magnetometry, thermoluminescence etc. have been discussed. The program included excursions to archaeological sites and a poster session with 12 posters. (EG)

  17. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    Science.gov (United States)

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  18. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Vahan Simonyan

    2014-09-01

    Full Text Available The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  19. Prototyping of a highly performant and integrated piezoresistive force sensor for microscale applications

    International Nuclear Information System (INIS)

    Komati, Bilal; Agnus, Joël; Clévy, Cédric; Lutz, Philippe

    2014-01-01

    In this paper, the prototyping of a new piezoresistive microforce sensor is presented. An original design taking advantage of both the mechanical and bulk piezoresistive properties of silicon is presented, which enables the easy fabrication of a very small, large-range, high-sensitivity sensor with high integration potential. The sensor is made of two silicon strain gauges for which widespread and well-known microfabrication processes are used. The strain gauges present a high gauge factor, which gives the force sensor good sensitivity. The dimensions of this sensor are 700 μm in length, 100 μm in width and 12 μm in thickness. These dimensions make it convenient for many microscale applications, notably integration in a microgripper. The fabricated sensor is calibrated using an industrial force sensor. The design, microfabrication process and performance of the fabricated piezoresistive force sensor are innovative thanks to its resolution of 100 nN and its measurement range of 2 mN. This force sensor also presents a high signal-to-noise ratio, typically 50 dB when a 2 mN force is applied at the tip of the force sensor. (paper)
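
    As a rough illustration of how readings from such a sensor are handled, the sketch below converts a half-bridge strain-gauge voltage to force and checks a signal-to-noise ratio in decibels; the excitation voltage, gauge factor and compliance values are illustrative assumptions, not parameters from the paper.

```python
import math

def bridge_output_to_force(v_out, v_exc, gauge_factor, strain_per_newton):
    """Convert a half-bridge output voltage to force.

    v_out / v_exc = (gauge_factor / 2) * strain   (half-bridge, two active gauges)
    strain        = strain_per_newton * force
    All sensitivity values passed in are illustrative assumptions.
    """
    strain = 2.0 * v_out / (v_exc * gauge_factor)
    return strain / strain_per_newton

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio expressed in decibels."""
    return 20.0 * math.log10(signal_rms / noise_rms)

# Hypothetical check: a 50 dB SNR corresponds to a noise floor ~316x below
# the full-scale signal, consistent with a 2 mN range and sub-uN noise.
print(snr_db(2e-3, 2e-3 / 10 ** (50 / 20)))  # -> 50.0
print(bridge_output_to_force(v_out=1e-3, v_exc=5.0, gauge_factor=100.0,
                             strain_per_newton=2.0))  # force in newtons
```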

  20. Performance and portability of the SciBy virtual machine

    DEFF Research Database (Denmark)

    Andersen, Rasmus; Vinter, Brian

    2010-01-01

    The Scientific Bytecode Virtual Machine is a virtual machine designed specifically for performance, security, and portability of scientific applications deployed in a Grid environment. The performance overhead normally incurred by virtual machines is mitigated using native optimized scientific li...

  1. Scientific Programming Using Java: A Remote Sensing Example

    Science.gov (United States)

    Prados, Don; Mohamed, Mohamed A.; Johnson, Michael; Cao, Changyong; Gasser, Jerry

    1999-01-01

    This paper presents results of a project to port remote sensing code from the C programming language to Java. The advantages and disadvantages of using Java versus C as a scientific programming language in remote sensing applications are discussed. Remote sensing applications deal with voluminous data that require effective memory management, such as buffering operations, when processed. Some of these applications also implement complex computational algorithms, such as Fast Fourier Transform analysis, that are very performance intensive. Factors considered include performance, precision, complexity, rapidity of development, ease of code reuse, ease of maintenance, memory management, and platform independence. Performance results for radiometric calibration code that uses Java for the graphical user interface and C for the domain model are also presented.

  2. Assessment of microelectronics packaging for high temperature, high reliability applications

    Energy Technology Data Exchange (ETDEWEB)

    Uribe, F.

    1997-04-01

    This report details characterization and development activities in electronic packaging for high temperature applications. This project was conducted through a Department of Energy sponsored Cooperative Research and Development Agreement between Sandia National Laboratories and General Motors. Even though the target application of this collaborative effort is an automotive electronic throttle control system which would be located in the engine compartment, results of this work are directly applicable to Sandia's national security mission. The component count associated with the throttle control dictates the use of high density packaging not offered by conventional surface mount. An enabling packaging technology was selected and thermal models defined which characterized the thermal and mechanical response of the throttle control module. These models were used to optimize thick film multichip module design, characterize the thermal signatures of the electronic components inside the module, and to determine the temperature field and resulting thermal stresses under conditions that may be encountered during the operational life of the throttle control module. Because the need to use unpackaged devices limits the level of testing that can be performed either at the wafer level or as individual dice, an approach to assure a high level of reliability of the unpackaged components was formulated. Component assembly and interconnect technologies were also evaluated and characterized for high temperature applications. Electrical, mechanical and chemical characterizations of enabling die and component attach technologies were performed. Additionally, studies were conducted to assess the performance and reliability of gold and aluminum wire bonding to thick film conductor inks. Kinetic models were developed and validated to estimate wire bond reliability.

  3. Communication about scientific uncertainty in environmental nanoparticle research - a comparison of scientific literature and mass media

    Science.gov (United States)

    Heidmann, Ilona; Milde, Jutta

    2014-05-01

    Research on the fate and behavior of engineered nanoparticles in the environment is, despite their wide application, still in its early stages. 'There is a high level of scientific uncertainty in nanoparticle research' is a statement often made in the scientific community. Knowledge about these uncertainties might be of interest to other scientists, experts and laymen. But how can these uncertainties be characterized, and are they communicated within the scientific literature and the mass media? To answer these questions, the current state of scientific knowledge about scientific uncertainty was characterized using the example of environmental nanoparticle research, and the communication of these uncertainties within the scientific literature was compared with the media coverage of nanotechnologies. The scientific uncertainty within the field of the environmental fate of nanoparticles is characterized by method uncertainties, by a general lack of data concerning the fate and effects of nanoparticles and their mechanisms in the environment, and by the uncertain transferability of results to the environmental system. In the scientific literature, scientific uncertainties, their sources, and their consequences are mentioned with different foci and to a different extent. As expected, the authors of research papers focus on the certainty of specific results within their specific research question, whereas in review papers the uncertainties due to a general lack of data are emphasized and the sources and consequences are discussed in a broader environmental context. In the mass media, nanotechnology is often framed as rather certain, and positive aspects and benefits are emphasized. Although the reports cover a new technology, scientific uncertainties are mentioned in only one-third of them. Scientific uncertainties are most often mentioned together with risk, and they arise primarily from unknown harmful effects on human health. Environmental issues themselves are seldom mentioned

  4. Prototyping of thermoplastic microfluidic chips and their application in high-performance liquid chromatography separations of small molecules.

    Science.gov (United States)

    Wouters, Sam; De Vos, Jelle; Dores-Sousa, José Luís; Wouters, Bert; Desmet, Gert; Eeltink, Sebastiaan

    2017-11-10

    The present paper discusses practical aspects of the prototyping of microfluidic chips using cyclic olefin copolymer as the substrate and their application in high-performance liquid chromatography. The developed chips feature a 60 mm long straight separation channel with a circular cross section (500 μm i.d.) that was created using a micromilling robot. To irreversibly seal the top and bottom chip substrates, a solvent-vapor-assisted bonding approach was optimized, allowing the ideal circular channel geometry to be approximated. Four different approaches to establish the micro-to-macro interface were pursued. The average burst pressure of the microfluidic chips in combination with an encasing holder was established at 38 MPa and the maximum burst pressure was 47 MPa, which is believed to be the highest ever reported for these polymer-based microfluidic chips. Porous polymer monolithic frits were synthesized in situ via UV-initiated polymerization and their locations were spatially controlled by the application of a photomask. Next, high-pressure slurry packing was performed to introduce 3 μm silica reversed-phase particles as the stationary phase in the separation channel. Finally, the application of the chip technology is demonstrated for the separation of alkyl phenones in gradient mode, yielding baseline peak widths of 6 s by applying a steep gradient of 1.8 min at a flow rate of 10 μL/min. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Simulating experiments using a Comsol application for teaching scientific research methods

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2015-01-01

    For universities it is important to teach the principles of scientific methods as early as possible. However, when performing experiments, students need to have some knowledge and skills before they start doing measurements. In this case, Comsol can be helpful by simulating the experiments before

  6. Predictive Power of Machine Learning for Optimizing Solar Water Heater Performance: The Potential Application of High-Throughput Screening

    Directory of Open Access Journals (Sweden)

    Hao Li

    2017-01-01

    Full Text Available Predicting the performance of solar water heater (SWH) is challenging due to the complexity of the system. Fortunately, knowledge-based machine learning can provide a fast and precise prediction method for SWH performance. With the predictive power of machine learning models, we can further solve a more challenging question: how to cost-effectively design a high-performance SWH? Here, we summarize our recent studies and propose a general framework of SWH design using a machine learning-based high-throughput screening (HTS) method. Design of water-in-glass evacuated tube solar water heater (WGET-SWH) is selected as a case study to show the potential application of machine learning-based HTS to the design and optimization of solar energy systems.
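
    A minimal sketch of the machine learning-based high-throughput screening idea described above: fit a regressor on measured design/performance pairs, then rank a large batch of candidate designs by predicted performance. The features, targets and random-forest model choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: design parameters (e.g., tube number, tilt
# angle, tank volume, ...) and a measured performance metric (heat gain).
X_train = rng.uniform(0.0, 1.0, size=(200, 4))
y_train = X_train @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(0, 0.1, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# High-throughput screening: predict performance for many candidate designs
# and keep the top-ranked ones for detailed simulation or prototyping.
candidates = rng.uniform(0.0, 1.0, size=(20_000, 4))
scores = model.predict(candidates)
top = candidates[np.argsort(scores)[::-1][:10]]
print("best predicted designs:\n", top)
```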

  7. Scientific basis of development and application of nanotechnologies in oil industry

    International Nuclear Information System (INIS)

    Mirzajanzadeh, A.; Maharramov, A.; Abdullayev, R.; Yuzifzadeh, Kh.; Shahbazov, E.; Qurbanov, R.; Akhmadov, S.; Kazimov, E; Ramazanov, M.; Shafiyev, Sh.; Hajizadeh, N.

    2010-01-01

    Development and introduction of nanotechnologies in the oil industry is one of the most pressing issues of the present times. For the first time in world practice, a scientific-methodological basis and application practice of nanotechnologies in the oil industry was developed on the basis of a uniform, scientifically proven approach, taking into account the specificities of the oil and gas industry. The application system of such nanotechnologies was developed for oil and gas production. Mathematical models of nanotechnological processes, i.e. "chaos regulation" and the "hyper-accidental" process, were offered. Nanomedium and nanoimpact on the "well-layer" system were studied. Wide application results of nanotechnologies in SOCAR's production fields in oil and gas production are shown. Research results on "NANOSAA" on the basis of the "NANO + NANO" effect are described. For the first time in world practice, "NANOOIL", "NANOBITUMEN", "NANOGUDRON" and "NANOMAY" systems based on machine waste oil in the drilling mud were developed for application in oil and gas drilling. An original property, the "effect of super small concentrations" and "nanomemory" in the "NANOOIL" and "NANOBITUMEN" systems, was discovered. By applying the "NANOOIL", "NANOBITUMEN" and "NANOMAY" systems in the drilling process, an increase of linear speed, early turbulence, a decrease of the hydraulic resistance coefficient and economy in energy consumption were observed. A hyper-accidental evaluation of the mathematical expectation of the general sum of the values of the surface strain on the sample data is spelled out for the various experiment conditions. The estimated hyper-accidental value of the mathematical expectation allows us to offer practical recommendations for the development of new nanotechnologies on the basis of the rheological parameters of oil.

  8. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    International Nuclear Information System (INIS)

    Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.

    2011-01-01

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
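
    For readers unfamiliar with the short-range force calculation being ported, the sketch below is a plain, serial Lennard-Jones force loop with a radial cutoff; it is not LAMMPS or Geryon code, only an O(N²) reference for the per-pair work that a hybrid code partitions between CPU cores and accelerator threads.

```python
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0, rcut=2.5):
    """Lennard-Jones forces with a radial cutoff, O(N^2) reference version.

    In a hybrid CPU/GPU code, the pair interactions evaluated here would be
    partitioned between host cores and accelerator threads via neighbor lists.
    """
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n - 1):
        rij = pos[i + 1:] - pos[i]              # vectors to all later atoms
        r2 = np.einsum('ij,ij->i', rij, rij)
        mask = r2 < rcut * rcut
        r2 = r2[mask]
        sr2 = sigma * sigma / r2
        sr6 = sr2 ** 3
        # -dU/dr expressed as a scalar factor multiplying rij
        fmag = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r2
        fij = fmag[:, None] * rij[mask]          # force on the later atoms
        forces[i] -= fij.sum(axis=0)             # Newton's third law
        forces[i + 1:][mask] += fij
    return forces

pos = np.random.default_rng(1).uniform(0.0, 5.0, size=(64, 3))
print(lj_forces(pos).shape)
```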

  9. Scientific Ethics: A New Approach.

    Science.gov (United States)

    Menapace, Marcello

    2018-06-04

    Science is an activity of the human intellect and as such has ethical implications that should be reviewed and taken into account. Although science and ethics have conventionally been considered different, it is herewith proposed that they are essentially similar. The proposal set forth here is to create a new ethics rooted in science: scientific ethics. Science has firm axiological foundations and searches for truth (as a value, axiology) and knowledge (epistemology). Hence, science cannot be value neutral. Looking at standard scientific principles, it is possible to construct a scientific ethic (that is, an ethical framework based on scientific methods and rules), which can be applied to all sciences. These intellectual standards include the search for truth (honesty and its derivatives), human dignity (and by reflection the dignity of all animals) and respect for life. Through these it then becomes achievable to draft the foundation of an ethics based purely on science and applicable beyond the confines of science. A few applications of these will be presented. Scientific ethics can have vast applications in other fields, even in non-scientific ones.

  10. Scientific performances of the XAA1.2 front-end chip for silicon microstrip detectors

    International Nuclear Information System (INIS)

    Del Monte, Ettore; Soffitta, Paolo; Morelli, Ennio; Pacciani, Luigi; Porrovecchio, Geiland; Rubini, Alda; Uberti, Olga; Costa, Enrico; Di Persio, Giuseppe; Donnarumma, Immacolata; Evangelista, Yuri; Feroci, Marco; Lazzarotto, Francesco; Mastropietro, Marcello; Rapisarda, Massimo

    2007-01-01

    The XAA1.2 is a custom ASIC chip for silicon microstrip detectors adapted by Ideas for the SuperAGILE instrument on board the AGILE space mission. The chip is equipped with 128 input channels, each one containing a charge preamplifier, shaper, peak detector and stretcher. The most important features of the ASIC are the extended linearity, low noise and low power consumption. The XAA1.2 underwent extensive laboratory testing in order to study its commandability and functionality and evaluate its scientific performances. In this paper we describe the XAA1.2 features, report the laboratory measurements and discuss the results emphasizing the scientific performances in the context of the SuperAGILE front-end electronics

  11. MW-assisted synthesis of LiFePO4 for high power applications

    Science.gov (United States)

    Beninati, Sabina; Damen, Libero; Mastragostino, Marina

    LiFePO4/C was prepared by solid-state reaction from Li3PO4, Fe3(PO4)2·8H2O, carbon and glucose in a few minutes in a scientific MW (microwave) oven with temperature and power control. The material was characterized by X-ray diffraction, scanning electron microscopy and TGA analysis to evaluate the carbon content. The electrochemical characterization as a positive electrode in EC (ethylene carbonate)-DMC (dimethyl carbonate) 1 M LiPF6 was performed by galvanostatic charge-discharge cycles at C/10 to evaluate specific capacity and by sequences of 10 s discharge-charge pulses at different high C-rates (5-45C) to evaluate pulse-specific power in simulated operating conditions for full-HEV application. The maximum pulse-specific power and, particularly, the pulse efficiency values are quite high and make MW synthesis a very promising route for mass production of LiFePO4/C for full-HEV batteries at low energy costs.
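
    The two figures of merit evaluated above can be reproduced from raw measurements with standard formulas (specific capacity Q = I·t/m, pulse-specific power P = V·I/m); the numerical values below are hypothetical, chosen only to give plausible magnitudes for a LiFePO4 electrode, and are not data from the paper.

```python
# Hypothetical electrode data; only the formulas are standard.
mass_g = 0.010         # active material mass (g)
i_mA = 0.17            # C/10 discharge current for a ~1.7 mAh electrode (mA)
t_h = 9.0              # discharge time at C/10 (h)
v_pulse = 3.0          # average voltage during a 10 s pulse (V)
i_pulse_mA = 76.5      # pulse current at 45C for the same electrode (mA)

specific_capacity = i_mA * t_h / mass_g                                # mAh per g
pulse_specific_power = v_pulse * (i_pulse_mA / 1000.0) / (mass_g / 1000.0)  # W per kg of active material

print(f"specific capacity   ~ {specific_capacity:.0f} mAh/g")
print(f"pulse power         ~ {pulse_specific_power:.0f} W/kg (active-material basis)")
```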

  12. 78 FR 13860 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2013-03-01

    ... Scientific Instruments Pursuant to Section 6(c) of the Educational, Scientific and Cultural Materials... invite comments on the question of whether instruments of equivalent scientific value, for the purposes... conformational change of assemblies involved in biological processes such as ATP production, signal transduction...

  13. 76 FR 56156 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2011-09-12

    ... Scientific Instruments Pursuant to Section 6(c) of the Educational, Scientific and Cultural Materials... invite comments on the question of whether instruments of equivalent scientific value, for the purposes... materials for energy production. The experiments will involve structural and chemical analyses of materials...

  14. Proceedings of the Scientific Meeting on Research and Development of Isotopes Application and Radiation

    International Nuclear Information System (INIS)

    Singgih Sutrisno; Sofyan Yatim; Pattiradjawane, EIsje L.; Ismachin, Moch; Mugiono; Marga Utama; Komaruddin Idris

    2004-02-01

    The Proceedings of the Scientific Meeting on Research and Development of Isotopes Application and Radiation were presented on February 17-18, 2004 in Jakarta. The aim of the Meeting was to disseminate the results of research on the application of nuclear techniques in agriculture, animal science, industry, hydrology and the environment. There were 4 invited papers and 38 papers from BATAN participants as well as from outside. The articles are indexed separately. (PPIN)

  15. Service Oriented Architecture for High Level Applications

    International Nuclear Information System (INIS)

    Chu, P.

    2012-01-01

    Standalone high level applications often suffer from poor performance and reliability due to lengthy initialization, heavy computation and rapid graphical update. Service-oriented architecture (SOA) separates the initialization and computation from the applications and distributes such work to various service providers. Heavy computation such as beam tracking will be done periodically on a dedicated server and data will be available to client applications at all times. Industry-standard service architecture can help to improve the performance, reliability and maintainability of the service. Robustness will also be improved by reducing the complexity of individual client applications.
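
    A minimal sketch of the pattern described above, assuming a plain HTTP interface: a background thread re-runs the heavy computation (standing in for beam tracking) on a fixed period and caches the result, so client requests only ever read the cache. The endpoint, refresh period and fake computation are illustrative assumptions, not part of any specific control-system framework.

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

latest = {"result": None, "stamp": 0.0}

def heavy_computation():
    """Stand-in for a lengthy calculation such as beam tracking."""
    time.sleep(2.0)
    return {"orbit": [0.10, 0.20, 0.15]}

def refresher(period=10.0):
    """Recompute periodically so clients never pay the computation cost."""
    while True:
        latest["result"] = heavy_computation()
        latest["stamp"] = time.time()
        time.sleep(period)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Clients always get the cached result immediately.
        body = json.dumps(latest).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

threading.Thread(target=refresher, daemon=True).start()
HTTPServer(("localhost", 8080), Handler).serve_forever()
```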

  16. Scientific Performance of a Nano-satellite MeV Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Lucchetta, Giulio; Berlato, Francesco; Rando, Riccardo; Bastieri, Denis; Urso, Giorgio, E-mail: giulio.lucchetta@desy.de, E-mail: fberlato@mpe.mpg.de [Dipartimento di Fisica and Astronomia “G. Galilei,” Università di Padova, I-35131 Padova (Italy)

    2017-05-01

    Over the past two decades, both X-ray and gamma-ray astronomy have experienced great progress. However, the region of the electromagnetic spectrum around ∼1 MeV is not so thoroughly explored. Future medium-sized gamma-ray telescopes will fill this gap in observations. As the timescale for the development and launch of a medium-class mission is ∼10 years, with substantial costs, we propose a different approach for the immediate future. In this paper, we evaluate the viability of a much smaller and cheaper detector: a nano-satellite Compton telescope, based on the CubeSat architecture. The scientific performance of this telescope would be well below that of the instrument expected for the future larger missions; however, via simulations, we estimate that such a compact telescope will achieve a performance similar to that of COMPTEL.

  17. Cloud Data Storage Federation for Scientific Applications

    NARCIS (Netherlands)

    Koulouzis, S.; Vasyunin, D.; Cushing, R.; Belloum, A.; Bubak, M.; an Mey, D.; Alexander, M.; Bientinesi, P.; Cannataro, M.; Clauss, C.; Costan, A.; Kecskemeti, G.; Morin, C.; Ricci, L.; Sahuquillo, J.; Schulz, M.; Scarano, V.; Scott, S.L.; Weidendorfer, J.

    2014-01-01

    Nowadays, data-intensive scientific research needs storage capabilities that enable efficient data sharing. This is of great importance for many scientific domains such as the Virtual Physiological Human. In this paper, we introduce a solution that federates a variety of systems ranging from file

  18. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Kostadin, Damevski [Virginia State Univ., Petersburg, VA (United States)

    2015-01-25

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  19. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
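
    A minimal sketch of the middleware idea, under the assumption that checkpoint files are chunked, handed to an object store as fixed-size objects, and described by an index object; the object-store client here is a stand-in dictionary, not PLFS or any real cloud API.

```python
import hashlib

class ObjectStoreStub:
    """Stand-in for a cloud object store client (not a real API)."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data

def checkpoint_to_objects(paths, store, chunk=4 * 1024 * 1024):
    """Convert checkpoint files into fixed-size objects plus an index object."""
    index = []
    for path in paths:
        with open(path, "rb") as f:
            offset = 0
            while True:
                data = f.read(chunk)
                if not data:
                    break
                # Derive a unique object key from file name and offset.
                key = hashlib.sha1(f"{path}:{offset}".encode()).hexdigest()
                store.put(key, data)
                index.append({"file": path, "offset": offset,
                              "len": len(data), "key": key})
                offset += len(data)
    # The index is itself stored as an object so readers can reassemble files.
    store.put("checkpoint-index", repr(index).encode())
    return index

# Usage with whatever checkpoint files a job produced, e.g.:
# checkpoint_to_objects(["ckpt.0", "ckpt.1"], ObjectStoreStub())
```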

  20. High-performance liquid chromatography of oligoguanylates at high pH

    Science.gov (United States)

    Stribling, R.; Deamer, D. (Principal Investigator)

    1991-01-01

    Because of the stable self-structures formed by oligomers of guanosine, standard high-performance liquid chromatography techniques for oligonucleotide fractionation are not applicable. Previously, oligoguanylate separations have been carried out at pH 12 using RPC-5 as the packing material. While RPC-5 provides excellent separations, there are several limitations, including the lack of a commercially available source. This report describes a new anion-exchange high-performance liquid chromatography method using HEMA-IEC BIO Q, which successfully separates different forms of the guanosine monomer as well as longer oligoguanylates. The reproducibility and stability at high pH suggests a versatile role for this material.

  1. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  2. Application of quality assurance to scientific activities at Westinghouse Hanford Company

    International Nuclear Information System (INIS)

    Delvin, W.L.; Farwick, D.G.

    1988-01-01

    The application of quality assurance to scientific activities has been an ongoing subject of review, discussion, interpretation, and evaluation within the nuclear community for the past several years. This paper provides a discussion on the natures of science and quality assurance and presents suggestions for integrating the two successfully. The paper shows how those actions were used at the Westinghouse Hanford Company to successfully apply quality assurance to experimental studies and materials testing and evaluation activities that supported a major project. An important factor in developing and implementing the quality assurance program was the close working relationship that existed between the assigned quality engineers and the scientists. The quality engineers, who had had working experience in the scientific disciplines involved, were able to bridge across from the scientists to the more traditional quality assurance personnel who had overall responsibility for the project's quality assurance program

  3. Center for Technology for Advanced Scientific Component Software (TASCS)

    Energy Technology Data Exchange (ETDEWEB)

    Damevski, Kostadin [Virginia State Univ., Petersburg, VA (United States)

    2009-03-30

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  4. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing are used judiciously for improving the accuracy of predictions.
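
    One simple way to realize the combination of historical logs and live probes described above is to blend an exponentially smoothed history with the latest active measurement; the weights and sample values below are assumptions for illustration, not the APM project's actual model.

```python
def predict_throughput(history_mbps, live_probe_mbps, alpha=0.3):
    """Blend historical transfer rates with a recent active measurement.

    history_mbps    : past throughput samples from transfer logs (Mb/s), oldest first
    live_probe_mbps : latest active-probe measurement (Mb/s)
    alpha           : weight given to the live probe (an assumption)
    """
    if not history_mbps:
        return live_probe_mbps
    # Exponentially smooth the historical record so newer samples count more.
    est = history_mbps[0]
    for sample in history_mbps[1:]:
        est = 0.2 * sample + 0.8 * est
    return alpha * live_probe_mbps + (1.0 - alpha) * est

# Hypothetical log samples plus one live probe reading.
print(predict_throughput([800, 950, 870, 910], live_probe_mbps=600))
```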

  5. Processes Utilized by High School Students Reading Scientific Text

    Science.gov (United States)

    Clinger, Alicia Farr

    2014-01-01

    In response to an increased emphasis on disciplinary literacy in the secondary science classroom, an investigation of the literacy processes utilized by high school students while reading scientific text was undertaken. A think-aloud protocol was implemented to collect data on the processes students used when not prompted while reading a magazine…

  6. Application of high performance asynchronous socket communication in power distribution automation

    Science.gov (United States)

    Wang, Ziyu

    2017-05-01

    With the development of information technology and Internet technology, and the growing demand for electricity, the stable and reliable operation of the power system has become the goal of power grid workers. With the advent of the era of big data, power data will gradually become an important means of guaranteeing the safe and reliable operation of the power grid. In the electric power industry, how to receive the data transmitted by data acquisition devices efficiently and robustly, so that the power distribution automation system can execute scientific decisions quickly, is therefore a key goal for the grid. In this paper, some existing problems in power system communication are analysed and, with the help of network technology, a set of solutions called Asynchronous Socket Technology is proposed for network communication that requires high concurrency and high throughput. The paper also looks forward to the development direction of power distribution automation in the era of big data and artificial intelligence.
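
    A minimal sketch of the asynchronous-socket pattern referred to above: a single event loop accepts many concurrent connections from data-acquisition devices and serves them without one connection blocking the others. The newline-delimited framing, port number and ACK reply are assumptions for illustration.

```python
import asyncio

async def handle_device(reader, writer):
    """Serve one data-acquisition connection; thousands can run concurrently."""
    peer = writer.get_extra_info("peername")
    try:
        while True:
            line = await reader.readline()   # newline-delimited framing (assumption)
            if not line:
                break
            # Hand the measurement to downstream processing / decision logic here.
            writer.write(b"ACK\n")
            await writer.drain()
    finally:
        writer.close()
        await writer.wait_closed()
        print("closed", peer)

async def main(host="0.0.0.0", port=9000):
    server = await asyncio.start_server(handle_device, host, port)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```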

  7. Fabrication of graphene foam supported carbon nanotube/polyaniline hybrids for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Yang, Hongxia; Wang, Nan; Xu, Qun; Chen, Zhimin; Ren, Yumei; Razal, Joselito M; Chen, Jun

    2014-01-01

    A large-scale, high-powered energy storage system is crucial for addressing the energy problem. The development of high-performance materials is a key issue in realizing the grid-scale applications of energy-storage devices. In this work, we describe a simple and scalable method for fabricating hybrids (graphene-pyrrole/carbon nanotube-polyaniline (GPCP)) using graphene foam as the supporting template. Graphene-pyrrole (G-Py) aerogels are prepared via a green hydrothermal route from two-dimensional materials such as graphene sheets, while a carbon nanotube/polyaniline (CNT/PANI) composite dispersion is obtained via the in situ polymerization method. The functional nanohybrid materials of GPCP can be assembled by simply dipping the prepared G-Py aerogels into the CNT/PANI dispersion. The morphology of the obtained GPCP is investigated by scanning electron microscopy (SEM) and transmission electron microscopy (TEM), which revealed that the CNT/PANI was uniformly deposited onto the surfaces of the graphene. The as-synthesized GPCP maintains its original three-dimensional hierarchical porous architecture, which favors the diffusion of the electrolyte ions into the inner region of the active materials. Such hybrid materials exhibit a significant specific capacitance of up to 350 F g−1, making them promising in large-scale energy-storage device applications. (paper)

  8. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  9. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows comparison of the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  10. Return on Scientific Investment – RoSI: a PMO dynamical index proposal for scientific projects performance evaluation and management

    Directory of Open Access Journals (Sweden)

    Cristofer André Caous

    2012-06-01

    Full Text Available Objective: To propose a measure (index) of expected risks to evaluate and follow up the performance analysis of research projects involving financial and adequate structure parameters for their development. Methods: A ranking of acceptable results regarding research projects with complex variables was used as an index to gauge a project's performance. In order to implement this method, the ulcer index was applied as the basic model to accommodate the following variables: costs, high-impact publication, fund raising, and patent registry. The proposed structured analysis, named here RoSI (Return on Scientific Investment), comprises a pipeline of analysis to characterize the risk, based on a modeling tool that comprises multiple variables interacting in semi-quantitative environments. Results: This method was tested with data from three different projects in our Institution (projects A, B and C). Different curves reflected the ulcer indexes, identifying the project that may have a minor risk (project C) related to development and expected results according to initial or full investment. Conclusion: The results showed that this model contributes significantly to the analysis of risk and planning as well as to the definition of necessary investments that consider contingency actions with benefits to the different stakeholders: the investor or donor, the project manager and the researchers.
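
    The ulcer index that the article adopts as its base model is the root-mean-square of percentage drawdowns from a running peak; the sketch below computes it over a hypothetical cumulative project-value series (how costs, publications, funding and patents are folded into that single series is an assumption left unspecified here).

```python
import math

def ulcer_index(values):
    """Ulcer index: root-mean-square of percentage drawdowns from the running peak."""
    peak = values[0]
    sq_drawdowns = []
    for v in values:
        peak = max(peak, v)
        drawdown_pct = 100.0 * (v - peak) / peak   # zero at a new peak, negative below it
        sq_drawdowns.append(drawdown_pct ** 2)
    return math.sqrt(sum(sq_drawdowns) / len(sq_drawdowns))

# Hypothetical quarterly "project value" combining costs recovered,
# publications, funds raised and patents into one normalized number.
project_a = [100, 95, 97, 90, 92, 99, 105]
print(f"ulcer index = {ulcer_index(project_a):.2f}")
```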

  11. Final Report: Performance Engineering Research Institute

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-27

    This document is a final report about the work performed for cooperative agreement DE-FC02-06ER25764, the Rice University effort of Performance Engineering Research Institute (PERI). PERI was an Enabling Technologies Institute of the Scientific Discovery through Advanced Computing (SciDAC-2) program supported by the Department of Energy's Office of Science Advanced Scientific Computing Research (ASCR) program. The PERI effort at Rice University focused on (1) research and development of tools for measurement and analysis of application program performance, and (2) engagement with SciDAC-2 application teams.

  12. Applications of scientific imaging in environmental toxicology

    Science.gov (United States)

    El-Demerdash, Aref M.

    The national goals of clean air, clean water, and healthy ecosystems are a few of the primary forces that drive the need for better environmental monitoring. As we approach the end of the 1990s, the environmental questions at regional to global scales are being redefined and refined in the light of developments in environmental understanding and technological capability. Research in the use of scientific imaging data for the study of the environment is urgently needed in order to explore the possibilities of utilizing emerging new technologies. The objective of this research proposal is to demonstrate the usability of a wealth of new technology made available in the last decade to provide a better understanding of environmental problems. Research is focused on two imaging techniques, macro and micro imaging. Several examples of applications of scientific imaging in research in the field of environmental toxicology were presented. This was achieved on two scales, micro and macro imaging. On the micro level four specific examples were covered. First, the effect of utilizing scanning electron microscopy as an imaging tool in enhancing taxa identification when studying diatoms was presented. Second, scanning electron microscopy combined with an energy-dispersive X-ray analyzer was demonstrated as a valuable and effective tool for identifying and analyzing household dust samples. Third, electronic autoradiography combined with FT-IR microscopy was used to study the distribution pattern of [14C]-Malathion in rats as a result of dermal exposure. The results of the autoradiography made on skin sections of the application site revealed the presence of [14C]-activity in the first region of the skin. These results were evidenced by FT-IR microscopy. The obtained results suggest that the penetration of Malathion into the skin and other tissues is vehicle and dose dependent. The results also suggest the use of FT-IR microscopy imaging for monitoring the disposition of

  13. Recent developments of the MOA thruster, a high performance plasma accelerator for nuclear power and propulsion applications

    International Nuclear Information System (INIS)

    Frischauf, N.; Hettmer, M.; Grassauer, A.; Bartusch, T.; Koudelka, O.

    2008-01-01

    More than 60 years after the late Nobel laureate Hannes Alfven had published a letter stating that oscillating magnetic fields can accelerate ionised matter via magneto-hydrodynamic interactions in a wave-like fashion, the technical implementation of Alfven waves for propulsive purposes has been proposed, patented and examined for the first time by a group of inventors. The name of the concept, utilising Alfven waves to accelerate ionised matter for propulsive purposes, is MOA - Magnetic field Oscillating Amplified thruster. Alfven waves are generated by making use of two coils, one being permanently powered and serving also as magnetic nozzle, the other one being switched on and off in a cyclic way, deforming the field lines of the overall system. It is this deformation that generates Alfven waves, which are in the next step used to transport and compress the propulsive medium, in theory leading to a propulsion system with a much higher performance than any other electric propulsion system. Based on computer simulations, which were conducted to get a first estimate of the performance of the system, MOA is a highly flexible propulsion system whose performance parameters might easily be adapted by changing the mass flow and/or the power level. As such the system is capable of delivering a maximum specific impulse of 13116 s (12.87 mN) at a power level of 11.16 kW, using Xe as propellant, but can also be attuned to provide a thrust of 236.5 mN (2411 s) at 6.15 kW of power. While space propulsion is expected to be the prime application for MOA and is supported by numerous applications such as Solar and/or Nuclear Electric Propulsion or even as an 'afterburner system' for Nuclear Thermal Propulsion, other terrestrial applications can be thought of as well, making the system highly suited for a common space-terrestrial application research and utilization strategy. This paper presents the recent developments of the MOA Thruster R and D activities at QASAR, the company in
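
    The quoted operating points can be sanity-checked with the standard relation between thrust, specific impulse and exhaust (jet) power, P_jet = F·Isp·g0/2; the resulting efficiency estimates are back-of-the-envelope numbers derived here, not figures from the paper.

```python
G0 = 9.80665  # standard gravity, m/s^2

def jet_power(thrust_N, isp_s):
    """Kinetic power carried by the exhaust: P = F * Isp * g0 / 2."""
    return 0.5 * thrust_N * isp_s * G0

# Operating points quoted in the abstract: (thrust in mN, Isp in s, input power in kW).
for thrust_mN, isp_s, p_in_kW in [(12.87, 13116, 11.16), (236.5, 2411, 6.15)]:
    p_jet = jet_power(thrust_mN / 1000.0, isp_s)
    print(f"Isp={isp_s:>5} s  thrust={thrust_mN:>6} mN  "
          f"jet power={p_jet / 1000:.2f} kW  efficiency~{p_jet / (p_in_kW * 1000):.0%}")
```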

  14. EFFECT SCIENTIFIC INQUIRY TEACHING MODELS AND SCIENTIFIC ATTITUDE TO PHYSICS STUDENT OUTCOMES

    Directory of Open Access Journals (Sweden)

    Dian Clara Natalia Sihotang

    2014-12-01

    Full Text Available The objectives of this study were to determine whether: (1) the achievement of students taught by using the Scientific Inquiry Teaching Model is better than that of students taught by using Direct Instruction; (2) the achievement of students who have a high scientific attitude is better than that of students who have a low scientific attitude; and (3) there is an interaction between the Scientific Inquiry Teaching Model and scientific attitude on students' achievement. The results of the research are: (1) the achievement of students given learning through the Scientific Inquiry Teaching Model is better than with Direct Instruction; (2) the achievement of students who have a high scientific attitude is better than that of students who have a low scientific attitude; and (3) there was an interaction between the Scientific Inquiry Teaching Model and scientific attitude on students' achievement, indicating that this model is better applied to students who have a high scientific attitude.

  15. High performance geospatial and climate data visualization using GeoJS

    Science.gov (United States)

    Chaudhary, A.; Beezley, J. D.

    2015-12-01

    GeoJS (https://github.com/OpenGeoscience/geojs) is an open-source library developed to support interactive scientific and geospatial visualization of climate and earth science datasets in a web environment. GeoJS has a convenient application programming interface (API) that enables users to harness the fast performance of WebGL and Canvas 2D APIs with sophisticated Scalable Vector Graphics (SVG) features in a consistent and convenient manner. We started the project in response to the need for an open-source JavaScript library that can combine traditional geographic information systems (GIS) and scientific visualization on the web. Many libraries, some of which are open source, support mapping or other GIS capabilities, but lack the features required to visualize scientific and other geospatial datasets. For instance, such libraries are not capable of rendering climate plots from NetCDF files, and some libraries are limited in regard to geoinformatics (infovis in a geospatial environment). While libraries such as d3.js are extremely powerful for these kinds of plots, in order to integrate them into other GIS libraries, the construction of geoinformatics visualizations must be completed manually and separately, or the code must somehow be mixed in an unintuitive way. We developed GeoJS with the following motivations: to create an open-source geovisualization and GIS library that combines scientific visualization with GIS and informatics; to develop an extensible library that can combine data from multiple sources and render them using multiple backends; and to build a library that works well with existing scientific visualization tools such as VTK. We have successfully deployed GeoJS-based applications for multiple domains across various projects. The ClimatePipes project funded by the Department of Energy, for example, used GeoJS to visualize NetCDF datasets from climate data archives. Other projects built visualizations using GeoJS for interactively exploring

  16. Status, performance and scientific highlights from the MAGIC telescope system

    Energy Technology Data Exchange (ETDEWEB)

    Doert, Marlene [Technische Universitaet Dortmund (Germany); Ruhr-Universitaet Bochum (Germany); Collaboration: MAGIC-Collaboration

    2015-07-01

    The MAGIC telescopes are a system of two 17 m Imaging Air Cherenkov Telescopes, which are located at 2200 m above sea level at the Roque de Los Muchachos Observatory on the Canary Island of La Palma. In this presentation, we report on recent scientific highlights gained from MAGIC observations in the galactic and the extragalactic regime. We also present the current status and performance of the MAGIC system after major hardware upgrades in the years 2011 to 2014 and give an overview of future plans.

  17. High dimensional neurocomputing growth, appraisal and applications

    CERN Document Server

    Tripathi, Bipin Kumar

    2015-01-01

    The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as the quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both in a qualitative as well as quantitative way. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...

  18. RAID Disk Arrays for High Bandwidth Applications

    Science.gov (United States)

    Moren, Bill

    1996-01-01

    High bandwidth applications require large amounts of data transferred to/from storage devices at extremely high data rates. Further, these applications are often 'real time', in which access to the storage device must take place on the schedule of the data source, not the storage. A good example is a satellite downlink - the volume of data is quite large and the data rates quite high (dozens of MB/sec). Further, a telemetry downlink must take place while the satellite is overhead. A storage technology which is ideally suited to these types of applications is redundant arrays of independent discs (RAID). RAID storage technology, while offering differing methodologies for a variety of applications, supports the performance and redundancy required in real-time applications. Of the various RAID levels, RAID-3 is the only one which provides high data transfer rates under all operating conditions, including after a drive failure.
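
    A minimal sketch of the parity mechanism behind RAID-3: data is striped across drives and a dedicated parity drive stores their byte-wise XOR, so the contents of any single failed drive can be rebuilt from the survivors. The drive count and stripe contents below are illustrative.

```python
from functools import reduce

def xor_bytes(blocks):
    """Byte-wise XOR of equally sized blocks (the parity block)."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# Stripe across three data drives plus one dedicated parity drive.
data_drives = [b"AAAA", b"BBBB", b"CCCC"]
parity_drive = xor_bytes(data_drives)

# Simulate losing drive 1 and rebuilding it from the survivors plus parity.
survivors = [data_drives[0], data_drives[2], parity_drive]
rebuilt = xor_bytes(survivors)
assert rebuilt == data_drives[1]
print("rebuilt drive 1:", rebuilt)
```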

  19. Evaluation of medical research performance--position paper of the Association of the Scientific Medical Societies in Germany (AWMF).

    Science.gov (United States)

    Herrmann-Lingen, Christoph; Brunner, Edgar; Hildenbrand, Sibylle; Loew, Thomas H; Raupach, Tobias; Spies, Claudia; Treede, Rolf-Detlef; Vahl, Christian-Friedrich; Wenz, Hans-Jürgen

    2014-01-01

    The evaluation of medical research performance is a key prerequisite for the systematic advancement of medical faculties, research foci, academic departments, and individual scientists' careers. However, it is often based on vaguely defined aims and questionable methods and can thereby lead to unwanted regulatory effects. The current paper aims at defining the position of German academic medicine toward the aims, methods, and consequences of its evaluation. During the Berlin Forum of the Association of the Scientific Medical Societies in Germany (AWMF) held on 18 October 2013, international experts presented data on methods for evaluating medical research performance. Subsequent discussions among representatives of relevant scientific organizations and within three ad-hoc writing groups led to a first draft of this article. Further discussions within the AWMF Committee for Evaluation of Performance in Research and Teaching and the AWMF Executive Board resulted in the final consented version presented here. The AWMF recommends modifications to the current system of evaluating medical research performance. Evaluations should follow clearly defined and communicated aims and consist of both summative and formative components. Informed peer reviews are valuable but feasible in longer time intervals only. They can be complemented by objective indicators. However, the Journal Impact Factor is not an appropriate measure for evaluating individual publications or their authors. The scientific "impact" rather requires multidimensional evaluation. Indicators of potential relevance in this context may include, e.g., normalized citation rates of scientific publications, other forms of reception by the scientific community and the public, and activities in scientific organizations, research synthesis and science communication. In addition, differentiated recommendations are made for evaluating the acquisition of third-party funds and the promotion of junior scientists. With the

  20. Commissioning of Temelin NPP as seen by scientific supervisory group

    International Nuclear Information System (INIS)

    Svoboda, C.

    2003-01-01

    The Scientific Supervisory Group worked during the Temelin NPP commissioning process as an independent supervisor. The main tasks and main results of its activity are described in this contribution. The characteristic common features of the commissioning process and the most important events from the Scientific Supervisory Group's point of view are presented. In April 1999, the Czech Power Utility, with the objective of achieving the maximum level of nuclear safety and quality within the NPP Temelin commissioning procedures, established a special body, the Scientific Supervisory Group, and requested the Nuclear Research Institute Rez plc to perform the required function. The Scientific Supervisory Group proceeds in accordance with its Statute and provides independent, specialised professional and expert work focused on nuclear safety assurance, assessment of the selected documentation related to plant preparedness for the individual commissioning stages and, of course, assessment of the commissioning test results. While performing its function, the Scientific Supervisory Group is guided by the Atomic Act and the relevant Directives of the State Office for Nuclear Safety; its activities are in compliance with the applicable IAEA recommendations (Authors)

  1. The economic scientific research, a production neo-factor

    Directory of Open Access Journals (Sweden)

    Elena Ciucur

    2007-12-01

    Full Text Available Scientific research represents a modern production neo-factor that involves two groups of coordinates: preparation and scientific research. Scientific research represents a complex of elements that confer a new, high-performance orientation and is materialized in resources and new capacities brought into active form by the contribution of creators and by their attraction, in a specific way, into the economic circuit. It is the creator of new ideas, lifting performance and understanding to the highest international standards of competitive economic efficiency. At present, the role of scientific research faces new challenges generated by the current stage of society. A unitary, coherent scientific research and educational system is proposed, financed in corresponding proportions, according to the type, level and utility of the system, by the state, the economic-social environment and the citizens themselves.

  2. PSI Scientific report 2009

    International Nuclear Information System (INIS)

    Piwnicki, P.

    2010-04-01

    This annual report issued by the Paul Scherrer Institute (PSI) in Switzerland takes a look at work done at the institute in the year 2009. In particular, the SwissFEL X-ray laser facility, which will allow novel investigations of femtosecond molecular dynamics in chemical, biochemical and condensed-matter systems and permit coherent diffraction imaging of individual nanostructures, is commented on. Potential scientific applications of the SwissFEL are noted. Further, the institute's research focus and its findings are commented on. Synchrotron light is looked at, and results obtained using neutron scattering and muon spin resonance are reported on. Work done in the micro- and nano-technology, biomolecular research and radiopharmacy areas is reported on, as is work performed in the biology, general energy and environmental sciences areas. The institute's comprehensive research facilities are reviewed, and the facilities provided for users from the national and international scientific community, in particular regarding condensed matter, materials science and biology research, are noted. In addition to the user facilities at the accelerators, other PSI laboratories are also open to external users, e.g. the Hot Laboratory operated by the Nuclear Energy and Safety Department, which allows experiments to be performed on highly radioactive samples. The Technology Transfer Office at PSI is also reported on. This department assists representatives from industry in their search for opportunities and sources of innovation at the PSI. Further, an overview is presented of the people who work at the PSI, how the institute is organised and how the money it receives is distributed and used. Finally, a comprehensive list of publications completes the report.

  3. Building and measuring a high performance network architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  4. High performance coronagraphy for direct imaging of exoplanets

    Directory of Open Access Journals (Sweden)

    Guyon O.

    2011-07-01

    Full Text Available Coronagraphy has recently been an extremely active field of research, with several high performance concepts proposed, and several new coronagraphs tested in laboratories and on telescopes. Coronagraph concepts can be grouped into a few broad categories: Lyot-type coronagraphs, pupil apodization and nulling interferometers. Among existing coronagraph concepts, several approach the fundamental performance limit imposed by the physical nature of light. To achieve their full potential, coronagraphs require exquisite wavefront control and calibration. This has been, and still is, the main bottleneck for the scientifically productive use of coronagraphs on ground-based telescopes. New and promising wavefront sensing techniques suitable for high contrast imaging have however been developed in the last few years and are starting to be realized in laboratories. I will review some of these enabling technologies, and show that coronagraphs are now ready for “prime time” on existing and future telescopes.

  5. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
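
    As background to what such a library accelerates, the sketch below works through Felsenstein's pruning algorithm for one alignment site on a four-taxon tree under the Jukes-Cantor model. It deliberately uses plain NumPy rather than the BEAGLE API; all function names and the example tree are illustrative, not part of BEAGLE.

        # Illustrative sketch (not the BEAGLE API): the per-site likelihood
        # computation that libraries like BEAGLE accelerate, here for a
        # four-taxon tree under the Jukes-Cantor model.
        import numpy as np

        def jc_transition_matrix(t):
            """Jukes-Cantor P(t): probability of state j given state i after branch length t."""
            same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
            diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
            return np.where(np.eye(4, dtype=bool), same, diff)

        def tip_partial(state):
            """Partial likelihood vector for an observed nucleotide (A,C,G,T -> 0..3)."""
            v = np.zeros(4)
            v[state] = 1.0
            return v

        def combine(children):
            """Felsenstein pruning: multiply the probability contributed by each child branch."""
            partial = np.ones(4)
            for child, t in children:
                partial *= jc_transition_matrix(t) @ child
            return partial

        # Tree ((A:0.1, C:0.2):0.05, (G:0.1, T:0.3):0.05); one alignment site.
        left = combine([(tip_partial(0), 0.1), (tip_partial(1), 0.2)])
        right = combine([(tip_partial(2), 0.1), (tip_partial(3), 0.3)])
        root = combine([(left, 0.05), (right, 0.05)])
        site_likelihood = float(np.dot(np.full(4, 0.25), root))  # uniform base frequencies
        print(site_likelihood)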

  6. Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster

    Energy Technology Data Exchange (ETDEWEB)

    Allada, Veerendra; Benjegerdes, Troy; Bode, Brett

    2009-08-31

    Commodity clusters augmented with application accelerators are evolving as competitive high performance computing systems. The Graphical Processing Unit (GPU), with a very high arithmetic density and performance-per-price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementations of the Basic Linear Algebra Subprograms (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, the authors study the performance of the memory copies and GEMM subroutines that are critical to porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single and double precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results are compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.
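
    The measurement idea, reduced to its core, is to time repeated GEMM calls and convert the best timing into achieved GFLOP/s; an analogous loop around host-device copies yields transfer bandwidth. The hedged sketch below uses NumPy's CPU BLAS as a self-contained stand-in; a GPU version would time cudaMemcpy and the CUBLAS GEMM instead. Sizes and repeat counts are arbitrary.

        # Minimal sketch of the measurement methodology described above: time a
        # double-precision GEMM and report achieved GFLOP/s. The CPU BLAS behind
        # NumPy stands in for CUBLAS/MKL; names and sizes are illustrative only.
        import time
        import numpy as np

        def time_dgemm(n, repeats=5):
            a = np.random.rand(n, n)
            b = np.random.rand(n, n)
            best = float("inf")
            for _ in range(repeats):
                t0 = time.perf_counter()
                np.dot(a, b)                      # DGEMM via the underlying BLAS
                best = min(best, time.perf_counter() - t0)
            flops = 2.0 * n ** 3                  # multiply-adds in an n x n GEMM
            return flops / best / 1e9             # GFLOP/s

        for n in (512, 1024, 2048):
            print(f"n={n:5d}  {time_dgemm(n):7.1f} GFLOP/s")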

  7. Performance Analysis of Memory Transfers and GEMM Subroutines on NVIDIA Tesla GPU Cluster

    International Nuclear Information System (INIS)

    Allada, Veerendra; Benjegerdes, Troy; Bode, Brett

    2009-01-01

    Commodity clusters augmented with application accelerators are evolving as competitive high performance computing systems. The Graphical Processing Unit (GPU), with a very high arithmetic density and performance-per-price ratio, is a good platform for scientific application acceleration. In addition to the interconnect bottlenecks among the cluster compute nodes, the cost of memory copies between the host and the GPU device has to be carefully amortized to improve the overall efficiency of the application. Scientific applications also rely on efficient implementations of the Basic Linear Algebra Subprograms (BLAS), among which the General Matrix Multiply (GEMM) is considered the workhorse subroutine. In this paper, the authors study the performance of the memory copies and GEMM subroutines that are critical to porting computational chemistry algorithms to GPU clusters. To that end, a benchmark based on the NetPIPE framework is developed to evaluate the latency and bandwidth of the memory copies between the host and the GPU device. The performance of the single and double precision GEMM subroutines from the NVIDIA CUBLAS 2.0 library is studied. The results are compared with those of the BLAS routines from the Intel Math Kernel Library (MKL) to understand the computational trade-offs. The test bed is an Intel Xeon cluster equipped with NVIDIA Tesla GPUs.

  8. Contributing to the design of run-time systems dedicated to high performance computing; Contribution a l'elaboration d'environnements de programmation dedies au calcul scientifique hautes performances

    Energy Technology Data Exchange (ETDEWEB)

    Perache, M

    2006-10-15

    In the field of intensive scientific computing, the quest for performance has to face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multi-processors. It features a programming model centered around collective communications and synchronizations, and provides load balancing facilities. The programming interface, named MPC, provides high level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)
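
    MPC itself is not shown here, but the collective-communication-centred style it promotes can be illustrated with a standard MPI binding. The fragment below (using mpi4py, which is assumed to be installed on top of an MPI library) computes a partial result per rank and combines it with a single collective reduction.

        # Not MPC: an mpi4py fragment that illustrates the collective-communication-
        # centred style the abstract describes (a reduction whose result is
        # available on every rank).
        from mpi4py import MPI  # assumes an MPI installation plus mpi4py

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Each rank computes a partial sum over its slice of the problem ...
        local = sum(i * i for i in range(rank, 1_000_000, size))
        # ... and a single collective combines them; every rank receives the total.
        total = comm.allreduce(local, op=MPI.SUM)

        if rank == 0:
            print("sum of squares below 10^6:", total)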

  9. Management of scientific and engineering data collected during site characterization of a potential high-level waste repository

    International Nuclear Information System (INIS)

    Newbury, C.M.; Heitland, G.W.

    1992-01-01

    This paper discusses the characterization of Yucca Mountain as a potential site for a high-level nuclear waste repository, which encompasses many diverse investigations to determine the nature of the site. Laboratory and on-site investigations are being conducted of the geology, hydrology, mineralogy, paleoclimate, geotechnical properties, and past use of the area, to name a few. Effective use of the data from these investigations requires development of a system for the collection, storage, and dissemination of those scientific and engineering data needed to support model development, design, and performance assessment. The time and budgetary constraints associated with this project make sharing of technical data within the geoscience community absolutely critical to the successful solution of the complex scientific problem challenging us.

  10. Energy Efficient Graphene Based High Performance Capacitors.

    Science.gov (United States)

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research activities have been performed on the investigation of diverse properties of GRP. The incorporation of this elegant material can be very lucrative in terms of practical applications in energy storage/conversion systems. Among those various systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP for capacitors is described succinctly. In particular, a concise summary of previous research activities regarding GRP-based capacitors is provided. It was revealed that many secondary materials such as polymers and metal oxides have been introduced to improve performance. Also, diverse devices have been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future study.

  11. Predicting future discoveries from current scientific literature.

    Science.gov (United States)

    Petrič, Ingrid; Cestnik, Bojan

    2014-01-01

    Knowledge discovery in biomedicine is a time-consuming process starting from basic research, through preclinical testing, towards possible clinical applications. Crossing of conceptual boundaries is often needed for groundbreaking biomedical research that generates highly inventive discoveries. We demonstrate the ability of a creative literature mining method to advance valuable new discoveries based on rare ideas from existing literature. When emerging ideas from scientific literature are put together as fragments of knowledge in a systematic way, they may lead to original, sometimes surprising, research findings. If enough scientific evidence is already published for the association of such findings, they can be considered scientific hypotheses. In this chapter, we describe a method for the computer-aided generation of such hypotheses based on the existing scientific literature. Our literature-based discovery of NF-kappaB with its possible connections to autism was recently approved by the scientific community, which confirms the ability of our literature mining methodology to accelerate future discoveries based on rare ideas from existing literature.

  12. High-speed cinematography of gas-tungsten arc welding: theory and application

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, L.D.; Key, J.F.

    1981-06-01

    High-speed photo-instrumentation theory and application are reviewed, with particular emphasis on high-speed cinematography, for the engineer who has not acquired an extensive background in scientific photography. Camera systems, optics, timing system, lighting, photometric equipment, filters, and camera mounts are covered. Manufacturers and other resource material are listed in the Appendices. The properties and processing of photosensitive materials suitable for high-speed photography are reviewed, and selected film data are presented. Methods are described for both qualitative and quantitative film analysis. This technology is applied to the problem of analyzing plasma dynamics in a gas-tungsten welding arc.

  13. Analysis of the lack of scientific and technological talents of high-level women in China

    Science.gov (United States)

    Lin, Wang

    2017-08-01

    The growth and development of high-level female scientific and technological talents has become a global problem facing severe challenges. The shortage of high-level women in science and technology is a worldwide issue, and how to recruit female scientific and technological talents and help them grow is raising awareness in the field. This paper seeks to identify the main reasons for the lack of high-level female scientific and technological talent. It analyses the impact of gender discrimination on the shortage of high-level female scientific and technological talents and the impact of disciplinary differences on female roles. The main reasons identified are: women's natural disadvantage in mathematical thinking; childbearing; and the influence of traditional culture on the role of women and on values.

  14. 76 FR 72678 - Atlantic Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering...

    Science.gov (United States)

    2011-11-25

    ... require scientists to report their activities associated with these tags. Examples of research conducted... stock assessments. The public display and scientific research quotas for sandbar sharks are now limited... Highly Migratory Species; Exempted Fishing, Scientific Research, Display, and Chartering Permits; Letters...

  15. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applications

  16. Comparison of Resource Platform Selection Approaches for Scientific Workflows

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Ramakrishnan, Lavanya

    2010-03-05

    Cloud computing is increasingly considered as an additional computational resource platform for scientific workflows. The cloud offers the opportunity to scale out applications from desktops and local cluster resources. At the same time, it can eliminate the challenges of restricted software environments and queue delays in shared high performance computing environments. Choosing from these diverse resource platforms for a workflow execution poses a challenge for many scientists. Scientists are often faced with deciding resource platform selection trade-offs with limited information on the actual workflows. While many workflow planning methods have explored task scheduling onto different resources, these methods often require fine-scale characterization of the workflow that is onerous for a scientist. In this position paper, we describe our early exploratory work into using blackbox characteristics to do a cost-benefit analysis of using cloud platforms. We use only very limited high-level information on the workflow length, width, and data sizes. The length and width are indicative of the workflow duration and parallelism. The data size characterizes the IO requirements. We compare the effectiveness of this approach to other resource selection models using two exemplar scientific workflows scheduled on desktops, local clusters, HPC centers, and clouds. Early results suggest that the blackbox model often makes the same resource selections as a more fine-grained whitebox model. We believe the simplicity of the blackbox model can help inform a scientist on the applicability of cloud computing resources even before porting an existing workflow.
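
    A toy rendering of the blackbox idea is sketched below: given only the workflow length, width, and data size, plus a handful of per-platform parameters, estimate a makespan and cost for each candidate platform and compare. All platform figures and prices are placeholders, not values from the paper.

        # Toy version of the blackbox approach described above: pick a platform
        # from only coarse workflow characteristics (length, width, data size).
        # Platform parameters and prices are illustrative, not measured.
        from dataclasses import dataclass

        @dataclass
        class Platform:
            name: str
            slots: int           # usable parallel slots
            io_mb_per_s: float   # sustained I/O bandwidth
            queue_wait_s: float  # expected wait before the workflow starts
            usd_per_cpu_hour: float

        def estimate(platform, length_s, width, data_mb):
            """Crude makespan/cost estimate from blackbox workflow characteristics."""
            waves = -(-width // platform.slots)           # ceiling division
            compute = waves * length_s
            io = data_mb / platform.io_mb_per_s
            makespan = platform.queue_wait_s + compute + io
            cost = (compute * min(width, platform.slots) / 3600.0) * platform.usd_per_cpu_hour
            return makespan, cost

        platforms = [
            Platform("desktop", 4, 80.0, 0.0, 0.0),
            Platform("local cluster", 64, 300.0, 600.0, 0.0),
            Platform("HPC center", 1024, 2000.0, 7200.0, 0.0),
            Platform("cloud", 256, 150.0, 120.0, 0.10),
        ]

        for p in platforms:
            m, c = estimate(p, length_s=1800, width=200, data_mb=50_000)
            print(f"{p.name:13s} makespan={m/3600:5.1f} h  cost=${c:7.2f}")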

  17. High-performance CPW MMIC LNA using GaAs-based metamorphic HEMTs for 94-GHz applications

    International Nuclear Information System (INIS)

    Ryu, Keun-Kwan; Kim, Sung-Chan; An, Dan; Rhee, Jin-Koo

    2010-01-01

    In this paper, we report on a high-performance low-noise amplifier (LNA) using metamorphic high-electron-mobility transistor (MHEMT) technology for 94-GHz applications. The 100 nm x 60 μm MHEMT devices for the coplanar MMIC LNA exhibited DC characteristics with a drain current density of 655 mA/mm and an extrinsic transconductance of 720 mS/mm. The current gain cutoff frequency (fT) and the maximum oscillation frequency (fmax) were 195 GHz and 305 GHz, respectively. Based on this MHEMT technology, coplanar 94-GHz MMIC LNAs were realized, achieving a small-signal gain of more than 13 dB between 90 and 100 GHz, with a gain of 14.8 dB and a noise figure of 4.7 dB at 94 GHz.

  18. Summary of scientific investigations for the Waste Isolation Pilot Plant

    International Nuclear Information System (INIS)

    Weart, W.D.

    1996-01-01

    The scientific issues concerning disposal of radioactive wastes in salt formations have received 40 years of attention since the National Academy of Sciences (NAS) first addressed this issue in the mid-50s. For the last 21 years, Sandia National Laboratories (SNL) has directed site-specific studies for the Waste Isolation Pilot Plant (WIPP). This paper will focus primarily on the WIPP scientific studies now in their concluding stages, the major scientific controversies regarding the site, and some of the surprises encountered during the course of these scientific investigations. The WIPP project's present understanding of the scientific processes involved continues to support the site as a satisfactory, safe location for the disposal of defense-related transuranic waste and one which will be shown to be in compliance with Environmental Protection Agency (EPA) standards. Compliance will be evaluated by incorporating data from these experiments into Performance Assessment (PA) models developed to describe the physical and chemical processes that could occur at the WIPP during the next 10,000 years under a variety of scenarios. The resulting compliance document is scheduled to be presented to the EPA in October 1996, and all relevant information from scientific studies will be included in this application and the supporting analyses. Studies supporting this compliance application conclude the major period of scientific investigation for the WIPP. Further studies will be of a "confirmatory" and monitoring nature.

  19. High power, gel polymer lithium-ion cells with improved low temperature performance for NASA and DoD applications

    Science.gov (United States)

    Smart, M. C.; Ratnakumar, B. V.; Whitcanack, L. D.; Chin, K. B.; Surampudi, S.; Narayanan, S. R.; Alamgir, Mohamed; Yu, Ji-Sang; Plichta, Edward P.

    2004-01-01

    Both NASA and the U.S. Army have interest in developing secondary energy storage devices that are capable of meeting the demanding performance requirements of aerospace and man-portable applications. In order to meet these demanding requirements, gel-polymer electrolyte-based lithium-ion cells are being actively considered, due to their promise of providing high specific energy and enhanced safety aspects.

  20. GRID Prototype for imagery processing in scientific applications

    International Nuclear Information System (INIS)

    Stan, Ionel; Zgura, Ion Sorin; Haiduc, Maria; Valeanu, Vlad; Giurgiu, Liviu

    2004-01-01

    The paper presents the results of our study, which is part of the InGRID project. This project is supported by ROSA (the ROmanian Space Agency). In this paper we show the possibility of taking images from an optical microscope through a web camera. The images are then stored on a PC under the Linux operating system and distributed to other clusters through GRID technology (using http, php, MySQL, Globus or AliEn systems). The images come from nuclear emulsions in the frame of the Becquerel Collaboration. The main goal of the InGRID project is to drive the development and deployment of GRID technology for imaging techniques applied to images taken from space, different application fields and telemedicine. It will also create links with similar international projects which use advanced Grid technology and scalable storage solutions. The main topics proposed to be solved in the frame of the InGRID project are: - Implementation of two GRID clusters, minimum level Tier 3; - Adapting and updating the common storage and processing computing facility; - Testing the middleware packages developed in the frame of this project; - Testbed production of the prototype; - Build-up and advertisement of the InGRID prototype in the scientific community through ongoing dissemination. The InGRID prototype developed in the frame of this project will be used by the partner institutes as a deployment environment for imaging applications, whose dynamical features will be defined by the conditions of the contract. Subsequent applications will be deployed by the partners of this project with governmental, non-governmental and private institutions. (authors)

  1. Manifold compositions, music visualization, and scientific sonification in an immersive virtual-reality environment.

    Energy Technology Data Exchange (ETDEWEB)

    Kaper, H. G.

    1998-01-05

    An interdisciplinary project encompassing sound synthesis, music composition, sonification, and visualization of music is facilitated by the high-performance computing capabilities and the virtual-reality environments available at Argonne National Laboratory. The paper describes the main features of the project's centerpiece, DIASS (Digital Instrument for Additive Sound Synthesis); "A.N.L.-folds", an equivalence class of compositions produced with DIASS; and application of DIASS in two experiments in the sonification of complex scientific data. Some of the larger issues connected with this project, such as the changing ways in which both scientists and composers perform their tasks, are briefly discussed.
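
    As a minimal illustration of the sonification step (not DIASS itself, which is a full additive-synthesis instrument), the stdlib-only sketch below maps each value of a data series to the pitch of a short sine segment and writes the result to a WAV file; every parameter is an arbitrary choice.

        # Not DIASS: a minimal stdlib-only illustration of sonifying a data
        # series by mapping each value to the pitch of a short sine-wave segment.
        import math
        import struct
        import wave

        def sonify(values, path="sonified.wav", rate=44100, seg_s=0.15,
                   f_lo=220.0, f_hi=880.0):
            lo, hi = min(values), max(values)
            span = (hi - lo) or 1.0
            frames = bytearray()
            for v in values:
                freq = f_lo + (v - lo) / span * (f_hi - f_lo)   # value -> pitch
                for n in range(int(rate * seg_s)):
                    s = int(32767 * 0.4 * math.sin(2 * math.pi * freq * n / rate))
                    frames += struct.pack("<h", s)
            with wave.open(path, "wb") as w:
                w.setnchannels(1)
                w.setsampwidth(2)      # 16-bit PCM
                w.setframerate(rate)
                w.writeframes(bytes(frames))

        # e.g. sonify a noisy sine signal
        sonify([math.sin(i / 8) + 0.1 * math.cos(i) for i in range(120)])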

  2. Research governance and scientific knowledge production in The Gambia

    Directory of Open Access Journals (Sweden)

    Frederick U. Ozor

    2014-09-01

    Full Text Available Public research institutions and scientists are principal actors in the production and transfer of scientific knowledge, technologies and innovations for application in industry as well as for social and economic development. Based on the relevance of science and technology actors, the aim of this study was to identify and explain factors in research governance that influence scientific knowledge production and to contribute to empirical discussions on the impact levels of different governance models and structures. These discussions appear limited and mixed in the literature, although they are still ongoing. No previous study has examined the possible contribution of the scientific committee model of research governance to scientific performance at the individual level of the scientist. In this context, this study contributes to these discussions, firstly, by suggesting that scientific committee structures with significant research steering autonomy could contribute not only directly to scientific output but also indirectly through moderating effects on research practices. Secondly, it is argued that autonomous scientific committee structures tend to play a better steering role than do management-centric models and structures of research governance.

  3. The application of AFS in high-energy physical domain

    International Nuclear Information System (INIS)

    Xu Dong; Cheng Yaodong; Chen Gang; Yang Dajian; Yang Yi

    2004-01-01

    With the development of high-energy physics, the characteristics of experiments in the high-energy physics domain have changed greatly, especially the requirements for comprehensive file sharing and high-performance file transfer. On the other hand, the old management system is too scattered and unrefined to meet the needs of scientific research and international cooperation. In view of these new changes, we analyzed the characteristics of the experiments and proposed a solution that combines several kinds of file systems, including Ext3, NFS and AFS. The solution offers a new method of user management and file management. (authors)

  4. Optimizing the design of very high power, high performance converters

    International Nuclear Information System (INIS)

    Edwards, R.J.; Tiagha, E.A.; Ganetis, G.; Nawrocky, R.J.

    1980-01-01

    This paper describes how various technologies are used to achieve the desired performance in a high current magnet power converter system. It is hoped that the discussions of the design approaches taken will be applicable to other power supply systems where stringent requirements in stability, accuracy and reliability must be met

  5. The MOA thruster. A high performance plasma accelerator for nuclear power and propulsion applications

    International Nuclear Information System (INIS)

    Frischauf, Norbert; Hettmer, Manfred; Grassauer, Andreas; Bartusch, Tobias; Koudelka, Otto

    2009-01-01

    More than 60 years after the late Nobel laureate Hannes Alfven published a letter stating that oscillating magnetic fields can accelerate ionised matter via magneto-hydrodynamic interactions in a wave-like fashion, the technical implementation of Alfven waves for propulsive purposes has been proposed, patented and examined for the first time by a group of inventors. The name of the concept, which utilises Alfven waves to accelerate ionised matter for propulsive purposes, is MOA - Magnetic field Oscillating Amplified thruster. Alfven waves are generated by making use of two coils, one being permanently powered and serving also as a magnetic nozzle, the other being switched on and off in a cyclic way, deforming the field lines of the overall system. It is this deformation that generates Alfven waves, which are in the next step used to transport and compress the propulsive medium, in theory leading to a propulsion system with a much higher performance than any other electric propulsion system. While space propulsion is expected to be the prime application for MOA and is supported by numerous use cases such as Solar and/or Nuclear Electric Propulsion or even an 'afterburner system' for Nuclear Thermal Propulsion, other, terrestrial applications such as coating, semiconductor implantation and manufacturing, as well as steel cutting, can also be envisaged, making the system highly suited for a common space-terrestrial application research and utilisation strategy. This paper presents the recent developments of the MOA thruster R and D activities at QASAR, the company in Vienna, Austria, which has been set up to further develop and test the Alfven wave technology and its applications. (author)

  6. Investigating Assessment Bias for Constructed Response Explanation Tasks: Implications for Evaluating Performance Expectations for Scientific Practice

    Science.gov (United States)

    Federer, Meghan Rector

    frequently incorporate multivalent concepts into explanations of change, resulting in explanatory practices that were scientifically non-normative. However, use of follow-up question approaches was found to resolve this source of bias and thereby increase the validity of inferences about student understanding. The second study focused on issues of item and instrument structure, specifically item feature effects and item position effects, which have been shown to influence measures of student performance across assessment tasks. Results indicated that, along the instrument item sequence, items with similar surface features produced greater sequencing effects than sequences of items with dissimilar surface features. This bias could be addressed by use of a counterbalanced design (i.e., Latin Square) at the population level of analysis. Explanation scores were also highly correlated with student verbosity, despite verbosity being an intrinsically trivial aspect of explanation quality. Attempting to standardize student response length was one proposed solution to the verbosity bias. The third study explored gender differences in students' performance on constructed-response explanation tasks using impact (i.e., mean raw scores) and differential item function (i.e., item difficulties) patterns. While prior research in science education has suggested that females tend to perform better on constructed-response items, the results of this study revealed no overall differences in gender achievement. However, evaluation of specific item features patterns suggested that female respondents have a slight advantage on unfamiliar explanation tasks. That is, male students tended to incorporate fewer scientifically normative concepts (i.e., key concepts) than females for unfamiliar taxa. Conversely, females tended to incorporate more scientifically non-normative ideas (i.e., naive ideas) than males for familiar taxa. Together these results indicate that gender achievement differences for this

  7. Characteristics and applications of high-performance fiber reinforced asphalt concrete

    Science.gov (United States)

    Park, Philip

    Steel fiber reinforced asphalt concrete (SFRAC) is suggested in this research as a multifunctional high performance material that can potentially lead to a breakthrough in developing a sustainable transportation system. The innovative use of steel fibers in asphalt concrete is expected to improve mechanical performance and electrical conductivity of asphalt concrete that is used for paving 94% of U. S. roadways. In an effort to understand the fiber reinforcing mechanisms in SFRAC, the interaction between a single straight steel fiber and the surrounding asphalt matrix is investigated through single fiber pull-out tests and detailed numerical simulations. It is shown that pull-out failure modes can be classified into three types: matrix, interface, and mixed failure modes and that there is a critical shear stress, independent of temperature and loading rate, beyond which interfacial debonding will occur. The reinforcing effects of SFRAC with various fiber sizes and shapes are investigated through indirect tension tests at low temperature. Compared to unreinforced specimens, fiber reinforced specimens exhibit up to 62.5% increase in indirect tensile strength and 895% improvements in toughness. The documented improvements are the highest attributed to fiber reinforcement in asphalt concrete to date. The use of steel fibers and other conductive additives provides an opportunity to make asphalt pavement electrically conductive, which opens up the possibility for multifunctional applications. Various asphalt mixtures and mastics are tested and the results indicate that the electrical resistivity of asphaltic materials can be manipulated over a wide range by replacing a part of traditional fillers with a specific type of graphite powder. Another important achievement of this study is development and validation of a three dimensional nonlinear viscoelastic constitutive model that is capable of simulating both linear and nonlinear viscoelasticity of asphaltic materials. The
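
    For orientation, single-fiber pull-out data of this kind are commonly reduced to an average interfacial shear stress computed from the pull-out force F, the fiber diameter d and the embedded length l_e; this is a standard estimate and not necessarily the exact quantity used in the study:

        % Common reduction of single-fiber pull-out data (an assumption for
        % illustration, not the study's own definition): average interfacial
        % shear stress over the embedded fiber surface.
        \[
          \bar{\tau} = \frac{F}{\pi \, d \, \ell_e}
        \]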

  8. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  9. An Analysis of the Supports and Constraints for Scientific Discussion in High School Project-Based Science

    Science.gov (United States)

    Alozie, Nonye M.; Moje, Elizabeth Birr; Krajcik, Joseph S.

    2010-01-01

    One goal of project-based science is to promote the development of scientific discourse communities in classrooms. Holding rich high school scientific discussions is challenging, especially when the demands of content and norms of high school science pose challenges to their enactment. There is little research on how high school teachers enact…

  10. FY01 Supplemental Science and Performance Analyses, Volume 1: Scientific Bases and Analyses, Part 1 and 2

    International Nuclear Information System (INIS)

    Dobson, David

    2001-01-01

    The U.S. Department of Energy (DOE) is considering the possible recommendation of a site at Yucca Mountain, Nevada, for development as a geologic repository for the disposal of high-level radioactive waste and spent nuclear fuel. To facilitate public review and comment, in May 2001 the DOE released the Yucca Mountain Science and Engineering Report (S and ER) (DOE 2001 [DIRS 153849]), which presents technical information supporting the consideration of the possible site recommendation. The report summarizes the results of more than 20 years of scientific and engineering studies. A decision to recommend the site has not been made: the DOE has provided the S and ER and its supporting documents as an aid to the public in formulating comments on the possible recommendation. When the S and ER (DOE 2001 [DIRS 153849]) was released, the DOE acknowledged that technical and scientific analyses of the site were ongoing. Therefore, the DOE noted in the Federal Register Notice accompanying the report (66 FR 23 013 [DIRS 155009], p. 2) that additional technical information would be released before the dates, locations, and times for public hearings on the possible recommendation were announced. This information includes: (1) the results of additional technical studies of a potential repository at Yucca Mountain, contained in this FY01 Supplemental Science and Performance Analyses: Vol. 1, Scientific Bases and Analyses; and FY01 Supplemental Science and Performance Analyses: Vol. 2, Performance Analyses (McNeish 2001 [DIRS 155023]) (collectively referred to as the SSPA) and (2) a preliminary evaluation of the Yucca Mountain site's preclosure and postclosure performance against the DOE's proposed site suitability guidelines (10 CFR Part 963 [64 FR 67054] [DIRS 124754]). By making the large amount of information developed on Yucca Mountain available in stages, the DOE intends to provide the public and interested parties with time to review the available materials and to formulate

  11. Improving UV Resistance of High Performance Fibers

    Science.gov (United States)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut-resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of the photons of UV light is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight in order to maintain the main advantage of high performance fibers, namely the high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid made from PBO. The first approach is extruding a sheath of Low Density Polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, with protection judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer equipment as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  12. High-performance integrated field-effect transistor-based sensors

    Energy Technology Data Exchange (ETDEWEB)

    Adzhri, R., E-mail: adzhri@gmail.com [Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia); Md Arshad, M.K., E-mail: mohd.khairuddin@unimap.edu.my [Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia); School of Microelectronic Engineering (SoME), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia); Gopinath, Subash C.B., E-mail: subash@unimap.edu.my [Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia); School of Bioprocess Engineering (SBE), Universiti Malaysia Perlis (UniMAP), Arau, Perlis (Malaysia); Ruslinda, A.R., E-mail: ruslinda@unimap.edu.my [Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia); Fathil, M.F.M., E-mail: faris.fathil@gmail.com [Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia); Ayub, R.M., E-mail: ramzan@unimap.edu.my [Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia); Nor, M. Nuzaihan Mohd, E-mail: m.nuzaihan@unimap.edu.my [Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia); Voon, C.H., E-mail: chvoon@unimap.edu.my [Institute of Nano Electronic Engineering (INEE), Universiti Malaysia Perlis (UniMAP), Kangar, Perlis (Malaysia)

    2016-04-21

    Field-effect transistors (FETs) have succeeded in modern electronics in an era of computers and hand-held applications. Currently, considerable attention has been paid to direct electrical measurements, which work by monitoring changes in intrinsic electrical properties. Further, FET-based sensing systems drastically reduce cost, are compatible with CMOS technology, and ease down-stream applications. Current technologies for sensing applications rely on time-consuming strategies and processes and can only be performed under recommended conditions. To overcome these obstacles, an overview is presented here in which we specifically focus on high-performance FET-based sensor integration with nano-sized materials, which requires understanding the interaction of surface materials with the surrounding environment. Therefore, we present strategies, material depositions, device structures and other characteristics involved in FET-based devices. Special attention was given to silicon and polyaniline nanowires and graphene, which have attracted much interest due to their remarkable properties in sensing applications. - Highlights: • Performance of FET-based biosensors for the detection of biomolecules is presented. • Silicon nanowire, polyaniline and graphene are the highlighted nanoscaled materials as sensing transducers. • The importance of surface material interaction with the surrounding environment is discussed. • Different device structure architectures for ease in fabrication and high sensitivity of sensing are presented.

  13. High-performance integrated field-effect transistor-based sensors

    International Nuclear Information System (INIS)

    Adzhri, R.; Md Arshad, M.K.; Gopinath, Subash C.B.; Ruslinda, A.R.; Fathil, M.F.M.; Ayub, R.M.; Nor, M. Nuzaihan Mohd; Voon, C.H.

    2016-01-01

    Field-effect transistors (FETs) have succeeded in modern electronics in an era of computers and hand-held applications. Currently, considerable attention has been paid to direct electrical measurements, which work by monitoring changes in intrinsic electrical properties. Further, FET-based sensing systems drastically reduce cost, are compatible with CMOS technology, and ease down-stream applications. Current technologies for sensing applications rely on time-consuming strategies and processes and can only be performed under recommended conditions. To overcome these obstacles, an overview is presented here in which we specifically focus on high-performance FET-based sensor integration with nano-sized materials, which requires understanding the interaction of surface materials with the surrounding environment. Therefore, we present strategies, material depositions, device structures and other characteristics involved in FET-based devices. Special attention was given to silicon and polyaniline nanowires and graphene, which have attracted much interest due to their remarkable properties in sensing applications. - Highlights: • Performance of FET-based biosensors for the detection of biomolecules is presented. • Silicon nanowire, polyaniline and graphene are the highlighted nanoscaled materials as sensing transducers. • The importance of surface material interaction with the surrounding environment is discussed. • Different device structure architectures for ease in fabrication and high sensitivity of sensing are presented.

  14. Input/Output of ab-initio nuclear structure calculations for improved performance and portability

    International Nuclear Information System (INIS)

    Laghave, Nikhil

    2010-01-01

    Many modern scientific applications rely on highly computation-intensive calculations. However, most applications do not concentrate as much on the role that input/output operations can play in improved performance and portability. Parallelizing the input/output operations of large files can significantly improve the performance of parallel applications where sequential I/O is a bottleneck. A proper choice of I/O library also offers scope for making input/output operations portable across different architectures. Thus, the use of parallel I/O libraries for organizing I/O of large data files offers great scope for improving the performance and portability of applications. In particular, sequential I/O has been identified as a bottleneck for the highly scalable MFDn (Many Fermion Dynamics for nuclear structure) code performing ab-initio nuclear structure calculations. We develop interfaces and parallel I/O procedures to use a well-known parallel I/O library in MFDn. As a result, we gain efficient I/O of large datasets along with their portability and ease of use in the down-stream processing. Even in situations where the amount of data to be written is not huge, proper use of input/output operations can boost the performance of scientific applications. Application checkpointing offers enormous performance improvement and flexibility by doing a negligible amount of I/O to disk. Checkpointing saves and resumes application state in such a manner that in most cases the application is unaware that there has been an interruption to its execution. This helps in saving a large amount of work that has been previously done and continuing application execution. This small amount of I/O provides substantial time savings by offering restart/resume capability to applications. The need for checkpointing in the optimization code NEWUOA has been identified, and checkpoint/restart capability has been implemented in NEWUOA using simple file I/O.
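
    The checkpoint/restart pattern described here can be reduced to a few lines: periodically serialize the loop state to disk and, on startup, resume from the last saved state if one exists. The sketch below is a generic illustration with an invented file name and a stand-in workload, not the actual NEWUOA or MFDn code.

        # Sketch of the checkpoint/restart pattern described above: a long-running
        # loop periodically saves its state so a later run can resume where it
        # left off, at the cost of a tiny file write.
        import json
        import os

        CHECKPOINT = "state.json"   # hypothetical path

        def load_state():
            if os.path.exists(CHECKPOINT):
                with open(CHECKPOINT) as f:
                    return json.load(f)          # resume previous run
            return {"iteration": 0, "best": float("inf")}

        def save_state(state):
            tmp = CHECKPOINT + ".tmp"
            with open(tmp, "w") as f:
                json.dump(state, f)
            os.replace(tmp, CHECKPOINT)          # atomic swap so a crash never corrupts it

        state = load_state()
        for i in range(state["iteration"], 10_000):
            value = (i - 6000) ** 2              # stand-in for one expensive iteration
            state["best"] = min(state["best"], value)
            state["iteration"] = i + 1
            if i % 500 == 0:
                save_state(state)                # negligible I/O relative to the work
        save_state(state)
        print("best:", state["best"])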

  15. Topological data analysis for scientific visualization

    CERN Document Server

    Tierny, Julien

    2017-01-01

    Combining theoretical and practical aspects of topology, this book delivers a comprehensive and self-contained introduction to topological methods for the analysis and visualization of scientific data. Theoretical concepts are presented in a thorough but intuitive manner, with many high-quality color illustrations. Key algorithms for the computation and simplification of topological data representations are described in details, and their application is carefully illustrated in a chapter dedicated to concrete use cases. With its fine balance between theory and practice, "Topological Data Analysis for Scientific Visualization" constitutes an appealing introduction to the increasingly important topic of topological data analysis, for lecturers, students and researchers.

  16. TiO2 hierarchical hollow microspheres with different sizes for application as anodes in high-performance lithium storage

    International Nuclear Information System (INIS)

    Wang, Xiaobing; Meng, Qiuxia; Wang, Yuanyuan; Liang, Huijun; Bai, Zhengyu; Wang, Kui; Lou, Xiangdong; Cai, Bibo; Yang, Lin

    2016-01-01

    Graphical abstract: In the application of lithium-ion batteries, the influence of microsphere size is more significant than the secondary nanoparticle size and crystallinity of TiO2-HSs for their transfer resistance and cycling performance, so that TiO2-HSs with larger sizes can retain high reversible capacities after 30 cycles. - Highlights: • Hierarchical hollow microspheres show a size effect in lithium-ion battery applications. • The microsphere size can significantly affect the cycling capacities of TiO2. • The nanoparticle size affects the initial discharge capacity and lithium-ion diffusion. • A controlled microsphere size is more significant for improving TiO2 cycling capacities. - Abstract: Nowadays, the safety issue has greatly hindered the development of large-capacity lithium-ion batteries (LIBs), especially in electric vehicle applications. TiO2 is a potential anode candidate for improving the safety of LIBs. However, it is still necessary to understand how to improve the performance of TiO2 anodes in practical applications. Herein, we designed a comparative experiment using three sizes of TiO2 hierarchical hollow microspheres (TiO2-HSs). The results indicated that the cycling performance of TiO2-HS anodes is affected by the size of the microspheres, while the nanoparticle size within the microspheres and the crystallinity of TiO2 affect the initial discharge capacity and lithium-ion diffusion; the influence of microsphere size is more significant. This may provide a new strategy for improving the lithium-ion storage properties of TiO2 anode materials in practical applications.

  17. The Potential of the Cell Processor for Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
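
    One common style of such analytical modelling bounds a kernel's throughput by the smaller of the machine's compute peak and its memory bandwidth times the kernel's arithmetic intensity. The sketch below illustrates that style only; the peak figures and kernel intensities are placeholders, not the paper's model or data.

        # Illustration of the general style of analytical performance modelling
        # mentioned above (a simple bandwidth/compute bound), not the paper's
        # actual model. All numbers are placeholders.
        def predicted_gflops(peak_gflops, bandwidth_gb_s, flops_per_byte):
            """A kernel is limited by whichever is smaller: the compute peak or
            memory bandwidth times its arithmetic intensity."""
            return min(peak_gflops, bandwidth_gb_s * flops_per_byte)

        kernels = {
            "stencil (low intensity)":  0.25,   # flops per byte moved
            "SpMV":                     0.20,
            "dense GEMM (blocked)":    16.0,
        }
        peak, bw = 200.0, 25.0                  # hypothetical accelerator figures
        for name, ai in kernels.items():
            print(f"{name:26s} -> {predicted_gflops(peak, bw, ai):6.1f} GFLOP/s bound")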

  18. A review of the scientific basis for radiation protection of the patient

    International Nuclear Information System (INIS)

    Moores, B. M.; Regulla, D.

    2011-01-01

    The use of ionising radiation in medicine is the single largest man-made source of population exposure. Individual and collective doses to patients arising from the medical use of ionising radiations continue to rise significantly year on year. This is due to the increasing use of medical imaging procedures in modern health care systems as well as the continued development of new high dose techniques. This paper reviews the scientific basis for the principles of radiation protection as defined by the International Commission on Radiological Protection. These principles attempt to include exposures arising from both medical and non-medical applications within a common framework and have evolved over many years and changing socioeconomic considerations. In particular, the concepts of justification and ALARA (doses should be as low as reasonably achievable), which underpin the principles for medical exposures are assessed in terms of their applicability to the scientific process and relevance to a rapidly changing technologically-led health care system. Radiation protection is an integral component of patient safety in medical practices and needs to be evidence based and amenable to the scientific process. The limitations imposed by the existing philosophy of radiation protection to the development of a quantitative framework for adequately assessing the performance of medical imaging systems are highlighted. In particular, medical practitioners will require quantitative guidance as to the risk-benefits arising from modern X-ray imaging methods if they are to make rational judgements as to the applicability of modern high-dose techniques to particular diagnostic and therapeutic tasks. At present such guidance is variable due to the lack of a rational framework for assessing the clinical impact of medical imaging techniques. The possible integration of radiation protection concepts into fundamental bio-medical imaging research activities is discussed. (authors)

  19. High-performance indium gallium phosphide/gallium arsenide heterojunction bipolar transistors

    Science.gov (United States)

    Ahmari, David Abbas

    Heterojunction bipolar transistors (HBTs) have demonstrated the high-frequency characteristics as well as the high linearity, gain, and power efficiency necessary to make them attractive for a variety of applications. Specific applications for which HBTs are well suited include amplifiers, analog-to-digital converters, current sources, and optoelectronic integrated circuits. Currently, most commercially available HBT-based integrated circuits employ the AlGaAs/GaAs material system in applications such as a 4-GHz gain block used in wireless phones. As modern systems require higher-performance and lower-cost devices, HBTs utilizing the newer, InGaP/GaAs and InP/InGaAs material systems will begin to dominate the HBT market. To enable the widespread use of InGaP/GaAs HBTs, much research on the fabrication, performance, and characterization of these devices is required. This dissertation will discuss the design and implementation of high-performance InGaP/GaAs HBTs as well as study HBT device physics and characterization.

  20. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    KAUST Repository

    Vignal, Philippe

    2014-06-06

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation and the phase-field crystal equation as test cases. These two models allow us to highlight some of the main advantages of using PetIGA for scientific computing.
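
    The abstract does not reproduce the model equations; for reference, in one commonly used form the Cahn-Hilliard equation for a concentration field c with mobility M, double-well bulk free energy f(c) and interface length scale \epsilon reads

        \frac{\partial c}{\partial t} = \nabla \cdot \left( M \, \nabla \left( f'(c) - \epsilon^{2} \nabla^{2} c \right) \right)

    so that a fourth-order spatial operator appears once the chemical potential is substituted back, which is one reason higher-continuity Isogeometric discretisations are attractive for this class of problem.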

  1. Scientific and technical guidance for the preparation and presentation of an application for authorisation of a health claim (revision 1)

    DEFF Research Database (Denmark)

    Tetens, Inge

    2011-01-01

    The scientific and technical guidance of the EFSA Panel on Dietetic Products, Nutrition and Allergies for the preparation and presentation of an application for authorisation of a health claim presents a common format for the organisation of information for the preparation of a well-structured application for authorisation of health claims which fall under Article 14 (referring to children’s development and health, and to disease risk reduction claims), or 13(5) (which are based on newly developed scientific evidence and/or which include a request for the protection of proprietary data), or for the modification of an existing authorisation in accordance with Article 19 of Regulation (EC) No 1924/2006 on nutrition and health claims made on foods. This guidance outlines: the information and scientific data which must be included in the application, the hierarchy of different types of data and study designs...

  2. High-Performance Tiled WMS and KML Web Server

    Science.gov (United States)

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
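
    The abstract does not describe the tile-addressing scheme; purely as an illustration of how a gridded WMS request can be resolved against a pre-generated tile set, a bounding box may be quantized to tile indices along the following lines (the grid origin, resolution and tile size here are invented for the example and are not the module's own logic).

        # Illustrative lookup of a pre-generated tile for a gridded WMS GetMap
        # request. The addressing scheme is hypothetical, not the module's own.
        def tile_index(bbox, resolution_deg_per_px, tile_size_px=512):
            """Map a WMS bounding box (west, south, east, north in degrees)
            to (column, row) indices on a fixed global tile grid."""
            west, south, east, north = bbox
            tile_span_deg = resolution_deg_per_px * tile_size_px
            col = int((west + 180.0) // tile_span_deg)
            row = int((90.0 - north) // tile_span_deg)
            return col, row

        # A 512x512 tile at ~0.001 degrees per pixel:
        print(tile_index((-0.512, 51.0, 0.0, 51.512), resolution_deg_per_px=0.001))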

  3. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; we then briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  4. Integrating a work-flow engine within a commercial SCADA to build end users applications in a scientific environment

    International Nuclear Information System (INIS)

    Ounsy, M.; Pierre-Joseph Zephir, S.; Saintin, K.; Abeille, G.; Ley, E. de

    2012-01-01

    To build integrated high-level applications, SOLEIL is using an original component-oriented approach based on GlobalSCREEN, an industrial Java SCADA. The aim of this integrated development environment is to give SOLEIL's scientific and technical staff a way to develop GUI (Graphical User Interface) applications for external users of beamlines. These GUI applications must address the two following needs: monitoring and supervision of a control system, and development and execution of automated processes (such as beamline alignment, data collection and on-line data analysis). The first need is now completely answered through a rich set of Java graphical components based on the COMETE library, providing a high level of service for data logging, scanning and so on. To reach the same quality of service for process automation, a considerable effort has been made to integrate PASSERELLE, a work-flow engine with dedicated user-friendly interfaces for end users, more seamlessly, packaged as JavaBeans in the GlobalSCREEN component library. Starting with brief descriptions of the software architecture of the PASSERELLE and GlobalSCREEN environments, we will then present the overall system integration design as well as the current status of deployment on SOLEIL beamlines. (authors)

  5. The Organizational-Legal Peculiarities of Application of the Remote Labor Mode and Flexible Working Hours of Scientific Workers at Higher Education Institution

    Directory of Open Access Journals (Sweden)

    Lytovchenko Iryna V.

    2018-01-01

    Full Text Available The article is aimed at defining the main organizational-legal peculiarities of applying the remote labor mode and of establishing and accounting for the flexible working hours of scientific workers at higher educational institutions and scientific institutes. In the course of the research, the organizational-legal peculiarities of applying the remote labor mode and flexible working hours of scientific workers at higher education institutions were analyzed. The article suggests their integration into the activities of higher education institutions with the purpose of efficient distribution of working time, provided that the tasks set are fully executed in a timely manner. As the basic means of controlling and measuring the results of scientific activity, it is suggested to use acts of executed works and other absolute indicators (quantity of processed scientific sources, quantity of written pages of scientific papers, etc.). A prospective direction of further research is the development of practical recommendations on the use of special reports and indicators with an assessment of their impact on the results of activities of scientific workers at higher education institutions.

  6. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  7. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    Science.gov (United States)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

    In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing plays a key role in promoting and advancing missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning many areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, and physics- and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the 3D fully immersive display system (the Cave), the high-resolution parallel visualization system (the Powerwall), and the high-resolution spherical display (the Earth Simulator). The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large, 3.6 m wide room with images projected on the front, left, right and floor walls. Specialized crystal eyes LCD-shutter glasses provide a strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution, ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for visualization of global-scale data such as geophysical, meteorological, climate and ecology data. The HPCC ADA is a 1000+ computing-core system, which offers parallel computing resources to applications that require

  8. Analysis of student’s scientific attitude behaviour change effects blended learning supported by I-spring Suite 8 application

    Science.gov (United States)

    Budiharti, Rini; Waras, N. S.

    2018-05-01

    This article aims to describe the change in students' scientific attitude and behaviour as an effect of Blended Learning supported by the I-Spring Suite 8 application on the material of balance and rotational dynamics. The Blended Learning model is a learning strategy that integrates face-to-face learning and online learning through a combination of various media. A Blended Learning setting supported by I-Spring Suite 8 media can make learning interactive. Students are guided to actively interact with the media as well as with other students, discussing the phenomena or facts presented in order to grasp the concept. The scientific attitude is a natural attitude of students in the learning process, and in interactive learning a scientific attitude is especially needed. The research was conducted using a Lesson Study model consisting of the Plan-Do-Check-Act (PDCA) stages and applied to students of class XI MIPA 2 of Senior High School 6 Surakarta. The validity of the data was established through triangulation of observation, interviews and document review. Based on the discussion, it can be concluded that the use of Blended Learning supported by I-Spring Suite 8 media is able to produce changes in student behaviour on all dimensions of scientific attitude, namely inquisitiveness, respect for data or facts, critical thinking, discovery and creativity, open-mindedness and cooperation, and perseverance. The e-learning media display, supported by student worksheets, keeps students enthusiastic from the beginning through the core to the end of the lesson.

  9. Blaze-DEMGPU: Modular high performance DEM framework for the GPU architecture

    Directory of Open Access Journals (Sweden)

    Nicolin Govender

    2016-01-01

    Full Text Available Blaze-DEMGPU is a modular GPU-based discrete element method (DEM) framework that supports polyhedral-shaped particles. The high performance is attributed to the lightweight, Single Instruction Multiple Data (SIMD) architecture that the GPU offers. Blaze-DEMGPU offers suitable algorithms to conduct DEM simulations on the GPU, and these algorithms can be extended and modified. Since a large number of scientific simulations are particle based, many of the algorithms and strategies for GPU implementation present in Blaze-DEMGPU can be applied to other fields. Blaze-DEMGPU will make it easier for new researchers to use high-performance GPU computing as well as stimulate wider GPU research efforts by the DEM community.

  10. High-Speed Data Recorder for Space, Geodesy, and Other High-Speed Recording Applications

    Science.gov (United States)

    Taveniku, Mikael

    2013-01-01

    A high-speed data recorder and replay equipment has been developed for reliable high-data-rate recording to disk media. It solves problems with slow or faulty disks, multiple disk insertions, high-altitude operation, reliable performance using COTS hardware, and long-term maintenance and upgrade path challenges. The current generation of data recorders used within the VLBI community are aging, special-purpose machines that are both slow (they do not meet today's requirements) and very expensive to maintain and operate. Furthermore, they are not easily upgraded to take advantage of commercial technology development, and are not scalable to the multiple tens of Gbit/s data rates required by new applications. The innovation provides a software-defined, high-speed data recorder that is scalable with technology advances in the commercial space. It maximally utilizes current technologies without being locked to a particular hardware platform. The innovation also provides a cost-effective way of streaming large amounts of data from sensors to disk, enabling many applications to store raw sensor data and perform post and signal processing offline. This recording system will be applicable to many applications needing real-world, high-speed data collection, including electronic warfare, software-defined radar, signal history storage of multispectral sensors, development of autonomous vehicles, and more.

  11. Proceeding on the scientific meeting and presentation on accelerator technology and its applications: physics, nuclear reactor

    International Nuclear Information System (INIS)

    Pramudita Anggraita; Sudjatmoko; Darsono; Tri Marji Atmono; Tjipto Sujitno; Wahini Nurhayati

    2012-01-01

    The scientific meeting and presentation on accelerator technology and its applications was held by PTAPB BATAN on 13 December 2011. The meeting aims to promote accelerator technology and its applications to accelerator scientists, academics, researchers and technology users, as well as to present accelerator-based research that has been conducted by researchers inside and outside BATAN. This proceeding contains 23 papers on physics and nuclear reactors. (PPIKSN)

  12. Impact of measuring electron tracks in high-resolution scientific charge-coupled devices within Compton imaging systems

    International Nuclear Information System (INIS)

    Chivers, D.H.; Coffer, A.; Plimley, B.; Vetter, K.

    2011-01-01

    We have implemented benchmarked models to determine the gain in sensitivity of electron-tracking based Compton imaging relative to conventional Compton imaging by the use of high-resolution scientific charge-coupled devices (CCD). These models are based on the recently demonstrated ability of electron-tracking based Compton imaging by using fully depleted scientific CCDs. Here we evaluate the gain in sensitivity by employing Monte Carlo simulations in combination with advanced charge transport models to calculate two-dimensional charge distributions corresponding to experimentally obtained tracks. In order to reconstruct the angle of the incident γ-ray, a trajectory determination algorithm was used on each track and integrated into a back-projection routine utilizing a geodesic-vertex ray tracing technique. Analysis was performed for incident γ-ray energies of 662 keV and results show an increase in sensitivity consistent with tracking of the Compton electron to approximately ±30°.

  13. Contributing to the design of run-time systems dedicated to high performance computing; Contribution a l'elaboration d'environnements de programmation dedies au calcul scientifique hautes performances

    Energy Technology Data Exchange (ETDEWEB)

    Perache, M

    2006-10-15

    In the field of intensive scientific computing, the quest for performance has to face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multi-processors. It features a programming model centered around collective communications and synchronizations, and provides load balancing facilities. The programming interface, named MPC, provides high-level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)

  14. GPU Implementation of High Rayleigh Number Three-Dimensional Mantle Convection

    Science.gov (United States)

    Sanchez, D. A.; Yuen, D. A.; Wright, G. B.; Barnett, G. A.

    2010-12-01

    Although we have entered the age of petascale computing, many factors are still prohibiting high-performance computing (HPC) from infiltrating all suitable scientific disciplines. For this reason and others, application of GPUs to HPC is gaining traction in the scientific world. With its low price point, high performance potential, and competitive scalability, the GPU has been an option well worth considering for the last few years. Moreover, with the advent of NVIDIA's Fermi architecture, which brings ECC memory, better double-precision performance, and more RAM to the GPU, there is a strong message of corporate support for GPU in HPC. However, many doubts linger concerning the practicality of using GPUs for scientific computing. In particular, GPUs have a reputation for being difficult to program and suitable for only a small subset of problems. Although inroads have been made in addressing these concerns, for many scientists GPU still has hurdles to clear before becoming an acceptable choice. We explore the applicability of GPU to geophysics by implementing a three-dimensional, second-order finite-difference model of Rayleigh-Bénard thermal convection on an NVIDIA GPU using C for CUDA. Our code reaches sufficient resolution, on the order of 500x500x250 evenly-spaced finite-difference gridpoints, on a single GPU. We make extensive use of highly optimized CUBLAS routines, allowing us to achieve performance on the order of 0.1 µs per timestep per gridpoint at this resolution. This performance has allowed us to study high Rayleigh number simulations, on the order of 2x10^7, on a single GPU.
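
    As a quick sanity check on the quoted throughput, using only the figures from the abstract, a single time step at that resolution works out to a few seconds of wall-clock time:

        # Back-of-the-envelope check using the figures quoted in the abstract.
        nx, ny, nz = 500, 500, 250            # finite-difference gridpoints
        time_per_point_us = 0.1               # ~0.1 microseconds per timestep per gridpoint

        gridpoints = nx * ny * nz             # 62.5 million points
        seconds_per_step = gridpoints * time_per_point_us * 1e-6
        print(f"{gridpoints:,} points -> ~{seconds_per_step:.2f} s per time step")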

  15. Internal criteria for scientific choice: an evaluation of research in high-energy physics using electron accelerators

    International Nuclear Information System (INIS)

    Martin, B.R.; Irvine, J.

    1981-01-01

    The economic situation of scientific research is now very different from what it was in the early 1960s when Dr. Alvin Weinberg opened the debate on the criteria for scientific choice. Annual rates of growth of 10 per cent or more in the budget for science were then common in most Western countries, while today scientists face the prospect of no growth at all or even a decline. Some progress has also been made in developing techniques for the evaluation of the scientific performance of research groups. These two facts make it interesting to reconsider the question of scientific choice. (author)

  16. Brand alliance. Building block for scientific organisations' marketing strategy

    Directory of Open Access Journals (Sweden)

    Joern Redler

    2016-03-01

    Full Text Available This paper addresses management issues of brand alliances as part of a scientific organisation's marketing strategy. Though brand alliances have become quite popular with consumer products, they seem to be exceptions in the marketing context of academic or scientific organisations. Against this background, the paper develops a brand alliance approach considering the requirements of strategically marketing scientific organisations. As a starting point, brand alliances are discussed as a sub-category of brand combinations. Furthermore, opportunities for scientific organisations associated with the alliance approach are elucidated, as well as, from a more general perspective, major threats. The paper then focuses on modelling a framework of customer-based brand alliance effects, referring to the behavioural science-based view of brands, which conceptualises brands as the psychological reaction to the exposure of brand elements like a name, logo or symbols. In that context, prerequisites for success are examined as well. Further, essential components of a brand alliance management process are discussed and their application to scientific organisations is expounded. Aspects like, e.g., choosing and evaluating a partner brand, positioning a brand alliance or monitoring brand alliance performance are illuminated. With regard to practical application, factors and requirements for an organisation's brand alliance success are also outlined.

  17. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    Science.gov (United States)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the

  18. Scientific evaluation at the CEA

    International Nuclear Information System (INIS)

    1999-01-01

    This report presents a statement of the scientific and technical activity of the French atomic energy commission (CEA) for the year 1998. This evaluation is made by external and independent experts and requires some specific arrangements for the nuclear protection and safety institute (IPSN) and for the direction of military applications (DAM). The report is divided into 5 parts dealing successively with: part 1 - the CEA, a public research organization (civil nuclear research, technology research and transfers, defence activities); the scientific and technical evaluation at the CEA (general framework, evaluation of the IPSN and DAM); part 2 - the scientific and technical councils (directions of fuel cycle, of nuclear reactors, and of advanced technologies); part 3 - the scientific councils (directions of matter and of life sciences); the nuclear protection and safety institute; the direction of military applications; part 4 - the corresponding members of the evaluation; part 5 - the list of scientific and technical councils and members. (J.S.)

  19. Parallel Libraries to support High-Level Programming

    DEFF Research Database (Denmark)

    Larsen, Morten Nørgaard

    and the Microsoft .NET framework. Normally, one would not directly think of the .NET framework when talking about scientific applications, but Microsoft has in the last couple of versions of .NET introduced a number of tools for writing parallel and high-performance code. The first section examines how programmers can...

  20. Scientific-creative thinking and academic achievement

    Directory of Open Access Journals (Sweden)

    Rosario Bermejo

    2014-07-01

    Full Text Available The aim of this work is to study the relationship between the scientific-creative thinking construct and academic performance in a sample of adolescents. In addition, the scientific-creative thinking instrument's reliability will be tested. The sample was composed of 98 students (aged between 12 and 16 years old) attending a Secondary School in the Murcia Region (Spain). The instruments used were: (a) the Scientific-Creative Thinking Test designed by Hu and Adey (2002), which was adapted to the Spanish culture by the High Abilities research team at Murcia University; the test is composed of 7 tasks based on the Scientific Creative Structure Model and assesses the dimensions of fluency, flexibility and originality; (b) the General and Factorial Intelligence Test (IGF/5r; Yuste, 2002), which assesses general intelligence and logical reasoning, verbal reasoning, numerical reasoning and spatial reasoning; (c) students' academic achievement by domain (scientific-technological, social-linguistic and artistic) was collected. The results showed positive and statistically significant correlations between the scientific-creative tasks and academic achievement in the different domains.

  1. High performance polypyrrole coating for corrosion protection and biocidal applications

    Science.gov (United States)

    Nautiyal, Amit; Qiao, Mingyu; Cook, Jonathan Edwin; Zhang, Xinyu; Huang, Tung-Shi

    2018-01-01

    Polypyrrole (PPy) coating was electrochemically synthesized on carbon steel using sulfonic acids as dopants: p-toluene sulfonic acid (p-TSA), sulfuric acid (SA), (±) camphor sulfonic acid (CSA), sodium dodecyl sulfate (SDS), and sodium dodecylbenzene sulfonate (SDBS). The effect of the acidic dopants (p-TSA, SA, CSA) on the passivation of carbon steel was investigated by linear potentiodynamic measurements and compared with the morphology and corrosion protection performance of the coatings produced. The type of dopant used significantly affected the protection efficiency of the coating against chloride ion attack on the metal surface. The corrosion performance depends on the size and alignment of the dopant in the polymer backbone. Both p-TSA and SDBS have an extra benzene ring; these rings stack together to form a lamellar, sheet-like barrier to chloride ions, making them appropriate dopants for PPy coatings that suppress corrosion to a significant degree. Further, adhesion performance was enhanced by adding a long-chain carboxylic acid (decanoic acid) directly to the monomer solution. In addition, the PPy coating doped with SDBS displayed excellent biocidal ability against Staphylococcus aureus. Polypyrrole coatings on carbon steel with the dual function of anti-corrosion and excellent biocidal properties show great potential for industrial anti-corrosion/antimicrobial applications.

  2. PANTHER. Pattern ANalytics To support High-performance Exploitation and Reasoning.

    Energy Technology Data Exchange (ETDEWEB)

    Czuchlewski, Kristina Rodriguez [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hart, William E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia has approached the analysis of big datasets with an integrated methodology that uses computer science, image processing, and human factors to exploit critical patterns and relationships in large datasets despite the variety and rapidity of information. The work is part of a three-year LDRD Grand Challenge called PANTHER (Pattern ANalytics To support High-performance Exploitation and Reasoning). To maximize data analysis capability, Sandia pursued scientific advances across three key technical domains: (1) geospatial-temporal feature extraction via image segmentation and classification; (2) geospatial-temporal analysis capabilities tailored to identify and process new signatures more efficiently; and (3) domain- relevant models of human perception and cognition informing the design of analytic systems. Our integrated results include advances in geographical information systems (GIS) in which we discover activity patterns in noisy, spatial-temporal datasets using geospatial-temporal semantic graphs. We employed computational geometry and machine learning to allow us to extract and predict spatial-temporal patterns and outliers from large aircraft and maritime trajectory datasets. We automatically extracted static and ephemeral features from real, noisy synthetic aperture radar imagery for ingestion into a geospatial-temporal semantic graph. We worked with analysts and investigated analytic workflows to (1) determine how experiential knowledge evolves and is deployed in high-demand, high-throughput visual search workflows, and (2) better understand visual search performance and attention. Through PANTHER, Sandia's fundamental rethinking of key aspects of geospatial data analysis permits the extraction of much richer information from large amounts of data. The project results enable analysts to examine mountains of historical and current data that would otherwise go untouched, while also gaining meaningful, measurable, and defensible insights into

  3. Feasibility analysis of ultra high performance concrete for prestressed concrete bridge applications.

    Science.gov (United States)

    2010-07-01

    UHPC is an emerging material technology in which concrete develops very high compressive strengths and exhibits improved tensile strength and toughness. A comprehensive literature and historical application review was completed to determine the ...

  4. Network effects on scientific collaborations.

    Directory of Open Access Journals (Sweden)

    Shahadat Uddin

    Full Text Available BACKGROUND: The analysis of co-authorship networks aims at exploring the impact of network structure on the outcome of scientific collaborations and research publications. However, little is known about what network properties are associated with authors who have an increased number of joint publications and are being cited highly. METHODOLOGY/PRINCIPAL FINDINGS: Measures of social network analysis, for example network centrality and tie strength, have been utilized extensively in the current co-authorship literature to explore different behavioural patterns of co-authorship networks. Using three SNA measures (i.e., degree centrality, closeness centrality and betweenness centrality), we explore scientific collaboration networks to understand factors influencing performance (i.e., citation count) and formation (tie strength) between authors of such networks. A citation count is the number of times an article is cited by other articles. We use the co-authorship dataset of the research field of 'steel structure' for the years 2005 to 2009. To measure the strength of scientific collaboration between two authors, we consider the number of articles co-authored by them. In this study, we examine how the citation count of a scientific publication is influenced by different centrality measures of its co-author(s) in a co-authorship network. We further analyze the impact of the network positions of authors on the strength of their scientific collaborations. We use both correlation and regression methods for data analysis, leading to statistical validation. We identify that the citation count of a research article is positively correlated with the degree centrality and betweenness centrality values of its co-author(s). Also, we reveal that the degree centrality and betweenness centrality values of authors in a co-authorship network are positively correlated with the strength of their scientific collaborations. CONCLUSIONS/SIGNIFICANCE: Authors' network positions in co
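
    As an illustration of the three SNA measures discussed above, they can be computed with a standard network analysis library; the toy graph below is invented for the example and is not the study's 'steel structure' dataset.

        # Toy example of the three centrality measures, computed with networkx.
        # Nodes are authors; edge weights count co-authored articles (tie strength).
        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from([
            ("A", "B", 3), ("A", "C", 1), ("B", "C", 2),
            ("C", "D", 1), ("D", "E", 4),
        ])

        print("degree centrality:     ", nx.degree_centrality(G))
        print("closeness centrality:  ", nx.closeness_centrality(G))
        print("betweenness centrality:", nx.betweenness_centrality(G))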

  5. High Performance Thin-Film Composite Forward Osmosis Membrane

    KAUST Repository

    Yip, Ngai Yin; Tiraferri, Alberto; Phillip, William A.; Schiffman, Jessica D.; Elimelech, Menachem

    2010-01-01

    obstacle hindering further advancements of this technology. This work presents the development of a high performance thin-film composite membrane for forward osmosis applications. The membrane consists of a selective polyamide active layer formed

  6. Measurement of low-LET radiation dose aboard the Chinese scientific experiment satellite (1988) by highly sensitive LiF (Mg, Cu, P) TL chips

    International Nuclear Information System (INIS)

    Zhang Zhonglun; Zheng Yanzhen.

    1989-01-01

    Low-LET radiation dose is an important component of the spaceflight dose. The use of highly sensitive LiF(Mg, Cu, P) TL chips to measure the low-LET dose aboard the Chinese scientific experiment satellite is a new application. The average dose rate in the satellite is 9.2 mrad/day, while that on the ground is about 0.32 mrad/day.

  7. 78 FR 27186 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2013-05-09

    ... Scientific Instruments Pursuant to Section 6(c) of the Educational, Scientific and Cultural Materials...: New Mexico Institute of Mining and Technology, 801 Leroy Place, Socorro, NM 87801. Instrument: Delay... dimensions. The experiments depend on this fast 3D scanning to capture sufficient data from the dendrites of...

  8. ACCTuner: OpenACC Auto-Tuner For Accelerated Scientific Applications

    KAUST Repository

    Alzayer, Fatemah

    2015-05-17

    We optimize parameters in OpenACC clauses for a stencil evaluation kernel executed on Graphical Processing Units (GPUs) using a variety of machine learning and optimization search algorithms, individually and in hybrid combinations, and compare execution time performance to the best possible obtained from brute force search. Several auto-tuning techniques – historic learning, random walk, simulated annealing, Nelder-Mead, and genetic algorithms – are evaluated over a large two-dimensional parameter space not satisfactorily addressed to date by OpenACC compilers, consisting of gang size and vector length. A hybrid of historic learning and Nelder-Mead delivers the best balance of high performance and low tuning effort. GPUs are employed over an increasing range of applications due to the performance available from their large number of cores, as well as their energy efficiency. However, writing code that takes advantage of their massive fine-grained parallelism requires deep knowledge of the hardware, and is generally a complex task involving program transformation and the selection of many parameters. To improve programmer productivity, the directive-based programming model OpenACC was announced as an industry standard in 2011. Various compilers have been developed to support this model, the most notable being those by Cray, CAPS, and PGI. While the architecture and number of cores have evolved rapidly, the compilers have failed to keep up at configuring the parallel program to run most efficiently on the hardware. Following successful approaches to obtain high performance in kernels for cache-based processors using auto-tuning, we approach this compiler-hardware gap in GPUs by employing auto-tuning for the key parameters “gang” and “vector” in OpenACC clauses. We demonstrate results for a stencil evaluation kernel typical of seismic imaging over a variety of realistically sized three-dimensional grid configurations, with different truncation error orders
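
    As an illustrative sketch of one of the search strategies named above (a simple random walk over the two-dimensional gang x vector space), and not the ACCTuner implementation itself, the tuner loop could look as follows; run_kernel is a hypothetical callback that compiles and times the OpenACC kernel for a given parameter pair.

        # Minimal random-walk auto-tuner over a (gang, vector) parameter space.
        # Illustrative only; run_kernel(gang, vector) is assumed to return the
        # measured kernel runtime in seconds for that configuration.
        import random

        def random_walk_tune(run_kernel, gangs, vectors, steps=50, seed=0):
            rng = random.Random(seed)
            current = (rng.choice(gangs), rng.choice(vectors))
            best, best_time = current, run_kernel(*current)
            for _ in range(steps):
                g, v = gangs.index(current[0]), vectors.index(current[1])
                # Step one grid position in either the gang or the vector direction.
                if rng.random() < 0.5:
                    g = max(0, min(len(gangs) - 1, g + rng.choice([-1, 1])))
                else:
                    v = max(0, min(len(vectors) - 1, v + rng.choice([-1, 1])))
                current = (gangs[g], vectors[v])
                t = run_kernel(*current)
                if t < best_time:
                    best, best_time = current, t
            return best, best_time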

  9. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost that quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right-hand sides. Second, for this linear system we developed a novel, mixed-precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much-needed quadratic cost but in addition offers excellent opportunities for scaling in massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance of 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
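
    As a minimal sketch of the stochastic diagonal estimation step described above (the probe-vector estimator only; a dense solve stands in here for the paper's mixed-precision iterative solver), the idea can be written as:

        # Stochastic estimation of diag(A^{-1}) with Rademacher probe vectors.
        # Illustrative sketch: np.linalg.solve replaces the mixed-precision
        # iterative solver used in the paper.
        import numpy as np

        def estimate_diag_inv(A, num_probes=64, seed=0):
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            V = rng.choice([-1.0, 1.0], size=(n, num_probes))   # multiple right-hand sides
            X = np.linalg.solve(A, V)                            # solve A X = V
            return (V * X).sum(axis=1) / (V * V).sum(axis=1)

        # Small symmetric positive definite test matrix.
        rng = np.random.default_rng(1)
        B = rng.standard_normal((200, 200))
        A = B @ B.T + 200 * np.eye(200)
        print(np.max(np.abs(estimate_diag_inv(A) - np.diag(np.linalg.inv(A)))))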

  10. Opal web services for biomedical applications.

    Science.gov (United States)

    Ren, Jingyuan; Williams, Nadya; Clementi, Luca; Krishnan, Sriram; Li, Wilfred W

    2010-07-01

    Biomedical applications have become increasingly complex, and they often require large-scale high-performance computing resources with a large number of processors and memory. The complexity of application deployment and the advances in cluster, grid and cloud computing require new modes of support for biomedical research. Scientific Software as a Service (sSaaS) enables scalable and transparent access to biomedical applications through simple standards-based Web interfaces. Towards this end, we built a production web server (http://ws.nbcr.net) in August 2007 to support the bioinformatics application called MEME. The server has grown since to include docking analysis with AutoDock and AutoDock Vina, electrostatic calculations using PDB2PQR and APBS, and off-target analysis using SMAP. All the applications on the servers are powered by Opal, a toolkit that allows users to wrap scientific applications easily as web services without any modification to the scientific codes, by writing simple XML configuration files. Opal allows both web forms-based access and programmatic access of all our applications. The Opal toolkit currently supports SOAP-based Web service access to a number of popular applications from the National Biomedical Computation Resource (NBCR) and affiliated collaborative and service projects. In addition, Opal's programmatic access capability allows our applications to be accessed through many workflow tools, including Vision, Kepler, Nimrod/K and VisTrails. From mid-August 2007 to the end of 2009, we have successfully executed 239,814 jobs. The number of successfully executed jobs more than doubled from 205 to 411 per day between 2008 and 2009. The Opal-enabled service model is useful for a wide range of applications. It provides for interoperation with other applications with Web Service interfaces, and allows application developers to focus on the scientific tool and workflow development. Web server availability: http://ws.nbcr.net.

  11. High Efficiency Power Converter for Low Voltage High Power Applications

    DEFF Research Database (Denmark)

    Nymand, Morten

    The topic of this thesis is the design of high efficiency power electronic dc-to-dc converters for high-power, low-input-voltage to high-output-voltage applications. These converters are increasingly required for emerging sustainable energy systems such as fuel cell, battery or photo voltaic based......, and remote power generation for light towers, camper vans, boats, beacons, and buoys etc. A review of current state-of-the-art is presented. The best performing converters achieve moderately high peak efficiencies at high input voltage and medium power level. However, system dimensioning and cost are often...

  12. SALTON SEA SCIENTIFIC DRILLING PROJECT: SCIENTIFIC PROGRAM.

    Science.gov (United States)

    Sass, J.H.; Elders, W.A.

    1986-01-01

    The Salton Sea Scientific Drilling Project was spudded on 24 October 1985 and reached a total depth of 10,564 ft (3.2 km) on 17 March 1986. There followed a period of logging, a flow test, and downhole scientific measurements. The scientific goals were integrated smoothly with the engineering and economic objectives of the program, and the ideal of 'science driving the drill' in continental scientific drilling projects was achieved in large measure. The principal scientific goals of the project were to study the physical and chemical processes involved in an active, magmatically driven hydrothermal system. To facilitate these studies, high priority was attached to four areas of sample and data collection, namely: (1) core and cuttings, (2) formation fluids, (3) geophysical logging, and (4) downhole physical measurements, particularly temperatures and pressures.

  13. Evaluation of the Thermo Scientific SureTect Listeria species assay. AOAC Performance Tested Method 071304.

    Science.gov (United States)

    Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron

    2014-01-01

    The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996, including amendment 1:2004, in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference methods for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, stainless steel, and low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay

  14. 76 FR 50997 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2011-08-17

    ... DEPARTMENT OF COMMERCE International Trade Administration Application(s) for Duty-Free Entry of..., School of Earth Sciences, 275 Mendenhall Laboratory, 125 South Oval Mall, Columbus, OH 43210. Instrument... and high-contrast images, a stage that is easy to move, a focus that does not change with changing...

  15. Decal electronics for printed high performance cmos electronic systems

    KAUST Repository

    Hussain, Muhammad Mustafa

    2017-11-23

    High-performance complementary metal oxide semiconductor (CMOS) electronics are critical for any full-fledged electronic system. However, state-of-the-art CMOS electronics are rigid and bulky, making them unusable for flexible electronic applications. While bulk material reduction methods exist to flex them, such thinned CMOS electronics are fragile and vulnerable to handling during high-throughput manufacturing. Here, we show a fusion of a CMOS-technology-compatible fabrication process for flexible CMOS electronics with inkjet and conductive-cellulose-based interconnects, followed by additive manufacturing (i.e. 3D-printing-based packaging) and finally roll-to-roll printing of packaged decal electronics (thin-film-transistor-based circuit components and sensors), focusing on printed high-performance flexible electronic systems. This work provides the most pragmatic route to packaged flexible electronic systems for wide-ranging applications.

  16. Scientific and Technological Report 2011

    International Nuclear Information System (INIS)

    Lopez Milla, Alcides; Prado Cuba, Antonio; Agapito Panta, Juan; Montoya Rossi, Eduardo

    2013-01-01

    This annual scientific and technological report provides an overview of research and development activities at the Peruvian Institute of Nuclear Energy (IPEN) during the period from 1 January to 31 December 2011. The report includes 30 papers divided into 8 subject areas: physics and chemistry, materials science, nuclear engineering, mining, industrial and environmental applications, medical and biological applications, radiation protection and nuclear safety, scientific instrumentation, and management aspects. It also includes annexes. (APC)

  17. Scientific and Technological Report 2010

    International Nuclear Information System (INIS)

    Prado Cuba, Antonio; Santiago Contreras, Julio; Solis Veliz, Jose; Lopez Moreno, Edith

    2011-10-01

    This annual scientific and technological report provides an overview of research and development activities at the Peruvian Institute of Nuclear Energy (IPEN) during the period from 1 January to 31 December 2010. The report includes 41 papers divided into 8 subject areas: physics and chemistry, materials science, nuclear engineering, mining, industrial and environmental applications, medical and biological applications, radiation protection and nuclear safety, scientific instrumentation, and management aspects. It also includes annexes. (APC)

  18. Industrial applications of high-performance computing best global practices

    CERN Document Server

    Osseyran, Anwar

    2015-01-01

    ""This book gives a comprehensive and up-to-date overview of the rapidly expanding field of the industrial use of supercomputers. It is just a pleasure reading through informative country reports and in-depth case studies contributed by leading researchers in the field.""-Jysoo Lee, Principal Researcher, Korea Institute of Science and Technology Information""From telescopes to microscopes, from vacuums to hyperbaric chambers, from sonar waves to laser beams, scientists have perpetually strived to apply technology and invention to new frontiers of scientific advancement. Along the way, they hav

  19. The 'Critical Power' Concept: Applications to Sports Performance with a Focus on Intermittent High-Intensity Exercise.

    Science.gov (United States)

    Jones, Andrew M; Vanhatalo, Anni

    2017-03-01

    The curvilinear relationship between power output and the time for which it can be sustained is a fundamental and well-known feature of high-intensity exercise performance. This relationship 'levels off' at a 'critical power' (CP) that separates power outputs that can be sustained with stable values of, for example, muscle phosphocreatine, blood lactate, and pulmonary oxygen uptake ([Formula: see text]), from power outputs where these variables change continuously with time until their respective minimum and maximum values are reached and exercise intolerance occurs. The amount of work that can be done during exercise above CP (the so-called W') is constant but may be utilized at different rates depending on the proximity of the exercise power output to CP. Traditionally, this two-parameter CP model has been employed to provide insights into physiological responses, fatigue mechanisms, and performance capacity during continuous constant power output exercise in discrete exercise intensity domains. However, many team sports (e.g., basketball, football, hockey, rugby) involve frequent changes in exercise intensity and, even in endurance sports (e.g., cycling, running), intensity may vary considerably with environmental/course conditions and pacing strategy. In recent years, the appeal of the CP concept has been broadened through its application to intermittent high-intensity exercise. With the assumptions that W' is utilized during work intervals above CP and reconstituted during recovery intervals below CP, it can be shown that performance during intermittent exercise is related to four factors: the intensity and duration of the work intervals and the intensity and duration of the recovery intervals. However, while the utilization of W' may be assumed to be linear, studies indicate that the reconstitution of W' may be curvilinear with kinetics that are highly variable between individuals. This has led to the development of a new CP model for intermittent exercise in
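
    As a minimal sketch of the W' balance bookkeeping described above, one can assume linear expenditure of W' during work intervals above CP and exponential reconstitution during recovery intervals below CP (one common formulation; the recovery time constant below is a placeholder and, as the abstract notes, real reconstitution kinetics vary between individuals).

        # Minimal W' balance sketch for intermittent exercise.
        # Assumes linear depletion above CP and exponential recovery below CP;
        # tau_recovery is a placeholder, not a fitted physiological value.
        import math

        def w_prime_balance(intervals, cp, w_prime, tau_recovery=300.0):
            """intervals: list of (power_watts, duration_s); returns W' balance (J) after each interval."""
            balance, history = w_prime, []
            for power, duration in intervals:
                if power > cp:
                    balance -= (power - cp) * duration            # expend W' above CP
                else:
                    deficit = w_prime - balance
                    balance = w_prime - deficit * math.exp(-duration / tau_recovery)
                balance = max(balance, 0.0)                       # 0 J marks exhaustion
                history.append(balance)
            return history

        # 30 s work at 400 W / 30 s recovery at 150 W, repeated, with CP = 250 W and W' = 20 kJ.
        print(w_prime_balance([(400, 30), (150, 30)] * 5, cp=250, w_prime=20000))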

  20. Progress on H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations

    International Nuclear Information System (INIS)

    Adelmann, Andreas; Gsell, Achim; Oswald, Benedikt; Schietinger, Thomas; Bethel, Wes; Shalf, John; Siegerist, Cristina; Stockinger, Kurt

    2007-01-01

    Significant problems facing all experimental and computational sciences arise from growing data size and complexity. Common to all these problems is the need to perform efficient data I/O on diverse computer architectures. In our scientific application, the largest parallel particle simulations generate vast quantities of six-dimensional data, with an aggregate data size of up to several TB per run. Motivated by the need to address these data I/O and access challenges, we have implemented H5Part, an open source data I/O API that simplifies the use of the Hierarchical Data Format v5 library (HDF5). HDF5 is an industry standard for high-performance, cross-platform data storage and retrieval that runs on all contemporary architectures, from large parallel supercomputers to laptops. H5Part, which is oriented to the needs of the particle physics and cosmology communities, provides support for parallel storage and retrieval of particles, structured meshes and, in the future, unstructured meshes. In this paper, we describe recent work focusing on I/O support for particles and structured meshes and provide data showing performance on modern supercomputer architectures like the IBM POWER 5.
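
    As an illustration of the per-time-step particle layout that such an API manages on top of HDF5, the sketch below uses the generic h5py Python binding rather than the H5Part C API itself; the group and dataset names are chosen for the example and are not taken from the paper.

        # Per-time-step particle storage on top of HDF5 (illustrative; this uses
        # h5py directly rather than the H5Part API described in the paper).
        import numpy as np
        import h5py

        n_particles, n_steps = 1000, 3
        with h5py.File("particles.h5", "w") as f:
            for step in range(n_steps):
                grp = f.create_group(f"Step#{step}")
                for name in ("x", "y", "z", "px", "py", "pz"):
                    grp.create_dataset(name, data=np.random.rand(n_particles))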

  1. Exploring HPCS languages in scientific computing

    International Nuclear Information System (INIS)

    Barrett, R F; Alam, S R; Almeida, V F d; Bernholdt, D E; Elwasif, W R; Kuehn, J A; Poole, S W; Shet, A G

    2008-01-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else

  2. Exploring HPCS languages in scientific computing

    Science.gov (United States)

    Barrett, R. F.; Alam, S. R.; Almeida, V. F. d.; Bernholdt, D. E.; Elwasif, W. R.; Kuehn, J. A.; Poole, S. W.; Shet, A. G.

    2008-07-01

    As computers scale up dramatically to tens and hundreds of thousands of cores, develop deeper computational and memory hierarchies, and become increasingly heterogeneous, developers of scientific software are increasingly challenged to express complex parallel simulations effectively and efficiently. In this paper, we explore the three languages developed under the DARPA High-Productivity Computing Systems (HPCS) program to help address these concerns: Chapel, Fortress, and X10. These languages provide a variety of features not found in currently popular HPC programming environments and make it easier to express powerful computational constructs, leading to new ways of thinking about parallel programming. Though the languages and their implementations are not yet mature enough for a comprehensive evaluation, we discuss some of the important features, and provide examples of how they can be used in scientific computing. We believe that these characteristics will be important to the future of high-performance scientific computing, whether the ultimate language of choice is one of the HPCS languages or something else.

  3. Benefits of GMR sensors for high spatial resolution NDT applications

    Science.gov (United States)

    Pelkner, M.; Stegemann, R.; Sonntag, N.; Pohl, R.; Kreutzbruck, M.

    2018-04-01

    Magnetoresistance sensors like GMR (giant magneto resistance) or TMR (tunnel magneto resistance) are widely used in industrial applications; examples are position measurement and the read heads of hard disk drives. However, in the case of non-destructive testing (NDT) applications, these sensors, despite outstanding properties such as high spatial resolution, high field sensitivity, low cost and low energy consumption, have never made the transfer to applications beyond the scientific scope. This paper deals with the benefits of GMR/TMR sensors for high-spatial-resolution testing in different NDT applications. The first example demonstrates the pre-eminent advantages of MR elements compared with conventional coils used in eddy current testing (ET). The probe comprises a one-wire excitation with an array of MR elements, which leads to better spatial resolution of neighboring defects. The second section concentrates on MFL (magnetic flux leakage) testing with active field excitation during and before testing. The latter illustrates the capability of highly resolved crack detection on a crossed notch. This example is best suited to show the ability of tiny magnetic field sensors for magnetic material characterization of a sample surface. Another example is based on the characterization of samples after tensile testing. Here, no external field is applied; the magnetization is changed only by external load and magnetostriction, leading to a field signature which GMR sensors can resolve. This gives access to internal changes of the magnetization state of the sample under test.

  4. High performance flexible CMOS SOI FinFETs

    KAUST Repository

    Fahad, Hossain M.

    2014-06-01

    We demonstrate the first CMOS-compatible, soft-etch-back-based, high-performance flexible CMOS SOI FinFETs. The move from planar transistors to non-planar FinFETs has enabled continued scaling down to the 14 nm technology node, made possible by the reduction in off-state leakage and short-channel effects afforded by the superior electrostatic charge control of multiple gates. At the same time, flexible electronics is an exciting expansion opportunity for next-generation electronics. However, a fully integrated low-cost system will need to maintain ultra-large-scale-integration density, high performance, and reliability, the same as today's traditional electronics. Until recently, this field has been dominated by comparatively low-performance organic electronics enabled by low-temperature processes compatible with low-melting-point plastics. Here, we show the highest-performing flexible version of 3D FinFET CMOS reported to date, fabricated using a state-of-the-art CMOS-compatible technique, for high-performance ultra-mobile consumer applications with stylish design. © 2014 IEEE.

  5. Re-Engineering a High Performance Electrical Series Elastic Actuator for Low-Cost Industrial Applications

    Directory of Open Access Journals (Sweden)

    Kenan Isik

    2017-01-01

    Full Text Available Cost is an important consideration when transferring a technology from research to industrial and educational use. In this paper, we introduce the design of an industrial-grade series elastic actuator (SEA) obtained by re-engineering a research-grade version. Cost-constrained design requires careful consideration of the key performance parameters for an optimal performance-to-cost component selection. To optimize the performance of the new design, we started by matching the capabilities of a high-performance SEA while cutting its production cost significantly. Our premise was that re-engineering an existing high-end device would significantly reduce the cost without drastically compromising performance. As a case study in design for manufacturability, we selected the University of Texas Series Elastic Actuator (UT-SEA), a high-performance SEA, for its high power density, compact design, high efficiency, and high speed. We partnered with an industrial corporation in China to research the best pricing options and to exploit the retail and production facilities offered by the Shenzhen region. We succeeded in producing a low-cost industrial-grade actuator at one-third of the cost of the original device by re-engineering the UT-SEA with commercial off-the-shelf components and reducing the number of custom-made parts. Subsequently, we conducted performance tests to demonstrate that the re-engineered product achieves the same high-performance specifications found in the original device. With this paper, we aim to raise awareness in the robotics community of the possibility of low-cost realization of low-volume, high-performance, industrial-grade research and education hardware.
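
    The "performance-to-cost component selection" mentioned above can be illustrated as a simple ranking exercise. The sketch below scores candidate components by a performance-per-dollar metric; the component names, specifications, and prices are entirely hypothetical and are not taken from the UT-SEA redesign.

    ```python
    # Hypothetical illustration of performance-to-cost component ranking.
    # The component names, specs, and prices below are made up for the example.
    candidates = [
        {"name": "motor_A", "peak_power_w": 480, "cost_usd": 950},
        {"name": "motor_B", "peak_power_w": 520, "cost_usd": 1450},
        {"name": "motor_C", "peak_power_w": 300, "cost_usd": 380},
    ]

    def performance_per_dollar(component):
        return component["peak_power_w"] / component["cost_usd"]

    # Rank candidates by performance-to-cost ratio, best first.
    for c in sorted(candidates, key=performance_per_dollar, reverse=True):
        print(f'{c["name"]}: {performance_per_dollar(c):.3f} W/$')
    ```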

  6. 76 FR 48803 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2011-08-09

    .... Manufacturer: FEI Company, The Netherlands. Intended Use: The instrument will be used for NIH-funded basic... Applied Life Sciences, Austria. Intended Use: The instrument is a highly specialized system for studying a wide range of materials used in very high cycle, high temperature applications, such as light metals...

  7. The emergence of Clostridium thermocellum as a high utility candidate for consolidated bioprocessing applications

    Directory of Open Access Journals (Sweden)

    Arthur Ragauskas

    2014-08-01

    Full Text Available First isolated in 1926, Clostridium thermocellum has recently received increased attention as a high utility candidate for use in consolidated bioprocessing applications. These applications, which seek to process lignocellulosic biomass directly into useful products such as ethanol, are gaining traction as economically feasible routes towards the production of fuel and other high value chemical compounds as the shortcomings of fossil fuels become evident. This review evaluates C. thermocellum’s role in this transitory process by highlighting recent discoveries relating to its genomic, transcriptomic, proteomic, and metabolomic responses to varying biomass sources, with a special emphasis placed on providing an overview of its unique, multivariate enzyme cellulosome complex and the role that this structure performs during biomass degradation. Both naturally evolved and genetically engineered strains are examined in light of their unique attributes and responses to various biomass treatment conditions, and the genetic tools that have been employed for their creation are presented. Several future routes for potential industrial usage are presented, and it is concluded that, although there have been many advances to significantly improve C. thermocellum’s amenability to industrial use, several hurdles still remain to be overcome as this unique organism enjoys increased attention within the scientific community.

  8. Progress in a novel architecture for high performance processing

    Science.gov (United States)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    The high performance processing (HPP) architecture is an innovative architecture targeting high-performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications such as supercomputing, machine learning, and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been taped out successfully in the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low-power process. The innovative architecture shows great energy efficiency compared with traditional central processing units (CPUs) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP represents a substantial improvement in architecture. A chip with 32 HPP cores is being developed in the TSMC 16 nm FinFET compact (FFC) process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
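
    The quoted peak performance and power efficiency together imply a rough power envelope for the 32-core chip. The arithmetic below simply divides the two figures given in the record; the resulting wattage is an inference, not a reported specification.

    ```python
    # Back-of-the-envelope power estimate implied by the figures in the record.
    peak_tflops = 4.3                 # peak performance, TFLOPS
    efficiency_gflops_per_w = 89.5    # power efficiency, GFLOPS/W

    peak_gflops = peak_tflops * 1000.0
    implied_power_w = peak_gflops / efficiency_gflops_per_w
    print(f"Implied chip power at peak: ~{implied_power_w:.0f} W")  # ~48 W
    ```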

  9. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  10. Scientific developments of liquid crystal-based optical memory: a review

    Science.gov (United States)

    Prakash, Jai; Chandran, Achu; Biradar, Ashok M.

    2017-01-01

    Memory behavior in liquid crystals (LCs), although rarely observed, has made very significant headway over the past three decades since its discovery in nematic-type LCs, going from a mere scientific curiosity to application in a variety of commodities. Memory elements formed from numerous LCs have been protected by patents, and some have been commercialized and used to complement non-volatile memory devices and as memory in personal computers and digital cameras. They also offer the low cost, large area, high speed, and high density needed for advanced computers and digital electronics. Short- and long-duration memory behavior for industrial applications has been obtained from several LC materials, and LC memories with interesting features and applications have been demonstrated using numerous LCs. However, considerable challenges remain in the search for highly efficient, stable, long-lifespan materials and methods that would make useful memory devices possible. This review focuses on the scientific and technological aspects of LC-based memory and its applications. We address its introduction, development status, novel design and engineering principles, and key parameters. We also address how combining LCs with the emerging field of nanotechnology could bring significant improvements in memory effects, and how LC memory could serve as the active component of future memory devices.

  11. The StratusLab cloud distribution: Use-cases and support for scientific applications

    Science.gov (United States)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager, and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management, and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. Cloud infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; this goal has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. Towards this direction, we have developed and currently support setting up general-purpose computing solutions like Hadoop, MPI, and Torque clusters. As far as scientific applications are concerned, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications; in a similar manner, additional scientific disciplines such as Earth Science can take advantage of these services.
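
    The record does not show StratusLab's actual interfaces, so the sketch below uses a purely hypothetical REST endpoint and payload to illustrate the basic IaaS life cycle such a distribution supports: launch an appliance, poll until it is running, and tear it down. Every URL, field name, and token here is invented for illustration.

    ```python
    # Hypothetical IaaS life-cycle sketch (launch, poll, terminate a VM).
    # The endpoint, payload fields, and token are invented for illustration;
    # they are NOT the actual StratusLab API.
    import time
    import requests

    BASE = "https://cloud.example.org/vms"   # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}

    def launch(appliance_id, cpu=2, ram_mb=4096):
        resp = requests.post(BASE,
                             json={"appliance": appliance_id,
                                   "cpu": cpu, "ram_mb": ram_mb},
                             headers=HEADERS, timeout=30)
        resp.raise_for_status()
        return resp.json()["vm_id"]

    def wait_until_running(vm_id, poll_s=10):
        while True:
            state = requests.get(f"{BASE}/{vm_id}", headers=HEADERS,
                                 timeout=30).json()["state"]
            if state == "RUNNING":
                return
            time.sleep(poll_s)

    def terminate(vm_id):
        requests.delete(f"{BASE}/{vm_id}", headers=HEADERS, timeout=30)
    ```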

  12. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts that illustrate system design and performance.

  13. Neo4j high performance

    CERN Document Server

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  14. High-power free-electron lasers-technology and future applications

    Science.gov (United States)

    Socol, Yehoshua

    2013-03-01

    A free-electron laser (FEL) is an all-electric, high-power, high-beam-quality source of coherent radiation, tunable (unlike other laser sources) at any wavelength within a wide spectral region from hard X-rays to the far-IR and beyond. After the initial push within the framework of the "Star Wars" program, FEL technology benefited from decades of R&D and scientific applications. There are now clear signs that FEL technology has reached maturity, enabling real-world applications. For example, the successful and unexpectedly smooth commissioning of the world's first X-ray FEL in 2010 expanded the wavelength region accessible to FEL technology by more than an order of magnitude (40×) in a single step, demonstrating that theoretical predictions hold true in real machines. The experience of ordering turn-key electron beamlines from commercial companies is a further demonstration of the technology's maturity. Moreover, the successful commissioning of the world's first multi-turn energy-recovery linac demonstrated the feasibility of reducing FEL size, cost, and power consumption by roughly an order of magnitude with respect to previous configurations, opening the way to applications previously considered infeasible. This review takes an engineer-oriented approach to FEL technology issues, with applications in mind in the fields of military and aerospace systems, next-generation semiconductor lithography, photochemistry, and isotope separation.

  15. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are adopting new technologies to satisfy the increasing need for computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where utilisation efficiency can be improved. Two complementary approaches can be adopted for the profiling and optimisation of computing resources: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics by executing benchmark applications on the computing resources. The model-based approach, in contrast, involves the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental research in the domain of elementary particle physics. The p...
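
    As a concrete, if greatly simplified, instance of the measurement-based approach described above, the sketch below repeatedly runs a benchmark callable and records wall-clock and CPU time using the Python standard library. A real profiling campaign would collect far richer metrics (I/O, memory, hardware counters); this merely shows the pattern.

    ```python
    # Minimal measurement-based profiling sketch: run a benchmark and
    # record elapsed wall time and CPU time for later analysis.
    import time
    import statistics

    def profile(benchmark, repeats=5):
        wall, cpu = [], []
        for _ in range(repeats):
            w0, c0 = time.perf_counter(), time.process_time()
            benchmark()
            wall.append(time.perf_counter() - w0)
            cpu.append(time.process_time() - c0)
        return {
            "wall_mean_s": statistics.mean(wall),
            "wall_stdev_s": statistics.stdev(wall),
            "cpu_mean_s": statistics.mean(cpu),
        }

    if __name__ == "__main__":
        result = profile(lambda: sum(i * i for i in range(200_000)))
        print(result)
    ```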

  16. Progress of scientific researches and project of CSR in IMP

    International Nuclear Information System (INIS)

    Jin Genming

    2004-01-01

    The article reviews recent progress in scientific research, including the synthesis of new nuclides, investigations of isospin effects in heavy-ion collisions, studies of nuclear structure at high spin, and the application of heavy-ion beams to other scientific fields such as biology and materials science. It also gives a brief introduction to the design and progress of the new heavy-ion cooling storage ring (CSR) project at Lanzhou. (author)

  17. High-Speed 3D Printing of High-Performance Thermosetting Polymers via Two-Stage Curing.

    Science.gov (United States)

    Kuang, Xiao; Zhao, Zeang; Chen, Kaijuan; Fang, Daining; Kang, Guozheng; Qi, Hang Jerry

    2018-04-01

    Design and direct fabrication of high-performance thermosets and composites via 3D printing are highly desirable in engineering applications. Most 3D printed thermosetting polymers to date suffer from poor mechanical properties and low printing speed. Here, a novel ink for high-speed 3D printing of high-performance epoxy thermosets via a two-stage curing approach is presented. The ink containing photocurable resin and thermally curable epoxy resin is used for the digital light processing (DLP) 3D printing. After printing, the part is thermally cured at elevated temperature to yield an interpenetrating polymer network epoxy composite, whose mechanical properties are comparable to engineering epoxy. The printing speed is accelerated by the continuous liquid interface production assisted DLP 3D printing method, achieving a printing speed as high as 216 mm h⁻¹. It is also demonstrated that 3D printing structural electronics can be achieved by combining the 3D printed epoxy composites with infilled silver ink in the hollow channels. The new 3D printing method via two-stage curing combines the attributes of outstanding printing speed, high resolution, low volume shrinkage, and excellent mechanical properties, and provides a new avenue to fabricate 3D thermosetting composites with excellent mechanical properties and high efficiency toward high-performance and functional applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
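
    To give a sense of scale for the quoted build rate, the snippet below converts it into an estimated print time for a part of a given height. The 20 mm part height is a hypothetical value chosen only for illustration.

    ```python
    # Build-time estimate from the quoted DLP print speed (vertical build rate).
    # The 20 mm part height is a hypothetical value for illustration.
    speed_mm_per_h = 216.0
    part_height_mm = 20.0

    print_time_h = part_height_mm / speed_mm_per_h
    print(f"Estimated print time: {print_time_h * 60:.1f} minutes")  # ~5.6 min
    ```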

  18. Playa: High-Performance Programmable Linear Algebra

    Directory of Open Access Journals (Sweden)

    Victoria E. Howle

    2012-01-01

    Full Text Available This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.
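
    Playa's overloaded operators rely on C++ expression templates, which defer evaluation until a result is actually needed. The Python sketch below imitates that idea with a tiny deferred-expression class hierarchy: operators build an expression tree, and a single materialization pass evaluates it. It is a conceptual analogue of the mechanism, not Playa's implementation or API.

    ```python
    # Conceptual analogue of expression-template-style deferred evaluation:
    # operator overloads build an expression tree, which is evaluated once
    # when the result is materialized (avoiding a temporary per operation).
    class Expr:
        def __add__(self, other):
            return Add(self, other)
        def __mul__(self, scalar):
            return Scale(scalar, self)

    class Vec(Expr):
        def __init__(self, data):
            self.data = list(data)
        def at(self, i):
            return self.data[i]
        def __len__(self):
            return len(self.data)

    class Add(Expr):
        def __init__(self, a, b):
            self.a, self.b = a, b
        def at(self, i):
            return self.a.at(i) + self.b.at(i)
        def __len__(self):
            return len(self.a)

    class Scale(Expr):
        def __init__(self, s, v):
            self.s, self.v = s, v
        def at(self, i):
            return self.s * self.v.at(i)
        def __len__(self):
            return len(self.v)

    def materialize(expr):
        """Single pass over the fused expression tree."""
        return Vec(expr.at(i) for i in range(len(expr)))

    x = Vec([1.0, 2.0, 3.0])
    y = Vec([4.0, 5.0, 6.0])
    z = materialize(x * 2.0 + y)   # evaluated lazily in one sweep
    print(z.data)                  # [6.0, 9.0, 12.0]
    ```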

  19. Report from the NSLS workshop: Sources and applications of high intensity uv-vuv light

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, E.D.; Hastings, J.B. (eds.)

    1990-01-01

    A workshop was held to evaluate sources and applications of high-intensity ultraviolet (UV) radiation for the biological, chemical, and materials sciences. The proposed sources are a UV free-electron laser (FEL) driven by a high-brightness linac, and undulators in long straight sections of a specially designed low-energy (400 MeV) storage ring. These two distinct types of sources will provide a broad range of scientific opportunities, which were discussed in detail during the workshop.

  20. Report from the NSLS workshop: Sources and applications of high intensity uv-vuv light

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, E.D.; Hastings, J.B. (eds.)

    1990-12-31

    A workshop was held to evaluate sources and applications of high-intensity ultraviolet (UV) radiation for the biological, chemical, and materials sciences. The proposed sources are a UV free-electron laser (FEL) driven by a high-brightness linac, and undulators in long straight sections of a specially designed low-energy (400 MeV) storage ring. These two distinct types of sources will provide a broad range of scientific opportunities, which were discussed in detail during the workshop.