WorldWideScience

Sample records for high-performance scientific applications

  1. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  2. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

As our understanding of the world around us increases it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies and which have the capacity to utilize current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  3. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Teng [Auburn University, Auburn, Alabama; Oral, H Sarp [ORNL; Wang, Yandong [Auburn University, Auburn, Alabama; Settlemyer, Bradley W [ORNL; Atchley, Scott [ORNL; Yu, Weikuan [Auburn University, Auburn, Alabama

    2014-01-01

The growth of computing power on large-scale systems requires a commensurately high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a novel system is imperative to temporarily buffer the bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5× on leadership computer systems.
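
The burst-buffer idea the abstract describes — absorb bursty writes into fast intermediate storage and drain them asynchronously to the slower parallel file system — can be sketched in a few lines of Python. This is an illustrative toy, not BurstMem's actual interface; the in-memory list stands in for the long-term file system.

```python
import queue
import threading

class BurstBuffer:
    """Toy burst buffer: absorbs bursty writes into a fast in-memory queue,
    while a background thread drains them to slower long-term storage."""

    def __init__(self, backing_store):
        self.backing_store = backing_store   # stands in for a parallel file system
        self._queue = queue.Queue()
        self._drainer = threading.Thread(target=self._drain, daemon=True)
        self._drainer.start()

    def write(self, dataset):
        # Fast path: enqueue only, so the application is not blocked
        # by parallel-file-system latency during an I/O burst.
        self._queue.put(dataset)

    def _drain(self):
        # Slow path: flush datasets to the backing store in arrival order.
        while True:
            item = self._queue.get()
            self.backing_store.append(item)
            self._queue.task_done()

    def flush(self):
        # Block until every buffered dataset has reached long-term storage.
        self._queue.join()

pfs = []                      # pretend parallel file system
bb = BurstBuffer(pfs)
for i in range(5):            # a burst of checkpoint writes
    bb.write(f"checkpoint-{i}")
bb.flush()
print(pfs)                    # → ['checkpoint-0', 'checkpoint-1', 'checkpoint-2', 'checkpoint-3', 'checkpoint-4']
```

The key property the sketch captures is the decoupling: `write` returns immediately, and only `flush` (e.g. at the end of a checkpoint) pays the cost of the slow tier.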

  4. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2015-01-15

High performance computing (HPC) platforms are evolving to more heterogeneous configurations to support the workloads of various applications. The current hardware landscape is composed of traditional multicore CPUs equipped with hardware accelerators that can handle high levels of parallelism. Graphics Processing Units (GPUs) are popular high performance hardware accelerators in modern supercomputers. GPU programming has a different model than that for CPUs, which means that many numerical kernels have to be redesigned and optimized specifically for this architecture. GPUs usually outperform multicore CPUs in some compute intensive and massively parallel applications that have regular processing patterns. However, most scientific applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved. This dissertation presents a uniform design strategy for optimizing critical memory-bound kernels on GPUs. Based on hierarchical register blocking, double buffering and latency hiding techniques, this strategy leverages the performance of a wide range of standard numerical kernels found in dense and sparse linear algebra libraries. The work presented here focuses on matrix-vector multiplication (MVM) kernels as representative and among the most important memory-bound operations in this context. Each kernel inherits the benefits of the proposed strategies. By exposing a proper set of tuning parameters, the strategy is flexible enough to suit different types of matrices, ranging from large dense matrices to sparse matrices with dense block structures, while high performance is maintained. Furthermore, the tuning parameters are used to maintain the relative performance across different GPU architectures. Multi-GPU acceleration is proposed to scale the performance on several devices.
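
The register-blocking idea behind such memory-bound MVM kernels can be illustrated in plain Python: rows are processed in small blocks whose partial sums live in block-local accumulators (registers, on a GPU) and are written back to memory only once. This sketches the general technique, not the dissertation's actual CUDA kernels.

```python
def blocked_mv(A, x, block=4):
    """Toy blocked matrix-vector multiply: process `block` rows at a time,
    keeping their partial sums in local accumulators (register-resident on
    a GPU) so each output element is written back to memory exactly once."""
    n = len(A)
    y = [0.0] * n
    for r0 in range(0, n, block):
        acc = [0.0] * min(block, n - r0)   # block-local accumulators
        for j in range(len(x)):
            xj = x[j]                      # x[j] is reused by every row in the block
            for r in range(len(acc)):
                acc[r] += A[r0 + r][j] * xj
        for r in range(len(acc)):
            y[r0 + r] = acc[r]             # single write-back per element
    return y

A = [[i + j for j in range(6)] for i in range(6)]
x = [1.0] * 6
print(blocked_mv(A, x))   # → [15.0, 21.0, 27.0, 33.0, 39.0, 45.0]
```

Blocking buys two things the abstract alludes to: the vector element `x[j]` is loaded once per block rather than once per row, and the accumulators never round-trip through slow memory.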

  5. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    International Nuclear Information System (INIS)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

This work presents the ScalaBLAST Web Application (SWA), a web-based application implemented using the PHP scripting language, the MySQL DBMS, and the Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web-based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  6. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2015-01-01

    applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved

  7. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their potential future use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and...

  8. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  9. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  10. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  11. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  12. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  13. High Performance Data Distribution for Scientific Community

    Science.gov (United States)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

Institutions such as NASA, ESA or JAXA must find solutions to distribute data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it includes a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the final user to obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP, and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on data servers and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform one file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the aforementioned features. HIDDRA has been addressed by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009.
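
The parallel multiprotocol download engine can be illustrated by its chunk-planning step: byte ranges are assigned round-robin across mirrors so that several protocols and hosts are used at once, and a failed source's chunks could later be reassigned to another. A minimal sketch — the mirror URLs and function name are made up for illustration, not HIDDRA's API:

```python
def plan_chunks(file_size, chunk_size, sources):
    """Assign byte ranges of one file round-robin across mirror sources,
    so each range can be fetched over a different protocol/host in parallel."""
    plan = []
    for i, start in enumerate(range(0, file_size, chunk_size)):
        end = min(start + chunk_size, file_size) - 1       # inclusive byte range
        plan.append((sources[i % len(sources)], start, end))
    return plan

sources = ["http://mirror-a", "ftp://mirror-b", "gridftp://mirror-c"]
for src, start, end in plan_chunks(file_size=10_000, chunk_size=4_096, sources=sources):
    print(f"{src}: bytes {start}-{end}")
```

Fault tolerance then falls out naturally: if one source fails mid-transfer, its ranges can simply be re-planned over the surviving sources.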

  14. Language interoperability for high-performance parallel scientific components

    International Nuclear Information System (INIS)

    Elliot, N; Kohn, S; Smolinski, B

    1999-01-01

With the increasing complexity and interdisciplinary nature of scientific applications, code reuse is becoming increasingly important in scientific computing. One method for facilitating code reuse is the use of component technologies, which have been widely used in industry. However, components have only recently worked their way into scientific computing. Language interoperability is an important underlying technology for these component architectures. In this paper, we present an approach to language interoperability for a high-performance, parallel component architecture being developed by the Common Component Architecture (CCA) group. Our approach is based on Interface Definition Language (IDL) techniques. We have developed a Scientific Interface Definition Language (SIDL), as well as bindings to C and Fortran. We have also developed a SIDL compiler and run-time library support for reference counting, reflection, object management, and exception handling (Babel). Results from using Babel to call a standard numerical solver library (written in C) from C and Fortran show that the cost of using Babel is minimal, whereas the savings in development time and the benefits of object-oriented development support for C and Fortran far outweigh the costs.

  15. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  16. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  17. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  18. High Performance Fortran for Aerospace Applications

    National Research Council Canada - National Science Library

    Mehrotra, Piyush

    2000-01-01

    .... HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications while delegating to the compiler/runtime system the task...

  19. High performance cloud auditing and applications

    CERN Document Server

    Choi, Baek-Young; Song, Sejun

    2014-01-01

This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with a focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  20. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of...

  1. Automatic Energy Schemes for High Performance Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmarks as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained, with a substantially low performance loss on the given platform.
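
The underlying reasoning — communication phases are network-bound, so lowering CPU frequency there saves energy at little cost in time, while compute phases stretch in proportion to the slowdown — can be captured in a toy model. All constants below (the cubic power law, phase durations, the 0.6 frequency) are illustrative assumptions, not measurements from the thesis:

```python
def phase_energy(phases, freq_for):
    """Toy per-phase DVFS model: dynamic power scales ~ f^3 (since V tracks f),
    compute time stretches as 1/f, and network-bound communication time does not."""
    f_max = 1.0
    total_time = total_energy = 0.0
    for kind, t_at_fmax in phases:
        f = freq_for(kind)
        t = t_at_fmax / f if kind == "compute" else t_at_fmax
        p = (f / f_max) ** 3          # crude dynamic-power model
        total_time += t
        total_energy += p * t
    return total_time, total_energy

phases = [("compute", 10.0), ("comm", 5.0), ("compute", 10.0), ("comm", 5.0)]

baseline = phase_energy(phases, lambda kind: 1.0)                             # always full speed
scaled   = phase_energy(phases, lambda kind: 0.6 if kind == "comm" else 1.0)  # slow down only comm
print(baseline, scaled)
```

In this model the run time is unchanged (the slowed phases were waiting on the network anyway) while the energy drops, which is the balance between savings and performance loss the abstract describes.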

  2. High Performance Object-Oriented Scientific Programming in Fortran 90

    Science.gov (United States)

    Norton, Charles D.; Decyk, Viktor K.; Szymanski, Boleslaw K.

    1997-01-01

We illustrate how Fortran 90 supports object-oriented concepts by example of plasma particle computations on the IBM SP. Our experience shows that Fortran 90 and object-oriented methodology give high performance while providing a bridge from Fortran 77 legacy codes to modern programming principles. All of our object-oriented Fortran 90 codes execute more quickly than the equivalent C++ versions, yet the abstraction modeling capabilities used for scientific programming are comparably powerful.

  3. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently, a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  4. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  5. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  6. The Centre of High-Performance Scientific Computing, Geoverbund, ABC/J - Geosciences enabled by HPSC

    Science.gov (United States)

    Kollet, Stefan; Görgen, Klaus; Vereecken, Harry; Gasper, Fabian; Hendricks-Franssen, Harrie-Jan; Keune, Jessica; Kulkarni, Ketan; Kurtz, Wolfgang; Sharples, Wendy; Shrestha, Prabhakar; Simmer, Clemens; Sulis, Mauro; Vanderborght, Jan

    2016-04-01

The Centre of High-Performance Scientific Computing (HPSC TerrSys) was founded in 2011 to establish a centre of competence in high-performance scientific computing in terrestrial systems and the geosciences, enabling fundamental and applied geoscientific research in the Geoverbund ABC/J (the geoscientific research alliance of the Universities of Aachen, Cologne, Bonn and the Research Centre Jülich, Germany). The specific goals of HPSC TerrSys are to achieve relevance at the national and international level in (i) the development and application of HPSC technologies in the geoscientific community; (ii) student education; (iii) HPSC services and support, also to the wider geoscientific community; and (iv) the industry and public sectors via, e.g., useful applications and data products. A key feature of HPSC TerrSys is the Simulation Laboratory Terrestrial Systems, which is located at the Jülich Supercomputing Centre (JSC) and provides extensive capabilities with respect to porting, profiling, tuning and performance monitoring of geoscientific software in JSC's supercomputing environment. We will present a summary of success stories of HPSC applications, including integrated terrestrial model development; parallel profiling and its application from watersheds to the continent; massively parallel data assimilation using physics-based models and ensemble methods; quasi-operational terrestrial water and energy monitoring; and convection-permitting climate simulations over Europe. The success stories stress the need for formalized education of students in the application of HPSC technologies in the future.

  7. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Until recently, performance gains in processors were achieved largely through improvements in clock speed and instruction-level parallelism; applications could thus obtain performance increases with relatively minor changes simply by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized through multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposing extreme levels of software parallelism. Here we discuss the architecture of parallel computers constructed from many multicore chips, as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We illustrate these ideas with a hybrid distributed-memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
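
    The hybrid message-passing/multi-threading model mentioned above can be sketched in miniature: row blocks of the matrix product are assigned to ranks (simulated sequentially here, where a real code would use MPI), and each rank computes its block with a pool of threads. This is an illustrative sketch, not the authors' implementation, and all names are our own; note also that CPython's GIL limits real thread speedup for numeric loops like these.

```python
# Hybrid decomposition sketch: "ranks" partition the rows of C = A @ B
# (the message-passing level), and each rank uses threads for its block
# (the multi-threading level). All names here are illustrative.
from concurrent.futures import ThreadPoolExecutor

def matmul_row(A, B, i):
    """Compute row i of C = A @ B."""
    return [sum(A[i][k] * B[k][j] for k in range(len(B)))
            for j in range(len(B[0]))]

def matmul_block(A, B, rows, n_threads=4):
    """One 'rank' computes its assigned row block with a thread pool."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(lambda i: matmul_row(A, B, i), rows))

def hybrid_matmul(A, B, n_ranks=2):
    """Distribute row blocks across simulated ranks, threads within each."""
    m = len(A)
    C = [None] * m
    block = (m + n_ranks - 1) // n_ranks
    for r in range(n_ranks):  # in a real code, each rank runs concurrently
        rows = range(r * block, min((r + 1) * block, m))
        for i, row in zip(rows, matmul_block(A, B, rows)):
            C[i] = row
    return C
```

    In a production code the outer loop would be replaced by MPI ranks, each owning only its block of A and C.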

  8. High-Performance Energy Applications and Systems

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton [Univ. of Wisconsin, Madison, WI (United States)

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  9. Scientific Programming with High Performance Fortran: A Case Study Using the xHPF Compiler

    Directory of Open Access Journals (Sweden)

    Eric De Sturler

    1997-01-01

    Recently, the first commercial High Performance Fortran (HPF) subset compilers have appeared. This article reports on our experiences with the xHPF compiler of Applied Parallel Research, version 1.2, for the Intel Paragon. At this stage, we do not expect very high performance from our HPF programs, even though performance will eventually be of paramount importance for the acceptance of HPF. Instead, our primary objective is to study how to convert large Fortran 77 (F77) programs to HPF such that the compiler generates reasonably efficient parallel code. We report on a case study that identifies several problems when parallelizing code with HPF; most of these problems affect current HPF compiler technology in general, although some are specific to the xHPF compiler. We discuss our solutions from the perspective of the scientific programmer, and present timing results on the Intel Paragon. The case study comprises three programs of different complexity with respect to parallelization. We use the dense matrix-matrix product to show that the distribution of arrays and the order of nested loops significantly influence the performance of the parallel program. We use Gaussian elimination with partial pivoting to study the parallelization strategy of the compiler. There are various ways to structure this algorithm for a particular data distribution, and this example shows how much effort may be demanded of the programmer to support the compiler in generating an efficient parallel implementation. Finally, we use a small application to show that the more complicated structure of a larger program may introduce problems for parallelization, even though all subroutines of the application are easy to parallelize by themselves. The application consists of a finite volume discretization on a structured grid and a nested iterative solver. Our case study shows that it is possible to obtain reasonably efficient parallel programs with xHPF, although the compiler
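
    Gaussian elimination with partial pivoting, the second case study, is a useful stress test for a parallelizing compiler precisely because the pivot row is chosen from the data at run time, which complicates a static HPF-style data distribution. A minimal serial sketch of the algorithm (in Python rather than the article's Fortran):

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    Works on copies; A is a list of row lists, b a list of floats.
    """
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        # Partial pivoting: the pivot row depends on the data, so it is
        # only known at run time -- the property that complicates a
        # compiler's static distribution of the array.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):       # eliminate below the pivot
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

    Each elimination step touches a shrinking trailing submatrix, which is why the choice of data distribution (row-cyclic versus block) matters so much for load balance.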

  10. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative deconvolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  11. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
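
    The write-behind idea in the abstract, appending subarray writes to a log as they arrive and reassembling them into a contiguous physical layout later, can be sketched for a 1-D array as follows; the class and method names are our own illustration, not an API from the project.

```python
# Log-structured array store sketch: writes are cheap appends; the
# contiguous layout is materialized later, as resources permit.
class ArrayStore:
    def __init__(self, size):
        self.size = size
        self.log = []                 # append-only: (offset, values) records

    def write(self, offset, values):
        """Record a subarray write without touching the final layout."""
        self.log.append((offset, list(values)))

    def reassemble(self, fill=0):
        """Replay the log into one contiguous array; later writes win."""
        data = [fill] * self.size
        for offset, values in self.log:
            data[offset:offset + len(values)] = values
        return data
```

    Because the logical array view is separate from the log's physical layout, the reassembly step is also the natural place to compress, collocate, or index the data, as the abstract suggests.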

  12. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for the development of future high performance computing capability by DOE ASCR.

  13. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for the development of future high performance computing capability by DOE ASCR.

  14. Optical Thermal Characterization Enables High-Performance Electronics Applications

    Energy Technology Data Exchange (ETDEWEB)

    2016-02-01

    NREL developed a modeling and experimental strategy to characterize thermal performance of materials. The technique provides critical data on thermal properties with relevance for electronics packaging applications. Thermal contact resistance and bulk thermal conductivity were characterized for new high-performance materials such as thermoplastics, boron-nitride nanosheets, copper nanowires, and atomically bonded layers. The technique is an important tool for developing designs and materials that enable power electronics packaging with small footprint, high power density, and low cost for numerous applications.
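
    As a back-of-the-envelope illustration of the quantities being characterized, the temperature rise across a 1-D package stack is the dissipated power times the series sum of bulk conduction resistances and contact resistances. The sketch below is a hypothetical textbook calculation, not NREL's technique or data.

```python
def junction_temp_rise(power_w, layers, contact_resistances):
    """Temperature rise (K) across a 1-D stack of package layers.

    layers: list of (thickness_m, conductivity_W_per_mK, area_m2) tuples;
    contact_resistances: list of interface resistances in K/W.
    Bulk conduction resistance of each layer is L / (k * A); contacts
    and layers add in series for a 1-D stack.
    """
    r_total = sum(L / (k * A) for (L, k, A) in layers)
    r_total += sum(contact_resistances)
    return power_w * r_total
```

    For example, 10 W through a 1 mm copper layer (k of about 400 W/m·K, 1 cm² area, so 0.025 K/W) plus a 0.075 K/W contact resistance gives a 1 K rise, showing why the contact term can dominate the bulk term in compact packages.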

  15. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We tested HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  16. High-performance silicon photonics technology for telecommunications applications.

    Science.gov (United States)

    Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Ishikawa, Yasuhiko; Wada, Kazumi; Yamamoto, Tsuyoshi

    2014-04-01

    By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge-based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge-based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications.

  17. High-performance silicon photonics technology for telecommunications applications

    International Nuclear Information System (INIS)

    Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Yamamoto, Tsuyoshi; Ishikawa, Yasuhiko; Wada, Kazumi

    2014-01-01

    By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge–based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge–based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications. (review)

  18. High-performance silicon photonics technology for telecommunications applications

    Science.gov (United States)

    Yamada, Koji; Tsuchizawa, Tai; Nishi, Hidetaka; Kou, Rai; Hiraki, Tatsurou; Takeda, Kotaro; Fukuda, Hiroshi; Ishikawa, Yasuhiko; Wada, Kazumi; Yamamoto, Tsuyoshi

    2014-04-01

    By way of a brief review of Si photonics technology, we show that significant improvements in device performance are necessary for practical telecommunications applications. In order to improve device performance in Si photonics, we have developed a Si-Ge-silica monolithic integration platform, on which compact Si-Ge-based modulators/detectors and silica-based high-performance wavelength filters are monolithically integrated. The platform features low-temperature silica film deposition, which cannot damage Si-Ge-based active devices. Using this platform, we have developed various integrated photonic devices for broadband telecommunications applications.

  19. A New Approach in Advance Network Reservation and Provisioning for High-Performance Scientific Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-01-28

    Scientific applications already generate many terabytes and even petabytes of data from supercomputer runs and large-scale experiments. The need for transferring data chunks of ever-increasing sizes through the network shows no sign of abating; hence, we need high-bandwidth, high-speed networks such as ESnet (Energy Sciences Network). Network reservation systems such as ESnet's OSCARS (On-demand Secure Circuits and Advance Reservation System) establish secure virtual circuits with guaranteed bandwidth for a requested period of time. OSCARS checks network availability and capacity for the specified period, and allocates the requested bandwidth for that user if it is available. If the requested reservation cannot be granted, no alternative suggestion is returned to the user, so there is no way, from the user's viewpoint, to make an optimal choice. We report a new algorithm in which the user specifies the total volume that needs to be transferred, a maximum bandwidth that he or she can use, and a desired time period within which the transfer should be done. The algorithm can find alternate allocation possibilities, including the earliest time for completion or the shortest transfer duration, leaving the choice to the user. We present a novel approach for path finding in time-dependent networks, and a new polynomial algorithm to find possible reservation options according to given constraints. We have implemented our algorithm for testing and incorporation into a future version of ESnet's OSCARS. Our approach provides a basis for provisioning end-to-end high performance data transfers over storage and network resources.
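
    The kind of search the new algorithm performs can be illustrated on a toy model: given a per-time-step profile of available bandwidth, a user bandwidth cap, and a total volume, find the earliest completion time and the shortest transfer window. This simplified scan conveys the user-facing options only; it is not the published time-dependent path-finding algorithm.

```python
# Toy reservation search: avail[t] is the bandwidth free in time step t,
# max_bw the user's cap, volume the total data to move. Names are ours.
def earliest_completion(avail, max_bw, volume):
    """First time step (counted from 1) by which the transfer can finish."""
    moved = 0.0
    for t, bw in enumerate(avail):
        moved += min(bw, max_bw)
        if moved >= volume:
            return t + 1
    return None  # cannot finish within the horizon

def shortest_window(avail, max_bw, volume):
    """(start, length) of the window completing in the fewest steps."""
    best = None
    for s in range(len(avail)):
        moved = 0.0
        for t in range(s, len(avail)):
            moved += min(avail[t], max_bw)
            if moved >= volume:
                if best is None or t - s + 1 < best[1]:
                    best = (s, t - s + 1)
                break
    return best
```

    Starting later can shorten the transfer if more bandwidth is free later in the horizon, which is exactly the trade-off the algorithm leaves to the user.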

  20. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  1. High performance hybrid magnetic structure for biotechnology applications

    Science.gov (United States)

    Humphries, David E [El Cerrito, CA; Pollard, Martin J [El Cerrito, CA; Elkin, Christopher J [San Ramon, CA

    2009-02-03

    The present disclosure provides a high performance hybrid magnetic structure made from a combination of permanent magnets and ferromagnetic pole materials which are assembled in a predetermined array. The hybrid magnetic structure provides means for separation and other biotechnology applications involving holding, manipulation, or separation of magnetic or magnetizable molecular structures and targets. Also disclosed are further improvements to aspects of the hybrid magnetic structure, including additional elements and for adapting the use of the hybrid magnetic structure for use in biotechnology and high throughput processes.

  2. High performance protection circuit for power electronics applications

    Energy Technology Data Exchange (ETDEWEB)

    Tudoran, Cristian D., E-mail: cristian.tudoran@itim-cj.ro; Dădârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan [National Institute for Research and Development of Isotopic and Molecular Technologies, 67-103 Donat, PO 5 Box 700, 400293 Cluj-Napoca (Romania)

    2015-12-23

    In this paper we present a high performance protection circuit designed for the power electronics applications where the load currents can increase rapidly and exceed the maximum allowed values, like in the case of high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, having the role of a “sensor” or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.

  3. Rapid Prototyping of High Performance Signal Processing Applications

    Science.gov (United States)

    Sane, Nimish

    Advances in embedded systems for digital signal processing (DSP) are enabling many scientific projects and commercial applications. At the same time, these applications are key to driving advances in many important kinds of computing platforms. In this region of high performance DSP, rapid prototyping is critical for faster time-to-market (e.g., in the wireless communications industry) or time-to-science (e.g., in radio astronomy). DSP system architectures have evolved from being based on application specific integrated circuits (ASICs) to incorporate reconfigurable off-the-shelf field programmable gate arrays (FPGAs), the latest multiprocessors such as graphics processing units (GPUs), or heterogeneous combinations of such devices. We, thus, have a vast design space to explore based on performance trade-offs, and expanded by the multitude of possibilities for target platforms. In order to allow systematic design space exploration, and develop scalable and portable prototypes, model based design tools are increasingly used in design and implementation of embedded systems. These tools allow scalable high-level representations, model based semantics for analysis and optimization, and portable implementations that can be verified at higher levels of abstractions and targeted toward multiple platforms for implementation. The designer can experiment using such tools at an early stage in the design cycle, and employ the latest hardware at later stages. In this thesis, we have focused on dataflow-based approaches for rapid DSP system prototyping. This thesis contributes to various aspects of dataflow-based design flows and tools as follows: 1. We have introduced the concept of topological patterns, which exploits commonly found repetitive patterns in DSP algorithms to allow scalable, concise, and parameterizable representations of large scale dataflow graphs in high-level languages. We have shown how an underlying design tool can systematically exploit a high

  4. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry, and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  5. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  6. High performance graphics processors for medical imaging applications

    International Nuclear Information System (INIS)

    Goldwasser, S.M.; Reynolds, R.A.; Talton, D.A.; Walsh, E.S.

    1989-01-01

    This paper describes a family of high-performance graphics processors with special hardware for interactive visualization of 3D human anatomy. The basic architecture expands to multiple parallel processors, each processor using pipelined arithmetic and logical units for high-speed rendering of Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) data. User-selectable display alternatives include multiple 2D axial slices, reformatted images in sagittal or coronal planes and shaded 3D views. Special facilities support applications requiring color-coded display of multiple datasets (such as radiation therapy planning), or dynamic replay of time-varying volumetric data (such as cine-CT or gated MR studies of the beating heart). The current implementation is a single processor system which generates reformatted images in true real time (30 frames per second), and shaded 3D views in a few seconds per frame. It accepts full scale medical datasets in their native formats, so that minimal preprocessing delay exists between data acquisition and display
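
    The reformatting described above, cutting sagittal or coronal planes from a stack of axial slices, is at heart an index permutation over the volume. A minimal sketch, assuming a volume stored as nested lists indexed vol[z][y][x] (function names are ours, not the system's):

```python
# Reslice a 3D dataset stored as a stack of axial (z) slices.
def axial_slice(vol, z):
    """The stored orientation: one z-plane, copied."""
    return [row[:] for row in vol[z]]

def coronal_slice(vol, y):
    """Fix y: gather the same row from every axial slice."""
    return [vol[z][y][:] for z in range(len(vol))]

def sagittal_slice(vol, x):
    """Fix x: gather one column from every row of every axial slice."""
    return [[vol[z][y][x] for y in range(len(vol[0]))]
            for z in range(len(vol))]
```

    The sagittal and coronal cuts stride across the stored slices, which is why hardware of this era needed pipelined units to produce them at 30 frames per second.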

  7. Development of high performance scientific components for interoperability of computing packages

    Energy Technology Data Exchange (ETDEWEB)

    Gulabani, Teena Pratap [Iowa State Univ., Ames, IA (United States)

    2008-01-01

    Three major high performance quantum chemistry computational packages, NWChem, GAMESS and MPQC, have been developed by different research efforts following different design patterns. The goal is to achieve interoperability among these packages by overcoming the challenges caused by the different communication patterns and software designs of each package. Chemistry algorithms are difficult and time-consuming to develop; integrating large quantum chemistry packages enables resource sharing and thus avoids reinventing the wheel. Creating connections between these incompatible packages is the major motivation of the proposed work. This interoperability is achieved by bringing the benefits of Component Based Software Engineering through a plug-and-play component framework called the Common Component Architecture (CCA). In this thesis, I present a strategy and process used for interfacing two widely used and important computational chemistry methodologies: Quantum Mechanics and Molecular Mechanics. To show the feasibility of the proposed approach, the Tuning and Analysis Utility (TAU) has been coupled with the NWChem code and its CCA components. Results show that the overhead is negligible when compared to the ease and potential of organizing and coping with large-scale software applications.

  8. Load Balancing Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Pearce, Olga Tkachyshyn [Texas A & M Univ., College Station, TX (United States)

    2014-12-01

    The largest supercomputers have millions of independent processors, and concurrency levels are rapidly increasing. For ideal efficiency, developers of the simulations that run on these machines must ensure that computational work is evenly balanced among processors. Assigning work evenly is challenging because many large modern parallel codes simulate behavior of physical systems that evolve over time, and their workloads change over time. Furthermore, the cost of imbalanced load increases with scale because most large-scale scientific simulations today use a Single Program Multiple Data (SPMD) parallel programming model, and an increasing number of processors will wait for the slowest one at the synchronization points. To address load imbalance, many large-scale parallel applications use dynamic load balance algorithms to redistribute work evenly. The research objective of this dissertation is to develop methods to decide when and how to load balance the application, and to balance it effectively and affordably. We measure and evaluate the computational load of the application, and develop strategies to decide when and how to correct the imbalance. Depending on the simulation, a fast, local load balance algorithm may be suitable, or a more sophisticated and expensive algorithm may be required. We developed a model for comparison of load balance algorithms for a specific state of the simulation that enables the selection of a balancing algorithm that will minimize overall runtime.
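
    The when-to-balance decision described above can be reduced to a toy cost model: under SPMD each step costs as much as the most loaded processor, so rebalancing pays off when the projected per-step penalty over the remaining steps exceeds the one-time redistribution cost. A simplified sketch of that reasoning (the dissertation's actual model is more detailed):

```python
def should_rebalance(loads, steps_remaining, rebalance_cost):
    """Decide whether redistributing work is worth its one-time cost.

    In SPMD, every step costs max(loads) because processors wait for the
    slowest one at synchronization points; a perfect rebalance would
    bring the per-step cost down to mean(loads).
    """
    penalty_per_step = max(loads) - sum(loads) / len(loads)
    return penalty_per_step * steps_remaining > rebalance_cost
```

    The same comparison generalizes to choosing among balancing algorithms: a cheap local algorithm with a worse resulting balance may still beat an expensive global one when few steps remain.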

  9. Designing scientific applications on GPUs

    CERN Document Server

    Couturier, Raphael

    2013-01-01

    Many of today's complex scientific applications now require a vast amount of computational power. General purpose graphics processing units (GPGPUs) enable researchers in a variety of fields to benefit from the computational power of all the cores available inside graphics cards. Understand the benefits of using GPUs for many scientific applications: Designing Scientific Applications on GPUs shows you how to use GPUs for applications in diverse scientific fields, from physics and mathematics to computer science. The book explains the methods necessary for designing or porting your scientific appl

  10. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. By means of parallel processing, GPUs deliver much more powerful computational performance than conventional CPUs. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  11. Industrial applications of high-performance computing best global practices

    CERN Document Server

    Osseyran, Anwar

    2015-01-01

    "This book gives a comprehensive and up-to-date overview of the rapidly expanding field of the industrial use of supercomputers. It is just a pleasure reading through informative country reports and in-depth case studies contributed by leading researchers in the field." -Jysoo Lee, Principal Researcher, Korea Institute of Science and Technology Information. "From telescopes to microscopes, from vacuums to hyperbaric chambers, from sonar waves to laser beams, scientists have perpetually strived to apply technology and invention to new frontiers of scientific advancement. Along the way, they hav

  12. High Performance Computing for Solving Fractional Differential Equations with Applications

    OpenAIRE

    Zhang, Wei

    2014-01-01

    Fractional calculus is the generalization of integer-order calculus to rational order. The subject has at least three hundred years of history. However, it was traditionally regarded as a purely mathematical field and lacked real-world applications for a very long time. In recent decades, fractional calculus has re-attracted the attention of scientists and engineers. For example, many researchers have found that fractional calculus is a useful tool for describing hereditary materials and p...
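    The kind of numerical tool this record alludes to can be illustrated with the Grünwald–Letnikov scheme, a standard discretization of the order-alpha derivative. This is a minimal sketch under that standard definition, not code from the thesis.

```python
def gl_fractional_derivative(f, alpha, t, h=1e-3):
    # Grünwald–Letnikov approximation of the order-alpha derivative of f at t:
    # D^alpha f(t) ~ h^(-alpha) * sum_k w_k f(t - k h)
    n = int(t / h)
    w = 1.0             # binomial weight w_0
    total = w * f(t)
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k   # recurrence for (-1)^k * C(alpha, k)
        total += w * f(t - k * h)
    return total / h ** alpha

# Sanity check: for alpha = 1 the scheme reduces to a backward difference,
# so the derivative of f(t) = t at t = 1 comes out as 1.
print(round(gl_fractional_derivative(lambda t: t, 1.0, 1.0), 6))  # 1.0
```

    For non-integer alpha the sum runs over the whole history of f, which is exactly why fractional models capture hereditary (memory) effects and why the thesis turns to high performance computing.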

  13. Monte Carlo Frameworks: Building Customisable High-performance C++ Applications

    CERN Document Server

    Duffy, Daniel J

    2011-01-01

    This is one of the first books that describes all the steps needed to analyze, design and implement Monte Carlo applications. It discusses the financial theory as well as the mathematical and numerical background needed to write flexible and efficient C++ code using state-of-the-art design and system patterns, object-oriented and generic programming models in combination with standard libraries and tools. Includes a CD containing the source code for all examples. It is strongly advised that you experiment with the code by compiling it and extending it to suit your ne...
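    A bare-bones example of the Monte Carlo method the book builds on — estimating pi by random sampling — can be sketched as follows. This is illustrative Python, whereas the book itself develops C++ frameworks.

```python
import random

def monte_carlo_pi(samples, seed=42):
    # Estimate pi by sampling points in the unit square and counting
    # how many fall inside the quarter circle of radius 1.
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

estimate = monte_carlo_pi(100_000)
print(estimate)  # close to 3.14 at this sample size
```

    The error shrinks like 1/sqrt(N), which is why production Monte Carlo codes lean on variance reduction and parallelism rather than raw sample counts.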

  14. High-performance heat pipes for heat recovery applications

    Science.gov (United States)

    Saaski, E. W.; Hartl, J. H.

    1980-01-01

    Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.

  15. High-performance insulator structures for accelerator applications

    International Nuclear Information System (INIS)

    Sampayan, S.E.; Caporaso, G.J.; Sanders, D.M.; Stoddard, R.D.; Trimble, D.O.; Elizondo, J.; Krogh, M.L.; Wieskamp, T.F.

    1997-05-01

    A new, high gradient insulator technology has been developed for accelerator systems. The concept involves the use of alternating layers of conductors and insulators with periods of order 1 mm or less. These structures perform many times better (about 1.5 to 4 times higher breakdown electric field) than conventional insulators in long pulse, short pulse, and alternating polarity applications. We describe our ongoing studies investigating the degradation of the breakdown electric field resulting from alternate fabrication techniques, the effect of gas pressure, the effect of the insulator-to-electrode interface gap spacing, and the performance of the insulator structure under bi-polar stress

  16. High performance polypyrrole coating for corrosion protection and biocidal applications

    Science.gov (United States)

    Nautiyal, Amit; Qiao, Mingyu; Cook, Jonathan Edwin; Zhang, Xinyu; Huang, Tung-Shi

    2018-01-01

    Polypyrrole (PPy) coatings were electrochemically synthesized on carbon steel using sulfonic acids as dopants: p-toluene sulfonic acid (p-TSA), sulfuric acid (SA), (±) camphor sulfonic acid (CSA), sodium dodecyl sulfate (SDS), and sodium dodecylbenzene sulfonate (SDBS). The effect of the acidic dopants (p-TSA, SA, CSA) on the passivation of carbon steel was investigated by linear potentiodynamic polarization and compared with the morphology and corrosion protection performance of the coatings produced. The type of dopant used significantly affected the protection efficiency of the coating against chloride ion attack on the metal surface. The corrosion performance depends on the size and alignment of the dopant in the polymer backbone. Both p-TSA and SDBS have an extra benzene ring; these rings stack together to form a lamellar, sheet-like barrier to chloride ions, making them appropriate dopants for PPy coatings that suppress corrosion at a significant level. Further, adhesion performance was enhanced by adding a long-chain carboxylic acid (decanoic acid) directly to the monomer solution. In addition, the PPy coating doped with SDBS displayed excellent biocidal ability against Staphylococcus aureus. Polypyrrole coatings on carbon steel with the dual functions of corrosion protection and excellent biocidal properties show great potential for industrial anti-corrosion/antimicrobial applications.

  17. Development and application of high performance resins for crud removal

    International Nuclear Information System (INIS)

    Deguchi, Tatsuya; Izumi, Takeshi; Hagiwara, Masahiro

    1998-01-01

    The development of crud removal technology started with the finding of the resin aging effect: an old ion exchange resin, aged by long years of use in the condensate demineralizer, had an enhanced crud removal capability. It was confirmed that some physical properties, such as specific surface area and water retention capacity, were increased due to degradation caused by long years of contact with active oxygen species in the condensate water. It was therefore speculated that this degradation of the resin matrix enhanced the adsorption of crud particulate onto the resin surface, and hence the crud removal capability. Based on this, a crud removal resin with greater surface area was first developed. This resin showed excellent crud removal efficiency in an actual power plant, and the crud iron concentration in the condensate effluent was drastically reduced by this application. However, the cross-linkage of the cation resin had to be lowered in a delicate manner for that specific purpose, and this caused higher organic leachables from the resin, so the sulfate level in the reactor rose accordingly. Our major goal, therefore, has been to develop a crud resin with as few organic leachables as possible while keeping the original crud removal efficiency. Evaluation of the first-generation crud resin and its improved version installed in actual condensate demineralizers revealed a good correlation between crud removal efficiency and organic leaching rate. The best of a number of developmental resins showed an organic leaching rate 1/10 of that of the original crud resin (ETR-C) and a crud removal efficiency of 90%. So far as we understand, this resin has the best overall balance between crud removal and leaching characteristics. The results of a six-month evaluation of this developmental resin, ETR-C3, in one vessel of the condensate demineralizer of a power plant will be presented. (J.P.N.)

  18. High Performance Multi-GPU SpMV for Multi-component PDE-Based Applications

    KAUST Repository

    Abdelfattah, Ahmad; Ltaief, Hatem; Keyes, David E.

    2015-01-01

    -block structure. While these optimizations are important for high performance dense kernel executions, they are even more critical when dealing with sparse linear algebra operations. The most time-consuming phase of many multicomponent applications, such as models
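    The sparse matrix-vector product (SpMV) at the heart of this record can be sketched in its standard CSR (compressed sparse row) form. This is a minimal sequential Python version for illustration; the paper itself targets optimized multi-GPU execution.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    # Sparse matrix-vector product y = A @ x with A in CSR layout:
    # row i's nonzeros live in values[row_ptr[i]:row_ptr[i+1]],
    # with their column positions in the matching slice of col_idx.
    y = []
    for i in range(len(row_ptr) - 1):
        acc = 0.0
        for j in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[j] * x[col_idx[j]]
        y.append(acc)
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```

    The irregular, indirect accesses through `col_idx` are what make SpMV memory-bound, which is why block structure and careful data layout matter so much on GPUs.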

  19. Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2008-01-01

    Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset
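    The histogram-and-threshold style of data subsetting described in this record can be sketched in miniature. Plain Python for illustration; the paper uses FastBit bitmap indexes to accelerate exactly these operations at scale.

```python
def histogram(data, nbins, lo, hi):
    # Fixed-width histogram; out-of-range values are clamped to edge bins.
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for v in data:
        b = min(nbins - 1, max(0, int((v - lo) / width)))
        counts[b] += 1
    return counts

def select(data, threshold):
    # Index set of records passing a one-dimensional threshold,
    # the building block of multi-dimensional subsetting.
    return [i for i, v in enumerate(data) if v > threshold]

energies = [0.1, 2.5, 0.4, 3.9, 1.2, 3.1]
print(histogram(energies, 4, 0.0, 4.0))  # [2, 1, 1, 2]
print(select(energies, 3.0))             # [3, 5]
```

    Intersecting such index sets across several variables (energy, momentum, position) yields the particle-beam subsets the researchers visualize with parallel coordinates.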

  20. Scientific applications of symbolic computation

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1976-02-01

    The use of symbolic computation systems for problem solving in scientific research is reviewed. The nature of the field is described, and particular examples are considered from celestial mechanics, quantum electrodynamics and general relativity. Symbolic integration and some more recent applications of algebra systems are also discussed [fr

  1. Development of high performance Schottky barrier diode and its application to plasma diagnostics

    International Nuclear Information System (INIS)

    Fujita, Junji; Kawahata, Kazuo; Okajima, Shigeki

    1993-10-01

    At the conclusion of the Supporting Collaboration Research on 'Development of High Performance Detectors in the Far Infrared Range', carried out from FY1990 to FY1992, the results of developing the Schottky barrier diode and its application to plasma diagnostics are summarized. Some remarks, as well as technical know-how for the correct use of the diodes, are also described. (author)

  2. Application of secondary ion mass spectrometry for the characterization of commercial high performance materials

    International Nuclear Information System (INIS)

    Gritsch, M.

    2000-09-01

    Industry today offers countless high-performance materials that have to meet the highest standards. Commercial high-performance materials, though often sold in large quantities, still require ongoing research and development to keep up with increasing needs and decreasing tolerances. Furthermore, a variety of materials on the market are not fully understood in their microstructure, in the way they react under application conditions, and in which mechanisms are responsible for their degradation. Secondary Ion Mass Spectrometry (SIMS) is an analytical method that has now been in commercial use for over 30 years. Its main advantages are its very high detection sensitivity (down to ppb), the ability to measure all elements with isotopic sensitivity, the ability to acquire laterally resolved images, and the inherent capability of depth profiling. These features make it an ideal tool for a wide field of applications within advanced materials science. The present work gives an introduction to the principles of SIMS and shows its successful application to the characterization of commercially used high-performance materials. Finally, a selected collection of my publications in reviewed journals illustrates the state of the art in applied materials research and development with dynamic SIMS. All publications focus on the application of dynamic SIMS to analytical questions arising during the production and improvement of high-performance materials. (author)

  3. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable, high-performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open-source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high-performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines.
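    The task-parallel pattern behind such a framework — farming independent statistical routines out to workers — can be sketched with Python's standard library. This is a thread-based illustration only; Parallel R itself parallelizes R routines on HPC architectures.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean, stdev

def summarize(column):
    # The per-column routine each worker applies independently.
    return {"mean": mean(column), "sd": stdev(column)}

columns = [
    [1.0, 2.0, 3.0],
    [10.0, 10.0, 10.0, 10.0],
    [2.0, 4.0, 6.0, 8.0],
]

# Task-parallel apply: each column is summarized by a separate worker,
# mirroring how independent analyses are farmed out to processors.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(summarize, columns))

print(results[0]["mean"])  # 2.0
```

    Because the per-column analyses share no state, the scientist's code is unchanged whether one worker or a thousand run it, which is the usability point the abstract emphasizes.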

  4. High-performance floating-point image computing workstation for medical applications

    Science.gov (United States)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High-performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple-monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel-selectable region-of-interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e

  5. Precision ring rolling technique and application in high-performance bearing manufacturing

    Directory of Open Access Journals (Sweden)

    Hua Lin

    2015-01-01

    Full Text Available High-performance bearings have significant applications in many important industrial fields, such as automobiles, precision machine tools and wind power. Precision ring rolling is an advanced rotary forming technique for manufacturing high-performance seamless bearing rings, and thus can improve the working life of bearings. In this paper, three kinds of precision ring rolling techniques adapted to different dimensional ranges of bearings are introduced: cold ring rolling for small-scale bearings, hot radial ring rolling for medium-scale bearings, and hot radial-axial ring rolling for large-scale bearings. The forming principles, technological features and forming equipment for the three kinds of precision ring rolling techniques are summarized, the technological development and industrial application in China are introduced, and the main technological development trends are described.

  6. In search of novel, high performance and intelligent materials for applications in severe and unconditioned environments

    International Nuclear Information System (INIS)

    Gyeabour Ayensu, A. I.; Normeshie, C. M. K.

    2007-01-01

    For extreme operating conditions in aerospace, nuclear power plants and medical applications, novel materials have become more competitive than traditional materials because of their unique characteristics. Extensive research programmes are being undertaken to develop high-performance and knowledge-intensive new materials, since existing materials cannot meet the stringent technological requirements of advanced materials for emerging industries. The technologies of intermetallic compounds, nanostructural materials, advanced composites, and photonic materials are presented. In addition, medical biomaterial implants of high functional performance, based on biocompatibility, resistance against corrosion and degradation, and suitability for the hostile environment of the human body, are discussed. The opportunities for African researchers to collaborate in international research programmes to develop local raw materials into high-performance materials are also highlighted. (au)

  7. Core-Shell Columns in High-Performance Liquid Chromatography: Food Analysis Applications

    OpenAIRE

    Preti, Raffaella

    2016-01-01

    The increased separation efficiency provided by the new technology of columns packed with core-shell particles in high-performance liquid chromatography (HPLC) has resulted in their widespread diffusion in several analytical fields: pharmaceutical, biological, environmental, and toxicological. The present paper presents their most recent applications in food analysis. Their use has proved to be particularly advantageous for the determination of compounds at trace levels or when a large am...

  8. Are Cloud Environments Ready for Scientific Applications?

    Science.gov (United States)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments, as evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to

  9. DEVICE TECHNOLOGY. Nanomaterials in transistors: From high-performance to thin-film applications.

    Science.gov (United States)

    Franklin, Aaron D

    2015-08-14

    For more than 50 years, silicon transistors have been continuously shrunk to meet the projections of Moore's law but are now reaching fundamental limits on speed and power use. With these limits at hand, nanomaterials offer great promise for improving transistor performance and adding new applications through the coming decades. With different transistors needed in everything from high-performance servers to thin-film display backplanes, it is important to understand the targeted application needs when considering new material options. Here the distinction between high-performance and thin-film transistors is reviewed, along with the benefits and challenges to using nanomaterials in such transistors. In particular, progress on carbon nanotubes, as well as graphene and related materials (including transition metal dichalcogenides and X-enes), outlines the advances and further research needed to enable their use in transistors for high-performance computing, thin films, or completely new technologies such as flexible and transparent devices. Copyright © 2015, American Association for the Advancement of Science.

  10. Tokamaks with high-performance resistive magnets: advanced test reactors and prospects for commercial applications

    International Nuclear Information System (INIS)

    Bromberg, L.; Cohn, D.R.; Williams, J.E.C.; Becker, H.; Leclaire, R.; Yang, T.

    1981-10-01

    Scoping studies have been made of tokamak reactors with high-performance resistive magnets which maximize the advantages gained from high-field operation and reduced shielding requirements, and minimize resistive power requirements. High-field operation can provide very high values of fusion power density and nτ_e while the resistive power losses can be kept relatively small. Relatively high values of Q' = Fusion Power/Magnet Resistive Power can be obtained. The use of high field also facilitates operation in the DD-DT advanced fuel mode. The general engineering and operational features of machines with high-performance magnets are discussed. Illustrative parameters are given for advanced test reactors and for possible commercial reactors. Commercial applications discussed include the production of fissile fuel, electricity generation with and without fissioning blankets, and synthetic fuel production.

  11. DEVELOPMENT OF NEW VALVE STEELS FOR APPLICATION IN HIGH PERFORMANCE ENGINES

    Directory of Open Access Journals (Sweden)

    Alexandre Bellegard Farina

    2013-12-01

    Full Text Available UNS N07751 and UNS N07080 alloys are commonly applied in the production of automotive valves for high-performance internal combustion engines. These alloys offer high hot strength and resistance to oxidation, corrosion and creep, as well as good microstructural stability. However, they present low wear resistance and high cost due to their high nickel content. In this work, the development of two new Ni-based alloys is presented for application in high-performance automotive valves as an alternative to the UNS N07751 and UNS N07080 alloys. The newly developed alloys are based on a high nickel-chromium austenitic matrix with a dispersion of γ’ and γ’’ phases, and contain different NbC contents. Due to their reduced nickel content in comparison with the alloys currently in use, the new alloys present an economic advantage for the substitution of UNS N07751 and UNS N07080 alloys.

  12. Research and Application of New Type of High Performance Titanium Alloy

    Directory of Open Access Journals (Sweden)

    ZHU Zhishou

    2016-06-01

    Full Text Available With the continuous extension of the quantity and range of titanium alloy applications in fields such as aviation, space, weaponry, and the marine and chemical industries, even more critical requirements have been placed on the comprehensive mechanical properties, low cost and processing properties of titanium alloys. Through alloying based on microstructure parameter design, combined with the strengthening and toughening technologies of fine-grain strengthening, phase transformation and process control, a new type of high-performance titanium alloy has been researched and manufactured with good comprehensive properties of high strength and toughness, fatigue resistance, failure resistance and impact resistance. The new titanium alloy has extended the quantity and level of applications in high-end fields, realized industrial upgrading and reform, and met the application requirements of next-generation equipment.

  13. Implementation of Scientific Computing Applications on the Cell Broadband Engine

    Directory of Open Access Journals (Sweden)

    Guochun Shi

    2009-01-01

    Full Text Available The Cell Broadband Engine architecture is a revolutionary processor architecture well suited for many scientific codes. This paper reports on an effort to implement several traditional high-performance scientific computing applications on the Cell Broadband Engine processor, including molecular dynamics, quantum chromodynamics and quantum chemistry codes. The paper discusses data and code restructuring strategies necessary to adapt the applications to the intrinsic properties of the Cell processor and demonstrates performance improvements achieved on the Cell architecture. It concludes with the lessons learned and provides practical recommendations on optimization techniques that are believed to be most appropriate.

  14. Application of Ionic Liquids in High Performance Reversed-Phase Chromatography

    Directory of Open Access Journals (Sweden)

    Wentao Bi

    2009-06-01

    Full Text Available Ionic liquids, considered "green" chemicals, are widely used in many areas of analytical chemistry due to their unique properties. Recently, ionic liquids have been used as novel additives in separations and have been combined with silica to synthesize new stationary phases as separation media. This review focuses on the properties and mechanisms of ionic liquids and their potential applications as mobile phase modifiers and surface-bonded stationary phases in reversed-phase high performance liquid chromatography (RP-HPLC). Ionic liquids demonstrate advantages and potential in the chromatographic field.

  15. High Performance Computing - Power Application Programming Interface Specification Version 2.0.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ward, H. Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  16. Scientific Applications Performance Evaluation on Burst Buffer

    KAUST Repository

    Markomanolis, George S.

    2017-10-19

    Parallel I/O is an integral component of modern high performance computing, especially in storing and processing very large datasets, as in seismic imaging, CFD, combustion and weather modeling. The storage hierarchy nowadays includes additional layers, the latest being the use of SSD-based storage as a Burst Buffer for I/O acceleration. We present an in-depth analysis of how to use the Burst Buffer for specific cases and how the internal MPI I/O aggregators operate according to the options that the user provides at job submission. We analyze the performance of a range of I/O-intensive scientific applications, at various scales, on a large installation of the Lustre parallel file system compared to an SSD-based Burst Buffer. Our results show a performance improvement over Lustre when using the Burst Buffer. Moreover, we show results from a data hierarchy library which indicate that standard I/O approaches are not enough to get the expected performance from this technology. The performance gain on the total execution time of the studied applications is between 1.16 and 3 times compared to Lustre. One of the test cases achieved an impressive I/O throughput of 900 GB/s on the Burst Buffer.

  17. Performance Issues in High Performance Fortran Implementations of Sensor-Based Applications

    Directory of Open Access Journals (Sweden)

    David R. O'hallaron

    1997-01-01

    Full Text Available Applications that get their inputs from sensors are an important and often overlooked application domain for High Performance Fortran (HPF). Such sensor-based applications typically perform regular operations on dense arrays, and often have latency and throughput requirements that can only be achieved with parallel machines. This article describes a study of sensor-based applications, including the fast Fourier transform, synthetic aperture radar imaging, narrowband tracking radar processing, multibaseline stereo imaging, and medical magnetic resonance imaging. The applications are written in a dialect of HPF developed at Carnegie Mellon, and are compiled by the Fx compiler for the Intel Paragon. The main results of the study are that (1) it is possible to realize good performance for realistic sensor-based applications written in HPF and (2) the performance of the applications is determined by the performance of three core operations: independent loops (i.e., loops with no dependences between iterations), reductions, and index permutations. The article discusses the implications for HPF implementations and introduces some simple tests that implementers and users can use to measure the efficiency of the loops, reductions, and index permutations generated by an HPF compiler.
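    The three core operations the study identifies can be written down directly. This is sequential Python for illustration; in HPF each operation would be distributed across processors by the compiler.

```python
data = [3.0, 1.0, 4.0, 1.0, 5.0]

# Independent loop: no dependence between iterations, so a parallelizing
# compiler can split the iterations across processors freely.
squared = [x * x for x in data]

# Reduction: combine all elements with an associative operator;
# parallel machines compute this as a tree of partial sums.
total = sum(squared)

# Index permutation: reorder elements by a permutation vector, which
# on a distributed machine becomes all-to-all communication.
perm = [4, 3, 2, 1, 0]
reversed_data = [data[p] for p in perm]

print(total)          # 52.0
print(reversed_data)  # [5.0, 1.0, 4.0, 1.0, 3.0]
```

    The article's point is that once a compiler handles these three patterns efficiently, the sensor-based applications studied inherit that efficiency.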

  18. Characterization of high performance silicon-based VMJ PV cells for laser power transmission applications

    Science.gov (United States)

    Perales, Mico; Yang, Mei-huan; Wu, Cheng-liang; Hsu, Chin-wei; Chao, Wei-sheng; Chen, Kun-hsien; Zahuranec, Terry

    2016-03-01

    Continuing improvements in the cost and power of laser diodes have been critical in launching the emerging fields of power over fiber (PoF), and laser power beaming. Laser power is transmitted either over fiber (for PoF), or through free space (power beaming), and is converted to electricity by photovoltaic cells designed to efficiently convert the laser light. MH GoPower's vertical multi-junction (VMJ) PV cell, designed for high intensity photovoltaic applications, is fueling the emergence of this market, by enabling unparalleled photovoltaic receiver flexibility in voltage, cell size, and power output. Our research examined the use of the VMJ PV cell for laser power transmission applications. We fully characterized the performance of the VMJ PV cell under various laser conditions, including multiple near IR wavelengths and light intensities up to tens of watts per cm2. Results indicated VMJ PV cell efficiency over 40% for 9xx nm wavelengths, at laser power densities near 30 W/cm2. We also investigated the impact of the physical dimensions (length, width, and height) of the VMJ PV cell on its performance, showing similarly high performance across a wide range of cell dimensions. We then evaluated the VMJ PV cell performance within the power over fiber application, examining the cell's effectiveness in receiver packages that deliver target voltage, intensity, and power levels. By designing and characterizing multiple receivers, we illustrated techniques for packaging the VMJ PV cell for achieving high performance (> 30%), high power (> 185 W), and target voltages for power over fiber applications.

  19. High Performance Wideband CMOS CCI and its Application in Inductance Simulator Design

    Directory of Open Access Journals (Sweden)

    ARSLAN, E.

    2012-08-01

    Full Text Available In this paper, a new, differential-pair-based, low-voltage, high-performance and wideband CMOS first generation current conveyor (CCI) is proposed. The proposed CCI has high voltage swings on ports X and Y and very low equivalent impedance on port X due to its super source follower configuration. It also has high voltage swings (close to the supply voltages) on its input and output ports, and wideband current and voltage transfer ratios. Furthermore, two novel grounded inductance simulator circuits are proposed as application examples. Using HSpice, it is shown that the simulation results of the proposed CCI and of the presented inductance simulators are in very good agreement with the expected ones.
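The abstract does not give the simulator topology, but classic gyrator-type grounded inductance realizations built from current conveyors produce an equivalent inductance of the form L_eq = R1 * R2 * C. Purely as a sketch under that assumption, with hypothetical component values:

```python
# Gyrator-style simulated grounded inductance (an assumption here,
# not the specific circuit of the paper): L_eq = R1 * R2 * C.
# Component values below are hypothetical.

def simulated_inductance_h(r1_ohm, r2_ohm, c_farad):
    return r1_ohm * r2_ohm * c_farad

l_eq = simulated_inductance_h(1e3, 1e3, 1e-9)  # 1 kOhm, 1 kOhm, 1 nF -> 1 mH
```

The practical appeal is exactly this scaling: a millihenry-range inductance is emulated with resistor and capacitor values that integrate easily on chip.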

  20. High Performance Computing - Power Application Programming Interface Specification Version 1.4

    Energy Technology Data Exchange (ETDEWEB)

    Laros III, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); DeBonis, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Levenhagen, Michael J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Olivier, Stephen Lecler [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
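The specification itself defines the concrete interfaces; purely to illustrate what a portable measurement-and-control hook at one layer of the stack might look like, here is a hypothetical sketch (none of these class or method names come from the actual Power API):

```python
# Hypothetical, minimal shape for a portable power-measurement
# interface of the kind the specification standardizes. Names are
# illustrative only, not the actual Power API.

class PowerMeter:
    def __init__(self):
        self._readings = []

    def record(self, watts):
        # A real implementation would sample a hardware counter or
        # vendor-specific sensor behind this portable call.
        self._readings.append(watts)

    def average_power(self):
        return sum(self._readings) / len(self._readings)

meter = PowerMeter()
for w in (95.0, 105.0, 100.0):
    meter.record(w)
avg = meter.average_power()  # 100.0
```

The point of a standard API is that the caller above stays identical whether the readings come from a node-level sensor, a rack PDU, or a facility meter.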

  1. Core-Shell Columns in High-Performance Liquid Chromatography: Food Analysis Applications

    Science.gov (United States)

    Preti, Raffaella

    2016-01-01

    The increased separation efficiency provided by the new technology of columns packed with core-shell particles in high-performance liquid chromatography (HPLC) has resulted in their widespread diffusion in several analytical fields: pharmaceutical, biological, environmental, and toxicological. The present paper presents their most recent applications in food analysis. Their use has proved particularly advantageous for the determination of compounds at trace levels, or when a large number of samples must be analyzed quickly using reliable and solvent-saving apparatus. The literature described here shows that the outstanding performance provided by core-shell particle columns on traditional HPLC instruments is comparable to that obtained with costly UHPLC instrumentation, making this novel column a promising key tool in food analysis. PMID:27143972

  2. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
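The decoupling idea above, in which per-process checkpoint files are repacked as objects for a cloud store, can be sketched in a few lines. The function name and object-key scheme below are hypothetical illustrations, not PLFS itself:

```python
# Sketch of the middleware step: per-rank checkpoint files become
# (key, bytes) objects suitable for a cloud object store. The key
# layout "job/ckpt/rank-NNNNN" is a hypothetical convention.

def files_to_objects(checkpoint_files, job_id):
    objects = {}
    for rank, data in checkpoint_files.items():
        key = f"{job_id}/ckpt/rank-{rank:05d}"
        objects[key] = data
    return objects

objs = files_to_objects({0: b"state0", 1: b"state1"}, job_id="job42")
```

The middleware's value is that the parallel application keeps writing ordinary files while the log-structured layer performs this conversion transparently, possibly staged through a burst buffer node as the abstract notes.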

  3. APPLICATION OF ULTRA-HIGH PERFORMANCE CONCRETE TO PEDESTRIAN CABLE-STAYED BRIDGES

    Directory of Open Access Journals (Sweden)

    CHI-DONG LEE

    2013-06-01

    Full Text Available The use of ultra-high performance concrete (UHPC), which enables reducing the cross-sectional dimensions of structures due to its high strength, is expected in the construction of super-long span bridges. Unlike conventional concrete, UHPC experiences less variation of material properties such as creep and drying shrinkage, and can reduce uncertainties in predicting long-term time-dependent behavior. This study describes UHPC's material characteristics and benefits when applied to super-long span bridges. A UHPC girder pedestrian cable-stayed bridge was designed and successfully constructed. The UHPC reduced deflections in both the short and long term. The cost analysis demonstrates a highly competitive price for UHPC. This study indicates that UHPC has strong potential for application in super-long span bridges.

  4. [Reversed-phase high-performance liquid chromatography--application to serum aluminium monitoring].

    Science.gov (United States)

    Hoshino, H; Kaneko, E

    1996-01-01

    High-performance liquid chromatography (HPLC) with reversed-phase partition mode separation (including the ion-pair mode) of metal chelate compounds prepared in an off-line fashion (precolumn chelation) is highly versatile in terms of high sensitivity with baseline flatness, unique selectivity and cost effectiveness. Its extraordinary robustness against the complicated matrices encountered in clinical testing is exemplified by its successful application to the aluminium monitoring of human serum samples. The Al chelate with 2,2'-dihydroxyazobenzene is efficiently chromatographed on a LiChroCART RP-18 column using an aqueous methanol eluent (63.6 wt%) containing tetrabutylammonium bromide as an ion-pair agent. Serum concentration levels of Al down to 6 micrograms dm-3 are readily monitored without interference from iron, chyle or haemolysis.

  5. High Performance Multi-GPU SpMV for Multi-component PDE-Based Applications

    KAUST Repository

    Abdelfattah, Ahmad

    2015-07-25

    Leveraging optimization techniques (e.g., register blocking and double buffering) introduced in the context of KBLAS, a Level 2 BLAS high performance library on GPUs, the authors implement dense matrix-vector multiplications within a sparse-block structure. While these optimizations are important for high performance dense kernel executions, they are even more critical when dealing with sparse linear algebra operations. The most time-consuming phase of many multicomponent applications, such as models of reacting flows or petroleum reservoirs, is the solution at each implicit time step of large, sparse spatially structured or unstructured linear systems. The standard method is a preconditioned Krylov solver. The Sparse Matrix-Vector multiplication (SpMV) is, in turn, one of the most time-consuming operations in such solvers. Because there is no data reuse of the elements of the matrix within a single SpMV, kernel performance is limited by the speed at which data can be transferred from memory to registers, making the bus bandwidth the major bottleneck. On the other hand, in case of a multi-species model, the resulting Jacobian has a dense block structure. For contemporary petroleum reservoir simulations, the block size typically ranges from three to a few dozen among different models, and still larger blocks are relevant within adaptively model-refined regions of the domain, though generally the size of the blocks, related to the number of conserved species, is constant over large regions within a given model. This structure can be exploited beyond the convenience of a block compressed row data format, because it offers opportunities to hide the data motion with useful computations. The new SpMV kernel outperforms existing state-of-the-art implementations on single and multi-GPUs using matrices with dense block structure representative of porous media applications with both structured and unstructured multi-component grids.
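The block-structured SpMV at the heart of the kernel can be sketched in a few lines. This is a CPU-side NumPy illustration of the block compressed row idea, not the GPU kernel itself, and the matrix values are hypothetical:

```python
import numpy as np

# Minimal block-sparse SpMV: each stored entry is a dense b x b block,
# so one loaded block updates a whole b-length slice of y. This data
# reuse within a block is what the GPU kernel exploits to hide memory
# traffic behind computation.

def bsr_spmv(block_rows, x, b):
    # block_rows[i] is a list of (block-column index, dense b x b block).
    y = np.zeros(len(block_rows) * b)
    for i, row in enumerate(block_rows):
        for j, block in row:
            y[i*b:(i+1)*b] += block @ x[j*b:(j+1)*b]
    return y

b = 2
I = np.eye(b)
# Hypothetical 2x2-block matrix: diagonal blocks I, one off-diagonal 2I.
rows = [[(0, I), (1, 2 * I)],
        [(1, I)]]
x = np.array([1.0, 2.0, 3.0, 4.0])
y = bsr_spmv(rows, x, b)  # [7.0, 10.0, 3.0, 4.0]
```

For a multi-species Jacobian, b is the number of conserved species per grid cell, which is why the abstract's block sizes of "three to a few dozen" arise naturally.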

  6. Graphene/CuS/ZnO hybrid nanocomposites for high performance photocatalytic applications

    International Nuclear Information System (INIS)

    Varghese, Jini; Varghese, K.T.

    2015-01-01

    We herein report a novel, high performance ternary nanocomposite composed of Graphene doped with nano Copper Sulphide and Zinc Oxide nanotubes (GCZ) for the photodegradation of organic pollutants. Investigations were made to estimate and compare Methyl Orange (MO) dye degradation using GCZ, synthesized pristine Graphene (Gr) and a Graphene–ZnO hybrid nanocomposite (GZ) under UV light irradiation. The synthesis of the nanocomposites involves simple ultra-sonication and mixing methods. The nanocomposites were characterized using transmission electron microscopy (TEM), high resolution transmission electron microscopy (HR-TEM), X-ray diffraction (XRD), Raman spectroscopy, UV–vis absorption spectroscopy and the Brunauer–Emmett–Teller (BET) surface area method. The as-synthesized GCZ shows better surface area, porosity and band gap energy than the as-synthesized Gr and GZ. The photocatalytic degradation of methyl orange dye follows the order GCZ > GZ > Gr, owing to the stronger adsorbability, larger number of photo-induced electrons and greatest inhibition of charge-carrier recombination in GCZ. The kinetic investigation demonstrates that the dye degradation follows the pseudo-first-order kinetic model, with rate constants of 0.1322, 0.049 and 0.0109 min−1 for GCZ, GZ and Gr, respectively. The mechanism of dye degradation in the presence of the photocatalyst is also discussed. This study confirms that GCZ is a promising material for high performance catalytic applications, especially in dye wastewater purification. - Highlights: • Graphene–CuS–ZnO hybrid composites show better surface area, porosity and adsorbability. • The CuS–ZnO hybrid nanostructure greatly enhanced the photocatalytic activity of Graphene. • Graphene–CuS–ZnO hybrid composites show superior photocatalytic efficiency, rate constant and quantum yield.
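The pseudo-first-order model reported above is C(t) = C0 exp(-kt), so each quoted rate constant translates directly into a dye half-life via t½ = ln(2)/k:

```python
import math

# Pseudo-first-order kinetics: C(t) = C0 * exp(-k * t), hence the
# half-life t_half = ln(2) / k, with k in min^-1 as quoted above.

def half_life_min(k_per_min):
    return math.log(2.0) / k_per_min

t_gcz = half_life_min(0.1322)  # GCZ: ~5.2 min
t_gz = half_life_min(0.049)    # GZ:  ~14.1 min
t_gr = half_life_min(0.0109)   # Gr:  ~63.6 min
```

The roughly twelvefold gap in half-life between GCZ and pristine Gr makes the ranking GCZ > GZ > Gr concrete.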

  7. Graphene/CuS/ZnO hybrid nanocomposites for high performance photocatalytic applications

    Energy Technology Data Exchange (ETDEWEB)

    Varghese, Jini, E-mail: jini.nano@gmail.com; Varghese, K.T., E-mail: ktvscs@gmail.com

    2015-11-01

    We herein report a novel, high performance ternary nanocomposite composed of Graphene doped with nano Copper Sulphide and Zinc Oxide nanotubes (GCZ) for the photodegradation of organic pollutants. Investigations were made to estimate and compare Methyl Orange (MO) dye degradation using GCZ, synthesized pristine Graphene (Gr) and a Graphene–ZnO hybrid nanocomposite (GZ) under UV light irradiation. The synthesis of the nanocomposites involves simple ultra-sonication and mixing methods. The nanocomposites were characterized using transmission electron microscopy (TEM), high resolution transmission electron microscopy (HR-TEM), X-ray diffraction (XRD), Raman spectroscopy, UV–vis absorption spectroscopy and the Brunauer–Emmett–Teller (BET) surface area method. The as-synthesized GCZ shows better surface area, porosity and band gap energy than the as-synthesized Gr and GZ. The photocatalytic degradation of methyl orange dye follows the order GCZ > GZ > Gr, owing to the stronger adsorbability, larger number of photo-induced electrons and greatest inhibition of charge-carrier recombination in GCZ. The kinetic investigation demonstrates that the dye degradation follows the pseudo-first-order kinetic model, with rate constants of 0.1322, 0.049 and 0.0109 min{sup −1} for GCZ, GZ and Gr, respectively. The mechanism of dye degradation in the presence of the photocatalyst is also discussed. This study confirms that GCZ is a promising material for high performance catalytic applications, especially in dye wastewater purification. - Highlights: • Graphene–CuS–ZnO hybrid composites show better surface area, porosity and adsorbability. • The CuS–ZnO hybrid nanostructure greatly enhanced the photocatalytic activity of Graphene. • Graphene–CuS–ZnO hybrid composites show superior photocatalytic efficiency, rate constant and quantum yield.

  8. Applications of plasma spectrometry and high performance liquid chromatography in environmental and food science

    International Nuclear Information System (INIS)

    Iordache, Andreea-Maria; Biraruti, Elisabeta-Irina; Ionete, Roxana-Elena

    2008-01-01

    Full text: Plasma spectrometry has many applications in food science, in the analysis of a wide range of samples in the food chain. Food science in the broadest sense can be extended to include soil chemistry, plant uptake and, at the other end of the food chain, studies into the metabolic fate of particular elements or elemental species when the foods are consumed by humans or animals. Inductively coupled plasma mass spectrometry allows multi-element measurements of most elements in the periodic table. Very sensitive trace analysis of samples can be performed with an inductively coupled plasma mass spectrometer with a quadrupole detector using ultrasonic nebulization. High-performance liquid chromatography (HPLC) is an analytical technique for the separation and determination of organic and inorganic solutes in many kinds of samples, especially biological, pharmaceutical, food and environmental ones. The present paper emphasizes that, as a future tendency, HPLC-ICP-MS is often the preferred analytical technique for these applications, due to the simplicity of the coupling between the HPLC and the ICP-MS Varian 820 using ultrasonic nebulization, the potential for on-line separations with high species specificity, and the capability for optimum limits of detection without the necessity of complex hydride generation mechanisms. (authors)

  9. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gruchalla, Kenny [National Renewable Energy Lab. (NREL), Golden, CO (United States); Phillips, Caleb [National Renewable Energy Lab. (NREL), Golden, CO (United States); Purkayastha, Avi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wunder, Nick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-05

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate the energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprint, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as between chosen methods for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage these data to understand the practical limits on predicting key power use metrics at the time of submission.
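A toy version of the schedule-reordering idea can be sketched as a bin-packing problem: given per-job average power draws and a facility power cap, pack jobs into time slots so that no slot exceeds the cap. This is an illustration of the concept only, not the paper's method, and the numbers are hypothetical:

```python
# Greedy first-fit-decreasing packing of jobs into concurrent time
# slots under a facility power cap. Job powers (watts) and the cap
# are hypothetical; the real study works from historical job data.

def pack_jobs(power_draws_w, cap_w):
    slots = []  # each slot is a list of job powers running together
    for p in sorted(power_draws_w, reverse=True):
        for slot in slots:
            if sum(slot) + p <= cap_w:
                slot.append(p)
                break
        else:
            slots.append([p])
    return slots

slots = pack_jobs([300, 200, 150, 120, 80], cap_w=400)
peak = max(sum(s) for s in slots)  # stays at or below the 400 W cap
```

Running the highest-power jobs in separate slots is also the shape of strategy that lets a schedule track a photovoltaic array's midday generation peak.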

  10. Fabrication of graphene foam supported carbon nanotube/polyaniline hybrids for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Yang, Hongxia; Wang, Nan; Xu, Qun; Chen, Zhimin; Ren, Yumei; Razal, Joselito M; Chen, Jun

    2014-01-01

    A large-scale, high-power energy storage system is crucial for addressing the energy problem. The development of high-performance materials is a key issue in realizing grid-scale applications of energy-storage devices. In this work, we describe a simple and scalable method for fabricating hybrids (graphene-pyrrole/carbon nanotube-polyaniline (GPCP)) using graphene foam as the supporting template. Graphene-pyrrole (G-Py) aerogels are prepared via a green hydrothermal route from two-dimensional materials such as graphene sheets, while a carbon nanotube/polyaniline (CNT/PANI) composite dispersion is obtained via an in situ polymerization method. The functional GPCP nanohybrid materials can be assembled by simply dipping the prepared G-Py aerogels into the CNT/PANI dispersion. The morphology of the obtained GPCP was investigated by scanning electron microscopy (SEM) and transmission electron microscopy (TEM), which revealed that the CNT/PANI was uniformly deposited onto the surfaces of the graphene. The as-synthesized GPCP maintains its original three-dimensional hierarchical porous architecture, which favors the diffusion of electrolyte ions into the inner region of the active materials. Such hybrid materials exhibit a significant specific capacitance of up to 350 F g−1, making them promising for large-scale energy-storage device applications. (paper)
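The abstract does not state how the 350 F g−1 figure was measured, but specific capacitance values like it are commonly extracted from a galvanostatic discharge curve as C = I·Δt/(m·ΔV). A sketch under that assumption, with hypothetical measurement values:

```python
# Specific capacitance from a galvanostatic discharge curve
# (a common method, assumed here; not stated in the abstract):
# C = I * dt / (m * dV). All numbers below are hypothetical.

def specific_capacitance_f_per_g(current_a, dt_s, mass_g, dv_v):
    return current_a * dt_s / (mass_g * dv_v)

c = specific_capacitance_f_per_g(
    current_a=0.001,  # 1 mA discharge current
    dt_s=350.0,       # discharge time over the voltage window
    mass_g=0.001,     # 1 mg of active material
    dv_v=1.0,         # 1 V voltage window
)
```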

  11. High performance parallel computing of flows in complex geometries: II. Applications

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F; Poinsot, T

    2009-01-01

    Present regulations in terms of pollutant emissions, noise and economic constraints require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system, not only isolated components. However, these aspects are still not well taken into account by numerical approaches, nor well understood, whatever the design stage considered. The main challenge lies in the computational requirements that such complex systems impose if they are to be simulated on supercomputers. This paper shows how these challenges can be addressed by using parallel computing platforms for distinct elements of more complex systems, as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed on industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation, and deal with turbulent unsteady flows such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented for these complex industrial applications.
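The parallel-efficiency indicators mentioned above usually start from strong-scaling speedup and efficiency, S(p) = T1/Tp and E(p) = S(p)/p. A minimal sketch with hypothetical timings:

```python
# Strong-scaling metrics behind the usual performance indicators:
# speedup S(p) = T1 / Tp and parallel efficiency E(p) = S(p) / p.
# The timings and core count below are hypothetical.

def speedup(t1, tp):
    return t1 / tp

def efficiency(t1, tp, p):
    return speedup(t1, tp) / p

e = efficiency(t1=1000.0, tp=40.0, p=32)  # 25x speedup on 32 cores
```

Part of the "fair criterion" difficulty the abstract notes is that T1 may be unmeasurable for industrial cases that do not fit on one node, forcing efficiency to be quoted relative to the smallest feasible run instead.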

  12. Application of metal foam heat exchangers for a high-performance liquefied natural gas regasification system

    International Nuclear Information System (INIS)

    Kim, Dae Yeon; Sung, Tae Hong; Kim, Kyung Chun

    2016-01-01

    The intermediate fluid vaporizer has wide applications in the regasification of LNG (liquefied natural gas). The heat exchanger performance is one of the main contributors to the thermodynamic and cost effectiveness of the entire LNG regasification system. In this paper, the authors discuss a new concept for a compact heat exchanger with a micro-cellular structured medium to minimize volume and mass and to increase thermal efficiency. Numerical calculations were conducted to design a metal-foam-filled plate heat exchanger and a shell-and-tube heat exchanger using published experimental correlations. The geometry of both heat exchangers was optimized using the conditions of thermolators in LNG regasification systems. The heat transfer and pressure drop performance was predicted to compare the heat exchangers. The results show that the metal-foam plate heat exchanger has the best performance across different channel heights and mass flow rates of fluid. In the optimized configurations, the metal-foam plate heat exchanger has a higher heat transfer rate and lower pressure drop than the shell-and-tube heat exchanger as the mass flow rate of natural gas is increased. - Highlights: • A metal foam heat exchanger is proposed for an LNG regasification system. • A comparison was made with a shell-and-tube heat exchanger. • Heat transfer and pressure drop characteristics were estimated. • The geometry of both heat exchangers is optimized for thermolators. • It can be used as a compact, high-performance thermolator.

  13. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    Science.gov (United States)

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  14. High-Performance Integrated Virtual Environment (HIVE Tools and Applications for Big Data Analysis

    Directory of Open Access Journals (Sweden)

    Vahan Simonyan

    2014-09-01

    Full Text Available The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  15. 3D printed high performance strain sensors for high temperature applications

    Science.gov (United States)

    Rahman, Md Taibur; Moser, Russell; Zbib, Hussein M.; Ramana, C. V.; Panat, Rahul

    2018-01-01

    Realization of high temperature physical measurement sensors, which are needed in many current and emerging technologies, is challenging due to the degradation of their electrical stability by drift currents, material oxidation, thermal strain, and creep. In this paper, for the first time, we demonstrate that 3D printed sensors show a metamaterial-like behavior, resulting in superior performance such as high sensitivity, low thermal strain, and enhanced thermal stability. The sensors were fabricated from silver (Ag) nanoparticles (NPs), using an advanced Aerosol Jet based additive printing method followed by thermal sintering. The sensors were tested under cyclic strain up to a temperature of 500 °C and showed a gauge factor of 3.15 ± 0.086, which is about 57% higher than that of commercially available gauges. The sensor thermal strain was also an order of magnitude lower than that of commercial gauges for operation up to a temperature of 500 °C. An analytical model was developed to account for the enhanced performance of such printed sensors, based on enhanced lateral contraction of the NP films due to their porosity, a behavior akin to cellular metamaterials. The results demonstrate the potential of 3D printing technology as a pathway to realize highly stable and high-performance sensors for high temperature applications.
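The gauge factor quoted above is the standard figure of merit for strain sensors, GF = (ΔR/R0)/ε. A minimal sketch with hypothetical resistance readings:

```python
# Gauge factor of a resistive strain sensor: GF = (dR / R0) / strain.
# The resistance values and strain below are hypothetical; a GF of
# 3.15 matches the printed sensors characterized above.

def gauge_factor(r0_ohm, r_ohm, strain):
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

gf = gauge_factor(r0_ohm=100.0, r_ohm=100.315, strain=0.001)
```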

  16. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Directory of Open Access Journals (Sweden)

    Anwar S. Shatil

    2015-01-01

    Full Text Available With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  17. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing.

    Science.gov (United States)

    Shatil, Anwar S; Younas, Sohail; Pourreza, Hossein; Figley, Chase R

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  18. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Science.gov (United States)

    Shatil, Anwar S.; Younas, Sohail; Pourreza, Hossein; Figley, Chase R.

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications. PMID:27279746

  19. High-Performance MIM Capacitors for a Secondary Power Supply Application

    Directory of Open Access Journals (Sweden)

    Jiliang Mu

    2018-02-01

    Full Text Available Microstructure is important to the development of energy devices with high performance. In this work, a three-dimensional Si-based metal-insulator-metal (MIM) capacitor fabricated by microelectromechanical systems (MEMS) technology is reported. Area enlargement is achieved by forming deep trenches in a silicon substrate using the deep reactive ion etching method. The results indicate that an area of 2.45 × 103 mm2 can be realized in the deep trench structure with a high aspect ratio of 30:1. Subsequently, a dielectric Al2O3 layer and electrode W/TiN layers are deposited by atomic layer deposition. The obtained capacitor has superior performance, such as a high breakdown voltage (34.1 V), a moderate energy density (≥1.23 mJ/cm2 per unit planar area), a high breakdown electric field (6.1 ± 0.1 MV/cm), a low leakage current (10−7 A/cm2 at 22.5 V), and a low quadratic voltage coefficient of capacitance (VCC) (≤63.1 ppm/V2). In addition, the device’s performance has been theoretically examined. The results show that the high energy supply and small leakage current can be attributed to the Poole–Frenkel emission in the high-field region and the trap-assisted tunneling in the low-field region. The reported capacitor has potential application as a secondary power supply.
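A back-of-the-envelope sketch of how the reported figures hang together: the dielectric thickness implied by the breakdown voltage and field, the area gain of a 30:1 cylindrical trench over its planar footprint, and the resulting stored energy per unit planar area. The relative permittivity (~9 for ALD Al2O3) and the full-trench-packing geometry are assumptions, not values from the paper.

```python
# Back-of-the-envelope check of the reported trench-MIM figures.
# ASSUMPTIONS: relative permittivity of ALD Al2O3 ~ 9, cylindrical
# trenches, and full trench packing (real layouts pack less densely).
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 9.0        # assumed for ALD Al2O3

V_BD = 34.1        # breakdown voltage, V (reported)
E_BD = 6.1e8       # breakdown field, V/m (6.1 MV/cm, reported)
ASPECT = 30.0      # trench depth/width ratio (reported)

# Dielectric thickness implied by the breakdown figures (~56 nm)
t_diel = V_BD / E_BD

# Area gain of a cylindrical trench over its planar footprint:
# sidewall (pi*w*d) plus floor (pi*w^2/4) over footprint (pi*w^2/4)
area_gain = 1.0 + 4.0 * ASPECT   # 121 at 30:1

# Capacitance and stored energy per unit *planar* area at 22.5 V
c_planar = EPS0 * EPS_R * area_gain / t_diel   # F/m^2
e_planar = 0.5 * c_planar * 22.5 ** 2          # J/m^2
print(e_planar * 0.1)  # in mJ/cm^2
```

Under these assumptions the estimate lands within a small factor of the reported ≥1.23 mJ/cm2 per unit planar area; the gap is plausibly the real layout's trench fill factor.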

  20. Perspectives for high-performance permanent magnets: applications, coercivity, and new materials

    Science.gov (United States)

    Hirosawa, Satoshi; Nishino, Masamichi; Miyashita, Seiji

    2017-03-01

    High-performance permanent magnets are indispensable in the production of high-efficiency motors and generators and, ultimately, for sustaining the green earth. The central issue of modern permanent magnetism is to realize high coercivity near and above room temperature in marginally hard magnetic materials, without relying on critical elements such as heavy rare earths, by means of nanostructure engineering. Recent investigations based on advanced nanostructure analysis and large-scale first-principles calculations have led to significant paradigm shifts in the understanding of the coercivity mechanism in Nd-Fe-B permanent magnets, including the discovery of the ferromagnetism of the thin (2 nm) intergranular phase surrounding the Nd2Fe14B grains, the occurrence of negative (in-plane) magnetocrystalline anisotropy of Nd ions and some Fe atoms at the interface, which degrades coercivity, and the visualization of the stochastic behavior of magnetization during the magnetization reversal process at high temperatures. A major change may also occur in motor topologies, currently dominated by the flux-weakening interior permanent magnet motor, toward variable-flux permanent magnet types in some applications, opening up a niche for new permanent magnet materials. Keynote talk at the 8th International Workshop on Advanced Materials Science and Nanotechnology (IWAMSN2016), 8-12 November 2016, Ha Long City, Vietnam.

  1. Computer application in scientific investigations

    International Nuclear Information System (INIS)

    Govorun, N.N.

    1981-01-01

    A short review of computer development, application and software at JINR over the last 15 years is presented. The main trends of studies on computer application in experimental and theoretical investigations are enumerated: software for computers and computer systems, software for data processing systems, the design of automatic and automated systems for measuring track detector images, the development of techniques for carrying out experiments on-line with computers, and packages of applied computer codes and specialized systems. The on-line technique has been successfully used in investigations of nuclear processes at relativistic energies. A new trend is the development of television methods of data output and its computer recording [ru]

  2. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
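The core operation BEAGLE accelerates is the phylogenetic likelihood calculation. As a minimal illustration of what that computation is (plain Python, not the BEAGLE API), here is Felsenstein pruning for a single alignment site on a two-tip tree under the Jukes-Cantor model with uniform root frequencies:

```python
import math

def jc69(t):
    """Jukes-Cantor transition matrix for branch length t
    (expected substitutions per site): P[i][j] = P(end in j | start in i)."""
    same = 0.25 + 0.75 * math.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * math.exp(-4.0 * t / 3.0)
    return [[same if i == j else diff for j in range(4)] for i in range(4)]

def site_likelihood(tip1, tip2, t1, t2):
    """Likelihood of one site on a two-tip tree rooted at the common
    ancestor (Felsenstein pruning with uniform root frequencies)."""
    p1, p2 = jc69(t1), jc69(t2)
    return sum(0.25 * p1[s][tip1] * p2[s][tip2] for s in range(4))

# Bases A, C, G, T encoded as 0..3
same_site = site_likelihood(0, 0, 0.1, 0.1)  # identical tip states
diff_site = site_likelihood(0, 2, 0.1, 0.1)  # differing tip states
```

Real inference repeats this pruning over thousands of sites, taxa and proposed trees, which is exactly the data-parallel workload the library offloads to GPUs and SIMD units.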

  3. Characteristics and applications of high-performance fiber reinforced asphalt concrete

    Science.gov (United States)

    Park, Philip

    Steel fiber reinforced asphalt concrete (SFRAC) is suggested in this research as a multifunctional high performance material that can potentially lead to a breakthrough in developing a sustainable transportation system. The innovative use of steel fibers in asphalt concrete is expected to improve mechanical performance and electrical conductivity of asphalt concrete that is used for paving 94% of U. S. roadways. In an effort to understand the fiber reinforcing mechanisms in SFRAC, the interaction between a single straight steel fiber and the surrounding asphalt matrix is investigated through single fiber pull-out tests and detailed numerical simulations. It is shown that pull-out failure modes can be classified into three types: matrix, interface, and mixed failure modes and that there is a critical shear stress, independent of temperature and loading rate, beyond which interfacial debonding will occur. The reinforcing effects of SFRAC with various fiber sizes and shapes are investigated through indirect tension tests at low temperature. Compared to unreinforced specimens, fiber reinforced specimens exhibit up to 62.5% increase in indirect tensile strength and 895% improvements in toughness. The documented improvements are the highest attributed to fiber reinforcement in asphalt concrete to date. The use of steel fibers and other conductive additives provides an opportunity to make asphalt pavement electrically conductive, which opens up the possibility for multifunctional applications. Various asphalt mixtures and mastics are tested and the results indicate that the electrical resistivity of asphaltic materials can be manipulated over a wide range by replacing a part of traditional fillers with a specific type of graphite powder. Another important achievement of this study is development and validation of a three dimensional nonlinear viscoelastic constitutive model that is capable of simulating both linear and nonlinear viscoelasticity of asphaltic materials. 

  4. InfoMall: An Innovative Strategy for High-Performance Computing and Communications Applications Development.

    Science.gov (United States)

    Mills, Kim; Fox, Geoffrey

    1994-01-01

    Describes the InfoMall, a program led by the Northeast Parallel Architectures Center (NPAC) at Syracuse University (New York). The InfoMall features a partnership of approximately 24 organizations offering linked programs in High Performance Computing and Communications (HPCC) technology integration, software development, marketing, education and…

  5. High performance liquid-level sensor based on mPOFBG for aircraft applications

    DEFF Research Database (Denmark)

    Marques, C. A. F.; Pospori, A.; Saez-Rodriguez, D.

    2015-01-01

    A high performance liquid-level sensor based on microstructured polymer optical fiber Bragg grating (mPOFBG) array sensors is reported in detail. The sensor sensitivity is found to be 98 pm/cm of liquid, enhanced by more than a factor of 9 compared to a reported silica fiber-based sensor....
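Converting a measured Bragg-wavelength shift into a liquid level with the reported 98 pm/cm sensitivity is a one-line calculation. The sketch below assumes a zeroed baseline and ignores temperature cross-sensitivity (both assumptions; the paper's sensor array is designed to handle such effects):

```python
SENSITIVITY_PM_PER_CM = 98.0  # reported mPOFBG array sensitivity

def level_cm(shift_pm):
    """Liquid level inferred from a Bragg-wavelength shift (pm).
    Assumes a zeroed baseline; temperature cross-sensitivity ignored."""
    return shift_pm / SENSITIVITY_PM_PER_CM

print(level_cm(490.0))  # a 490 pm shift corresponds to 5.0 cm of liquid
```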

  6. Production and application of cation/anion exchange membranes of high performance

    International Nuclear Information System (INIS)

    Xu Zhili; Tan Chunhong; Yang Xiangmin

    1995-01-01

    The third affiliated factory of our university has been established for batch production of high-performance cation/anion exchange membranes, under the trade marks HF-1 and HF-2. The membrane products have been applied with great success in various fields, including industry and research institutions

  7. Model My Watershed: A high-performance cloud application for public engagement, watershed modeling and conservation decision support

    Science.gov (United States)

    Aufdenkampe, A. K.; Tarboton, D. G.; Horsburgh, J. S.; Mayorga, E.; McFarland, M.; Robbins, A.; Haag, S.; Shokoufandeh, A.; Evans, B. M.; Arscott, D. B.

    2017-12-01

    The Model My Watershed Web app (https://app.wikiwatershed.org/) and the BiG-CZ Data Portal (http://portal.bigcz.org/) are web applications that share a common codebase and a common goal: to deliver high-performance discovery, visualization and analysis of geospatial data through an intuitive user interface in the web browser. Model My Watershed (MMW) was designed as a decision support system for watershed conservation implementation. The BiG-CZ Data Portal was designed to provide context and background data for research sites. Users begin by creating an Area of Interest via an automated watershed delineation tool, a free-draw tool, selection of a predefined area such as a county or USGS Hydrologic Unit (HUC), or upload of a custom polygon. Both web apps visualize and provide summary statistics of land use, soil groups, streams, climate and other geospatial information. MMW then allows users to run a watershed model to simulate different scenarios of human impacts on stormwater runoff and water quality. The BiG-CZ Data Portal allows users to search for scientific and monitoring data within the Area of Interest, and also serves as a prototype for the upcoming Monitor My Watershed web app. Both systems integrate with CUAHSI cyberinfrastructure, visualizing observational data from the CUAHSI Water Data Center and storing user data via CUAHSI HydroShare. Both systems also integrate with the new EnviroDIY Water Quality Data Portal (http://data.envirodiy.org/), a system for crowd-sourcing environmental monitoring data using open-source sensor stations (http://envirodiy.org/mayfly/) and based on the Observations Data Model v2.
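The land-use summary step both apps perform over an Area of Interest reduces to tallying class codes inside a mask. A minimal pure-Python sketch on a categorical grid (the class codes and values are hypothetical; the real apps summarize national land-cover rasters server-side):

```python
from collections import Counter

def landuse_summary(grid, aoi_mask):
    """Percent cover per land-use class inside an Area of Interest.
    `grid` is a 2-D list of class codes; `aoi_mask` is a same-shaped
    2-D list of booleans marking cells inside the AOI."""
    counts = Counter(
        cell
        for row, mrow in zip(grid, aoi_mask)
        for cell, inside in zip(row, mrow)
        if inside
    )
    total = sum(counts.values())
    return {cls: 100.0 * n / total for cls, n in counts.items()}

# Hypothetical 3x3 tile: 1 = forest, 2 = urban
grid = [[1, 1, 2], [1, 2, 2], [1, 1, 1]]
mask = [[True, True, True], [True, True, True], [False, False, False]]
print(landuse_summary(grid, mask))  # {1: 50.0, 2: 50.0}
```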

  8. Re-Engineering a High Performance Electrical Series Elastic Actuator for Low-Cost Industrial Applications

    Directory of Open Access Journals (Sweden)

    Kenan Isik

    2017-01-01

    Full Text Available Cost is an important consideration when transferring a technology from research to industrial and educational use. In this paper, we introduce the design of an industrial grade series elastic actuator (SEA) performed via re-engineering a research grade version of it. Cost-constrained design requires careful consideration of the key performance parameters for an optimal performance-to-cost component selection. To optimize the performance of the new design, we started by matching the capabilities of a high-performance SEA while cutting down its production cost significantly. Our posit was that performing a re-engineering design process on an existing high-end device will significantly reduce the cost without compromising the performance drastically. As a case study of design for manufacturability, we selected the University of Texas Series Elastic Actuator (UT-SEA), a high-performance SEA, for its high power density, compact design, high efficiency and high speed properties. We partnered with an industrial corporation in China to research the best pricing options and to exploit the retail and production facilities provided by the Shenzhen region. We succeeded in producing a low-cost industrial grade actuator at one-third of the cost of the original device by re-engineering the UT-SEA with commercial off-the-shelf components and reducing the number of custom-made parts. Subsequently, we conducted performance tests to demonstrate that the re-engineered product achieves the same high-performance specifications found in the original device. With this paper, we aim to raise awareness in the robotics community of the possibility of low-cost realization of low-volume, high-performance, industrial grade research and education hardware.

  9. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.; Roller, Sabine P.; Seitsonen, Ari Paavo; Valcke, Sophie; Keyes, David E.; Sawley, Marie Christine; Schulthess, Thomas C.; Shalf, John M.

    2013-01-01

    The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements.

  10. Application of Ultra High Performance Fiber Reinforced Concrete – The Malaysia Perspective

    OpenAIRE

    Voo - Yen Lei; Behzad Nematollahi; Abu Bakar Mohamed Said; Balamurugan A Gopal; Tet Shun Yee

    2012-01-01

    One of the most significant breakthroughs in concrete technology at the end of the 20th century was the development of ultra-high performance fiber reinforced concrete (UHPFRC), with compressive strength and flexural strength beyond 160 MPa and 30 MPa, respectively; remarkable improvement in workability; durability resembling that of natural rocks; and ductility and toughness comparable to steel. While over the last two decades a tremendous amount of research work has been undertaken by academics and e...

  11. Bringing high-performance computing to the biologist's workbench: approaches, applications, and challenges

    International Nuclear Information System (INIS)

    Oehmen, C S; Cannon, W R

    2008-01-01

    Data-intensive and high-performance computing are poised to significantly impact the future of biological research, which is increasingly driven by the prevalence of high-throughput experimental methodologies for genome sequencing, transcriptomics, proteomics, and other areas. Large centers (such as NIH's National Center for Biotechnology Information, The Institute for Genomic Research, and the DOE's Joint Genome Institute) have made extensive use of multiprocessor architectures to deal with some of the challenges of processing, storing and curating exponentially growing genomic and proteomic datasets, thus enabling users to rapidly access a growing public data source, as well as use analysis tools transparently on high-performance computing resources. Applying this computational power to single-investigator analysis, however, often relies on users to provide their own computational resources, forcing them to endure the learning curve of porting, building, and running software on multiprocessor architectures. Solving the next generation of large-scale biology challenges using multiprocessor machines, from small clusters to emerging petascale machines, can most practically be realized if this learning curve can be minimized through a combination of workflow management, data management and resource allocation, as well as intuitive interfaces and compatibility with existing common data formats

  12. Ultra-sensitive high performance liquid chromatography-laser-induced fluorescence based proteomics for clinical applications.

    Science.gov (United States)

    Patil, Ajeetkumar; Bhat, Sujatha; Pai, Keerthilatha M; Rai, Lavanya; Kartha, V B; Chidangil, Santhosh

    2015-09-08

    An ultra-sensitive high performance liquid chromatography-laser induced fluorescence (HPLC-LIF) based technique has been developed by our group at Manipal for screening, early detection, and staging of various cancers, using protein profiling of clinical samples such as body fluids, cellular specimens, and biopsy tissue. More than 300 protein profiles of different clinical samples (serum, saliva, cellular samples and tissue homogenates) from volunteers (normal, and with different pre-malignant/malignant conditions) were recorded using this set-up. The protein profiles were analyzed using principal component analysis (PCA) to achieve objective detection and classification of malignant, premalignant and healthy conditions with high sensitivity and specificity. In recent years, proteomics techniques have advanced tremendously in the life and medical sciences for the detection and identification of proteins in body fluids, tissue homogenates and cellular samples, helping to elucidate the biochemical mechanisms leading to different diseases; such methods include high performance liquid chromatography, 2D-gel electrophoresis, MALDI-TOF-MS, SELDI-TOF-MS, CE-MS and LC-MS. The HPLC-LIF protein profiling combined with PCA, as a routine method for screening, diagnosis, and staging of cervical cancer and oral cancer, is discussed in this paper.

  13. Development of high-performance phased-array UT system 'DYNARAY' and its application examples

    International Nuclear Information System (INIS)

    Ehara, Eiji

    2011-01-01

    This article outlines the development of the high-performance phased-array (PA) UT system DYNARAY, which provides up to 256 active phased-array channels and a maximum of 4096 focal laws, reducing inspection time. Application examples include in-service inspection of reactor pressure vessel welded joints using a PA-UT or eddy-current probe module, inspection of the seal welds of dry storage containers using a PA-UT scanner, crack detection in generator end rings using a PA-UT probe, and UT inspection of cast austenitic stainless steel using a 500 kHz probe. Advanced data acquisition and analysis functions for the PA-UT system have also been developed. (T. Tanaka)

  14. [Determination of oxaprozin in human plasma with high performance liquid chromatography (HPLC) and its application].

    Science.gov (United States)

    Mao, Mian; Wang, Ling; Jiang, Xuehua; Yang, Lin

    2013-06-01

    The present research was aimed to develop a high performance liquid chromatography (HPLC) method to determine oxaprozin in plasma and to evaluate the bioavailability of two oxaprozin enteric coated tablets. A C18 column was used to separate the plasma after protein precipitation, and the mobile phase was methanol-12.5 mmol/L ammonium acetate buffer solution (pH = 3.0) (71:29). The calibration curve was linear in the concentration range of 0.50-70.56 microg/mL, and the intra- and inter-day RSDs were less than 12.33% and 10.42%, respectively. A single dose of 0.4 g of the reference or test preparation of oxaprozin enteric coated tablets was administered to 20 healthy volunteers according to a randomized crossover study. AUC(0-264 h) were (4917.44 +/- 629.57) microg.h/mL and (4604.30 +/- 737.83) microg.h/mL, respectively; Cmax were (52.34 +/- 7.68) microg/mL and (48.66 +/- 4.87) microg/mL, respectively; Tmax were (18.70 +/- 2.27) h and (19.30 +/- 1.63) h, respectively. The relative bioavailability of the test preparation was 94.0% +/- 13.7%. The method is simple, rapid and selective for oxaprozin determination. There is no significant difference in the main pharmacokinetic parameters between the test and reference formulations, and the two formulations are bioequivalent.
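The pharmacokinetic parameters above rest on the area under the concentration-time curve. A minimal sketch of the linear trapezoidal AUC and the relative-bioavailability ratio (the profile values are illustrative, not the study data; the study's 94.0% figure averages individual subject ratios, so it differs slightly from the ratio of mean AUCs computed here):

```python
def auc_trapezoid(times, concs):
    """Linear trapezoidal area under a concentration-time curve."""
    return sum(0.5 * (concs[i] + concs[i + 1]) * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# Hypothetical profile (h, microg/mL) -- not the study data
t = [0.0, 1.0, 2.0, 3.0]
c = [0.0, 10.0, 10.0, 0.0]
auc = auc_trapezoid(t, c)          # 20.0 microg.h/mL

# Relative bioavailability from the reported mean AUC(0-264 h) values
f_rel = 100.0 * 4604.30 / 4917.44  # ~93.6%, vs the reported 94.0% +/- 13.7%
```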

  15. Application of High Performance Computing to Earthquake Hazard and Disaster Estimation in Urban Area

    Directory of Open Access Journals (Sweden)

    Muneo Hori

    2018-02-01

    Full Text Available Integrated earthquake simulation (IES) is a seamless simulation analyzing all processes of earthquake hazard and disaster. There are two difficulties in carrying out IES, namely, the requirement of large-scale computation and the requirement of numerous analysis models for structures in an urban area; they are solved by taking advantage of high performance computing (HPC) and by developing a system of automated model construction. HPC is a key element in developing IES, as IES needs to analyze wave propagation and amplification processes in an underground structure; a high-fidelity model of the underground structure exceeds 100 billion degrees of freedom. Examples of IES for the Tokyo Metropolis are presented; the numerical computation is made using the K computer, the supercomputer of Japan. The estimation of earthquake hazard and disaster for a given earthquake scenario is made by the ground motion simulation and the urban-area seismic response simulation, respectively, for a target area of 10,000 m × 10,000 m.

  16. Application of dynamic compaction technology for high performance and precision powder products

    International Nuclear Information System (INIS)

    Lee, Chang Kyu; Lee, Jung Gu; Lee, Min Ku; Uhm, Young Rang; Park, Jin Ju; Lee, Gyeong Ja; Hong, Soon Jik

    2011-06-01

    Automation technology for magnetic pulsed compaction (MPC) has been developed for the mass production of high-performance powder products by the dynamic compaction method. The pulse power equipment in the MPC system has been modified for improved lifetime and productivity, so the modified equipment can produce high-density compacts at a rate of 10 times/min with a semipermanent lifetime. Using this modified pulse power equipment, two types of automated MPC apparatus were constructed, operated by mechanical and hydraulic driving systems, respectively. Through repeated compaction operations at a rate of 5 times/min, the durability and productivity of these automated apparatuses have been proven suitable for mass production. In addition, the lifetime of the mold and punch for MPC has been improved by optimizing their design and materials as well as by employing a new lubrication system. Applying these automated MPC apparatuses, detailed mass production technologies have been developed for several powder products, such as diamond drilling segments, ceramic targets for optical coating, silver coins for water disinfection and small powder products for automobiles. The developed powder products showed improved performance compared to commercial ones, so they are expected to be mass-produced industrially before long

  17. Fabrications and application of single crystalline GaN for high-performance deep UV photodetectors

    Energy Technology Data Exchange (ETDEWEB)

    Velazquez, R.; Rivera, M.; Feng, P., E-mail: p.feng@upr.edu [Department of Physics, College of Natural Sciences, University of Puerto Rico, San Juan, 00936-8377, PR/USA (Puerto Rico); Aldalbahi, A. [Department of Chemistry, College of Science, King Saud University, Riyadh 11451 (Saudi Arabia)

    2016-08-15

    High-quality single crystalline gallium nitride (GaN) semiconductor films have been synthesized using the molecular beam epitaxy (MBE) technique for the development of high-performance deep ultraviolet (UV) photodetectors. The thickness of the films was estimated using a surface profile meter and a scanning electron microscope. The electronic states and elemental composition of the films were obtained using Raman scattering spectroscopy. The orientation, crystal structure and phase purity of the films were examined using a Siemens X-ray diffractometer. The surface microstructure was studied using high-resolution scanning electron microscopy (SEM). Metal pairs of Al-Al, Al-Cu or Cu-Cu were used for the interdigital electrodes on the GaN film in order to examine the Schottky properties of the GaN-based photodetector. Characterization of the fabricated prototype covered its stability, responsivity, and response and recovery times. Typical time-dependent photoresponses were obtained by switching different UV light sources on and off five times, for 240 seconds each, at a bias of 2 V. The detector appears to be highly sensitive to various UV wavelengths, with a very stable baseline and good repeatability. The obtained photoresponsivity was up to 354 mA/W at a bias of 2 V. A higher photoresponsivity could be obtained at a higher bias, but this would unavoidably result in a higher dark current. The thermal effect on the fabricated GaN-based prototype is also discussed.

  18. Sewage sludge ash (SSA) in high performance concrete: characterization and application

    Directory of Open Access Journals (Sweden)

    C. M. A. Fontes

    Full Text Available ABSTRACT Sewage sludge originating from wastewater treatment has become an environmental issue for three main reasons: it contains pathogens, heavy metals and organic compounds that are harmful to environmental and human health; high volumes are generated daily; and there is a shortage of landfill sites for proper disposal. This research deals with the viability of using sewage sludge, after calcination, as a mineral admixture in the production of concrete. High-performance concretes were produced with replacement of 5% and 10% by weight of Portland cement with sewage sludge ash (SSA). The influence of this ash was analyzed through physical and mechanical tests. The analysis showed that the mixtures containing SSA have lower compressive strength values than the reference. The absorptivity, porosity and accelerated chloride-ion penetration results show that the mixtures containing the ash had reduced values compared to the reference, indicating that the SSA refined the pore structure, which was confirmed by the mercury intrusion porosimetry test.

  19. Application of rare-earth magnets in high-performance electric machines

    International Nuclear Information System (INIS)

    Ramsden, V.S.

    1998-01-01

    Some state of the art developments of high-performance machines using rare-earth magnets are reviewed with particular examples drawn from a number of novel machine designs developed jointly by the Faculty of Engineering, University of Technology, Sydney (UTS) and CSIRO Telecommunications and Industrial Physics. These designs include an 1800 W, 1060 rev/min, 98% efficient solar car in-wheel motor using a Halbach magnet array, axial flux, and ironless winding; a 1200 W, 3000 rev/min, 91% efficient solar-powered, water-filled, submersible, bore-hole pump motor using a surface magnet rotor; a 500 W, 10000 rev/min, 87% efficient, oil-filled, oil-well tractor motor using a 2-pole cylindrical magnet rotor and slotless winding; a 75 kW, 48000 rev/min, 97% efficient, high-speed compressor drive with 2-pole cylindrical magnet rotor, slotted stator, and refrigerant cooling; and a 20 kW, 211 rev/min, 87% efficient, direct-drive generator for wind turbines with very low starting torque using an outer rotor with surface magnets and a slotted stator. (orig.)

  20. MaMR: High-performance MapReduce programming model for material cloud applications

    Science.gov (United States)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With increasing data sizes in materials science, existing programming models no longer satisfy application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related datasets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently on a hybrid shared-memory BSP model. An optimized data-sharing strategy to supply shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework achieve effective performance improvements compared to previous work.
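The idea of several different Map and Reduce functions running concurrently over shared data, followed by a merge phase, can be sketched in plain Python. This illustrates the concept only, not the authors' MaMR framework; the data and job functions are made up:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def run_mapreduce(records, map_fn, reduce_fn):
    """Generic map -> shuffle -> reduce over an in-memory record list."""
    groups = defaultdict(list)
    for rec in records:
        for key, value in map_fn(rec):
            groups[key].append(value)
    return {key: reduce_fn(values) for key, values in groups.items()}

# Shared input: (material, property, value) triples -- illustrative only
data = [("Fe", "density", 7.87), ("Al", "density", 2.70),
        ("Fe", "density", 7.86), ("Al", "melting", 660.0)]

def count_job(rec):      # Map 1: one count per record
    return [(rec[0], 1)]

def density_job(rec):    # Map 2: emit density measurements only
    return [(rec[0], rec[2])] if rec[1] == "density" else []

with ThreadPoolExecutor() as pool:  # the two jobs run concurrently
    f1 = pool.submit(run_mapreduce, data, count_job, sum)
    f2 = pool.submit(run_mapreduce, data, density_job,
                     lambda vs: sum(vs) / len(vs))
    counts, means = f1.result(), f2.result()

# Merge phase: combine the two reduce outputs per material
merged = {m: {"n": counts[m], "mean_density": means.get(m)}
          for m in counts}
print(merged)
```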

  1. Feasibility analysis of ultra high performance concrete for prestressed concrete bridge applications.

    Science.gov (United States)

    2010-07-01

    UHPC is an emerging material technology in which concrete develops very high compressive strengths and exhibits improved tensile strength and toughness. A comprehensive literature and historical application review was completed to determine the ...

  2. Direct synthesis of highly porous interconnected carbon nanosheets and their application as high-performance supercapacitors.

    Science.gov (United States)

    Sevilla, Marta; Fuertes, Antonio B

    2014-05-27

    An easy, one-step procedure is proposed for the synthesis of highly porous carbon nanosheets with an excellent performance as supercapacitor electrodes. The procedure is based on the carbonization of an organic salt, i.e., potassium citrate, at a temperature in the 750-900 °C range. In this way, carbon particles made up of interconnected carbon nanosheets with a thickness of <80 nm are obtained. The porosity of the carbon nanosheets consists essentially of micropores distributed in two pore systems of 0.7-0.85 nm and 0.95-1.6 nm. Importantly, the micropore sizes of both systems can be enlarged by simply increasing the carbonization temperature. Furthermore, the carbon nanosheets possess BET surface areas in the ∼1400-2200 m(2) g(-1) range and electronic conductivities in the range of 1.7-7.4 S cm(-1) (measured at 7.1 MPa). These materials behave as high-performance supercapacitor electrodes in organic electrolyte and exhibit an excellent power handling ability and a superb robustness over long-term cycling. Excellent results were obtained with the supercapacitor fabricated from the material synthesized at 850 °C in terms of both gravimetric and volumetric energy and power densities. This device was able to deliver ∼13 Wh kg(-1) (5.2 Wh L(-1)) at an extremely high power density of 78 kW kg(-1) (31 kW L(-1)) and ∼30 Wh kg(-1) (12 Wh L(-1)) at a power density of 13 kW kg(-1) (5.2 kW L(-1)) (voltage range of 2.7 V).
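The reported gravimetric energy figures follow from E = 1/2 C V^2 plus a units conversion from J/g to Wh/kg. A sketch with an assumed effective cell-level specific capacitance (the ~30 F/g value is illustrative, not taken from the paper; 2.7 V is the reported organic-electrolyte window):

```python
def cell_energy_wh_per_kg(c_f_per_g, v_max):
    """E = 1/2 C V^2, converted from J/g to Wh/kg (1 J/g = 1000/3600 Wh/kg)."""
    return 0.5 * c_f_per_g * v_max ** 2 * 1000.0 / 3600.0

# ~30 F/g is an assumed effective cell-level capacitance, not a value
# from the paper; 2.7 V is the reported voltage window.
print(cell_energy_wh_per_kg(30.0, 2.7))  # ~30 Wh/kg, the order reported
```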

  3. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming commonplace, and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: (1) critical information can be provided faster, and (2) more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index, which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs, and (2) a CUDA-enabled GPU workstation. The reference platform is a dual-CPU quad-core workstation, and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring the various hardware solutions and the related software coding effort are presented.
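The texture measure underlying PANTEX can be sketched in a few lines. The following is a minimal, generic illustration of GLCM contrast with a minimum taken over displacement vectors for rotation invariance; the published index uses a specific anisotropic displacement family and a moving window, both omitted here, and the function name and displacement set below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pantex(img, levels=16):
    """Minimal sketch of a PANTEX-style built-up index: the minimum GLCM
    contrast over several displacement vectors (rotation invariance).
    Displacement set is illustrative, not the published family."""
    q = np.floor(img.astype(float) / (img.max() + 1.0) * levels).astype(int)

    def contrast(dx, dy):
        h, w = q.shape
        y0, y1 = max(0, -dy), h - max(0, dy)
        x0, x1 = max(0, -dx), w - max(0, dx)
        a = q[y0:y1, x0:x1]                       # pixel (y, x)
        b = q[y0 + dy:y1 + dy, x0 + dx:x1 + dx]   # co-occurring pixel
        # GLCM contrast sum_ij P(i,j)*(i-j)^2 equals the mean squared
        # gray-level difference over all co-occurring pixel pairs.
        return float(np.mean((a - b) ** 2))

    displacements = [(1, 0), (0, 1), (1, 1), (1, -1), (2, 0), (0, 2)]
    return min(contrast(dx, dy) for dx, dy in displacements)

rng = np.random.default_rng(0)
smooth = np.zeros((64, 64))                           # featureless surface
rough = rng.integers(0, 256, (64, 64)).astype(float)  # high-texture patch
scores = (pantex(smooth), pantex(rough))
```

In the full workflow this score would be evaluated in a sliding window over the image, which is exactly the embarrassingly parallel step that maps well onto the GPGPU and blade architectures compared in the paper.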

  4. High performance diamond-like carbon layers obtained by pulsed laser deposition for conductive electrode applications

    Science.gov (United States)

    Stock, F.; Antoni, F.; Le Normand, F.; Muller, D.; Abdesselam, M.; Boubiche, N.; Komissarov, I.

    2017-09-01

One of the biggest challenges facing flat panel display technology and various optoelectronic and photovoltaic devices is to find an alternative to transparent conducting oxides such as ITO. In this new approach, the objective is to grow highly conductive thin-layer graphene (TLG) on top of high-performance diamond-like carbon (DLC) layers. DLC films prepared by pulsed laser deposition (PLD) have attracted special interest due to a unique combination of properties close to those of monocrystalline diamond: transparency, hardness, chemical inertness, very low roughness, and, being hydrogen-free, high thermal stability up to 1000 K. In future work, we plan to explore the synthesis of conductive TLG on top of insulating DLC thin films. The feasibility and performance of the multi-layered structure will be explored in detail in the near future to develop an alternative to ITO with comparable performance (conductivity and transparency). To select the best DLC candidate for this purpose, we focus this work on the physicochemical properties of DLC thin films deposited by PLD from a pure graphite target at two wavelengths (193 and 248 nm) and various laser fluences. The surface graphenization process, as well as the efficiency of the complete structure (TLG/DLC), will clearly be related to the DLC properties, especially the initial sp3/sp2 hybridization ratio. Thus, an exhaustive description of the physicochemical properties of the DLC layers is a fundamental step in the search for performance comparable to ITO.

  5. A high performance data parallel tensor contraction framework: Application to coupled electro-mechanics

    Science.gov (United States)

    Poya, Roman; Gil, Antonio J.; Ortigosa, Rogelio

    2017-07-01

    The paper presents aspects of implementation of a new high performance tensor contraction framework for the numerical analysis of coupled and multi-physics problems on streaming architectures. In addition to explicit SIMD instructions and smart expression templates, the framework introduces domain specific constructs for the tensor cross product and its associated algebra recently rediscovered by Bonet et al. (2015, 2016) in the context of solid mechanics. The two key ingredients of the presented expression template engine are as follows. First, the capability to mathematically transform complex chains of operations to simpler equivalent expressions, while potentially avoiding routes with higher levels of computational complexity and, second, to perform a compile time depth-first or breadth-first search to find the optimal contraction indices of a large tensor network in order to minimise the number of floating point operations. For optimisations of tensor contraction such as loop transformation, loop fusion and data locality optimisations, the framework relies heavily on compile time technologies rather than source-to-source translation or JIT techniques. Every aspect of the framework is examined through relevant performance benchmarks, including the impact of data parallelism on the performance of isomorphic and nonisomorphic tensor products, the FLOP and memory I/O optimality in the evaluation of tensor networks, the compilation cost and memory footprint of the framework and the performance of tensor cross product kernels. The framework is then applied to finite element analysis of coupled electro-mechanical problems to assess the speed-ups achieved in kernel-based numerical integration of complex electroelastic energy functionals. In this context, domain-aware expression templates combined with SIMD instructions are shown to provide a significant speed-up over the classical low-level style programming techniques.
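The compile-time search for optimal contraction indices described above has a well-known runtime analogue in NumPy (not the authors' framework): `np.einsum_path` searches over pairwise contraction orders of a tensor network to minimise floating point operations. A small sketch on the classic matrix-chain-times-vector case:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((64, 512))
B = rng.random((512, 512))
v = rng.random(512)

# (A B) v costs ~64*512^2 multiply-adds, while A (B v) costs ~512^2 + 64*512.
# 'optimal' asks NumPy to search contraction orders and pick the cheaper one.
path, info = np.einsum_path('ij,jk,k->i', A, B, v, optimize='optimal')

naive = A @ (B @ v)                                     # reference result
opt = np.einsum('ij,jk,k->i', A, B, v, optimize=path)   # same values
```

`info` is a human-readable report of the chosen order and its FLOP estimate; this is the same kind of search over contraction trees that the framework in the paper performs at compile time via expression templates.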

  6. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    Science.gov (United States)

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and, upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphics Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction has been carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other through contact, friction, and cohesive forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time.
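LCP-based contact formulations of the kind mentioned above are commonly solved with a projected Gauss-Seidel iteration. The following toy sketch is not Chrono::Engine code; the two-contact matrix and all names are invented for illustration of the method on a frictionless normal-contact problem.

```python
import numpy as np

def pgs_lcp(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find z >= 0 such that
    w = M z + q >= 0 and z . w = 0 (complementarity)."""
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            # Residual of row i excluding the diagonal term, then
            # solve for z[i] and clamp at zero (the projection step).
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

# Toy two-contact problem: M plays the role of the contact-space mass
# matrix (SPD), q the external term; q[0] < 0 means contact 0 is active.
M = np.array([[2.0, 0.5], [0.5, 1.5]])
q = np.array([-1.0, 0.3])
z = pgs_lcp(M, q)      # contact impulses
w = M @ z + q          # post-impulse normal velocities
```

Per-row updates like this are what GPU solvers parallelise (in Jacobi-like or graph-colored variants) to handle millions of contacts per step.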

  7. Applications of artificial intelligence to scientific research

    Science.gov (United States)

    Prince, Mary Ellen

    1986-01-01

Artificial intelligence (AI) is a growing field which is just beginning to make an impact on disciplines other than computer science. While a number of military and commercial applications have been undertaken in recent years, few attempts have been made to apply AI techniques to basic scientific research. There is no inherent reason for the discrepancy. The characteristics of the problem, rather than its domain, determine whether or not it is suitable for an AI approach. Expert systems, intelligent tutoring systems, and learning programs are examples of theoretical topics which can be applied to certain areas of scientific research. Further research and experimentation should eventually make it possible for computers to act as intelligent assistants to scientists.

  8. Low Cost High Performance Generator Technology Program. Volume 4. Mission application study

    International Nuclear Information System (INIS)

    1975-07-01

Results of initial efforts to investigate the application of selenide thermoelectric RTGs to specific missions, as well as an indication of the development requirements to satisfy emerging RTG performance criteria, are presented. Potential mission applications in DoD such as SURVSATCOM, Advance Defense Support Program, Laser Communication Satellite, Satellite Data System, Global Positioning Satellite, Deep Space Surveillance Satellite, and Unmanned Free Swimming Submersible illustrate power requirements in the range of 500 to 1000 W. In contrast, the NASA applications require lower power, ranging from 50 W for outer planetary atmospheric probes to about 200 W for spacecraft flights to Jupiter and other outer planets. The launch dates for most of these prospective missions are circa 1980, a requirement roughly compatible with selenide thermoelectric and heat source technology development. A discussion of safety criteria is included to give emphasis to the requirements for heat source design. In addition, the observation is made that the potential accident environments of all launch vehicles are similar, so that a reasonable composite set of design specifications may be derived to satisfy almost all applications. Details of the LCHPG application potential are afforded by three designs: an 80 W RTG using improved selenide thermoelectric material, a 55 to 65 W LCHPG using current and improved selenide materials, and the final 500 W LCHPG as reported in Volume 2. The final results of the LCHPG design study have shown that, in general, all missions can expect an LCHPG design which yields 10 percent efficiency at 3 W/lb with the current standard selenide thermoelectric materials, with growth potential to 14 percent at greater than 4 W/lb in the mid-1980s time frame.

  9. The MOA thruster. A high performance plasma accelerator for nuclear power and propulsion applications

    International Nuclear Information System (INIS)

    Frischauf, Norbert; Hettmer, Manfred; Grassauer, Andreas; Bartusch, Tobias; Koudelka, Otto

    2009-01-01

More than 60 years after the late Nobel laureate Hannes Alfvén published a letter stating that oscillating magnetic fields can accelerate ionised matter via magneto-hydrodynamic interactions in a wave-like fashion, the technical implementation of Alfvén waves for propulsive purposes has been proposed, patented and examined for the first time by a group of inventors. The name of the concept, which utilises Alfvén waves to accelerate ionised matter for propulsive purposes, is MOA - Magnetic field Oscillating Amplified thruster. Alfvén waves are generated by making use of two coils, one being permanently powered and serving also as a magnetic nozzle, the other being switched on and off in a cyclic way, deforming the field lines of the overall system. It is this deformation that generates Alfvén waves, which are in the next step used to transport and compress the propulsive medium, in theory leading to a propulsion system with a much higher performance than any other electric propulsion system. While space propulsion is expected to be the prime application for MOA and is supported by numerous scenarios such as solar and/or nuclear electric propulsion, or even as an 'afterburner system' for nuclear thermal propulsion, other, terrestrial applications, such as coating, semiconductor implantation and manufacturing, as well as steel cutting, can be envisaged too, making the system highly suited for a common space-terrestrial application research and utilisation strategy. This paper presents the recent developments of the MOA thruster R and D activities at QASAR, the company in Vienna, Austria, which was set up to further develop and test the Alfvén wave technology and its applications. (author)

  10. Application of high performance asynchronous socket communication in power distribution automation

    Science.gov (United States)

    Wang, Ziyu

    2017-05-01

With the development of information technology and the Internet, and the growing demand for electricity, the stable and reliable operation of the power system has long been the goal of power grid workers. With the advent of the big-data era, power data will gradually become an important means of guaranteeing the safe and reliable operation of the power grid. In the electric power industry, efficiently and robustly receiving the data transmitted by data acquisition devices, so that the power distribution automation system can make sound decisions quickly, is therefore a central pursuit. In this paper, some existing problems in power system communication are analysed and, with the help of network technology, a set of solutions based on asynchronous socket technology is proposed for network communication that must sustain high concurrency and high throughput. The paper also looks ahead to the development of power distribution automation in the era of big data and artificial intelligence.
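A minimal sketch of the asynchronous-socket pattern advocated above, using Python's asyncio rather than whatever stack the paper used: one coroutine per acquisition device, with many devices multiplexed on a single thread. The addresses, message framing and device count are illustrative assumptions only.

```python
import asyncio

async def handle(reader, writer):
    # One coroutine per connected acquisition device: echo each
    # newline-terminated frame back as an acknowledgement.
    while data := await reader.readline():
        writer.write(data)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 lets the OS pick a free port.
    server = await asyncio.start_server(handle, '127.0.0.1', 0)
    port = server.sockets[0].getsockname()[1]

    async def device(i):
        reader, writer = await asyncio.open_connection('127.0.0.1', port)
        writer.write(b'meter-%d\n' % i)
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return reply

    # 50 concurrent "devices" multiplexed on a single event loop.
    replies = await asyncio.gather(*(device(i) for i in range(50)))
    server.close()
    await server.wait_closed()
    return replies

replies = asyncio.run(main())
```

Because idle connections cost only a suspended coroutine rather than a blocked thread, this style scales to the high connection counts typical of distribution-automation front ends.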

  11. Vision systems for scientific and engineering applications

    International Nuclear Information System (INIS)

    Chadda, V.K.

    2009-01-01

Human performance can degrade due to boredom, distraction and fatigue in vision-related tasks such as measurement and counting. Vision-based techniques are increasingly being employed in many scientific and engineering applications. Notable advances in this field are emerging from continuing improvements in sensors and related technologies, and from advances in computer hardware and software. Automation utilizing vision-based systems can perform repetitive tasks faster and more accurately, with greater consistency over time, than humans. The Electronics and Instrumentation Services Division has developed vision-based systems for several applications, performing tasks such as precision alignment, biometric access control, measurement and counting. This paper briefly describes four such applications. (author)

  12. Impulse: Memory System Support for Scientific Applications

    Directory of Open Access Journals (Sweden)

    John B. Carter

    1999-01-01

    Full Text Available Impulse is a new memory system architecture that adds two important features to a traditional memory controller. First, Impulse supports application‐specific optimizations through configurable physical address remapping. By remapping physical addresses, applications control how their data is accessed and cached, improving their cache and bus utilization. Second, Impulse supports prefetching at the memory controller, which can hide much of the latency of DRAM accesses. Because it requires no modification to processor, cache, or bus designs, Impulse can be adopted in conventional systems. In this paper we describe the design of the Impulse architecture, and show how an Impulse memory system can improve the performance of memory‐bound scientific applications. For instance, Impulse decreases the running time of the NAS conjugate gradient benchmark by 67%. We expect that Impulse will also benefit regularly strided, memory‐bound applications of commercial importance, such as database and multimedia programs.
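A back-of-the-envelope model of why physical address remapping of the kind Impulse provides helps: walking a column of a row-major matrix touches one cache line per element, while a controller-gathered shadow alias packs the same data densely. The constants and addresses below are a toy illustration, not taken from the paper.

```python
LINE = 64   # bytes per cache line (a typical value)
ELEM = 8    # bytes per float64

def lines_touched(byte_addrs):
    """Count distinct cache lines covered by a sequence of byte addresses."""
    return len({a // LINE for a in byte_addrs})

n = 512
# Walking column 3 of a row-major n x n matrix: stride n*ELEM bytes,
# so every element lands in its own cache line.
col_addrs = [(i * n + 3) * ELEM for i in range(n)]

# An Impulse-style shadow region: the memory controller gathers the column
# into a dense alias, so the processor sees consecutive addresses.
shadow_addrs = [i * ELEM for i in range(n)]

sparse, dense = lines_touched(col_addrs), lines_touched(shadow_addrs)
```

Here the remapped access touches 64 lines instead of 512, an 8x reduction in fetched cache lines for the same data, which is the cache- and bus-utilization effect the Impulse controller exploits for strided scientific codes.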

  13. Electrospun nitrocellulose and nylon: Design and fabrication of novel high performance platforms for protein blotting applications

    Directory of Open Access Journals (Sweden)

    Bowlin Gary L

    2007-10-01

Full Text Available Background: Electrospinning is a non-mechanical processing strategy that can be used to process a variety of native and synthetic polymers into highly porous materials composed of nano-scale to micron-scale diameter fibers. By nature, electrospun materials exhibit an extensive surface area and highly interconnected pore spaces. In this study we adopted a biological engineering approach to ask how the specific unique advantages of the electrospinning process might be exploited to produce a new class of research/diagnostic tools. Methods: The electrospinning properties of nitrocellulose, charged nylon and blends of these materials are characterized. Results: Nitrocellulose electrospun from a starting concentration of... Conclusion: The flexibility afforded by the electrospinning process makes it possible to tailor blotting membranes to specific applications. Electrospinning has a variety of potential applications in the clinical diagnostic field of use.

  14. A high performance DC-DC converter with intelligent control for photovoltaic applications

    OpenAIRE

    M. Niroomand; M. Sherkat; M. Soheili

    2013-01-01

In this paper, a SEPIC (Single-Ended Primary Inductance Converter) with high efficiency is proposed for photovoltaic applications. In the proposed converter, an auxiliary circuit without any additional switches is used. The switch works under ZCS and ZVS conditions. Since no auxiliary switch is added to the circuit, no additional drive circuit is needed. The proposed control system, based on a fuzzy logic method, has shown smart, accurate and fast tracking of the maximum po...

  15. High performance superconducting radio frequency ingot niobium technology for continuous wave applications

    International Nuclear Information System (INIS)

    Dhakal, Pashupati; Ciovati, Gianluigi; Myneni, Ganapati R.

    2015-01-01

Future continuous wave (CW) accelerators require superconducting radio frequency cavities with high quality factors at medium accelerating gradients (≤20 MV/m). Ingot niobium cavities of medium purity fulfill the specifications for both accelerating gradient and high quality factor with simple processing techniques and a potential reduction in cost. This contribution reviews current superconducting radio frequency research and development and outlines the potential benefits of using ingot niobium technology for CW applications

  16. Modeling and Experiments with Carbon Nanotubes for Applications in High Performance Circuits

    Science.gov (United States)

    2017-04-06

silicon substrates. The poor gate coupling due to the thick silicon dioxide (SiO2) layer and back-gate geometry limited their applications. However, in...transistor, there is significant scattering of electrons due to the disordered nature of the Si–SiO2 interface. However, the CNT has a crystalline...unlimited. [Figure 40: a simple SWNT conductance-based bio-sensor; glass substrate, silver electrodes, CNT matrix, PBS (phosphate buffered saline).]

  17. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Maxine D. [Acting Director, EVL; Leigh, Jason [PI

    2014-02-17

The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation's Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy's Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for "Development of the Next-Generation CAVE Virtual Environment (NG-CAVE)," enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally, and helping to train the next-generation workforce.

  18. High Performance Relaxor-Based Ferroelectric Single Crystals for Ultrasonic Transducer Applications

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2014-07-01

Full Text Available Relaxor-based ferroelectric single crystals Pb(Mg1/3Nb2/3)O3-PbTiO3 (PMN-PT) have drawn much attention in the ferroelectric field because of their excellent piezoelectric properties and high electromechanical coupling coefficients (d33 ~2000 pC/N, kt ~60%) near the morphotropic phase boundary (MPB). Ternary Pb(In1/2Nb1/2)O3-Pb(Mg1/3Nb2/3)O3-PbTiO3 (PIN-PMN-PT) single crystals also possess outstanding performance comparable with PMN-PT single crystals, but have higher phase transition temperatures (rhombohedral to tetragonal, Trt, and tetragonal to cubic, Tc) and a larger coercive field Ec. Therefore, these relaxor-based single crystals have been extensively employed for ultrasonic transducer applications. In this paper, an overview of our work and perspectives on using PMN-PT and PIN-PMN-PT single crystals for ultrasonic transducer applications is presented. Various types of single-element ultrasonic transducers, including endoscopic transducers, intravascular transducers, high-frequency and high-temperature transducers fabricated using the PMN-PT and PIN-PMN-PT crystals and their 2-2 and 1-3 composites are reported. Besides, the fabrication and characterization of the array transducers, such as phased array, cylindrical shaped linear array, high-temperature linear array, radial endoscopic array, and annular array, are also addressed.
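For transducer work of this kind, the thickness coupling coefficient kt is usually extracted from the resonance (fr) and antiresonance (fa) frequencies using the standard IEEE resonance-method expression. The frequencies below are invented to land near the ~60% quoted in the abstract; they are not measured PMN-PT data.

```python
import math

def kt_squared(fr, fa):
    """Thickness-mode coupling factor from resonance fr and antiresonance fa,
    via the standard IEEE resonance-method expression:
    kt^2 = (pi/2) * (fr/fa) * tan[(pi/2) * (fa - fr) / fa]."""
    return (math.pi / 2) * (fr / fa) * math.tan((math.pi / 2) * (fa - fr) / fa)

# Invented frequencies chosen so kt lands near the ~60% quoted for PMN-PT.
fr, fa = 3.30e6, 4.00e6              # Hz
kt = math.sqrt(kt_squared(fr, fa))   # ~0.60
```

The same measurement gives the center frequency and bandwidth figures of merit that drive the endoscopic and intravascular transducer designs surveyed in the paper.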

  19. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  20. Azaisoindigo conjugated polymers for high performance n-type and ambipolar thin film transistor applications

    KAUST Repository

    Yue, Wan

    2016-09-28

Two new alternating copolymers, PAIIDBT and PAIIDSe, have been prepared by incorporating a highly electron-deficient azaisoindigo core. The molecular structure and packing of the monomer are determined from single crystal X-ray diffraction. Both polymers exhibit high electron affinities (EAs) and highly planar backbones. When the polymers are used as the semiconducting channel in solution-processed thin film transistors, good properties are observed. The A–A type PAIIDBT exhibits unipolar electron mobility as high as 1.0 cm2 V−1 s−1, while the D–A type PAIIDSe exhibits ambipolar charge transport behavior with predominantly electron mobility up to 0.5 cm2 V−1 s−1 and hole mobility up to 0.2 cm2 V−1 s−1. The robustness of the extracted mobility values is also commented on in detail. The molecular orientation, thin film morphology and energetic disorder of both polymers are systematically investigated.
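Field-effect mobilities like those quoted above are typically extracted from the saturation-regime transfer curve, using mu = (2L / W Ci) * (d sqrt(Id)/dVg)^2. A self-contained sketch on synthetic square-law data; all device numbers are invented for illustration, not taken from the paper.

```python
import numpy as np

def mu_sat(vg, i_d, w_over_l, Ci):
    """Saturation mobility (cm^2/Vs) from the slope of sqrt(Id) vs Vg,
    using the square law Id = (W*Ci*mu / 2L) * (Vg - Vt)^2."""
    slope = np.polyfit(vg, np.sqrt(i_d), 1)[0]
    return 2.0 * slope ** 2 / (w_over_l * Ci)

# Synthetic transfer curve with a known mobility (illustrative numbers).
Ci = 1.15e-8              # gate capacitance per unit area, F/cm^2
w_over_l = 50.0           # channel width-to-length ratio
mu_true, vt = 1.0, 5.0    # cm^2/Vs, V
vg = np.linspace(10.0, 40.0, 31)
i_d = w_over_l * Ci * mu_true / 2.0 * (vg - vt) ** 2   # drain current, A

mu_est = mu_sat(vg, i_d, w_over_l, Ci)   # recovers mu_true
```

The robustness caveat in the abstract matters exactly here: if the sqrt(Id)-vs-Vg trace is not linear over the fitted window, a single slope overstates the mobility.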

  1. Nanomechanical analysis of high performance materials (solid mechanics and its applications)

    CERN Document Server

    2013-01-01

    This book is intended for researchers who are interested in investigating the nanomechanical properties of materials using advanced instrumentation techniques. The chapters of the book are written in an easy-to-follow format, just like solved examples. The book comprehensively covers a broad range of materials such as polymers, ceramics, hybrids, biomaterials, metal oxides, nanoparticles, minerals, carbon nanotubes and welded joints. Each chapter describes the application of techniques on the selected material and also mentions the methodology adopted for the extraction of information from the raw data. This is a unique book in which both equipment manufacturers and equipment users have contributed chapters. Novices will learn the techniques directly from the inventors and senior researchers will gain in-depth information on the new technologies that are suitable for advanced analysis. On one hand, fundamental concepts that are needed to understand the nanomechanical behavior of materials is included in the i...

  2. Life assessment of PVD based hard coatings by linear sweep voltammetry for high performance industrial application

    International Nuclear Information System (INIS)

    Malik, M.; Alam, S.; Irfan, M.; Hassan, Z.

    2006-01-01

PVD-based hard coatings have achieved remarkable improvements in the tribological and surface properties of coated tools and dies. As PVD-based hard coatings have a wide range of industrial applications, especially in aerospace and automobile parts where they meet various chemical attacks, these coatings must provide excellent resistance against corrosion, high-temperature oxidation and chemical reaction in order to improve industrial performance. This paper focuses on the behaviour of PVD-based hard coatings under different corrosive environments such as H2SO4, HCl, NaCl, KCl and NaOH. Corrosion rates were calculated by the linear sweep voltammetry method, with Tafel extrapolation curves used to continuously monitor the corrosion rate. The results show that these coatings have excellent resistance against chemical attack. (author)
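Tafel extrapolation, as used in the study above, reads the corrosion current by extrapolating the linear (Tafel) region of log|i| versus E back to the corrosion potential. A sketch on a synthetic Butler-Volmer polarization curve; all kinetic parameters below are invented for illustration, not measured coating data.

```python
import numpy as np

def tafel_icorr(E, i, Ecorr, fit_from=0.10):
    """Extrapolate the anodic Tafel line (log10|i| vs E, fitted beyond an
    overpotential of fit_from volts) back to Ecorr to read i_corr."""
    sel = (E - Ecorr) > fit_from
    slope, intercept = np.polyfit(E[sel], np.log10(np.abs(i[sel])), 1)
    return 10 ** (slope * Ecorr + intercept)

# Synthetic polarization curve with known kinetics (invented values).
icorr, Ecorr = 1e-6, -0.45     # A/cm^2, V
ba = bc = 0.12                 # anodic/cathodic Tafel slopes, V/decade
E = np.linspace(Ecorr - 0.25, Ecorr + 0.25, 501)
eta = E - Ecorr
i = icorr * (10 ** (eta / ba) - 10 ** (-eta / bc))

est = tafel_icorr(E, i, Ecorr)   # recovers ~1e-6 A/cm^2
```

The corrosion current then converts to a corrosion rate via Faraday's law once the alloy's equivalent weight and density are known.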

  3. High performance solution processed zirconium oxide gate dielectric appropriate for low temperature device application

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, Musarrat; Nguyen, Manh-Cuong; Kim, Hyojin; You, Seung-Won; Jeon, Yoon-Seok; Tong, Duc-Tai; Lee, Dong-Hwi; Jeong, Jae Kyeong; Choi, Rino, E-mail: rino.choi@inha.ac.kr

    2015-08-31

This paper reports a solution-processed electrical device with a zirconium oxide gate dielectric that was fabricated at a temperature low enough to be appropriate for flexible electronics. Both the inorganic dielectric and the channel materials were synthesized in the same organic solvent. The dielectric constant achieved was 13 at 250 °C with a reasonably low leakage current. The bottom-gate transistor devices showed a highest mobility of 75 cm2/V·s. The device operates at low voltage with the high-k dielectric, with excellent transconductance and low threshold voltage. Overall, the results highlight the potential of low-temperature solution-based deposition in fabricating more complicated circuits for a range of applications. - Highlights: • We develop a low temperature inorganic dielectric deposition process. • We fabricate oxide semiconductor channel devices using all-solution processes. • The same solvent is used for dielectric and oxide semiconductor deposition.

  4. High performance field emission of silicon carbide nanowires and their applications in flexible field emission displays

    Science.gov (United States)

    Cui, Yunkang; Chen, Jing; Di, Yunsong; Zhang, Xiaobing; Lei, Wei

    2017-12-01

In this paper, a facile method to fabricate flexible field emission devices (FEDs) based on SiC nanostructure emitters grown by a thermal evaporation method is demonstrated. The composition of the SiC nanowires was characterized by X-ray diffraction (XRD), selected area electron diffraction (SAED) and energy dispersive X-ray spectrometry (EDX), while the morphology was revealed by field emission scanning electron microscopy (SEM) and high resolution transmission electron microscopy (HRTEM). The results showed that the SiC nanowires grew along the [111] direction with a diameter of ˜110 nm and a length of ˜30 μm. The flexible FEDs, fabricated by transferring and screen-printing the SiC nanowires onto flexible substrates, exhibited excellent field emission properties, such as a low turn-on field (˜0.95 V/μm), a low threshold field (˜3.26 V/μm) and a high field enhancement factor (β=4670). It is worth noting that the current density degradation can be kept below 2% per hour during stability tests. In addition, the flexible FEDs based on SiC nanowire emitters exhibit uniform bright emission under bending test conditions. As a result, this strategy is very useful for potential application in commercial flexible FEDs.
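The field enhancement factor β quoted above is conventionally extracted from the slope of a Fowler-Nordheim plot, ln(J/E²) versus 1/E. The sketch below regenerates β = 4670 from a synthetic FN curve; the work function and current prefactor are assumed values for illustration, not data from the paper.

```python
import numpy as np

B_FN = 6.83e3        # Fowler-Nordheim exponent constant, V um^-1 eV^-3/2
PHI = 4.0            # assumed emitter work function, eV

def beta_from_fn(E, J):
    """Field enhancement factor from the slope of the FN plot
    ln(J/E^2) vs 1/E, where slope = -B * phi^(3/2) / beta."""
    slope = np.polyfit(1.0 / E, np.log(J / E ** 2), 1)[0]
    return -B_FN * PHI ** 1.5 / slope

# Synthetic emitter obeying the FN law with the abstract's beta value.
beta_true = 4670.0
E = np.linspace(1.0, 4.0, 40)     # applied macroscopic field, V/um
J = 1e-3 * E ** 2 * np.exp(-B_FN * PHI ** 1.5 / (beta_true * E))

beta_est = beta_from_fn(E, J)     # ~4670
```

Large β values like this reflect the strong local field concentration at the nanowire tips, which is what allows the low turn-on field reported above.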

  5. High performance field emission of silicon carbide nanowires and their applications in flexible field emission displays

    Directory of Open Access Journals (Sweden)

    Yunkang Cui

    2017-12-01

    Full Text Available In this paper, a facile method to fabricate flexible field emission devices (FEDs) based on SiC nanostructure emitters grown by thermal evaporation has been demonstrated. The composition characteristics of the SiC nanowires were characterized by X-ray diffraction (XRD), selected area electron diffraction (SAED) and energy dispersive X-ray spectrometry (EDX), while the morphology was revealed by field emission scanning electron microscopy (SEM) and high resolution transmission electron microscopy (HRTEM). The results showed that the SiC nanowires grew along the [111] direction with a diameter of ∼110 nm and a length of ∼30 μm. The flexible FEDs, fabricated by transferring and screen-printing the SiC nanowires onto flexible substrates, exhibited excellent field emission properties, such as a low turn-on field (∼0.95 V/μm) and threshold field (∼3.26 V/μm), and a high field enhancement factor (β = 4670). It is worth noting that the current density degradation was kept below 2% per hour during the stability tests. In addition, the flexible FEDs based on SiC nanowire emitters exhibit uniform bright emission under bending test conditions. As a result, this strategy is very promising for application in commercial flexible FEDs.

  6. Synthesis and characterization of prospective polyanionic electrode materials for high performance energy storage applications

    Science.gov (United States)

    Jayachandran, M.; Durai, G.; Vijayakumar, T.

    2018-04-01

    In the present study, polyanionic sulphate (SO4)-group-based Li2Ni(SO4)2 (lithium nickel sulphate) composite electrode materials were prepared by a ball-milling method and a solid-state reaction route. X-ray diffraction analysis confirmed the formation of a polycrystalline orthorhombic phase of composite Li2Ni(SO4)2 with an average crystallite size of about 50.16 nm. Field emission scanning electron microscopy reveals spherical particles with sizes of around 200–500 nm. Raman and FTIR analyses confirm the structural and functional groups of the synthesized materials and the formation of Li2Ni(SO4)2. Electrochemical measurements using cyclic voltammetry (CV) and galvanostatic charging-discharging (GCD) techniques were carried out to study the supercapacitive performance of the composite Li2Ni(SO4)2 electrodes. From the CV investigations, an areal capacitance of 508 mF cm−2 was obtained at 10 mV s−1. The GCD measurements exhibited an areal capacitance of 101 mF cm−2 at a constant current density of 2 mA cm−2 in 2 M KOH. The GCD profiles were linear and symmetric in nature, with a maximum coulombic efficiency of about 85%. The electrochemical performance of the composite Li2Ni(SO4)2 electrode material shows excellent promise for supercapacitor applications.
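Areal capacitance numbers such as the 508 mF cm−2 at 10 mV s−1 are typically extracted from a CV curve by integrating the current over one full cycle. A minimal sketch of that textbook calculation (the rectangular CV below is an idealized check, not the measured data):

```python
# CV-derived areal capacitance: C = sum(|i| * |dV|) / (2 * scan_rate * window),
# taken over one complete voltammetric cycle, with i in A/cm^2.
import numpy as np

def areal_capacitance(voltage, current_density, scan_rate):
    """Areal capacitance in F/cm^2 from one full CV cycle."""
    dv = np.abs(np.diff(voltage))
    i_mid = 0.5 * (np.abs(current_density[1:]) + np.abs(current_density[:-1]))
    window = voltage.max() - voltage.min()
    return np.sum(i_mid * dv) / (2.0 * scan_rate * window)

# Ideal-capacitor check: a 508 mF/cm^2 electrode swept at 10 mV/s over a
# 0.5 V window draws |i| = C * scan_rate = 5.08 mA/cm^2.
v = np.concatenate([np.linspace(0.0, 0.5, 101), np.linspace(0.5, 0.0, 101)])
i = np.where(np.arange(v.size) < 101, 5.08e-3, -5.08e-3)
c_areal = areal_capacitance(v, i, 0.010)
```

For a real electrode the CV deviates from a rectangle (the redox peaks here), but the same integral still defines the reported capacitance.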

  7. High Performance Infrared Plasmonic Metamaterial Absorbers and Their Applications to Thin-film Sensing

    KAUST Repository

    Yue, Weisheng

    2016-04-07

    Plasmonic metamaterial absorbers (PMAs) have attracted considerable attention for developing various sensing devices. In this work, we design, fabricate and characterize PMAs of different geometrical shapes operating at mid-infrared frequencies, and explore their application as sensors for thin films. The PMAs, consisting of metal-insulator-metal stacks with patterned gold nanostructured surfaces (resonators), demonstrated high absorption efficiency (87 to 98%) of electromagnetic waves in the infrared regime. The position and efficiency of the resonance absorption depend on the shape of the resonators. Furthermore, the resonance wavelength of the PMAs was sensitive to a thin film coated on their surface, which was tested using aluminum oxide (Al2O3) as the film. With increasing Al2O3 thickness, the position of the resonance absorption shifted to longer wavelengths. This dependence of the resonant wavelength on film thickness makes PMAs suitable candidates as thin-film sensors. Using this sensing strategy, PMAs offer a potential new method for thin-film detection and in situ monitoring of surface reactions. © 2016 Springer Science+Business Media New York
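The thin-film sensing described here reduces, in its simplest form, to calibrating the slope of resonance wavelength versus coating thickness. A hedged sketch with synthetic numbers (the ~6000 nm resonance and 3 nm/nm slope are illustrative assumptions, not the measured values):

```python
# Linear sensitivity of a resonance-shift sensor: S = d(lambda)/d(thickness),
# from a least-squares fit. All data points below are synthetic.
import numpy as np

def sensitivity_nm_per_nm(thickness_nm, resonance_nm):
    """Slope of resonance wavelength vs coating thickness."""
    slope, _ = np.polyfit(thickness_nm, resonance_nm, 1)
    return slope

t = np.array([0.0, 10.0, 20.0, 30.0])   # assumed Al2O3 thicknesses, nm
lam = 6000.0 + 3.0 * t                  # assumed red-shifting resonance, nm
S = sensitivity_nm_per_nm(t, lam)
```

Once S is calibrated, an unknown film thickness follows directly from the observed wavelength shift divided by S.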

  8. High-performance visible/UV CCD focal plane technology for space-based applications

    Science.gov (United States)

    Burke, B. E.; Mountain, R. W.; Gregory, J. A.; Huang, J. C. M.; Cooper, M. J.; Savoye, E. D.; Kosicki, B. B.

    1993-01-01

    We describe recent technology developments aimed at large CCD imagers for space-based applications in the visible and UV. The principal areas of effort include work on reducing device degradation in the natural space-radiation environment, improvements in quantum efficiency in the visible and UV, and larger device formats. One of the most serious hazards for space-based CCDs operating at low signal levels is the displacement damage resulting from bombardment by energetic protons. Such damage degrades charge-transfer efficiency and increases dark current. We have achieved improved hardness to proton-induced displacement damage by selective ion implants into the CCD channel and by reduced operating temperature. To attain high quantum efficiency across the visible and UV we have developed a technology for back-illuminated CCDs. With suitable antireflection (AR) coatings such devices have quantum efficiencies near 90 percent in the 500-700-nm band. In the UV band from 200 to 400 nm, where it is difficult to find coatings that are sufficiently transparent and can provide good matching to the high refractive index of silicon, we have been able to substantially increase the quantum efficiency using a thin film of HfO2 as an AR coating. These technology efforts were applied to a 420 x 420-pixel frame-transfer imager, and future work will be extended to a 1024 x 1024-pixel device now under development.
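The benefit of an AR coating like the HfO2 layer can be estimated from elementary thin-film optics. The sketch below is a back-of-envelope illustration, not the paper's model: it assumes a lossless quarter-wave layer at normal incidence with rough real-valued indices (HfO2 ≈ 2.1, silicon ≈ 5 in the near-UV), ignoring the strong absorption that complicates the real UV case:

```python
# Reflectance of an ideal quarter-wave AR coating (normal incidence):
# R = ((n0*ns - n1^2) / (n0*ns + n1^2))^2. Indices are rough assumptions.
def quarter_wave_reflectance(n0, n1, ns):
    """Residual reflectance of a lossless quarter-wave coating."""
    r = (n0 * ns - n1**2) / (n0 * ns + n1**2)
    return r * r

def bare_reflectance(n0, ns):
    """Fresnel reflectance of the uncoated interface."""
    r = (n0 - ns) / (n0 + ns)
    return r * r

# Air / HfO2 (n ~ 2.1) / silicon (n ~ 5, rough near-UV value):
r_bare = bare_reflectance(1.0, 5.0)                  # ~44% reflected, uncoated
r_coated = quarter_wave_reflectance(1.0, 2.1, 5.0)   # well under 1%
```

Even with these crude assumptions, the coating cuts the reflection loss by an order of magnitude, which is why the UV quantum efficiency improves so substantially.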

  9. Prototyping of a highly performant and integrated piezoresistive force sensor for microscale applications

    International Nuclear Information System (INIS)

    Komati, Bilal; Agnus, Joël; Clévy, Cédric; Lutz, Philippe

    2014-01-01

    In this paper, the prototyping of a new piezoresistive microforce sensor is presented. An original design taking advantage of both the mechanical and bulk piezoresistive properties of silicon is presented, which enables the easy fabrication of a very small, large-range, high-sensitivity sensor with high integration potential. The sensor is made of two silicon strain gauges fabricated using widespread, well-known microfabrication processes. The strain gauges present a high gauge factor, which gives the force sensor good sensitivity. The dimensions of the sensor are 700 μm in length, 100 μm in width and 12 μm in thickness. These dimensions make it convenient for many microscale applications, notably integration in a microgripper. The fabricated sensor is calibrated against an industrial force sensor. The design, microfabrication process and performance of the fabricated piezoresistive force sensor are innovative: it achieves a resolution of 100 nN over a measurement range of 2 mN. The force sensor also presents a high signal-to-noise ratio, typically 50 dB when a 2 mN force is applied at its tip. (paper)

  10. Relational database hybrid model, of high performance and storage capacity for nuclear engineering applications

    International Nuclear Information System (INIS)

    Gomes Neto, Jose

    2008-01-01

    The objective of this work is to present the relational database, named FALCAO, created and implemented to support the storage of the monitored variables of the IEA-R1 research reactor, located at the Instituto de Pesquisas Energeticas e Nucleares, IPEN/CNEN-SP. The data logical model and its direct influence on the integrity of the provided information are carefully considered. The concepts and steps of normalization and denormalization, including the entities and relations involved in the logical model, are presented. The effects of the model rules on the acquisition, loading and availability of the final information are also presented from a performance standpoint, since the acquisition process loads and provides large amounts of information in short intervals of time. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized form. The implementation of the FALCAO database was successful and has led to a considerably favorable situation. It is now essential to the routine of the researchers involved, due not only to the substantial improvement of the process but also to the reliability associated with it. (author)

  11. A High-Performance Application Specific Integrated Circuit for Electrical and Neurochemical Traumatic Brain Injury Monitoring.

    Science.gov (United States)

    Pagkalos, Ilias; Rogers, Michelle L; Boutelle, Martyn G; Drakakis, Emmanuel M

    2018-05-22

    This paper presents the first application specific integrated chip (ASIC) for the monitoring of patients who have suffered a Traumatic Brain Injury (TBI). By monitoring the neurophysiological (ECoG) and neurochemical (glucose, lactate and potassium) signals of the injured human brain tissue, it is possible to detect spreading depolarisations, which have been shown to be associated with poor TBI patient outcome. This paper describes the testing of a new 7.5 mm² ASIC fabricated in the commercially available AMS 0.35 μm CMOS technology. The ASIC has been designed to meet the demands of processing the injured brain tissue's ECoG signals, recorded by means of depth or brain-surface electrodes, and neurochemical signals, recorded using microdialysis coupled to microfluidics-based electrochemical biosensors. The potentiostats use switched-capacitor charge integration to record currents with 100 fA resolution, and allow automatic gain changing to track the falling sensitivity of a biosensor. This work supports the idea of a "behind the ear" wireless microplatform modality, which could enable the monitoring of currently non-monitored mobile TBI patients for the onset of secondary brain injury. ©2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
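The switched-capacitor charge-integration principle behind the 100 fA resolution is simple to state: the input current charges a capacitor for a fixed window, so the current follows from the voltage ramp. A minimal sketch with assumed component values (the 1 pF capacitor, 1 s window and 100 µV voltage resolution are illustrative, not the ASIC's actual parameters):

```python
# Charge-integration current measurement: I = C * dV / T.
# The smallest resolvable current is set by the smallest resolvable
# voltage step on the integration capacitor. Values are assumptions.
def integrated_current(c_farad, delta_v, t_seconds):
    """Current inferred from the voltage ramp on the integration cap."""
    return c_farad * delta_v / t_seconds

# Assumed 1 pF integrator, 1 s window, 100 uV voltage resolution:
i_min = integrated_current(1e-12, 100e-6, 1.0)  # 100 fA
```

The automatic gain changing mentioned in the abstract would correspond, in this picture, to switching the capacitor value or the integration window as the biosensor sensitivity drifts.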

  12. Three-dimensional graphene/polyaniline composite material for high-performance supercapacitor applications

    International Nuclear Information System (INIS)

    Liu, Huili; Wang, Yi; Gou, Xinglong; Qi, Tao; Yang, Jun; Ding, Yulong

    2013-01-01

    Highlights: ► A novel 3D graphene showed high specific surface area and large mesopore volume. ► Aniline monomer was polymerized in the presence of 3D graphene at room temperature. ► The supercapacitive properties were studied by CV and charge–discharge tests. ► The composite shows a high gravimetric capacitance and good cyclic stability. ► A 3D graphene/polyaniline composite had never been reported before our work. -- Abstract: A novel three-dimensional (3D) graphene/polyaniline nanocomposite material, synthesized by in situ polymerization of aniline monomer on the graphene surface, is reported as an electrode for supercapacitors. The morphology and structure of the material are characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The electrochemical properties of the resulting materials are systematically studied using cyclic voltammetry (CV) and constant-current charge–discharge tests. A high gravimetric capacitance of 463 F g−1 at a scan rate of 1 mV s−1 is obtained from CVs with 3 mol L−1 KOH as the electrolyte. In addition, the composite material shows only 9.4% capacity loss after 500 cycles, indicating good cyclic stability for supercapacitor applications. The high specific surface area, large mesopore volume and three-dimensional nanoporous structure of 3D graphene contribute to the high specific capacitance and good cyclic life.

  13. High Performance Infrared Plasmonic Metamaterial Absorbers and Their Applications to Thin-film Sensing

    KAUST Repository

    Yue, Weisheng; Wang, Zhihong; Yang, Yang; Han, Jiaguang; Li, Jingqi; Guo, Zaibing; Tan, Hua; Zhang, Xixiang

    2016-01-01

    Plasmonic metamaterial absorbers (PMAs) have attracted considerable attention for developing various sensing devices. In this work, we design, fabricate and characterize PMAs of different geometrical shapes operating at mid-infrared frequencies, and explore their application as sensors for thin films. The PMAs, consisting of metal-insulator-metal stacks with patterned gold nanostructured surfaces (resonators), demonstrated high absorption efficiency (87 to 98%) of electromagnetic waves in the infrared regime. The position and efficiency of the resonance absorption depend on the shape of the resonators. Furthermore, the resonance wavelength of the PMAs was sensitive to a thin film coated on their surface, which was tested using aluminum oxide (Al2O3) as the film. With increasing Al2O3 thickness, the position of the resonance absorption shifted to longer wavelengths. This dependence of the resonant wavelength on film thickness makes PMAs suitable candidates as thin-film sensors. Using this sensing strategy, PMAs offer a potential new method for thin-film detection and in situ monitoring of surface reactions. © 2016 Springer Science+Business Media New York

  14. Wavy channel Thin Film Transistor for area efficient, high performance and low power applications

    KAUST Repository

    Hanna, Amir

    2014-06-01

    We report a new Thin Film Transistor (TFT) architecture that allows expansion of the device width using wavy (continuous, without separation) fin features - termed the wavy channel (WC) architecture. This architecture expands the transistor width in a direction perpendicular to the substrate, thus consuming no extra chip area and achieving area efficiency. For a 13% increase in device width, the devices have shown up to a 2.4x increase in 'ON' current compared with planar devices consuming the same chip area, while using atomic layer deposition based zinc oxide (ZnO) as the channel material. The WCTFT devices also maintain an 'OFF' current value similar to that of planar devices, ∼100 pA, thus not trading off power consumption for performance, as usually happens with larger-width devices. This work offers a pragmatic opportunity to use WCTFTs as backplane circuitry for large-area high-resolution display applications without any limitation on the TFT materials.

  15. Toolkit for high performance Monte Carlo radiation transport and activation calculations for shielding applications in ITER

    International Nuclear Information System (INIS)

    Serikov, A.; Fischer, U.; Grosse, D.; Leichtle, D.; Majerle, M.

    2011-01-01

    The Monte Carlo (MC) method is the most suitable computational technique of radiation transport for shielding applications in fusion neutronics. This paper shares the results of the long-term experience of the fusion neutronics group at Karlsruhe Institute of Technology (KIT) in radiation shielding calculations with the MCNP5 code for the ITER fusion reactor, with emphasis on the use of several ITER project-driven computer programs developed at KIT. Two of them, McCad and R2S, seem to be the most useful in radiation shielding analyses. The McCad computer graphical tool performs automatic conversion of MCNP models from the underlying CAD (CATIA) data files, while the R2S activation interface couples MCNP radiation transport with FISPACT activation, allowing the estimation of nuclear responses such as dose rate and nuclear heating after the ITER reactor shutdown. The cell-based R2S scheme was applied in shutdown photon dose analysis for the design of the In-Vessel Viewing System (IVVS) and the Glow Discharge Cleaning (GDC) unit in ITER. The mesh-based R2S feature newly developed at KIT was successfully tested on shutdown dose rate calculations for the upper port in the Neutral Beam (NB) cell of ITER. The merits of the McCad graphical program have been broadly acknowledged by neutronic analysts, and its continuous improvement at KIT has made it stable and more convenient to run through its Graphical User Interface. Detailed 3D ITER neutronic modeling with the MCNP Monte Carlo method requires substantial computational resources, inevitably leading to parallel calculations on clusters. Performance assessments of MCNP5 parallel runs on the JUROPA/HPC-FF supercomputer cluster made it possible to find the optimal number of processors for ITER-type runs. (author)
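Finding an "optimal number of processors" for a parallel code is usually a scaling study: measure (or model) speedup versus processor count and pick the largest count that still uses the machine efficiently. A hedged sketch of that reasoning using Amdahl's law (the 1% serial fraction and 80% efficiency floor are illustrative assumptions, not values from the KIT study):

```python
# Amdahl's law: S(p) = 1 / (s + (1 - s) / p), with serial fraction s.
# Choose the largest p whose parallel efficiency S(p)/p stays above a floor.
def speedup(p, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def best_proc_count(counts, serial_fraction, min_efficiency=0.8):
    """Largest processor count keeping efficiency above the floor."""
    ok = [p for p in counts if speedup(p, serial_fraction) / p >= min_efficiency]
    return max(ok) if ok else min(counts)

counts = [1, 2, 4, 8, 16, 32, 64, 128]
p_opt = best_proc_count(counts, serial_fraction=0.01)
```

Monte Carlo transport parallelizes almost embarrassingly over histories, but source sampling, tally merging and I/O form the serial fraction that eventually caps useful processor counts.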

  16. High performance graphene-poly (o-anisidine) nanocomposite for supercapacitor applications

    International Nuclear Information System (INIS)

    Basnayaka, Punya A.; Ram, Manoj K.; Stefanakos, Lee; Kumar, Ashok

    2013-01-01

    Our previous exciting results on graphene (G)-conducting polymer (polyaniline (PANI) and polyethylenedioxythiophene (PEDOT)) supercapacitors have prompted the investigation of G-substituted conducting polymer nanocomposites as electrode materials in supercapacitors. The solubility of ortho-substituted PANI derivatives in a few common solvents has allowed the fabrication of stretchable films by the casting technique. The G-poly(o-anisidine) (G-POA) nanocomposites were synthesized with different weight ratios of G to o-anisidine by chemical methods, and characterized by various techniques, such as scanning electron microscopy, transmission electron microscopy, UV–visible spectroscopy, Raman spectroscopy, thermogravimetric analysis and cyclic voltammetry. The electrical conductivity and specific capacitance obtained for the G-POA nanocomposites were found to depend on the weight ratio of G to o-anisidine. The specific capacitance and the charging–discharging behavior of the POA and G-POA supercapacitors were investigated in 2 M H2SO4, 0.2 M LiClO4 and 1 M 1-butyl-3-methylimidazolium hexafluorophosphate (BMIM-PF6) ionic liquid. A specific capacitance of 380 F g−1 was calculated for the 1:1 weight ratio G-POA supercapacitor in 2 M H2SO4. The presence of the electron-donating group (–OCH3) in the o-anisidine allows electrons from the lone pair of the nitrogen atoms to enhance the electronic charge transport inside the G-POA supercapacitor electrodes. However, the G-POA-based supercapacitors showed a 27% decrease in specific capacitance in H2SO4 and a 16% decrease in the ionic liquid (BMIM-PF6) after 1000 cycles of charging and discharging. The higher stability and rate capability of the G-POA based supercapacitor in an ionic liquid (BMIM-PF6), as compared to an aqueous electrolytic supercapacitor, opens the door to the fabrication of stable supercapacitors for practical applications.
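Two textbook quantities sit behind numbers like "380 F g−1" and "27% decrease after 1000 cycles": the specific capacitance from a galvanostatic discharge, C = I·Δt/(m·ΔV), and the capacitance retention after cycling. A sketch with illustrative values (the current, mass, window and discharge time below are assumptions chosen to reproduce the reported capacitance, not the measured data):

```python
# Galvanostatic specific capacitance and cycling retention.
# All numeric inputs are illustrative assumptions.
def specific_capacitance(current_a, discharge_s, mass_g, v_window):
    """C = I * dt / (m * dV), in F/g."""
    return current_a * discharge_s / (mass_g * v_window)

def retention_pct(c_final, c_initial):
    """Capacitance retained after cycling, in percent."""
    return 100.0 * c_final / c_initial

# e.g. 1 mA discharging a 1 mg electrode over a 0.8 V window in 304 s:
c = specific_capacitance(1e-3, 304.0, 1e-3, 0.8)   # 380 F/g
# a 27% loss after 1000 cycles corresponds to 73% retention:
r = retention_pct(0.73 * c, c)
```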

  17. The graphics future in scientific applications

    International Nuclear Information System (INIS)

    Enderle, G.

    1982-01-01

    Computer graphics methods and tools are being used to a great extent in scientific research. Future development in this area will be influenced both by new hardware developments and by software advances. On the hardware side, the development of raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D workstations are appearing on the market. One of the main activities on the software side is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education. (orig.)

  18. ERAST: Scientific Applications and Technology Commercialization

    Science.gov (United States)

    Hunley, John D. (Compiler); Kellogg, Yvonne (Compiler)

    2000-01-01

    This is a conference publication for an event designed to inform potential contractors and appropriate personnel in various scientific disciplines that the ERAST (Environmental Research Aircraft and Sensor Technology) vehicles have reached a certain level of maturity and are available to perform a variety of missions ranging from data gathering to telecommunications. There are multiple applications of the technology and a great many potential commercial and governmental markets. As high altitude platforms, the ERAST vehicles can gather data at higher resolution than satellites and can do so continuously, whereas satellites pass over a particular area only once each orbit. Formal addresses are given by Rich Christiansen (Director of Programs, NASA Aerospace Technology Ent.), Larry Roeder (Senior Policy Advisor, U.S. Dept. of State), and Dr. Marianne McCarthy (DFRC Education Dept.). The Commercialization Workshop is chaired by Dale Tietz (President, New Vista International) and the Science Workshop is chaired by Steve Wegener (Deputy Manager of NASA ERAST, NASA Ames Research Center).

  19. CCD developed for scientific application by Hamamatsu

    CERN Document Server

    Miyaguchi, K; Dezaki, J; Yamamoto, K

    1999-01-01

    We have developed CCDs for scientific applications that feature a low readout noise of less than 5 e− rms and low dark current of 10-25 pA/cm² at room temperature. CCDs with these characteristics will prove extremely useful in applications such as spectroscopic measurement and dental radiography. In addition, a large-area CCD of 2k x 4k pixels and 15 μm square pixel size has recently been completed for optical use in astronomical observations. Applications to X-ray astronomy require the most challenging device performance in terms of deep depletion, high CTE, and focal plane size, among others. An abuttable X-ray CCD, having 1024x1024 pixels and 24 μm square pixel size, is to be installed in an international space station (ISS). We are now striving to achieve the lowest usable cooling temperature by means of a built-in TEC with limited power consumption. Details on the development status are described in this report. We would also like to present our future plans for a large active area and deep depleti...

  20. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    OpenAIRE

    Netto, Marco A. S.; Calheiros, Rodrigo N.; Rodrigues, Eduardo R.; Cunha, Renato L. F.; Buyya, Rajkumar

    2017-01-01

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources---steady (and sensitive) workloads can run on on-pr...

  1. Scientific Applications Performance Evaluation on Burst Buffer

    KAUST Repository

    Markomanolis, George S.; Hadri, Bilel; Khurram, Rooh Ul Amin; Feki, Saber

    2017-01-01

    Parallel I/O is an integral component of modern high performance computing, especially in storing and processing very large datasets, such as the case of seismic imaging, CFD, combustion and weather modeling. The storage hierarchy includes nowadays

  2. High-performance CPW MMIC LNA using GaAs-based metamorphic HEMTs for 94-GHz applications

    International Nuclear Information System (INIS)

    Ryu, Keun-Kwan; Kim, Sung-Chan; An, Dan; Rhee, Jin-Koo

    2010-01-01

    In this paper, we report on a high-performance low-noise amplifier (LNA) using metamorphic high-electron-mobility transistor (MHEMT) technology for 94-GHz applications. The 100 nm x 60 μm MHEMT devices for the coplanar MMIC LNA exhibited DC characteristics with a drain current density of 655 mA/mm and an extrinsic transconductance of 720 mS/mm. The current-gain cutoff frequency (fT) and the maximum oscillation frequency (fmax) were 195 GHz and 305 GHz, respectively. Based on this MHEMT technology, coplanar 94-GHz MMIC LNAs were realized, achieving a small-signal gain of more than 13 dB between 90 and 100 GHz, with a gain of 14.8 dB and a noise figure of 4.7 dB at 94 GHz.
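To put the reported dB figures in linear terms, the standard conversion is ratio = 10^(dB/10) for power quantities. A quick sketch applying it to the values quoted in the abstract (the conversions themselves are generic, not anything specific to this LNA):

```python
# Power-ratio dB conversion: 14.8 dB gain and 4.7 dB noise figure
# from the abstract, expressed as linear ratios.
def db_to_linear(db):
    """Power ratio corresponding to a value in dB."""
    return 10.0 ** (db / 10.0)

gain_lin = db_to_linear(14.8)      # ~30x power gain
noise_factor = db_to_linear(4.7)   # ~2.95 noise factor
```

A 4.7 dB noise figure means the amplifier roughly triples the input-referred noise power, which is the figure of merit that matters when this LNA sits first in a 94-GHz receiver chain.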

  3. Applications of scientific imaging in environmental toxicology

    Science.gov (United States)

    El-Demerdash, Aref M.

    The national goals of clean air, clean water, and healthy ecosystems are a few of the primary forces that drive the need for better environmental monitoring. As we approach the end of the 1990s, environmental questions at regional to global scales are being redefined and refined in the light of developments in environmental understanding and technological capability. Research in the use of scientific imaging data for the study of the environment is urgently needed in order to explore the possibilities of utilizing emerging new technologies. The objective of this research proposal is to demonstrate the usability of a wealth of new technology made available in the last decade in providing a better understanding of environmental problems. Research is focused on two imaging techniques: macro and micro imaging. Several examples of applications of scientific imaging in research in the field of environmental toxicology were presented. This was achieved on two scales: micro and macro imaging. On the micro level four specific examples were covered. First, the effect of utilizing scanning electron microscopy as an imaging tool in enhancing taxa identification when studying diatoms was presented. Second, scanning electron microscopy combined with an energy-dispersive X-ray analyzer was demonstrated as a valuable and effective tool for identifying and analyzing household dust samples. Third, electronic autoradiography combined with FT-IR microscopy was used to study the distribution pattern of [14C]-Malathion in rats as a result of dermal exposure. The results of the autoradiography made on skin sections of the application site revealed the presence of [14C]-activity in the first region of the skin. These results were evidenced by FT-IR microscopy. The obtained results suggest that the penetration of Malathion into the skin and other tissues is vehicle and dose dependent. The results also suggest the use of FT-IR microscopy imaging for monitoring the disposition of

  4. Prototyping of thermoplastic microfluidic chips and their application in high-performance liquid chromatography separations of small molecules.

    Science.gov (United States)

    Wouters, Sam; De Vos, Jelle; Dores-Sousa, José Luís; Wouters, Bert; Desmet, Gert; Eeltink, Sebastiaan

    2017-11-10

    The present paper discusses practical aspects of prototyping microfluidic chips using cyclic olefin copolymer as substrate and their application in high-performance liquid chromatography. The developed chips feature a 60 mm long straight separation channel with circular cross section (500 μm i.d.) that was created using a micromilling robot. To irreversibly seal the top and bottom chip substrates, a solvent-vapor-assisted bonding approach was optimized, allowing the ideal circular channel geometry to be approximated. Four different approaches to establish the micro-to-macro interface were pursued. The average burst pressure of the microfluidic chips in combination with an encasing holder was established at 38 MPa and the maximum burst pressure was 47 MPa, which is believed to be the highest ever reported for these polymer-based microfluidic chips. Porous polymer monolithic frits were synthesized in situ via UV-initiated polymerization and their locations were spatially controlled by the application of a photomask. Next, high-pressure slurry packing was performed to introduce 3 μm silica reversed-phase particles as the stationary phase in the separation channel. Finally, the application of the chip technology is demonstrated for the separation of alkyl phenones in gradient mode, yielding baseline peak widths of 6 s by applying a steep gradient of 1.8 min at a flow rate of 10 μL/min. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Leveraging Transcultural Enrollments to Enhance Application of the Scientific Method

    Science.gov (United States)

    Loudin, M.

    2013-12-01

    Continued growth of transcultural academic programs presents an opportunity for all of the students involved to improve utilization of the scientific method. Our own business success depends on how effectively we apply the scientific method, and so it is unsurprising that our hiring programs focus on three broad areas of capability among applicants which are strongly related to the scientific method. These are 1) ability to continually learn up-to-date earth science concepts, 2) ability to effectively and succinctly communicate in the English language, both oral and written, and 3) ability to employ behaviors that are advantageous with respect to the various phases of the scientific method. This third area is often the most difficult to develop, because neither so-called Western nor Eastern cultures encourage a suite of behaviors that are ideally suited. Generally, the acceptance of candidates into academic programs, together with subsequent high performance evidenced by grades, is a highly valid measure of continuous learning capability. Certainly, students for whom English is not a native language face additional challenges, but succinct and effective communication is an art which requires practice and development, regardless of native language. The ability to communicate in English is crucial, since it is today's lingua franca for both science and commerce globally. Therefore, we strongly support the use of frequent English written assignments and oral presentations as an integral part of all scientific academic programs. There is no question but that this poses additional work for faculty; nevertheless it is a key ingredient to the optimal development of students. No one culture has a monopoly with respect to behaviors that promote effective leveraging of the scientific method. For instance, the growing complexity of experimental protocols argues for a high degree of interdependent effort, which is more often associated with so-called Eastern than Western

  6. Cloud Data Storage Federation for Scientific Applications

    NARCIS (Netherlands)

    Koulouzis, S.; Vasyunin, D.; Cushing, R.; Belloum, A.; Bubak, M.; an Mey, D.; Alexander, M.; Bientinesi, P.; Cannataro, M.; Clauss, C.; Costan, A.; Kecskemeti, G.; Morin, C.; Ricci, L.; Sahuquillo, J.; Schulz, M.; Scarano, V.; Scott, S.L.; Weidendorfer, J.

    2014-01-01

    Nowadays, data-intensive scientific research needs storage capabilities that enable efficient data sharing. This is of great importance for many scientific domains such as the Virtual Physiological Human. In this paper, we introduce a solution that federates a variety of systems ranging from file

  7. Flexible and High Performance Supercapacitors Based on NiCo2O4 for Wide Temperature Range Applications

    Science.gov (United States)

    Gupta, Ram K.; Candler, John; Palchoudhury, Soubantika; Ramasamy, Karthik; Gupta, Bipin Kumar

    2015-10-01

    Binder-free nanostructured NiCo2O4 was grown using a facile hydrothermal technique. X-ray diffraction patterns confirmed the phase purity of NiCo2O4. The surface morphology and microstructure of the NiCo2O4, analyzed by scanning electron microscopy (SEM), showed a flower-like morphology composed of needle-like structures. The potential application of binder-free NiCo2O4 as an electrode for supercapacitor devices was investigated using electrochemical methods. The cyclic voltammograms of the NiCo2O4 electrode using alkaline aqueous electrolytes showed the presence of redox peaks, suggesting pseudocapacitive behavior. A quasi-solid-state supercapacitor device was fabricated by sandwiching two NiCo2O4 electrodes separated by an ion-transporting layer. The performance of the device was tested using cyclic voltammetry, galvanostatic charge-discharge and electrochemical impedance spectroscopy. The device showed excellent flexibility and cyclic stability. The temperature-dependent charge storage capacity was measured with variable-temperature applications in mind. The specific capacitance of the device was enhanced by ~150% on raising the temperature from 20 to 60 °C. Hence, the results suggest that NiCo2O4 grown under these conditions could be a suitable material for high performance supercapacitor devices operated at variable temperatures.
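One point worth making explicit about the temperature result: an enhancement "by ~150%" means the 60 °C capacitance is 2.5 times the 20 °C value, not 1.5 times. A tiny arithmetic sketch (normalized capacitances assumed for illustration):

```python
# Percent enhancement: 100 * (C_hot - C_cold) / C_cold.
# A 150% enhancement therefore means C_hot = 2.5 * C_cold.
def enhancement_pct(c_hot, c_cold):
    return 100.0 * (c_hot - c_cold) / c_cold

c20, c60 = 1.0, 2.5   # assumed normalized capacitances at 20 and 60 C
pct = enhancement_pct(c60, c20)
```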

  8. Green synthesis of boron doped graphene and its application as high performance anode material in Li ion battery

    Energy Technology Data Exchange (ETDEWEB)

    Sahoo, Madhumita; Sreena, K.P.; Vinayan, B.P.; Ramaprabhu, S., E-mail: ramp@iitm.ac.in

    2015-01-15

    Graphical abstract: Boron doped graphene (B-G), synthesized by a simple hydrogen-induced reduction technique using boric acid as the boron precursor, has a more uneven surface as a result of the smaller bonding distance of boron compared to carbon, and showed higher capacity and rate capability than pristine graphene as an anode material for Li ion battery applications. - Abstract: The present work demonstrates a facile, large-scale, catalyst-free and green synthesis route to boron doped graphene (B-G) and its use as a high performance anode material for Li ion battery (LIB) applications. Boron atoms were doped into the graphene framework with an atomic percentage of 5.93% via a hydrogen-induced thermal reduction technique using graphite oxide and boric acid as precursors. Various characterization techniques were used to confirm the boron doping in the graphene sheets. B-G as an anode material shows a discharge capacity of 548 mAh g⁻¹ at 100 mA g⁻¹ after 30 cycles. At a high current density of 1 A g⁻¹, B-G as an anode material enhances the specific capacity by about 1.7 times compared to pristine graphene. The present study shows a simple way of doping boron into graphene, leading to enhanced Li ion adsorption due to the change in electronic states.

  9. Green synthesis of boron doped graphene and its application as high performance anode material in Li ion battery

    International Nuclear Information System (INIS)

    Sahoo, Madhumita; Sreena, K.P.; Vinayan, B.P.; Ramaprabhu, S.

    2015-01-01

    Graphical abstract: Boron doped graphene (B-G), synthesized by a simple hydrogen-induced reduction technique using boric acid as the boron precursor, has a more uneven surface as a result of the smaller bonding distance of boron compared to carbon, and showed higher capacity and rate capability than pristine graphene as an anode material for Li ion battery applications. - Abstract: The present work demonstrates a facile, large-scale, catalyst-free and green synthesis route to boron doped graphene (B-G) and its use as a high performance anode material for Li ion battery (LIB) applications. Boron atoms were doped into the graphene framework with an atomic percentage of 5.93% via a hydrogen-induced thermal reduction technique using graphite oxide and boric acid as precursors. Various characterization techniques were used to confirm the boron doping in the graphene sheets. B-G as an anode material shows a discharge capacity of 548 mAh g⁻¹ at 100 mA g⁻¹ after 30 cycles. At a high current density of 1 A g⁻¹, B-G as an anode material enhances the specific capacity by about 1.7 times compared to pristine graphene. The present study shows a simple way of doping boron into graphene, leading to enhanced Li ion adsorption due to the change in electronic states.

  10. Development of CSS-42L{trademark}, a high performance carburizing stainless steel for high temperature aerospace applications

    Energy Technology Data Exchange (ETDEWEB)

    Burrier, H.I.; Milam, L. [Timken Co., Canton, OH (United States); Tomasello, C.M.; Balliett, S.A.; Maloney, J.L. [Latrobe Steel Co., Latrobe, PA (United States); Ogden, W.P. [MPB Corp., Lebanon, NH (United States)

    1998-12-31

    Today's aerospace engineering challenges demand materials which can operate under conditions of temperature extremes, high loads and harsh, corrosive environments. This paper presents a technical overview of the ongoing development of CSS-42L (US Patent No. 5,424,028), a case-carburizable stainless steel suitable for use in applications up to 427 C, particularly high performance rolling element bearings, gears, shafts and fasteners. The nominal chemistry of CSS-42L includes (by weight): 0.12% carbon, 14.0% chromium, 0.60% vanadium, 2.0% nickel, 4.75% molybdenum and 12.5% cobalt. Careful balancing of these components, combined with VIM-VAR melting, produces an alloy that can be carburized and heat treated to achieve a high surface hardness (>58 HRC at 1 mm (0.040 in) depth) with excellent corrosion resistance. The hot hardness of the carburized case is equal to or better than that of all competitive grades, exceeding 60 HRC at 427 C. The fracture toughness and impact resistance of the heat treated core material have likewise been evaluated in detail and found to be better than those of M50-NiL steel. The corrosion resistance has been shown to be equivalent to that of 440C steel in tests performed to date.

  11. Towards Highly Performing and Stable PtNi Catalysts in Polymer Electrolyte Fuel Cells for Automotive Application

    Directory of Open Access Journals (Sweden)

    Sabrina C. Zignani

    2017-03-01

    In order to help the introduction of polymer electrolyte fuel cells (PEFCs) onto the automotive market, it is mandatory to develop highly performing and stable catalysts. The main objective of this work is to investigate PtNi/C catalysts in a PEFC under low relative humidity and pressure conditions, more representative of automotive applications. Carbon supported PtNi nanoparticles were prepared by reduction of metal precursors with formic acid and successive thermal and leaching treatments. The effect of the chemical composition, structure and surface characteristics of the synthesized samples on their electrochemical behavior was investigated. The catalyst with the larger Pt content (Pt3Ni2/C) presented the highest catalytic activity (lowest potential losses in the activation region) among the synthesized bimetallic PtNi catalysts and the commercial Pt/C used as reference material, after testing at high temperature (95 °C) and low humidification (50%) conditions relevant to automotive applications, showing a cell potential (ohmic drop-free) of 0.82 V at 500 mA·cm−2. In order to assess the stability of the electro-catalysts, accelerated degradation tests were carried out by cycling the cell potential between 0.6 V and 1.2 V. By comparing the electrochemical and physico-chemical parameters at the beginning of life (BoL) and end of life (EoL), it was demonstrated that the Pt1Ni1/C catalyst was the most stable of the series, with only a 2% loss of voltage at 200 mA·cm−2 and 12.5% at 950 mA·cm−2. However, further improvements are needed to produce durable catalysts.

  12. Application of denaturing high-performance liquid chromatography for monitoring sulfate-reducing bacteria in oil fields.

    Science.gov (United States)

    Priha, Outi; Nyyssönen, Mari; Bomberg, Malin; Laitila, Arja; Simell, Jaakko; Kapanen, Anu; Juvonen, Riikka

    2013-09-01

    Sulfate-reducing bacteria (SRB) participate in microbially induced corrosion (MIC) of equipment and H2S-driven reservoir souring in oil field sites. Successful management of industrial processes requires methods that allow robust monitoring of microbial communities. This study investigated the applicability of denaturing high-performance liquid chromatography (DHPLC) targeting the dissimilatory sulfite reductase β-subunit (dsrB) gene for monitoring SRB communities in oil field samples from the North Sea, the United States, and Brazil. Fifteen of the 28 screened samples gave a positive result in real-time PCR assays, containing 9 × 10¹ to 6 × 10⁵ dsrB gene copies ml⁻¹. DHPLC and denaturing gradient gel electrophoresis (DGGE) community profiles of the PCR-positive samples shared an overall similarity; both methods revealed the same samples to have the lowest and highest diversity. The SRB communities were diverse, and different dsrB compositions were detected at different geographical locations. The identified dsrB gene sequences belonged to several phylogenetic groups, such as Desulfovibrio, Desulfococcus, Desulfomicrobium, Desulfobulbus, Desulfotignum, Desulfonatronovibrio, and Desulfonauticus. DHPLC showed an advantage over DGGE in that the community profiles were very reproducible from run to run, and the resolved gene fragments could be collected using an automated fraction collector and sequenced without a further purification step. DGGE, on the other hand, included casting of gradient gels, and several rounds of rerunning, excising, and reamplification of bands were needed for successful sequencing. In summary, DHPLC proved to be a suitable tool for routine monitoring of the diversity of SRB communities in oil field samples.

  13. Efficient Use of Distributed Systems for Scientific Applications

    Science.gov (United States)

    Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques

    2000-01-01

    Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency of up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with element counts ranging from 11,451 (Barth4) to 30,269 (Barth5). Future work with PART entails using the tool with an integrated application requiring
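    The core idea of the record above (simulated-annealing partitioning that weighs element work by per-processor speed and penalizes cut edges) can be sketched as follows. This is not PART's actual algorithm or API; the cost model, parameter names, and cooling schedule are illustrative assumptions:

    ```python
    import math
    import random

    def partition_cost(assignment, element_work, proc_speed, edges, comm_weight=0.1):
        """Cost model: the slowest processor's compute time plus a penalty
        for every mesh edge cut by the partition (communication)."""
        load = [0.0] * len(proc_speed)
        for elem, proc in enumerate(assignment):
            load[proc] += element_work[elem] / proc_speed[proc]  # faster CPU -> less time
        cut = sum(1 for a, b in edges if assignment[a] != assignment[b])
        return max(load) + comm_weight * cut

    def anneal_partition(element_work, proc_speed, edges, steps=4000, seed=0):
        """Simulated annealing: move one element at a time, always accept
        improvements, sometimes accept worse moves while the temperature cools."""
        rng = random.Random(seed)
        n, nproc = len(element_work), len(proc_speed)
        assignment = [rng.randrange(nproc) for _ in range(n)]
        cost = partition_cost(assignment, element_work, proc_speed, edges)
        best, best_cost = list(assignment), cost
        temp = 1.0
        for _ in range(steps):
            elem = rng.randrange(n)
            old = assignment[elem]
            assignment[elem] = rng.randrange(nproc)
            new_cost = partition_cost(assignment, element_work, proc_speed, edges)
            if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / temp):
                cost = new_cost
                if cost < best_cost:
                    best, best_cost = list(assignment), cost
            else:
                assignment[elem] = old  # reject the move
            temp *= 0.999  # geometric cooling schedule
        return best, best_cost
    ```

    With a heterogeneous speed vector such as `[1.0, 2.0]`, the cost model naturally steers more elements onto the faster processor, which is the behavior the record attributes to PART.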

  14. Techniques and tools for measuring energy efficiency of scientific software applications

    CERN Document Server

    Abdurachmanov, David; Eulisse, Giulio; Knight, Robert; Niemi, Tapio; Nurminen, Jukka K.; Nyback, Filip; Pestana, Goncalo; Ou, Zhonghong; Khan, Kashif

    2014-01-01

    The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy-efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing alternate architectures, such as low power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been successfully used in mobile applications with low I/O and memory demands, it is unclear if they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running o...

  15. High-Performance Networking

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into what is known today as standard computer network communication. It will be followed by a far more technical part that uses the high performance computer network standards of the 90's, with 1 Gbit/s systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols as well as relevant details of Wide Area Network (WAN) standards, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  16. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  17. Stationary and through-flow radiochemical detectors in cooperation with high performance liquid chromatography: Application in biochemistry

    International Nuclear Information System (INIS)

    Kehr, J.

    1986-01-01

    A review article is presented, containing some original experimental data and discussing the usability of radiochemical detection of labelled compounds in high performance liquid chromatography. The stationary and through-flow types of detection are compared with respect to efficiency, chromatographic zone resolution, usability in biochemical research, and current trends in the development of liquid chromatography. (author). 3 figs., 1 tab., 19 refs

  18. Comparative Study of Continuous Pralidoxime Infusion versus Intermittent Dosing: Application of High-Performance Liquid Chromatography Method on Serum of Organophosphate Poisoned Patients

    Directory of Open Access Journals (Sweden)

    Girish Thunga

    2013-09-01

    How to cite this article: Thunga G, Pandey S, Nair S, Mylapuri R, Vidyasagar S, Kunhikatta V, et al. Comparative Study of Continuous Pralidoxime Infusion versus Intermittent Dosing: Application of High-Performance Liquid Chromatography Method on Serum of Organophosphate Poisoned Patients. Asia Pac J Med Toxicol 2013;2:105-10.

  19. Design strategy for air-stable organic semiconductors applicable to high-performance field-effect transistors

    OpenAIRE

    Kazuo Takimiya et al

    2007-01-01

    The electronic structure of the air-stable, high-performance organic field-effect transistor (OFET) material 2,7-diphenyl[1]benzothieno[3,2-b]benzothiophene (DPh-BTBT) was discussed based on molecular orbital calculations. It was suggested that the stability originates from a relatively low-lying HOMO level, despite the fact that the molecule contains a highly π-extended aromatic core ([1]benzothieno[3,2-b]benzothiophene, BTBT) with four fused aromatic rings, like naphthacene. This is rationaliz...

  20. High performance liquid chromatographic separation of polycyclic aromatic hydrocarbons on microparticulate pyrrolidone and application to the analysis of shale oil

    International Nuclear Information System (INIS)

    Mourey, T.H.; Siggia, S.; Uden, P.C.; Crowley, R.J.

    1980-01-01

    A chemically bonded pyrrolidone substrate is used for the high performance liquid chromatographic separation of polycyclic aromatic hydrocarbons. The cyclic amide phase interacts electronically with the polycyclic aromatic hydrocarbons in both the normal and reversed phase modes. Separation is effected according to the number of aromatic rings and the type of ring condensation. Information obtained is very different from that observed on hydrocarbon substrates, and thus these phases can be used in a complementary fashion to give a profile of polycyclic aromatics in shale oil samples. 7 figures, 1 table

  1. An ontology model for execution records of Grid scientific applications

    NARCIS (Netherlands)

    Baliś, B.; Bubak, M.

    2008-01-01

    Records of past application executions are particularly important in the case of loosely-coupled, workflow driven scientific applications which are used to conduct in silico experiments, often on top of Grid infrastructures. In this paper, we propose an ontology-based model for storing and querying

  2. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  3. Scalability of Parallel Scientific Applications on the Cloud

    Directory of Open Access Journals (Sweden)

    Satish Narayana Srirama

    2011-01-01

    Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel scientific applications onto the cloud, we deployed several benchmark applications, such as matrix–vector operations and the NAS parallel benchmarks, as well as DOUG (Domain decomposition On Unstructured Grids), on the cloud. DOUG is an open source software package for the parallel iterative solution of very large sparse systems of linear equations. The detailed analysis of DOUG on the cloud showed that parallel applications benefit considerably and scale reasonably on the cloud. We also observed the limitations of the cloud and compared its performance with that of a cluster. However, to run scientific applications efficiently on cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, like the MapReduce framework. Several iterative and embarrassingly parallel algorithms were reduced to the MapReduce model and their performance was measured and analyzed. The analysis showed that Hadoop MapReduce has significant problems with iterative methods, while it suits embarrassingly parallel algorithms well. Scientific computing often uses iterative methods to solve large problems. Thus, for scientific computing on the cloud, this paper raises the necessity for better frameworks or optimizations for MapReduce.
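    The contrast the record draws can be sketched functionally: an embarrassingly parallel computation is a single map pass followed by one reduce, whereas an iterative solver repeats a map+reduce pair every iteration — and on Hadoop each such iteration becomes a separate job that re-reads its input from disk. A minimal illustration (not the benchmark code from the paper; function names are ours):

    ```python
    from functools import reduce

    # Embarrassingly parallel: one map pass, one reduce -- fits MapReduce well.
    def parallel_sum_of_squares(xs):
        mapped = map(lambda x: x * x, xs)          # map phase (trivially parallel)
        return reduce(lambda a, b: a + b, mapped)  # reduce phase

    # Iterative: every loop iteration is a full map+reduce pass, so on Hadoop
    # each one would be launched as a separate job with its own I/O overhead.
    def iterative_mean_fixpoint(xs, iters=3):
        est = 0.0
        for _ in range(iters):  # each iteration = one MapReduce job
            contribs = map(lambda x: x - est, xs)                        # map
            correction = reduce(lambda a, b: a + b, contribs) / len(xs)  # reduce
            est += correction
        return est
    ```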

  4. Efficient preparation of highly hydrogenated graphene and its application as a high-performance anode material for lithium ion batteries

    Science.gov (United States)

    Chen, Wufeng; Zhu, Zhiye; Li, Sirong; Chen, Chunhua; Yan, Lifeng

    2012-03-01

    A novel method has been developed to prepare hydrogenated graphene (HG) via a direct synchronized reduction and hydrogenation of graphene oxide (GO) in an aqueous suspension under ⁶⁰Co gamma ray irradiation at room temperature. GO can be reduced by the aqueous electrons (e⁻aq) while the hydrogenation takes place due to the hydrogen radicals formed in situ under irradiation. The maximum hydrogen content of the as-prepared highly hydrogenated graphene (HHG) is found to be 5.27 wt% with H/C = 0.76. The yield of the target product is on the gram scale. The as-prepared HHG also shows high performance as an anode material for lithium ion batteries.

  5. Application of solvent floatation to separation and determination of triazine herbicides in honey by high-performance liquid chromatography.

    Science.gov (United States)

    Wang, Kun; Jiang, Jia; Lv, Xinping; Zang, Shuang; Tian, Sizhu; Zhang, Hanqi; Yu, Aimin; Zhang, Ziwei; Yu, Yong

    2018-03-01

    Based on the foaming property of honey, a rapid, simple, and effective solvent floatation (SF) method was developed and applied for the first time to the extraction and separation of triazine herbicides in honey. The analytes were determined by high-performance liquid chromatography. Parameters affecting the extraction efficiency, such as the type and volume of extraction solvent, type of salt, amount of (NH4)2SO4, pH value of the sample solution, gas flow rate, and floatation time, were investigated and optimized. The limits of detection for the analytes are in the range of 0.16–0.56 μg kg⁻¹. The recoveries and relative standard deviations for determining triazines in five real honey samples are in the range of 78.2–112.9% and 0.2–9.2%, respectively.
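    The recovery and relative standard deviation figures quoted in such validation studies follow standard definitions (recovery = amount found / amount spiked × 100; RSD = sample standard deviation / mean × 100). A small sketch of those two computations, with hypothetical replicate data (not from the paper):

    ```python
    def recovery_percent(found, spiked):
        """Spike recovery: amount found divided by amount spiked, in percent."""
        return found / spiked * 100

    def rsd_percent(values):
        """Relative standard deviation: sample standard deviation over mean, in percent."""
        n = len(values)
        mean = sum(values) / n
        sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
        return sd / mean * 100

    # Hypothetical replicate measurements of a 10 ug/kg spike
    replicates = [9.1, 9.4, 8.9]
    mean_recovery = recovery_percent(sum(replicates) / 3, 10.0)
    precision = rsd_percent(replicates)
    ```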

  6. Alkylated selenophene-based ladder-type monomers via a facile route for high performance thin-film transistor applications

    KAUST Repository

    Fei, Zhuping

    2017-05-26

    We report the synthesis of two new selenophene containing ladder-type monomers, cyclopentadiselenophene (CDS) and indacenodiselenophene (IDSe), via a twofold and fourfold Pd-catalyzed coupling with a 1,1-diborylmethane derivative. Co-polymers with benzothiadiazole (BT) were prepared in high yield by Suzuki polymerization and exhibited excellent solubility in a range of non-chlorinated solvents. The CDS co-polymer exhibited a band gap of just 1.18 eV, which is amongst the lowest reported for donor-acceptor polymers. Thin-film transistors were fabricated using environmentally benign, non-chlorinated solvents, with the CDS and IDSe co-polymers exhibiting hole mobility up to 0.15 and 6.4 cm²/Vs, respectively. This high performance was achieved without the undesirable peak in mobility often observed at low gate voltages due to parasitic contact resistance.

  7. Alkylated selenophene-based ladder-type monomers via a facile route for high performance thin-film transistor applications

    KAUST Repository

    Fei, Zhuping; Han, Yang; Gann, Eliot; Hodsden, Thomas; Chesman, Anthony; McNeill, Christopher R.; Anthopoulos, Thomas D.; Heeney, Martin

    2017-01-01

    We report the synthesis of two new selenophene containing ladder-type monomers, cyclopentadiselenophene (CDS) and indacenodiselenophene (IDSe), via a twofold and fourfold Pd-catalyzed coupling with a 1,1-diborylmethane derivative. Co-polymers with benzothiadiazole (BT) were prepared in high yield by Suzuki polymerization and exhibited excellent solubility in a range of non-chlorinated solvents. The CDS co-polymer exhibited a band gap of just 1.18 eV, which is amongst the lowest reported for donor-acceptor polymers. Thin-film transistors were fabricated using environmentally benign, non-chlorinated solvents, with the CDS and IDSe co-polymers exhibiting hole mobility up to 0.15 and 6.4 cm²/Vs, respectively. This high performance was achieved without the undesirable peak in mobility often observed at low gate voltages due to parasitic contact resistance.

  8. Scientific production and technological production: transforming a scientific paper into patent applications.

    Science.gov (United States)

    Dias, Cleber Gustavo; Almeida, Roberto Barbosa de

    2013-01-01

    In recent years, Brazil has achieved a scientific production that is well recognized internationally, in several areas of knowledge, as shown by the impact of its publications in important events and especially in widely circulated indexed journals. On the other hand, the country does not seem to be moving in the same direction with regard to technological production and wealth creation from this established scientific development, particularly from applied research. The present paper addresses this issue and describes the main similarities and differences between a scientific paper and a patent application, in order to contribute to a better understanding of both types of documents, help researchers select results with technological potential, decide what is appropriate for industrial protection, and foster new business opportunities for each technology that has been created.

  9. Design strategy for air-stable organic semiconductors applicable to high-performance field-effect transistors

    Directory of Open Access Journals (Sweden)

    Kazuo Takimiya et al

    2007-01-01

    Full Text Available The electronic structure of the air-stable, high-performance organic field-effect transistor (OFET) material 2,7-diphenyl[1]benzothieno[3,2-b]benzothiophene (DPh-BTBT) was discussed based on molecular orbital calculations. It was suggested that the stability originates from a relatively low-lying HOMO level, despite the fact that the molecule contains a highly π-extended aromatic core ([1]benzothieno[3,2-b]benzothiophene, BTBT) with four fused aromatic rings, like naphthacene. This is rationalized by the consideration that the BTBT core is not isoelectronic with naphthacene but with chrysene, a cata-condensed phene with four benzene rings. It is well known that acene-type compounds are unstable among structural isomers with the same number of benzene rings. Therefore, polycyclic aromatic compounds possessing a phene substructure will be good candidates for stable organic semiconductors. Considering synthetic ease, we suggest that the BTBT substructure is the molecular structure of choice for developing air-stable organic semiconductors.

  10. Application of high-performance liquid chromatography to the determination of glyoxylate synthesis in chick embryo liver.

    Science.gov (United States)

    Qureshi, A A; Elson, C E; Lebeck, L A

    1982-11-19

    The isolation and identification of three major alpha-keto end products (glyoxylate, pyruvate, alpha-ketoglutarate) of the isocitrate lyase reaction in 18-day chick embryo liver have been described. This was accomplished by the separation of these alpha-keto acids as their 2,4-dinitrophenylhydrazones (DNPHs) by high-performance liquid chromatography (HPLC). The DNPHs of alpha-keto acids were eluted with an isocratic solvent system of methanol-water-acetic acid (60:38.5:1.5) containing 5 mM tetrabutylammonium phosphate from a reversed-phase ultrasphere C18 (IP) and from a radial compression C18 column. The separation can be completed on the radial compression column within 15-20 min as compared to 30-40 min with a conventional reversed-phase column. Retention times and peak areas were integrated for both the assay samples and reference compounds. A relative measure of alpha-keto acid in the peak was calculated by comparison with the standard. The identification of each peak was done on the basis of retention time matching, co-chromatography with authentic compounds, and stopped flow UV-VIS scanning between 240 and 440 nm. Glyoxylate represented 5% of the total product of the isocitrate lyase reaction. Day 18 parallels the peak period of embryonic hepatic glycogenesis which occurs at a time when the original egg glucose reserve has been depleted.

  11. Application of ultra-high performance supercritical fluid chromatography for the determination of carotenoids in dietary supplements.

    Science.gov (United States)

    Li, Bing; Zhao, Haiyan; Liu, Jing; Liu, Wei; Fan, Sai; Wu, Guohua; Zhao, Rong

    2015-12-18

    A quick and simple ultra-high performance supercritical fluid chromatography-photodiode array detector method was developed and validated for the simultaneous determination of 9 carotenoids in dietary supplements. The influences of stationary phase, co-solvent, pressure, temperature and flow rate on the separation of carotenoids were evaluated. The separation of the carotenoids was carried out using an Acquity UPC² HSS C18 SB column (150 mm × 3.0 mm, 1.8 μm) by gradient elution with carbon dioxide and a 1:2 (v:v) methanol/ethanol mixture. The column temperature was set to 35 °C and the backpressure was 15.2 MPa. Under these conditions, 9 carotenoids and the internal standard, β-apo-8'-carotenal, were successfully separated within 10 min. The correlation coefficients (R²) of the calibration curves were all above 0.997, the limits of detection for the 9 carotenoids were in the range of 0.33–1.08 μg/mL, and the limits of quantification were in the range of 1.09–3.58 μg/mL. The mean recoveries were from 93.4% to 109.5% at different spiking levels, and the relative standard deviations were between 0.8% and 6.0%. This method was successfully applied to the determination of 9 carotenoids in commercial dietary supplements. Copyright © 2015 Elsevier B.V. All rights reserved.
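    The record reports calibration R², LOD and LOQ figures without stating the formulas. A common convention (e.g. the ICH-style LOD = 3.3 σ/S and LOQ = 10 σ/S, with σ the residual standard deviation of the calibration line and S its slope) can be sketched as follows; this is an assumption about the method, not the authors' documented procedure:

    ```python
    def linear_fit(x, y):
        """Least-squares slope, intercept and R^2 for a calibration curve."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        r2 = 1 - ss_res / ss_tot if ss_tot else 1.0
        return slope, intercept, r2

    def lod_loq(x, y):
        """LOD = 3.3 sigma/S and LOQ = 10 sigma/S, with sigma the residual
        standard deviation of the calibration line and S its slope."""
        slope, intercept, _ = linear_fit(x, y)
        resid = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
        sigma = (sum(r * r for r in resid) / (len(x) - 2)) ** 0.5
        return 3.3 * sigma / slope, 10 * sigma / slope
    ```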

  12. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur. Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  13. Application of high performance liquid chromatography for the profiling of complex chemical mixtures with the aid of chemometrics.

    Science.gov (United States)

    Ni, Yongnian; Zhang, Liangsheng; Churchill, Jane; Kokot, Serge

    2007-06-15

    In this paper, chemometrics methods were applied to resolve the high performance liquid chromatography (HPLC) fingerprints of complex, many-component substances, in order to compare samples within a batch from a given manufacturer or between different producers. As an example of such complex substances, we used a common Chinese traditional medicine, Huoxiang Zhengqi Tincture (HZT), for this research. Twenty-one samples, each representing a separate HZT production batch from one of three manufacturers, were analyzed by HPLC with the aid of a diode array detector (DAD). An Agilent Zorbax Eclipse XDB-C18 column with an Agilent Zorbax high pressure reliance cartridge guard-column was used. The mobile phase consisted of water (A) and methanol (B) with a gradient program of 25–65% (v/v, B) during 0–30 min, 65–55% (v/v, B) during 30–35 min and 55–100% (v/v, B) during 35–60 min (flow rate, 1.0 ml min⁻¹; injection volume, 20 μl; column temperature, ambient). The detection wavelength was adjusted for maximum sensitivity at different time periods. A peak area matrix of 21 objects × 14 HPLC variables was obtained by sampling each chromatogram at 14 common retention times. Similarities were then calculated to discriminate the batch-to-batch samples, and a more informative multi-criteria decision making (MCDM) methodology, PROMETHEE and GAIA, was applied to extract more information from the chromatograms in order to rank and compare the complex HZT profiles. The results showed that with the MCDM analysis it was possible to match and discriminate correctly the batch samples from the three different manufacturers. Fourier transform infrared (FT-IR) spectra taken from samples from several batches were compared, by the common similarity method, with the HPLC results. It was found that the FT-IR spectra did not discriminate the samples from the different batches.
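    The "similarities" computed between the 21 fingerprint vectors (one per batch, sampled at 14 common retention times) are typically a cosine or correlation coefficient between peak-area vectors in such fingerprint studies; the record does not specify which. A sketch under that assumption, with illustrative function names:

    ```python
    def cosine_similarity(a, b):
        """Cosine similarity between two chromatographic fingerprints
        (vectors of peak areas at common retention times)."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = sum(x * x for x in a) ** 0.5
        norm_b = sum(y * y for y in b) ** 0.5
        return dot / (norm_a * norm_b)

    def similarity_matrix(fingerprints):
        """Pairwise similarities for a batch of samples, e.g. 21 fingerprints
        of 14 peak areas each, as in the study above."""
        return [[cosine_similarity(a, b) for b in fingerprints] for a in fingerprints]
    ```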

  14. Determination of metabolite of nicergoline in human plasma by high-performance liquid chromatography and its application in pharmacokinetic studies

    Directory of Open Access Journals (Sweden)

    Rong Zheng

    2012-02-01

    Full Text Available A fast, simple and sensitive high performance liquid chromatographic (HPLC) method has been developed for the determination of 10α-methoxy-6-methyl ergoline-8β-methanol (MDL), a main metabolite of nicergoline, in human plasma. One-step liquid–liquid extraction (LLE) with diethyl ether was employed as the sample preparation method. Tizanidine hydrochloride was selected as the internal standard (IS). Analysis was carried out on a Diamonsil ODS column (150 mm×4.6 mm, 5 μm) using acetonitrile–ammonium acetate (0.1 mol/L) (15/85, v/v) as the mobile phase at a detection wavelength of 224 nm. The calibration curves were linear over the range of 2.288–73.2 ng/mL with a lower limit of quantitation (LLOQ) of 2.288 ng/mL. The intra- and inter-day precision values were below 13% and the recoveries were from 74.47% to 83.20% at three quality control levels. The method described herein was successfully applied in a randomized crossover bioequivalence study of two different nicergoline preparations after administration of 30 mg in 20 healthy volunteers. Keywords: Nicergoline, 10α-methoxy-6-methylergoline-8β-methanol (MDL), HPLC, Plasma-drug concentration, Bioequivalence study
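
    Quantitation against a linear calibration curve, as reported above, amounts to fitting peak-area (or area-ratio) responses of the standards and inverting the fit for unknowns. A minimal sketch, using simulated standards spanning the reported 2.288–73.2 ng/mL range (the slope and intercept values are illustrative, not the paper's):

```python
import numpy as np

# Hypothetical calibration standards (ng/mL) spanning the reported
# linear range, with simulated analyte/IS peak-area ratios
conc = np.array([2.288, 4.576, 9.15, 18.3, 36.6, 73.2])
ratio = 0.021 * conc + 0.003  # ideal linear response, for illustration only

slope, intercept = np.polyfit(conc, ratio, 1)

def back_calculate(peak_area_ratio):
    """Convert a measured analyte/IS peak-area ratio to a concentration."""
    return (peak_area_ratio - intercept) / slope

print(back_calculate(0.021 * 10 + 0.003))  # ~10 ng/mL
```

    In validated bioanalytical methods the back-calculated standards must fall within set accuracy limits, and samples below the LLOQ (here 2.288 ng/mL) are reported as not quantifiable rather than extrapolated.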

  15. Award for Distinguished Scientific Applications of Psychology: Nancy E. Adler

    Science.gov (United States)

    American Psychologist, 2009

    2009-01-01

    Nancy E. Adler, winner of the Award for Distinguished Scientific Applications of Psychology, is cited for her research on reproductive health examining adolescent decision making with regard to contraception, conscious and preconscious motivations for pregnancy, and perception of risk for sexually transmitted diseases, and for her groundbreaking…

  16. Kelly D. Brownell: Award for Distinguished Scientific Applications of Psychology

    Science.gov (United States)

    American Psychologist, 2012

    2012-01-01

    Presents a short biography of Kelly D. Brownell, winner of the American Psychological Association's Award for Distinguished Scientific Applications of Psychology (2012). He won the award for outstanding contributions to our understanding of the etiology and management of obesity and the crisis it poses for the modern world. A seminal thinker in…

  17. Workshop on scientific and industrial applications of free electron lasers

    International Nuclear Information System (INIS)

    Difilippo, F.C.; Perez, R.B.

    1990-05-01

    A Workshop on Scientific and Industrial Applications of Free Electron Lasers was organized to address potential uses of a Free Electron Laser in the infrared wavelength region. A total of 13 speakers from national laboratories, universities, and industry gave seminars to an average audience of 30 persons during June 12 and 13, 1989. The areas covered were: Free Electron Laser Technology; Chemistry and Surface Science; Atomic and Molecular Physics; Condensed Matter and Biomedical Applications; Optical Damage; and Optoelectronics

  18. Techniques and tools for measuring energy efficiency of scientific software applications

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Niemi, Tapio; Pestana, Gonçalo; Khan, Kashif; Nurminen, Jukka K; Nyback, Filip; Ou, Zhonghong

    2015-01-01

    The scale of scientific High Performance Computing (HPC) and High Throughput Computing (HTC) has increased significantly in recent years, and is becoming sensitive to total energy use and cost. Energy-efficiency has thus become an important concern in scientific fields such as High Energy Physics (HEP). There has been a growing interest in utilizing alternate architectures, such as low power ARM processors, to replace traditional Intel x86 architectures. Nevertheless, even though such solutions have been successfully used in mobile applications with low I/O and memory demands, it is unclear whether they are suitable and more energy-efficient in the scientific computing environment. Furthermore, there is a lack of tools and experience to derive and compare power consumption between the architectures for various workloads, and eventually to support software optimizations for energy efficiency. To that end, we have performed several physical and software-based measurements of workloads from HEP applications running on ARM and Intel architectures, and compared their power consumption and performance. We leverage several profiling tools (both in hardware and software) to extract different characteristics of the power use. We report the results of these measurements and the experience gained in developing a set of measurement techniques and profiling tools to accurately assess the power consumption for scientific workloads. (paper)
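
    Software-based power measurement on Intel hardware of the kind described often relies on the RAPL energy counters the kernel exposes under /sys/class/powercap (e.g. intel-rapl:0/energy_uj). The sketch below (not the authors' tooling) shows the core arithmetic: differencing two cumulative microjoule readings taken before and after a workload, with a single counter wraparound handled; the wrap value used here is an assumption, as the real range is CPU-specific and exposed in max_energy_range_uj:

```python
RAPL_MAX_UJ = 2**32  # assumed wrap value for illustration; read the real
                     # one from .../intel-rapl:0/max_energy_range_uj

def energy_joules(before_uj, after_uj, max_uj=RAPL_MAX_UJ):
    """Energy consumed between two cumulative energy_uj counter readings,
    accounting for at most one counter wraparound."""
    delta = after_uj - before_uj
    if delta < 0:          # the counter wrapped during the workload
        delta += max_uj
    return delta / 1e6     # microjoules -> joules

# e.g. read /sys/class/powercap/intel-rapl:0/energy_uj before and after a run
print(energy_joules(1_000_000, 6_000_000))  # 5.0 J
```

    Dividing the resulting joules by the workload's wall-clock time gives average package power, which is how per-workload comparisons across architectures are typically framed.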

  19. The application of cloud computing to scientific workflows: a study of cost and performance.

    Science.gov (United States)

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  20. TiO_2 hierarchical hollow microspheres with different size for application as anodes in high-performance lithium storage

    International Nuclear Information System (INIS)

    Wang, Xiaobing; Meng, Qiuxia; Wang, Yuanyuan; Liang, Huijun; Bai, Zhengyu; Wang, Kui; Lou, Xiangdong; Cai, Bibo; Yang, Lin

    2016-01-01

    Graphical abstract: In lithium-ion battery applications, the microsphere size of TiO_2-HSs influences their transfer resistance and cycling performance more significantly than the secondary nanoparticle size and the crystallinity of TiO_2, so that larger TiO_2-HSs can retain high reversible capacities after 30 cycles. - Highlights: • Hierarchical hollow microspheres show a size effect in lithium-ion battery applications. • The microsphere size can significantly affect the cycling capacities of TiO_2. • The nanoparticle size affects the initial discharge capacity and lithium ion diffusion. • Controlling the microsphere size is more significant for improving TiO_2 cycling capacities. - Abstract: Nowadays, safety issues have greatly hindered the development of large-capacity lithium-ion batteries (LIBs), especially in electric vehicle applications. TiO_2 is a potential anode candidate for improving the safety of LIBs. However, it remains necessary to understand how to improve the performance of TiO_2 anodes in practical applications. Herein, we designed a contrast experiment using three sizes of TiO_2 hierarchical hollow microspheres (TiO_2-HSs). The results indicated that the cycling performance of TiO_2-HS anodes is affected by the microsphere size, while the nanoparticle size within the microspheres and the crystallinity of TiO_2 affect the initial discharge capacity and lithium ion diffusion; of these, the influence of the microsphere size is the more significant. This may provide a new strategy for improving the lithium-ion storage properties of TiO_2 anode materials in practical applications.

  1. Recent developments of the MOA thruster, a high performance plasma accelerator for nuclear power and propulsion applications

    International Nuclear Information System (INIS)

    Frischauf, N.; Hettmer, M.; Grassauer, A.; Bartusch, T.; Koudelka, O.

    2008-01-01

    More than 60 years after the late Nobel laureate Hannes Alfvén published a letter stating that oscillating magnetic fields can accelerate ionised matter via magneto-hydrodynamic interactions in a wave-like fashion, the technical implementation of Alfvén waves for propulsive purposes has been proposed, patented and examined for the first time by a group of inventors. The concept, which utilises Alfvén waves to accelerate ionised matter for propulsive purposes, is named MOA, the Magnetic field Oscillating Amplified thruster. Alfvén waves are generated by making use of two coils, one being permanently powered and serving also as a magnetic nozzle, the other being switched on and off cyclically, deforming the field lines of the overall system. It is this deformation that generates the Alfvén waves, which are then used to transport and compress the propulsive medium, in theory leading to a propulsion system with a much higher performance than any other electric propulsion system. Based on computer simulations, which were conducted to obtain a first estimate of the performance of the system, MOA is a highly flexible propulsion system whose performance parameters can easily be adapted by changing the mass flow and/or the power level. As such, the system is capable of delivering a maximum specific impulse of 13116 s (12.87 mN) at a power level of 11.16 kW, using Xe as propellant, but can also be attuned to provide a thrust of 236.5 mN (2411 s) at 6.15 kW of power. While space propulsion is expected to be the prime application for MOA, supported by numerous applications such as Solar and/or Nuclear Electric Propulsion, or even as an 'afterburner system' for Nuclear Thermal Propulsion, other terrestrial applications can be envisaged as well, making the system highly suited for a common space-terrestrial application research and utilization strategy. 
This paper presents the recent developments of the MOA Thruster R and D activities at QASAR, the company in
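
    The two operating points quoted in the abstract can be sanity-checked with the standard jet-power relation P_jet = F·g0·Isp/2 (exhaust velocity v_e = g0·Isp). The sketch below is a derived check, not a calculation from the paper; it shows that the high-Isp point implies a jet power of roughly 0.8 kW out of 11.16 kW of input, while the high-thrust point converts input power to jet power far more effectively:

```python
G0 = 9.80665  # standard gravity, m/s^2

def jet_power(thrust_n, isp_s):
    """Kinetic power carried by the exhaust jet: P = F * v_e / 2."""
    return thrust_n * G0 * isp_s / 2

def thrust_efficiency(thrust_n, isp_s, input_power_w):
    """Fraction of electrical input power that ends up as jet power."""
    return jet_power(thrust_n, isp_s) / input_power_w

# The two operating points quoted in the abstract
print(thrust_efficiency(12.87e-3, 13116, 11.16e3))  # high-Isp point
print(thrust_efficiency(236.5e-3, 2411, 6.15e3))    # high-thrust point
```

    Such a check is a routine way to compare claimed electric-propulsion operating points, since thrust, specific impulse and input power cannot all be chosen independently.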

  2. High performance sapphire windows

    Science.gov (United States)

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also significantly contributes to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  3. Applications of field-programmable gate arrays in scientific research

    CERN Document Server

    Sadrozinski, Hartmut F W

    2011-01-01

    Focusing on resource awareness in field-programmable gate array (FPGA) design, Applications of Field-Programmable Gate Arrays in Scientific Research covers the principle of FPGAs and their functionality. It explores a host of applications, ranging from small one-chip laboratory systems to large-scale applications in "big science." The book first describes various FPGA resources, including logic elements, RAM, multipliers, microprocessors, and content-addressable memory. It then presents principles and methods for controlling resources, such as process sequencing, location constraints, and in

  4. Cellulose nanocrystals in nanocomposite approach: Green and high-performance materials for industrial, biomedical and agricultural applications

    Science.gov (United States)

    Fortunati, E.; Torre, L.

    2016-05-01

    The need to both avoid wastes and find new renewable resources has led to new and promising research based on the possibility of revalorizing biomass to produce sustainable chemicals and/or materials, which may play a major role in replacing systems traditionally obtained from non-renewable sources. Most low-value biomass is termed lignocellulosic, referring to its main constituent biopolymers: cellulose, hemicelluloses and lignin. In this context, nanocellulose, and in particular cellulose nanocrystals (CNC), has gained considerable attention as a nanoreinforcement for polymer matrices, mainly biodegradable ones. Derived from the most abundant polymeric resource in nature and with inherent biodegradability, nanocellulose is an interesting nanofiller for the development of nanocomposites for industrial, biomedical and agricultural applications. Due to the high amount of hydroxyl groups on their surface, cellulose nanocrystals are easy to functionalize. Well dispersed CNC are able, in fact, to enhance several properties of polymers, e.g. thermal, mechanical, barrier and surface wettability properties, as well as the controlled release of active compounds and/or drugs. The main objective here is to give a general overview of CNC applications, summarizing our recent developments of bio-based nanocomposite formulations reinforced with cellulose nanocrystals extracted from different natural sources and/or wastes for the food packaging, medical and agricultural sectors.

  5. Strategies for application of scientific findings in prevention.

    Science.gov (United States)

    Wei, S H

    1995-07-01

    Dental research in the last 50 years has accomplished numerous significant advances in preventive dentistry, particularly in the area of research in fluorides, periodontal diseases, restorative dentistry, and dental materials, as well as craniofacial development and molecular biology. The transfer of scientific knowledge to clinical practitioners requires additional effort. It is the responsibility of the scientific communities to transfer the fruits of their findings to society through publications, conferences, media, and the press. Specific programs that the International Association for Dental Research (IADR) has developed to transmit science to the profession and the public have included science transfer seminars, the Visiting Lecture Program, and hands-on workshops. The IADR Strategic Plan also has a major outreach goal. In addition, the Federation Dentaire Internationale (FDI) and the World Health Organization (WHO) have initiated plans to celebrate World Health Day and the Year of Oral Health in 1994. These are important strategies for the application of scientific findings in prevention.

  6. Highly efficient synthesis of ordered nitrogen-doped mesoporous carbons with tunable properties and its application in high performance supercapacitors

    Science.gov (United States)

    Liu, Dan; Zeng, Chao; Qu, Deyu; Tang, Haolin; Li, Yu; Su, Bao-Lian; Qu, Deyang

    2016-07-01

    Nitrogen-doped ordered mesoporous carbons (OMCs) have been synthesized via an aqueous cooperative assembly route in the presence of basic amino acids acting as both polymerization catalysts and nitrogen dopants. This method allows the large-scale production of nitrogen-doped OMCs with tunable composition, structure and morphology while maintaining highly ordered mesostructures. For instance, the nitrogen content can be varied from ∼1 wt% to ∼6.3 wt% and the mesophase can be either 3-D body-centered cubic or 2-D hexagonal. The specific surface area for typical OMCs is around 600 m2 g-1, and further KOH activation can significantly enhance the surface area to 1866 m2 g-1 without destroying the ordered mesostructures. Benefiting from the hierarchically ordered porous structure, nitrogen-doping effect and large-scale production availability, the synthesized OMCs show great potential for supercapacitor applications. When measured in a symmetrical two-electrode configuration with an areal mass loading of ∼3 mg cm-2, the activated OMC exhibits high capacitance (186 F g-1 at 0.25 A g-1) and good rate capability (75% capacity retention at 20 A g-1) in ionic liquid electrolyte. Even when the mass loading is increased to ∼12 mg cm-2, the OMC electrode still yields a specific capacitance of 126 F g-1 at 20 A g-1.

  7. Electrochemical behavior of high performance on-chip porous carbon films for micro-supercapacitors applications in organic electrolytes

    Science.gov (United States)

    Brousse, K.; Huang, P.; Pinaud, S.; Respaud, M.; Daffos, B.; Chaudret, B.; Lethien, C.; Taberna, P. L.; Simon, P.

    2016-10-01

    Carbide derived carbons (CDCs) are promising materials for preparing integrated micro-supercapacitors, as on-chip CDC films are prepared via a process fully compatible with current silicon-based device technology. These films show good adherence on the substrate and high capacitance thanks to their unique nanoporous structure, which can be fine-tuned by adjusting the synthesis parameters during chlorination of the metallic carbide precursor. The carbon porosity is mostly related to the synthesis temperature, whereas the thickness of the films depends on the chlorination duration. Increasing the pore size allows the adsorption of large solvated ions from organic electrolytes and leads to higher energy densities. Here, we investigated the electrochemical behavior and performance of on-chip TiC-CDC in ionic liquid solvent mixtures of 1-ethyl-3-methylimidazolium tetrafluoroborate (EMIBF4) diluted in either acetonitrile or propylene carbonate via cyclic voltammetry and electrochemical impedance spectroscopy. Thin CDC films exhibited a typical capacitive signature and achieved 169 F cm-3 in both electrolytes; 65% of the capacitance was still delivered at 1 V s-1. On increasing the thickness of the films, EMI+ transport limitation was observed in the more viscous PC-based electrolyte. Nevertheless, the energy density reached 90 μW h cm-2 in 2M EMIBF4/ACN, confirming the interest of these CDC films for micro-supercapacitor applications.

  8. U.S. DOE Progress Towards Developing Low-Cost, High Performance, Durable Polymer Electrolyte Membranes for Fuel Cell Applications.

    Science.gov (United States)

    Houchins, Cassidy; Kleen, Greg J; Spendelow, Jacob S; Kopasz, John; Peterson, David; Garland, Nancy L; Ho, Donna Lee; Marcinkoski, Jason; Martin, Kathi Epping; Tyler, Reginald; Papageorgopoulos, Dimitrios C

    2012-12-18

    Low cost, durable, and selective membranes with high ionic conductivity are a priority need for wide-spread adoption of polymer electrolyte membrane fuel cells (PEMFCs) and direct methanol fuel cells (DMFCs). Electrolyte membranes are a major cost component of PEMFC stacks at low production volumes. PEMFC membranes also impose limitations on fuel cell system operating conditions that add system complexity and cost. Reactant gas and fuel permeation through the membrane leads to decreased fuel cell performance, loss of efficiency, and reduced durability in both PEMFCs and DMFCs. To address these challenges, the U.S. Department of Energy (DOE) Fuel Cell Technologies Program, in the Office of Energy Efficiency and Renewable Energy, supports research and development aimed at improving ion exchange membranes for fuel cells. For PEMFCs, efforts are primarily focused on developing materials for higher temperature operation (up to 120 °C) in automotive applications. For DMFCs, efforts are focused on developing membranes with reduced methanol permeability. In this paper, the recently revised DOE membrane targets, strategies, and highlights of DOE-funded projects to develop new, inexpensive membranes that have good performance in hot and dry conditions (PEMFC) and that reduce methanol crossover (DMFC) will be discussed.

  9. Multi-mode application of graphene quantum dots bonded silica stationary phase for high performance liquid chromatography.

    Science.gov (United States)

    Wu, Qi; Sun, Yaming; Zhang, Xiaoli; Zhang, Xia; Dong, Shuqing; Qiu, Hongdeng; Wang, Litao; Zhao, Liang

    2017-04-07

    Graphene quantum dots (GQDs), which possess hydrophobic, hydrophilic, π-π stacking and hydrogen bonding properties, have great prospects in HPLC. In this study, a novel GQDs bonded silica stationary phase was prepared and applied in multiple separation modes including normal phase, reversed phase and hydrophilic chromatography modes. Alkaloids, nucleosides and nucleobases were chosen as test compounds to evaluate the separation performance of this column in hydrophilic chromatographic mode. The tested polar compounds achieved baseline separation, and the resolutions reached 2.32, 4.62, 7.79 and 1.68 for thymidine, uridine, adenosine, cytidine and guanosine. This new column showed satisfactory chromatographic performance for anilines, phenols and polycyclic aromatic hydrocarbons in normal and reversed phase modes. Five anilines were completely separated within 10 min using a mobile phase containing only 10% methanol. The effect of water content, buffer concentration and pH on chromatographic separation was further investigated, finding that this new stationary phase showed a complex retention mechanism of partitioning, adsorption and electrostatic interaction in hydrophilic chromatography mode, and that multiple retention interactions such as π-π stacking and π-π electron-donor-acceptor interaction played an important role during the separation process. This GQDs bonded column, which allows the chromatography mode to be matched to the properties of the analytes, shows promise for practical application after further research. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. High-performance Cu nanoparticles/three-dimensional graphene/Ni foam hybrid for catalytic and sensing applications

    Science.gov (United States)

    Zhu, Long; Guo, Xinli; Liu, Yuanyuan; Chen, Zhongtao; Zhang, Weijie; Yin, Kuibo; Li, Long; Zhang, Yao; Wang, Zengmei; Sun, Litao; Zhao, Yuhong

    2018-04-01

    A novel hybrid of Cu nanoparticles/three-dimensional graphene/Ni foam (Cu NPs/3DGr/NiF) was prepared by chemical vapor deposition, followed by a galvanic displacement reaction in a Ni- and Cu-ion-containing salt solution through a one-step reaction. The as-prepared Cu NPs/3DGr/NiF hybrid is uniform, stable and recyclable, and exhibits an extraordinarily high catalytic efficiency for the reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP), with a reduction rate constant k = 0.05615 s-1 and a required time of ~30 s, as well as excellent properties for non-enzymatic amperometric sensing of hydrogen peroxide (H2O2), with a linear range of ~50 μM-9.65 mM, response time of ~3 s and detection limit of ~1 μM. The results indicate that the as-prepared Cu NPs/3DGr/NiF hybrid can be used to replace expensive noble metals in catalysis and sensing applications.

  11. U.S. DOE Progress Towards Developing Low-Cost, High Performance, Durable Polymer Electrolyte Membranes for Fuel Cell Applications

    Directory of Open Access Journals (Sweden)

    Dimitrios C. Papageorgopoulos

    2012-12-01

    Full Text Available Low cost, durable, and selective membranes with high ionic conductivity are a priority need for wide-spread adoption of polymer electrolyte membrane fuel cells (PEMFCs) and direct methanol fuel cells (DMFCs). Electrolyte membranes are a major cost component of PEMFC stacks at low production volumes. PEMFC membranes also impose limitations on fuel cell system operating conditions that add system complexity and cost. Reactant gas and fuel permeation through the membrane leads to decreased fuel cell performance, loss of efficiency, and reduced durability in both PEMFCs and DMFCs. To address these challenges, the U.S. Department of Energy (DOE) Fuel Cell Technologies Program, in the Office of Energy Efficiency and Renewable Energy, supports research and development aimed at improving ion exchange membranes for fuel cells. For PEMFCs, efforts are primarily focused on developing materials for higher temperature operation (up to 120 °C) in automotive applications. For DMFCs, efforts are focused on developing membranes with reduced methanol permeability. In this paper, the recently revised DOE membrane targets, strategies, and highlights of DOE-funded projects to develop new, inexpensive membranes that have good performance in hot and dry conditions (PEMFC) and that reduce methanol crossover (DMFC) will be discussed.

  12. High Performance Marine Vessels

    CERN Document Server

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMVs craft and the differences between them and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface Covers the full range of high performance marine vessel concepts Explains the historical development of various HPMVs Discusses ferries, racing and pleasure craft, as well as utility and military missions High Performance Marine Vessels is an ideal book for student...

  13. High Performance Macromolecular Material

    National Research Council Canada - National Science Library

    Forest, M

    2002-01-01

    .... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  14. A framework for distributed mixed-language scientific applications

    International Nuclear Information System (INIS)

    Quarrie, D.R.

    1996-01-01

    The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Broker and an Interface Definition Language (IDL). This project builds upon this architecture to establish a framework for the creation of mixed-language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons, together with the required C++ glue code, from an input IDL file that specifies the object interfaces. This generated code can be used directly for non-distributed mixed-language applications, or in conjunction with the C++ code generated by a commercial IDL compiler for distributed applications. A feasibility study is presently under way to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL. (author)

  15. Validation and application of a high-performance liquid chromatography--tandem mass spectrometry assay for mosapride in human plasma.

    Science.gov (United States)

    Ramakrishna, N V S; Vishwottam, K N; Manoj, S; Koteshwara, M; Chidambara, J; Varma, D P

    2005-09-01

    A simple, rapid, sensitive and specific liquid chromatography-tandem mass spectrometry method was developed and validated for quantification of mosapride (I), a novel and potent gastroprokinetic agent that enhances upper gastrointestinal motility by stimulating the 5-HT(4) receptor. The analyte and internal standard, tamsulosin (II), were extracted by liquid-liquid extraction with diethyl ether-dichloromethane (70:30, v/v) using a Glas-Col Multi-Pulse Vortexer. The chromatographic separation was performed on a reversed-phase Waters Symmetry C(18) column with a mobile phase of 0.03% formic acid-acetonitrile (10:90, v/v). The protonated analyte was quantitated in positive ionization mode by multiple reaction monitoring with a mass spectrometer. The mass transitions m/z 422.3 → 198.3 and m/z 409.1 → 228.1 were used to measure I and II, respectively. The assay exhibited a linear dynamic range of 0.5-100.0 ng/mL for mosapride in human plasma. The lower limit of quantitation was 500 pg/mL with a relative standard deviation of less than 15%. Acceptable precision and accuracy were obtained for concentrations over the standard curve ranges. A run time of 2.0 min for each sample made it possible to analyze a throughput of more than 400 human plasma samples per day. The validated method has been successfully used to analyze human plasma samples for application in pharmacokinetic, bioavailability or bioequivalence studies. Copyright (c) 2005 John Wiley & Sons, Ltd.

  16. Considerations on the application in supermarkets. The high performance air cooler in the course of time; Ueberlegungen fuer die Anwendung im Supermarkt. Der Hochleistungsluftkuehler im Wandel der Zeit

    Energy Technology Data Exchange (ETDEWEB)

    Lich, Mathias [GEA Kueba GmbH, Baierbrunn (Germany)

    2011-08-15

    In the last twenty years, the high-performance air cooler has undergone rapid development. Power, energy efficiency and compact size play an important role in its selection for supermarket applications. The development of EC fan technology shows that there is always potential for further product optimization. While twenty years ago an air cooler required a power consumption of 180 W for the fan, now significantly less than 100 W is necessary. Fan diameter, pipe diameter, shell size and all incorporated components have become more powerful and more efficient.

  17. High-Performance Operating Systems

    DEFF Research Database (Denmark)

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  18. Solar-driven Joule cycle reciprocating Ericsson engines for small scale applications. From improper operation to high performance

    International Nuclear Information System (INIS)

    Stanciu, Dorin; Bădescu, Viorel

    2017-01-01

    Highlights: • New dynamic model for a parabolic trough collector (PTC) coupled to an Ericsson engine (EE). • Design procedure for the PTC-EE system which avoids malfunction. • Variation of PTC-EE system performance during a day for different engine rotation speeds. • Strategy to switch between different rotation speeds to maximize daily output work. - Abstract: The paper focuses on a Joule cycle reciprocating Ericsson engine (JCREE) coupled with a solar parabolic trough collector (PTC). A small-scale application located at a mid Northern Hemisphere latitude (44°25″N) is considered. A new dynamic (time-dependent) model is developed and used to design the geometry and estimate the performance of the PTC-JCREE system under the most favorable weather conditions (i.e. a summer day and clear sky). The paper makes two main contributions. First, specific constraints on the design parameters have been identified in order to avoid improper JCREE operation, such as gas under-compression in the compressor cylinder and gas over-compression and/or over-expansion in the expander cylinder. Second, increasing the work generated per day requires a proper strategy for switching between different rotation speeds. Specific results are as follows. For the (reference) constant engine rotation speed of 480 rpm, the output work per day is 39,270 kJ and the overall efficiency is 0.134. The output work decreases with increasing rotation speed, since the operation interval during a day diminishes. A better operation strategy is to switch among three rotation speeds, namely 480, 540 and 600 rpm. In this case the output work is 40,322 kJ and the overall efficiency is 0.137. The performance improvement is quite small, and the reference constant rotation speed of 480 rpm may be a suitable choice, easier to use in practice. For both the constant and variable rotation speed strategies, the overall efficiency is almost constant along the effective operation time interval, which is from 8:46 to

  19. Methods for Specifying Scientific Data Standards and Modeling Relationships with Applications to Neuroscience

    Science.gov (United States)

    Rübel, Oliver; Dougherty, Max; Prabhat; Denes, Peter; Conant, David; Chang, Edward F.; Bouchard, Kristofer

    2016-01-01

    Neuroscience continues to experience a tremendous growth in data; in terms of the volume and variety of data, the velocity at which data is acquired, and in turn the veracity of data. These challenges are a serious impediment to sharing of data, analyses, and tools within and across labs. Here, we introduce BRAINformat, a novel data standardization framework for the design and management of scientific data formats. The BRAINformat library defines application-independent design concepts and modules that together create a general framework for standardization of scientific data. We describe the formal specification of scientific data standards, which facilitates sharing and verification of data and formats. We introduce the concept of Managed Objects, enabling semantic components of data formats to be specified as self-contained units, supporting modular and reusable design of data format components and file storage. We also introduce the novel concept of Relationship Attributes for modeling and use of semantic relationships between data objects. Based on these concepts we demonstrate the application of our framework to design and implement a standard format for electrophysiology data and show how data standardization and relationship-modeling facilitate data analysis and sharing. The format uses HDF5, enabling portable, scalable, and self-describing data storage and integration with modern high-performance computing for data-driven discovery. The BRAINformat library is open source, easy-to-use, and provides detailed user and developer documentation and is freely available at: https://bitbucket.org/oruebel/brainformat. PMID:27867355
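    The Managed Object and Relationship Attribute concepts can be made concrete with a small schematic; the following plain-Python sketch is illustrative only (the names and structure are assumptions, not the actual BRAINformat API, which stores these as HDF5 groups and attributes):

```python
# Schematic illustration in plain Python (NOT the actual BRAINformat API) of
# the two concepts above: a "Managed Object" that is self-describing, and a
# "Relationship Attribute" recording a typed link to another object. In the
# real format these would live as HDF5 groups and attributes.

def managed_object(name, object_type, data=None):
    """A self-contained unit: payload plus the metadata that describes it."""
    return {"name": name, "attrs": {"object_type": object_type}, "data": data}

raw = managed_object("/ephys/raw", "ElectrophysiologyData",
                     data=[0.1, 0.4, 0.2])
filtered = managed_object("/ephys/filtered", "ProcessedData",
                          data=[v * 0.5 for v in raw["data"]])

# Relationship Attribute: a typed, machine-readable link between objects,
# here recording that the filtered data was derived from the raw data.
filtered["attrs"]["relationship"] = {"type": "derived_from",
                                     "target": raw["name"]}
```

    Because the relationship is stored as data rather than as a naming convention, tools can traverse it programmatically, which is what makes analysis and sharing easier to automate.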

  20. Physics through the 1990s: Scientific interfaces and technological applications

    International Nuclear Information System (INIS)

    1986-01-01

    Physics traditionally serves mankind through its fundamental discoveries, which enrich our understanding of nature and the cosmos. While the basic driving force for physics research is intellectual curiosity and the search for understanding, the nation's support for physics is also motivated by strategic national goals, by the pride of world scientific leadership, by societal impact through symbiosis with other natural sciences, and through the stimulus of advanced technology provided by applications of physics. This Physics Survey volume looks outward from physics to report its profound impact on society and the economy through interactions at the interfaces with other natural sciences and through applications of physics to technology, medicine, and national defense

  1. Scientific production on the applicability of phenytoin in wound healing

    Directory of Open Access Journals (Sweden)

    Flávia Firmino

    2014-02-01

    Full Text Available Phenytoin is an anticonvulsant that has been used in wound healing. The objectives of this study were to describe how the scientific production presents the use of phenytoin as a healing agent and to discuss its applicability in wounds. A literature review and hierarchy analysis of evidence-based practices was performed. Eighteen articles were analyzed that tested the intervention in wounds such as leprosy ulcers, leg ulcers, diabetic foot ulcers, pressure ulcers, trophic ulcers, war wounds, burns, preparation of recipient graft area, radiodermatitis and post-extraction of melanocytic nevi. Systemic use of phenytoin in the treatment of fistulas and the hypothesis of topical use in the treatment of vitiligo were found. In conclusion, topical use of phenytoin is scientifically evidenced. However, robust research is needed to support a protocol for the use of phenytoin as another option of a healing agent in clinical practice.

  2. Physics through the 1990s: scientific interfaces and technological applications

    International Nuclear Information System (INIS)

    1986-01-01

    The volume examines the scientific interfaces and technological applications of physics. Twelve areas are dealt with: biological physics--biophysics, the brain, and theoretical biology; the physics-chemistry interface--instrumentation, surfaces, neutron and synchrotron radiation, polymers, organic electronic materials; materials science; geophysics--tectonics, the atmosphere and oceans, planets, drilling and seismic exploration, and remote sensing; computational physics--complex systems and applications in basic research; mathematics--field theory and chaos; microelectronics--integrated circuits, miniaturization, future trends; optical information technologies--fiber optics and photonics; instrumentation; physics applications to energy needs and the environment; national security--devices, weapons, and arms control; medical physics--radiology, ultrasonics, NMR, and photonics. An executive summary and many chapters contain recommendations regarding funding, education, industry participation, small-group university research and large facility programs, government agency programs, and computer database needs

  3. Workshop on scientific applications of short wavelength coherent light sources

    International Nuclear Information System (INIS)

    Spicer, W.; Arthur, J.; Winick, H.

    1993-02-01

    This report contains paper on the following topics: A 2 to 4nm High Power FEL On the SLAC Linac; Atomic Physics with an X-ray Laser; High Resolution, Three Dimensional Soft X-ray Imaging; The Role of X-ray Induced Damage in Biological Micro-imaging; Prospects for X-ray Microscopy in Biology; Femtosecond Optical Pulses?; Research in Chemical Physics Surface Science, and Materials Science, with a Linear Accelerator Coherent Light Source; Application of 10 GeV Electron Driven X-ray Laser in Gamma-ray Laser Research; Non-Linear Optics, Fluorescence, Spectromicroscopy, Stimulated Desorption: We Need LCLS' Brightness and Time Scale; Application of High Intensity X-rays to Materials Synthesis and Processing; LCLS Optics: Selected Technological Issues and Scientific Opportunities; Possible Applications of an FEL for Materials Studies in the 60 eV to 200 eV Spectral Region

  4. High performance conductometry

    International Nuclear Information System (INIS)

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  5. High performance systems

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.B. [comp.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  6. Danish High Performance Concretes

    DEFF Research Database (Denmark)

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University...... concretes, workability, ductility, and confinement problems....

  7. High performance homes

    DEFF Research Database (Denmark)

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    . Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  8. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  9. Vacuum insulation - Panel properties and building applications. HiPTI - High Performance Thermal Insulation - IEA/ECBCS Annex 39 - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Erb, M. (ed.)

    2005-12-15

    This paper takes a look at the properties of vacuum insulation panels (VIP), which were developed some time ago for use in appliances such as refrigerators and deep-freezers. Their insulation performance is five to ten times better than that of conventional insulation. The paper discusses the use of such panels in buildings to provide thin, highly-insulating constructions for walls, roofs and floors. The motivation for examining the applicability of high performance thermal insulation in buildings is discussed, including solutions where severe space limitations and other technical and aesthetic considerations exist. The use of nano-structured materials and laminated foils is examined and discussed. The questions arising from the use of such panels in buildings are discussed and the open issues and risks involved are examined. Finally, an outlook on the introduction of VIP technology is presented and quality assurance aspects are examined. This work was done within the framework of Annex 39 'High Performance Thermal Insulation' of the 'Energy Conservation in Buildings and Community Systems ECBCS' programme of the International Energy Agency IEA.

  10. Scientific Applications of Optical Instruments to Materials Research

    Science.gov (United States)

    Witherow, William K.

    1997-01-01

    Microgravity is a unique environment for materials and biotechnology processing. Microgravity minimizes or eliminates some of the effects that occur in one g. This can lead to the production of new materials or crystal structures. It is important to understand the processes that create these new materials. Thus, experiments are designed so that optical data collection can take place during the formation of the material. This presentation will discuss scientific application of optical instruments at MSFC. These instruments include a near-field scanning optical microscope, a miniaturized holographic system, and a phase-shifting interferometer.

  11. Applications of industrial computed tomography at Los Alamos Scientific Laboratory

    International Nuclear Information System (INIS)

    Kruger, R.P.; Morris, R.A.; Wecksung, G.W.

    1980-01-01

    A research and development program was begun three years ago at the Los Alamos Scientific Laboratory (LASL) to study nonmedical applications of computed tomography. This program had several goals. The first goal was to develop the necessary reconstruction algorithms to accurately reconstruct cross sections of nonmedical industrial objects. The second goal was to be able to perform extensive tomographic simulations to determine the efficacy of tomographic reconstruction with a variety of hardware configurations. The final goal was to construct an inexpensive industrial prototype scanner with a high degree of design flexibility. The implementation of these program goals is described

  12. Numerical Platon: A unified linear equation solver interface by CEA for solving open-source scientific applications

    International Nuclear Information System (INIS)

    Secher, Bernard; Belliard, Michel; Calvin, Christophe

    2009-01-01

    This paper describes a tool called 'Numerical Platon' developed by the French Atomic Energy Commission (CEA). It provides a freely available (GNU LGPL license) interface for coupling scientific computing applications to various freeware linear solver libraries (essentially PETSc, SuperLU and HyPre), together with some proprietary CEA solvers, for high-performance computers that may be used in industrial software written in various programming languages. This tool was developed as part of considerable efforts by the CEA Nuclear Energy Division in recent years to promote massively parallel software and off-the-shelf parallel tools to help develop new-generation simulation codes. After presenting the package architecture and the available algorithms, we show examples of how Numerical Platon is used in sequential and parallel CEA codes. Compared with in-house solvers, the gain in computation capacity and in parallel performance is notable, without considerable extra development cost.
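    The idea of a single solver entry point fronting multiple backend libraries can be sketched as follows. This is an illustrative dispatcher with one naive dense backend standing in for the real libraries; it is not Numerical Platon's actual API:

```python
# Sketch of a unified linear-solver interface: application code calls one
# solve() entry point and the backend is selected by name. The "gauss"
# backend below is a naive dense Gaussian elimination used as a stand-in.

def solve_gauss(A, b):
    """Dense Gaussian elimination with partial pivoting (illustrative only)."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest remaining pivot to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in reversed(range(n)):           # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# In the real tool this table would map names to PETSc, SuperLU or HyPre.
BACKENDS = {"gauss": solve_gauss}

def solve(A, b, backend="gauss"):
    return BACKENDS[backend](A, b)

x = solve([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

    The point of such an interface is that application code never changes when the solver library does; only the backend registration differs.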

  13. Numerical Platon: A unified linear equation solver interface by CEA for solving open-source scientific applications

    Energy Technology Data Exchange (ETDEWEB)

    Secher, Bernard [French Atomic Energy Commission (CEA), Nuclear Energy Division (DEN) (France); CEA Saclay DM2S/SFME/LGLS, Bat. 454, F-91191 Gif-sur-Yvette Cedex (France)], E-mail: bsecher@cea.fr; Belliard, Michel [French Atomic Energy Commission (CEA), Nuclear Energy Division (DEN) (France); CEA Cadarache DER/SSTH/LMDL, Bat. 238, F-13108 Saint-Paul-lez-Durance Cedex (France); Calvin, Christophe [French Atomic Energy Commission (CEA), Nuclear Energy Division (DEN) (France); CEA Saclay DM2S/SERMA/LLPR, Bat. 470, F-91191 Gif-sur-Yvette Cedex (France)

    2009-01-15

    This paper describes a tool called 'Numerical Platon' developed by the French Atomic Energy Commission (CEA). It provides a freely available (GNU LGPL license) interface for coupling scientific computing applications to various freeware linear solver libraries (essentially PETSc, SuperLU and HyPre), together with some proprietary CEA solvers, for high-performance computers that may be used in industrial software written in various programming languages. This tool was developed as part of considerable efforts by the CEA Nuclear Energy Division in recent years to promote massively parallel software and off-the-shelf parallel tools to help develop new-generation simulation codes. After presenting the package architecture and the available algorithms, we show examples of how Numerical Platon is used in sequential and parallel CEA codes. Compared with in-house solvers, the gain in computation capacity and in parallel performance is notable, without considerable extra development cost.

  14. Application of the Instrumental Neutron Activation Analysis and High Performance Liquid Chromatography (HPLC) in the rare earth elements determination in reference geological materials

    International Nuclear Information System (INIS)

    Figueiredo, Ana M.G.; Moraes, Noemia M.P. de; Shihomatsu, Helena M.

    1997-01-01

    Instrumental Neutron Activation Analysis (INAA) and High Performance Liquid Chromatography (HPLC) were applied to the determination of rare earth elements (REE) in the geological reference materials AGV-1, G-2 and GSP-1 (USGS). Results obtained by both techniques showed good agreement with certified values, giving relative errors of less than 10%. The REE La, Ce, Nd, Sm, Eu, Tb, Yb and Lu were determined; all the REE except Dy and Y were determined by HPLC. The reference material G94, employed in the International Proficiency Test for Analytical Geochemistry Laboratories (GeoPT1), was also analysed, and the results obtained are a contribution to the REE contents in this sample. The application of INAA and HPLC to the determination of REE in this kind of matrix is also discussed. (author). 10 refs., 1 fig., 5 tabs

  15. Neo4j high performance

    CERN Document Server

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  16. High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Traian Oneţ

    2009-01-01

    Full Text Available The paper presents the latest studies and research carried out in Cluj-Napoca related to high performance concrete, high strength concrete and self-compacting concrete. The purpose of this paper is to highlight the advantages and inconveniences of using a particular concrete type. Two concrete recipes are presented, namely one for the concrete used in rigid pavement for roads and another for self-compacting concrete.

  17. High performance polymeric foams

    International Nuclear Information System (INIS)

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods were used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy

  18. Educational and Scientific Applications of the Time Navigator

    Science.gov (United States)

    Cole, M.; Snow, J. T.; Slatt, R. M.

    2001-05-01

    Several recent conferences have noted the need to focus on the evolving interface between research and education at all levels of science, mathematics, engineering, and technology education. This interface, which is a distinguishing feature of graduate education in the U.S., is increasingly in demand at the undergraduate and K-12 levels, particularly in the earth sciences. In this talk, we present a new database for earth systems science and will explore applications to K-12 and undergraduate education, as well as the scientific and graduate role. The University of Oklahoma, College of Geosciences is in the process of acquiring the Time Navigator, a multi-disciplinary, multimedia database, which will form the core asset of the Center for Earth Systems Science. The Center, whose mission is to further the understanding of the dynamic Earth within both the academic and the general public communities, will serve as a portal for research, information, and education for scientists and educators. Time Navigator was developed over a period of some twenty years by the noted British geoscience author, Ron Redfern, in connection with the recently published Origins: The Evolution of Continents, Oceans and Life, the third in a series of books for the educated layperson. Over the years Time Navigator has evolved into an interactive, multimedia database displaying much of the significant geological, paleontological, climatological, and tectonic events from the latest Proterozoic (750 MYA) through to the present. The focus is mainly on the Western Hemisphere and events associated with the coalescence and breakup of Pangea and the evolution of the earth into its present form. Origins will be available as early as Fall 2001 as an interactive electronic book for the general, scientifically-literate public. While electronic books are unlikely to replace traditional print books, the format does allow non-linear exploration of content. We believe that the

  19. Application of BIM technology in green scientific research office building

    Science.gov (United States)

    Ni, Xin; Sun, Jianhua; Wang, Bo

    2017-05-01

    As a kind of information technology, BIM has gradually been applied in the domestic building industry along with the advancement of building industrialization. Based on a reasonably constructed BIM model, collaborative design tools on a BIM technology platform can effectively improve design efficiency and design quality. The scientific research office building project of the Vanda Northwest Engineering Design and Research Institute Co., Ltd. applied BIM technology in light of the practical situation of the engineering: a building energy model (BEM) was formed from the BIM model combined with related information, the application of BIM technology in the construction management stage was explored, and the direct experience and achievements gained in the architectural design part were summarized.

  20. AMRZone: A Runtime AMR Data Sharing Framework For Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Wenzhao; Tang, Houjun; Harenberg, Steven; Byna, Suren; Zou, Xiaocheng; Devendran, Dharshi; Martin, Daniel; Wu, Kesheng; Dong, Bin; Klasky, Scott; Samatova, Nagiza

    2017-08-31

    Frameworks that facilitate runtime data sharing across multiple applications are of great importance for scientific data analytics. Although existing frameworks work well over uniform mesh data, they cannot effectively handle adaptive mesh refinement (AMR) data. The challenges in constructing an AMR-capable framework include: (1) designing an architecture that facilitates online AMR data management; (2) achieving a load-balanced AMR data distribution for the data staging space at runtime; and (3) building an effective online index to support the unique spatial data retrieval requirements of AMR data. Towards addressing these challenges to support runtime AMR data sharing across scientific applications, we present the AMRZone framework. Experiments over real-world AMR datasets demonstrate AMRZone's effectiveness at achieving a balanced workload distribution, reading/writing large-scale datasets with thousands of parallel processes, and satisfying queries with spatial constraints. Moreover, AMRZone's performance and scalability are comparable with existing state-of-the-art work when tested over uniform mesh data with up to 16384 cores; in the best case, our framework achieves a 46% performance improvement.
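    The spatial retrieval requirement that makes AMR data hard to index can be illustrated with a minimal sketch (illustrative only, not AMRZone's actual index): blocks exist at several refinement levels, and a box query must return overlapping blocks from every level:

```python
# Minimal illustration of AMR spatial retrieval: blocks at several
# refinement levels cover nested regions, and a box query must return
# every block, at any level, that overlaps the query region.

def overlaps(a, b):
    """Axis-aligned overlap test for 2-D boxes given as (xmin, ymin, xmax, ymax)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

# (refinement level, bounding box) pairs; finer levels cover smaller regions.
blocks = [
    (0, (0.0, 0.0, 1.0, 1.0)),    # coarse block covering the whole domain
    (1, (0.0, 0.0, 0.5, 0.5)),    # refined patch in the lower-left quadrant
    (2, (0.25, 0.25, 0.5, 0.5)),  # further refinement inside that patch
]

query = (0.3, 0.3, 0.4, 0.4)
hits = [(level, box) for level, box in blocks if overlaps(box, query)]
# Blocks from all three levels overlap this query box, so all three match.
```

    A production index replaces the linear scan with a spatial structure (e.g. an R-tree), but the multi-level overlap semantics are the same.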

  1. Educational and Scientific Applications of Climate Model Diagnostic Analyzer

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Zhang, J.; Bao, Q.

    2016-12-01

    Climate Model Diagnostic Analyzer (CMDA) is a web-based information system designed for the climate modeling and model analysis community to analyze climate data from models and observations. CMDA provides tools to diagnostically analyze climate data for model validation and improvement, and to systematically manage analysis provenance for sharing results with other investigators. CMDA utilizes cloud computing resources, multi-threading computing, machine-learning algorithms, web service technologies, and provenance-supporting technologies to address technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. As CMDA infrastructure and technology have matured, we have developed educational and scientific applications of CMDA. Educationally, CMDA has supported the summer school of the JPL Center for Climate Sciences for three years since 2014. In the summer school, the students work on group research projects for which CMDA provides datasets and analysis tools. Each student is assigned a virtual machine with CMDA installed in Amazon Web Services. A provenance management system for CMDA was developed to keep track of students' usage of CMDA, and to recommend datasets and analysis tools for their research topics. The provenance system also allows students to revisit their analysis results and share them with their group. Scientifically, we have developed several science use cases of CMDA covering various topics, datasets, and analysis types. Each use case is described in terms of a scientific goal, the datasets used, the analysis tools used, the scientific results discovered, an analysis result such as output plots and data files, and a link to the exact analysis service call with all the input arguments filled. For example, one science use case is the evaluation of the NCAR CAM5 model with MODIS total cloud fraction. The analysis service used is Difference Plot Service of

  2. High performance liquid chromatographic determination of ...

    African Journals Online (AJOL)

    STORAGESEVER

    2010-02-08

    ) high performance liquid chromatography (HPLC) grade .... applications. These are important requirements if the reagent is to be applicable to on-line pre or post column derivatisation in a possible automation of the analytical.

  3. Clojure high performance programming

    CERN Document Server

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code. This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to a Clojure REPL with Leiningen.

  4. Porous NiCo_2S_4-halloysite hybrid self-assembled from nanosheets for high-performance asymmetric supercapacitor applications

    International Nuclear Information System (INIS)

    Chai, Hui; Dong, Hong; Wang, Yucheng; Xu, Jiayu; Jia, Dianzeng

    2017-01-01

    Highlights: • The NiCo_2S_4-HL nanomaterial is achieved via a two-step hydrothermal approach. • The unique structures are self-assembled from nanosheets. • The obtained electrode exhibits high capacitance and excellent retention. • An asymmetric supercapacitor also displays high energy density and outstanding cycling stability. • The high performance of the device is possibly due to the introduction of HL and the formation of composed nanosheets. - Abstract: Porous nanostructures have drawn considerable attention because of their abundant pore volume and unique properties that provide outstanding performance in catalysis and energy storage applications. This study proposes the growth mechanism of porous NiCo_2S_4 composited with halloysite (HL) via a self-assembly method using halloysite as a template and component. Electrochemical tests showed that the NiCo_2S_4-HL exhibited an ultrahigh specific capacitance (Csp) (589 C g⁻¹ at 1 A g⁻¹) and good cycle stability (Csp retention of 86% after 1000 cycles). The desirable capacitive performance of the NiCo_2S_4-HL can be attributed to the large specific surface area and short diffusion path for electrons and ions in the hierarchical porous structure. Superior electrochemical performance, with an energy density of 35.48 W h kg⁻¹ at a power density of 199.9 W kg⁻¹, was achieved in an assembled aqueous asymmetric supercapacitor (ASC) device using NiCo_2S_4-HL as the positive electrode and N-doped graphene (NG) as the negative electrode. Moreover, the NiCo_2S_4-HL//NG asymmetric supercapacitor achieved outstanding cycle stability (retaining 83.2% after 1700 cycles). The high performance of the ASC device will undoubtedly make porous NiCo_2S_4-HL attractive as a potential electrode material in energy storage systems.
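    The reported figures are consistent with the standard supercapacitor energy relation E = ½CV². A sketch with illustrative values (the cell capacitance and voltage window below are assumptions for the sake of the arithmetic, not values from the abstract):

```python
# Standard supercapacitor energy relation, E = 0.5 * C * V**2, with assumed
# illustrative cell values; C and V below are NOT taken from the abstract.
C = 100.0   # assumed device capacitance per mass, F/g
V = 1.6     # assumed voltage window of the aqueous asymmetric cell, V

E_J_per_g = 0.5 * C * V**2              # energy per gram, in joules
E_Wh_per_kg = E_J_per_g * 1000 / 3600   # convert J/g to W h/kg

print(f"{E_Wh_per_kg:.2f} W h/kg")      # ~35.6 W h/kg, same order as reported
```

    With these assumed values the relation lands in the same range as the 35.48 W h kg⁻¹ quoted above, which is why aqueous ASC designs push both the capacitance and the usable voltage window.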

  5. Development of a high performances heat pipe (HPHP) for space applications; Developpement d'un caloduc hautes performances (HPHP) pour applications spatiales

    Energy Technology Data Exchange (ETDEWEB)

    Moschetti, B; Voyer, E [Aerospatiale, 06 - Cannes (France)

    1997-12-31

    This paper presents the research program for the development of a prototype of high performances heat pipe (HPHP) intended to be installed on the STENTOR telecommunication satellite. A trade-off study was performed and led to the selection of a reliable and simple concept with axial grooves, ammonia and a minimum heat transport capacity of 500 W.m. A first model with a 17 mm diameter, a 2.8 m length and a mass lower than 500 g/m has been manufactured and tested. First results indicate a 600 W.m heat transport capacity at 20 deg. C (horizontal position) and a 400 W.m capacity with a 5 mm tilt, and allow to validate this concept. (J.S.) 6 refs.

  6. Development of a high performances heat pipe (HPHP) for space applications; Developpement d'un caloduc hautes performances (HPHP) pour applications spatiales

    Energy Technology Data Exchange (ETDEWEB)

    Moschetti, B.; Voyer, E. [Aerospatiale, 06 - Cannes (France)

    1996-12-31

    This paper presents the research program for the development of a prototype of high performances heat pipe (HPHP) intended to be installed on the STENTOR telecommunication satellite. A trade-off study was performed and led to the selection of a reliable and simple concept with axial grooves, ammonia and a minimum heat transport capacity of 500 W.m. A first model with a 17 mm diameter, a 2.8 m length and a mass lower than 500 g/m has been manufactured and tested. First results indicate a 600 W.m heat transport capacity at 20 deg. C (horizontal position) and a 400 W.m capacity with a 5 mm tilt, and allow to validate this concept. (J.S.) 6 refs.

  7. High performance data transfer

    Science.gov (United States)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced way that is easy to deploy and use while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved between clusters almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.

  8. Scientific applications and numerical algorithms on the midas multiprocessor system

    International Nuclear Information System (INIS)

    Logan, D.; Maples, C.

    1986-01-01

    The MIDAS multiprocessor system is a multi-level, hierarchical structure designed at the Advanced Computer Architecture Laboratory of the University of California's Lawrence Berkeley Laboratory. A two-stage, 11-processor system has been operational for over a year and is currently undergoing expansion. It has been employed to investigate the performance of different methods of decomposing various problems and algorithms into a multiprocessor environment. The results of such tests on a variety of applications, such as scientific data analysis, Monte Carlo calculations, and image processing, are discussed. Often such decompositions involve investigating the parallel structure of fundamental algorithms. Several basic algorithms dealing with random number generation, matrix diagonalization, fast Fourier transforms, and finite element methods for solving partial differential equations are also discussed. The performance and projected extensibility of these decompositions on the MIDAS system are reported.
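    The decomposition strategy described here, splitting a Monte Carlo calculation into independent chunks on separate processors and combining the partial results, can be sketched in modern terms (this is our illustrative sketch of the general technique, not MIDAS code):

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """Count samples falling inside the unit quarter-circle (one worker's chunk)."""
    n, seed = args
    rng = random.Random(seed)  # independent stream per worker
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def parallel_pi(total=400_000, workers=4):
    """Estimate pi by farming equal chunks out to worker processes."""
    chunk = total // workers
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [(chunk, seed) for seed in range(workers)]))
    return 4.0 * hits / (chunk * workers)

if __name__ == "__main__":
    print(parallel_pi())
```

    The key property that made such problems attractive on MIDAS-class machines is visible here: the chunks share no state, so speedup is limited only by the final reduction.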

  9. Scientific applications of frequency-stabilized laser technology in space

    Science.gov (United States)

    Schumaker, Bonny L.

    1990-01-01

    A synoptic investigation of the uses of frequency-stabilized lasers for scientific applications in space is presented. It begins by summarizing properties of lasers, characterizing their frequency stability, and describing limitations and techniques to achieve certain levels of frequency stability. Limits to precision set by laser frequency stability for various kinds of measurements are investigated and compared with other sources of error. These other sources include photon-counting statistics, scattered laser light, fluctuations in laser power, and intensity distribution across the beam, propagation effects, mechanical and thermal noise, and radiation pressure. Methods are explored to improve the sensitivity of laser-based interferometric and range-rate measurements. Several specific types of science experiments that rely on highly precise measurements made with lasers are analyzed, and anticipated errors and overall performance are discussed. Qualitative descriptions are given of a number of other possible science applications involving frequency-stabilized lasers and related laser technology in space. These applications will warrant more careful analysis as technology develops.

  10. Emerging Nanophotonic Applications Explored with Advanced Scientific Parallel Computing

    Science.gov (United States)

    Meng, Xiang

    The domain of nanoscale optical science and technology is a combination of the classical world of electromagnetics and the quantum mechanical regime of atoms and molecules. Recent advancements in fabrication technology allow optical structures to be scaled down to nanoscale size or even to the atomic level, far smaller than the wavelength they are designed for. These nanostructures can have unique, controllable, and tunable optical properties, and their interactions with quantum materials can produce important near-field and far-field optical responses. These optical properties have many important applications, ranging from efficient and tunable light sources, detectors, filters, modulators and high-speed all-optical switches to next-generation classical and quantum computation and biophotonic medical sensors. This emerging field of nanoscience, known as nanophotonics, is highly interdisciplinary, requiring expertise in materials science, physics, electrical engineering, and scientific computing, modeling and simulation. It has also become an important research field for investigating the science and engineering of light-matter interactions that take place on wavelength and subwavelength scales, where the nature of the nanostructured matter controls the interactions. In addition, advances in computing capabilities, such as parallel computing, have become a critical element in investigating advanced nanophotonic devices. This role has taken on even greater urgency with the scale-down of device dimensions, since the design of these devices requires extensive memory and extremely long core hours. Distributed computing platforms with parallel computing are therefore required for faster design processes. Scientific parallel computing constructs mathematical models and quantitative analysis techniques, and uses computing machines to analyze and solve otherwise intractable scientific challenges.

  11. The present status of scientific applications of nuclear explosions

    International Nuclear Information System (INIS)

    Cowan, G.A.; Diven, B.C.

    1970-01-01

    This is the fourth in a series of symposia which started in 1957 at Livermore with the purpose of examining the peaceful uses of nuclear explosives. Although principal emphasis has been placed on technological applications, the discussions have, from the outset, included the fascinating question of scientific uses. Of the possible scientific applications which were mentioned at the 1957 meeting, the proposals which attracted most attention involved uses of nuclear explosions for research in seismology. It is interesting to note that since then a very large and stimulating body of data in the field of seismology has been collected from nuclear tests. Ideas for scientific applications of nuclear explosions go back considerably further than 1957. During the war days Otto Frisch at Los Alamos suggested that a fission bomb would provide an excellent source of fast neutrons which could be led down a vacuum pipe and used for experiments in a relatively unscattered state. This idea, reinvented, modified, and elaborated upon in the ensuing twenty-five years, provides the basis for much of the research discussed in this morning's program. In 1952 a somewhat different property of nuclear explosions, their ability to produce intense neutron exposures on internal targets and to synthesize large quantities of multiple neutron capture products, was dramatically brought to our attention by analysis of debris from the first large thermonuclear explosion (Mike), in which the elements einsteinium and fermium were observed for the first time. The reports of the next two Plowshare symposia in 1959 and 1964 help record the fascinating development of the scientific uses of neutrons in nuclear explosions. Starting with two 'wheel' experiments in 1958 to measure symmetry of fission in 235-U resonances, the use of external beams of energy-resolved neutrons was expanded on the 'Gnome' experiment in 1961 to include the measurement of neutron capture excitation functions for 238-U, 232-Th

  12. The present status of scientific applications of nuclear explosions

    Energy Technology Data Exchange (ETDEWEB)

    Cowan, G A; Diven, B C [Los Alamos Scientific Laboratory, University of California, Los Alamos, NM (United States)

    1970-05-15

    This is the fourth in a series of symposia which started in 1957 at Livermore with the purpose of examining the peaceful uses of nuclear explosives. Although principal emphasis has been placed on technological applications, the discussions have, from the outset, included the fascinating question of scientific uses. Of the possible scientific applications which were mentioned at the 1957 meeting, the proposals which attracted most attention involved uses of nuclear explosions for research in seismology. It is interesting to note that since then a very large and stimulating body of data in the field of seismology has been collected from nuclear tests. Ideas for scientific applications of nuclear explosions go back considerably further than 1957. During the war days Otto Frisch at Los Alamos suggested that a fission bomb would provide an excellent source of fast neutrons which could be led down a vacuum pipe and used for experiments in a relatively unscattered state. This idea, reinvented, modified, and elaborated upon in the ensuing twenty-five years, provides the basis for much of the research discussed in this morning's program. In 1952 a somewhat different property of nuclear explosions, their ability to produce intense neutron exposures on internal targets and to synthesize large quantities of multiple neutron capture products, was dramatically brought to our attention by analysis of debris from the first large thermonuclear explosion (Mike), in which the elements einsteinium and fermium were observed for the first time. The reports of the next two Plowshare symposia in 1959 and 1964 help record the fascinating development of the scientific uses of neutrons in nuclear explosions. Starting with two 'wheel' experiments in 1958 to measure symmetry of fission in 235-U resonances, the use of external beams of energy-resolved neutrons was expanded on the 'Gnome' experiment in 1961 to include the measurement of neutron capture excitation functions for 238-U, 232

  13. Porous NiCo2S4-halloysite hybrid self-assembled from nanosheets for high-performance asymmetric supercapacitor applications

    Science.gov (United States)

    Chai, Hui; Dong, Hong; Wang, Yucheng; Xu, Jiayu; Jia, Dianzeng

    2017-04-01

    Porous nanostructures have drawn considerable attention because of their abundant pore volume and unique properties, which provide outstanding performance in catalysis and energy storage applications. This study proposes the growth mechanism of porous NiCo2S4 composited with halloysite (HL) via a self-assembly method using halloysite as both a template and a component. Electrochemical tests showed that the NiCo2S4-HL exhibited an ultrahigh specific capacitance (Csp) (589 C g-1 at 1 A g-1) and good cycle stability (Csp retention of 86% after 1000 cycles). The desirable capacitive performance of the NiCo2S4-HL can be attributed to the large specific surface area and the short diffusion path for electrons and ions in the hierarchical porous structure. Superior electrochemical performance, with an energy density of 35.48 W h kg-1 at a power density of 199.9 W kg-1, was achieved in an assembled aqueous asymmetric supercapacitor (ASC) device using NiCo2S4-HL as the positive electrode and N-doped graphene (NG) as the negative electrode. Moreover, the NiCo2S4-HL//NG asymmetric supercapacitor achieved outstanding cycle stability (83.2% retention after 1700 cycles). The high performance of the ASC device makes porous NiCo2S4-HL an attractive potential electrode material for energy storage systems.
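    The reported energy and power densities imply a galvanostatic discharge of roughly ten minutes, and for capacitive electrodes the standard conversions are simple arithmetic. A sketch using the record's reported figures (the capacitance-to-energy helper assumes ideal capacitive behaviour, and the 100 F/g, 1.6 V example values are hypothetical):

```python
def discharge_time_s(energy_wh_per_kg, power_w_per_kg):
    """Implied discharge time t = E / P, converted from hours to seconds."""
    return energy_wh_per_kg / power_w_per_kg * 3600.0

def energy_density_wh_per_kg(capacitance_f_per_g, voltage_v):
    """E = C*V^2/2 per kilogram, converted from joules to watt-hours."""
    joules_per_kg = 0.5 * capacitance_f_per_g * 1000.0 * voltage_v ** 2
    return joules_per_kg / 3600.0

# Reported figures: 35.48 Wh/kg at 199.9 W/kg
print(round(discharge_time_s(35.48, 199.9)))  # 639 seconds

# Hypothetical device: ~100 F/g over a 1.6 V window
print(round(energy_density_wh_per_kg(100, 1.6), 1))  # ~35.6 Wh/kg
```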

  14. Few layer graphene wrapped mixed phase TiO2 nanofiber as a potential electrode material for high performance supercapacitor applications

    Science.gov (United States)

    Thirugnanam, Lavanya; Sundara, Ramaprabhu

    2018-06-01

    A combination of favorable composition and optimized anatase/rutile mixed-phase TiO2 nanofiber (MPTNF) composites with hydrogen-exfoliated graphene (HEG) (MPTNF/HEG) and with reduced graphene oxide (rGO) (MPTNF/rGO) is reported to enhance the electrochemical properties for supercapacitor applications. These composite nanofibers have been synthesized by an efficient electrospinning route combined with simple chemical methods. Both composites exhibit good charge storage capability with enhanced pseudocapacitance and electric double-layer capacitance (EDLC), as confirmed by cyclic voltammetry studies. The MPTNF/HEG composite showed a maximum specific capacitance of 210.5 F/g at a current density of 1 A/g, mainly due to the availability of more active sites for ion adsorption on the few-layer-graphene-wrapped TiO2 nanofiber surface. The synergistic effect of the anatase/rutile mixed phase with the one-dimensional nanostructure, together with the electronic interaction between TiO2 and few-layer graphene, provided the subsequent improvement in ion adsorption capacity. The composites also exhibit excellent electrochemical performance, improving the capacitive properties of TiO2 electrode materials as required for the development of flexible electrodes in energy storage devices, and open up new opportunities for high-performance supercapacitors.

  15. Application of microscopy technique and high-performance liquid chromatography for quality assessment of the flower bud of Tussilago farfara L. (Kuandonghua)

    Science.gov (United States)

    Li, Da; Liang, Li; Zhang, Jing; Kang, Tingguo

    2015-01-01

    Background: Quality control is one of the bottleneck problems limiting the application and development of traditional Chinese medicine (TCM). In recent years, microscopy and high-performance liquid chromatography (HPLC) techniques have been frequently applied in the quality control of TCM. However, studies combining conventional microscopy and HPLC techniques for the quality control of the flower bud of Tussilago farfara L. (Kuandonghua) have not been reported. Objective: This study was undertaken to evaluate the quality of the flower bud of T. farfara L. and to establish the relationships between the quantity of pollen grains and four main bioactive constituents: tussilagone, chlorogenic acid, rutin and isoquercitrin. Materials and Methods: In this study, microscopic examination was used to quantify microscopic characteristics of the flower bud of T. farfara L., and the chemical components were determined by HPLC. The data were analyzed with Statistical Package for the Social Sciences (SPSS) statistics software. Results: The analysis showed that tussilagone, chlorogenic acid, rutin and isoquercitrin were significantly and positively correlated with the quantity of pollen grains in the flower bud of T. farfara L. From these results, it can be deduced that a flower bud of T. farfara L. with a greater quantity of pollen grains should be of better quality. Conclusion: The study showed that the established method can be helpful for evaluating the quality of the flower bud of T. farfara L. based on microscopic characteristic constants and chemical quantitation. PMID:26246737

  16. Quantification of Photocyanine in Human Serum by High-Performance Liquid Chromatography-Tandem Mass Spectrometry and Its Application in a Pharmacokinetic Study

    Directory of Open Access Journals (Sweden)

    Bing-Tian Bi

    2014-01-01

    Photocyanine is a novel anticancer drug. Its pharmacokinetics in cancer patients are therefore very important for choosing doses and dosing intervals in clinical application. A rapid, selective and sensitive high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was developed and validated for the determination of photocyanine in patient serum. Sample preparation involved one-step protein precipitation by adding methanol and N,N-dimethylformamide to 0.1 mL serum. Detection was performed on a triple quadrupole tandem mass spectrometer operating in multiple reaction-monitoring (MRM) mode. Each sample was chromatographed within 7 min. Linear calibration curves were obtained for photocyanine over a concentration range of 20–2000 ng/mL (r > 0.995), with a lower limit of quantification (LLOQ) of 20 ng/mL. The intrabatch accuracy ranged from 101.98% to 107.54%, and the interbatch accuracy varied from 100.52% to 105.62%. Stability tests showed that photocyanine was stable throughout the analytical procedure. This study is the first to utilize an HPLC-MS/MS method for the pharmacokinetic study of photocyanine, in six cancer patients who had received a single intravenous dose of photocyanine (0.1 mg/kg).
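    Calibration in such an assay is an ordinary least-squares line through the standard responses, with accuracy reported as back-calculated concentration over nominal. A minimal sketch with made-up data (the concentrations span the record's 20–2000 ng/mL range, but the peak-area responses are hypothetical):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def accuracy_pct(nominal, back_calc):
    """Accuracy as back-calculated concentration over nominal, in percent."""
    return 100.0 * back_calc / nominal

conc = [20, 50, 100, 500, 1000, 2000]            # ng/mL standards
resp = [0.041, 0.102, 0.205, 1.01, 2.02, 4.04]   # hypothetical peak-area ratios
slope, intercept = fit_line(conc, resp)
back = (resp[2] - intercept) / slope             # back-calculate the 100 ng/mL standard
print(round(accuracy_pct(100, back), 1))
```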

  17. In situ polymerization of monolith based on poly(Triallyl Isocyanurate-co-trimethylolpropane triacrylate) and its application in high-performance liquid chromatography.

    Science.gov (United States)

    Zhong, Jing; Bai, Ligai; Qin, Junxiao; Wang, Jiafei; Hao, Mengbei; Yang, Gengliang

    2015-04-01

    A novel organic monolithic stationary phase was prepared for high-performance liquid chromatography (HPLC) by in situ copolymerization of triallyl isocyanurate (TAIC) and trimethylolpropane triacrylate (TMPTA) in a binary porogenic solvent consisting of polyethylene glycol 200 and 1,2-propanediol. The resultant monoliths with different column properties (e.g., morphology and back pressure) were optimized by adjusting the TMPTA/TAIC ratio and the composition of the porogenic solvent. The resulting poly(TAIC-co-TMPTA) monolith showed a relatively homogeneous structure, good permeability and mechanical stability. The chemical groups of the monolith were assayed by infrared spectroscopy, the morphology of the monolithic material was studied by scanning electron microscopy, and the pore size distribution was determined by mercury porosimetry. A series of small molecules was used to evaluate the column performance in hydrophobic mode. At an optimized flow rate of 1.0 mL min(-1), the theoretical plate number was >15,000 plates m(-1). These applications demonstrated that the monoliths can be successfully used as stationary phases in conjunction with HPLC to separate small molecules from mixtures. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
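    The plate count quoted here is conventionally computed from a peak's retention time and half-height width and then normalized to the column length. A sketch using the standard half-height expression N = 5.54 (tR / w½)² (the peak values and 0.10 m column length below are hypothetical, not from the record):

```python
def plates_half_height(t_r, w_half):
    """N = 5.54 * (tR / w_half)^2, the standard half-height plate count."""
    return 5.54 * (t_r / w_half) ** 2

def plates_per_meter(n_plates, column_length_m):
    """Normalize a plate count to the column length."""
    return n_plates / column_length_m

# Hypothetical peak: tR = 5.2 min, half-height width = 0.20 min, 0.10 m column
n = plates_half_height(5.2, 0.20)
print(round(plates_per_meter(n, 0.10)))  # ≈ 37450 plates per metre
```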

  18. Bioanalytical Applications of Fluorescence Line-Narrowing and Non-Line-Narrowing Spectroscopy Interfaced with Capillary Electrophoresis and High-Performance Liquid Chromatography

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Kenneth Paul [Iowa State Univ., Ames, IA (United States)

    2001-01-01

    Capillary electrophoresis (CE) and high-performance liquid chromatography (HPLC) are widely used analytical separation techniques with many applications in chemical, biochemical, and biomedical sciences. Conventional analyte identification in these techniques is based on retention/migration times of standards, requiring a high degree of reproducibility, the availability of reliable standards, and the absence of coelution. In response, several new information-rich detection methods (also known as hyphenated techniques) are being explored that would be capable of providing unambiguous on-line identification of separating analytes in CE and HPLC. As further discussed, a number of such on-line detection methods have shown considerable success, including Raman, nuclear magnetic resonance (NMR), mass spectrometry (MS), and fluorescence line-narrowing spectroscopy (FLNS). In this thesis, the feasibility and potential of combining the highly sensitive and selective laser-based detection method of FLNS with analytical separation techniques are discussed and presented. A summary of previously demonstrated FLNS detection interfaced with chromatography and electrophoresis is given, and recent results from on-line FLNS detection in CE (CE-FLNS), and the new combination of HPLC-FLNS, are shown.

  19. Elemental speciation via high-performance liquid chromatography combined with inductively coupled plasma atomic emission spectroscopic detection: application of a direct injection nebulizer

    International Nuclear Information System (INIS)

    LaFreniere, K.E.; Fassel, V.A.; Eckels, D.E.

    1987-01-01

    An evaluation is presented of a direct injection nebulizer (DIN) interfaced to a high-performance liquid chromatograph (HPLC) with inductively coupled plasma atomic emission spectroscopic (ICP-AES) detection for simultaneous multielement speciation. The limits of detection (LODs) obtained with the DIN interface in the HPLC mode were found to be comparable to those obtained by continuous-flow sample introduction into the ICP, or inferior by at most a factor of 4. In addition, the DIN allowed the direct injection into the ICP of a variety of common HPLC solvents (up to 100% methanol, acetonitrile, methyl isobutyl ketone, pyridine, and water). The HPLC-DIN-ICP-AES system was compared to other HPLC-atomic spectroscopic detection techniques and was found to offer substantial improvement over the alternative on-line detection methods in terms of LODs. Representative applications of the HPLC-DIN-ICP-AES system to the elemental speciation of coal process streams, shale oil, solvent-refined coal, and crude oil are presented.
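    Limits of detection in emission spectrometry are conventionally taken as three times the blank standard deviation divided by the calibration slope, so comparing two sample-introduction systems reduces to comparing these ratios. A sketch with hypothetical numbers (not values from the record):

```python
def detection_limit(blank_sd, slope):
    """Conventional LOD = 3 * s_blank / calibration sensitivity (slope)."""
    return 3.0 * blank_sd / slope

# Hypothetical: blank noise 0.6 counts, sensitivity 200 counts per (ug/L)
print(round(detection_limit(0.6, 200.0), 4))  # 0.009 ug/L

# "Inferior by a factor of 4" means the LOD ratio between two systems is 4
ratio = detection_limit(0.6, 50.0) / detection_limit(0.6, 200.0)
print(round(ratio))  # 4
```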

  20. Supercritical fluid chromatography versus high performance liquid chromatography for enantiomeric and diastereoisomeric separations on coated polysaccharides-based stationary phases: Application to dihydropyridone derivatives.

    Science.gov (United States)

    Hoguet, Vanessa; Charton, Julie; Hecquet, Paul-Emile; Lakhmi, Chahinaze; Lipka, Emmanuelle

    2018-05-11

    For analytical applications, SFC has always remained in the shadow of LC. Analytical enantioseparation of eight dihydropyridone derivatives was run in both High Performance Liquid Chromatography and Supercritical Fluid Chromatography. Four polysaccharide-based chiral stationary phases, namely amylose and cellulose tris(3,5-dimethylphenylcarbamate), amylose tris((S)-α-phenylethylcarbamate) and cellulose tris(4-methylbenzoate), with four mobile phases consisting of either n-hexane/ethanol or propan-2-ol (80:20 v:v) or carbon dioxide/ethanol or propan-2-ol (80:20 v:v) mixtures, were investigated under the same operating conditions (temperature and flow rate). The elution strength, enantioselectivity and resolution were compared between the two methodologies. For these compounds, under most conditions, HPLC afforded shorter retention times and higher resolution than SFC. HPLC appears particularly suitable for the separation of compounds bearing two chiral centers. For instance, compound 7 was baseline resolved on the OD-H CSP under n-Hex/EtOH 80:20, with resolution values of 2.98, 1.55 and 4.52 between the four stereoisomers in less than 17 min, whereas in SFC the latter was not fully separated in 23 min under similar eluting conditions. After the analytical screenings, the best conditions were transposed to semi-preparative scale. Copyright © 2018 Elsevier B.V. All rights reserved.
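    The resolution values quoted above follow the usual baseline-width definition Rs = 2(t2 − t1)/(w1 + w2). As a reminder of the arithmetic (the peak times and widths below are hypothetical, not the record's data):

```python
def resolution(t1, w1, t2, w2):
    """Rs = 2*(t2 - t1) / (w1 + w2); baseline widths in the same time units."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Hypothetical pair of enantiomer peaks (minutes); Rs >= 1.5 means baseline resolved
print(round(resolution(10.0, 0.8, 12.0, 0.9), 2))  # 2.35
```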

  1. Strategy Guideline: High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  2. Porous NiCo{sub 2}S{sub 4}-halloysite hybrid self-assembled from nanosheets for high-performance asymmetric supercapacitor applications

    Energy Technology Data Exchange (ETDEWEB)

    Chai, Hui, E-mail: huichmails@163.com; Dong, Hong; Wang, Yucheng; Xu, Jiayu; Jia, Dianzeng

    2017-04-15

    Highlights: • The NiCo{sub 2}S{sub 4}-HL nanomaterial is achieved via a two-step hydrothermal approach. • The unique structures are self-assembled from nanosheets. • The obtained electrode exhibits high capacitance and excellent retention. • An asymmetric supercapacitor also displays high energy density and outstanding cycling stability. • The high performance of the device is possibly due to the introduction of HL and the formation of composed nanosheets. - Abstract: Porous nanostructures have drawn considerable attention because of their abundant pore volume and unique properties, which provide outstanding performance in catalysis and energy storage applications. This study proposes the growth mechanism of porous NiCo{sub 2}S{sub 4} composited with halloysite (HL) via a self-assembly method using halloysite as both a template and a component. Electrochemical tests showed that the NiCo{sub 2}S{sub 4}-HL exhibited an ultrahigh specific capacitance (Csp) (589 C g{sup −1} at 1 A g{sup −1}) and good cycle stability (Csp retention of 86% after 1000 cycles). The desirable capacitive performance of the NiCo{sub 2}S{sub 4}-HL can be attributed to the large specific surface area and the short diffusion path for electrons and ions in the hierarchical porous structure. Superior electrochemical performance, with an energy density of 35.48 W h kg{sup −1} at a power density of 199.9 W kg{sup −1}, was achieved in an assembled aqueous asymmetric supercapacitor (ASC) device using NiCo{sub 2}S{sub 4}-HL as the positive electrode and N-doped graphene (NG) as the negative electrode. Moreover, the NiCo{sub 2}S{sub 4}-HL//NG asymmetric supercapacitor achieved outstanding cycle stability (83.2% retention after 1700 cycles). The high performance of the ASC device makes the porous NiCo{sub 2}S{sub 4}-HL an attractive potential electrode material for energy storage systems.

  3. Fabrication of novel high performance ductile poly(lactic acid) nanofiber scaffold coated with poly(vinyl alcohol) for tissue engineering applications.

    Science.gov (United States)

    Abdal-Hay, Abdalla; Hussein, Kamal Hany; Casettari, Luca; Khalil, Khalil Abdelrazek; Hamdy, Abdel Salam

    2016-03-01

    Poly(lactic acid) (PLA) nanofiber scaffolds have received increasing interest as promising materials for potential application in the field of regenerative medicine. However, low hydrophilicity and poor ductility restrict their practical application. Integration of a hydrophilic, elastic polymer onto the surface of the nanofiber scaffold may help to overcome the drawbacks of PLA. Herein, we successfully optimized the parameters for in situ deposition of poly(vinyl alcohol) (PVA) onto post-electrospun PLA nanofibers using a simple hydrothermal approach. Our results showed that the average fiber diameter of the coated nanofiber mat is about 1265±222 nm, remarkably higher than its pristine counterpart (650±180 nm). The hydrophilicity of the PLA nanofiber scaffold coated with a PVA thin layer improved dramatically (contact angle 36.11±1.5°) compared to the pristine PLA scaffold (119.7±1.5°). Mechanical testing showed that the PLA nanofiber scaffold could be converted from rigid to ductile with enhanced tensile strength, due to maximized hydrogen-bond interaction during the heat treatment in the presence of PVA. Cytocompatibility of the pristine and PVA-coated PLA fibers was assessed in vitro by cell attachment and the MTT assay with EA.hy926 human endothelial cells. The results showed that human cells attached and proliferated more favorably on the hydrophilic PLA composite scaffold than on pristine PLA; the PVA coating thus increased initial human cell attachment and proliferation. We believe that the novel PVA-coated PLA nanofiber scaffold developed in this study could be a promising high-performance biomaterial in regenerative medicine. Copyright © 2015. Published by Elsevier B.V.

  4. GRID Prototype for imagery processing in scientific applications

    International Nuclear Information System (INIS)

    Stan, Ionel; Zgura, Ion Sorin; Haiduc, Maria; Valeanu, Vlad; Giurgiu, Liviu

    2004-01-01

    The paper presents the results of our study, which is part of the InGRID project supported by ROSA (the ROmanian Space Agency). We show the possibility of taking images from an optical microscope through a web camera. The images are stored on a PC running the Linux operating system and distributed to other clusters through GRID technology (using http, php, MySQL, Globus or AliEn systems). The images come from nuclear emulsions in the frame of the Becquerel Collaboration. The main goal of the InGRID project is to drive the development and deployment of GRID technology for imaging techniques applied to data taken from space, other application fields and telemedicine. It will also create links with international projects that use advanced Grid technology and scalable storage solutions. The main topics proposed to be solved in the frame of the InGRID project are: - Implementation of two GRID clusters, minimum level Tier 3; - Adapting and updating the common storage and processing computing facility; - Testing the middleware packages developed in the frame of this project; - Testbed production of the prototype; - Building up and advertising the InGRID prototype in the scientific community through ongoing dissemination. The InGRID prototype developed in the frame of this project will be used by partner institutes as a deployment environment for imaging applications whose dynamical features will be defined by contract conditions. Subsequent applications will be deployed by the partners of this project with governmental, nongovernmental and private institutions. (authors)

  5. 78 FR 52760 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2013-08-26

    ... invite comments on the question of whether instruments of equivalent scientific value, for the purposes... platforms based on self- assembled DNA nanostructures for studying cell biology. DNA nanostructures will be... 23, 2013. Docket Number: 13-033. Applicant: University of Pittsburgh School of Medicine, 3500 Terrace...

  6. Python high performance programming

    CERN Document Server

    Lanaro, Gabriele

    2013-01-01

    An exciting, easy-to-follow guide illustrating techniques to boost the performance of Python code, and their applications, with plenty of hands-on examples. If you are a programmer who likes the power and simplicity of Python and would like to use this language for performance-critical applications, this book is ideal for you. All that is required is a basic knowledge of the Python programming language. The book covers basic and advanced topics, so it will be great for you whether you are a new or a seasoned Python developer.
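    The sort of technique such a guide covers can be as simple as replacing an explicit accumulation loop with a list comprehension and measuring the difference with the standard-library timeit module (our own minimal example, not taken from the book):

```python
from timeit import timeit

def squares_loop(n):
    """Build a list of squares with an explicit append loop."""
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def squares_comp(n):
    """Same result via a list comprehension, avoiding per-item method lookups."""
    return [i * i for i in range(n)]

loop_t = timeit(lambda: squares_loop(100_000), number=20)
comp_t = timeit(lambda: squares_comp(100_000), number=20)
print(f"loop: {loop_t:.3f}s  comprehension: {comp_t:.3f}s")
```

    On CPython the comprehension is typically noticeably faster, though exact timings vary by machine and interpreter version.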

  7. Synthesis of functionalized 3D porous graphene using both ionic liquid and SiO2 spheres as ``spacers'' for high-performance application in supercapacitors

    Science.gov (United States)

    Li, Tingting; Li, Na; Liu, Jiawei; Cai, Kai; Foda, Mohamed F.; Lei, Xiaomin; Han, Heyou

    2014-12-01

    In this work, a high-capacity supercapacitor material based on functionalized three-dimensional (3D) porous graphene was fabricated by low temperature hydrothermal treatment of graphene oxide (GO) using both ionic liquid (IL) and SiO2 spheres as ``spacers''. In the synthesis, the introduction of dual ``spacers'' effectively enlarged the interspace between graphene sheets and suppressed their re-stacking. In addition, the IL also acted as a structure-directing agent playing a crucial role in inducing the formation of unique 3D architectures. Consequently, fast electron/ion transport channels were successfully constructed and numerous oxygen-containing groups on graphene sheets were effectively reserved, which had unique advantages in decreasing ion diffusion resistance and providing additional pseudocapacitance. As expected, the obtained material exhibited superior specific capacitance and rate capability compared to single ``spacer'' designed electrodes and simultaneously maintained excellent cycling stability. In particular, there was nearly no loss of its initial capacitance after 3000 cycles. In addition, we further assembled a symmetric two-electrode device using the material, which showed outstanding flexibility and low equivalent series resistance (ESR). More importantly, it was capable of yielding a maximum power density of about 13.3 kW kg-1 with an energy density of about 7.0 W h kg-1 at a voltage of 1.0 V in 1 M H2SO4 electrolyte. All these impressive results demonstrate that the material obtained by this approach is greatly promising for application in high-performance supercapacitors.
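    The maximum power density of such a device is conventionally estimated from the operating voltage and the equivalent series resistance as P_max = V² / (4·ESR·m). A sketch with hypothetical cell parameters chosen to land near the reported 13.3 kW kg-1 figure (the ESR and mass values are ours, not the paper's):

```python
def max_power_density_w_per_kg(voltage_v, esr_ohm, mass_kg):
    """ESR-limited power estimate: P_max = V^2 / (4 * ESR * m)."""
    return voltage_v ** 2 / (4.0 * esr_ohm * mass_kg)

# Hypothetical cell: 1.0 V window, 4.7 ohm ESR, 4 mg total electrode mass
print(round(max_power_density_w_per_kg(1.0, 4.7, 4e-6)))  # ≈ 13298 W/kg
```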

  8. Radiation protection. Scientific fundamentals, legal regulations, practical applications. Compendium

    International Nuclear Information System (INIS)

    Buchert, Guido; Gay, Juergen; Kirchner, Gerald; Michel, Rolf; Niggemann, Guenter; Schumann, Joerg; Wust, Peter; Jaehnert, Susanne; Strilek, Ralf; Martini, Ekkehard

    2011-06-01

    The compendium on radiation protection, scientific fundamentals, legal regulations and practical applications includes contributions on the following issues: (1) effects and risk of ionizing radiation: fundamentals of the effects and risk of ionizing radiation, news in radiation biology, advantages and disadvantages of screening investigations; (2) trends and legal regulations concerning radiation protection: development of European and national radiation protection laws, new regulations concerning X-rays, culture and ethics of radiation protection; (3) dosimetry and radiation measuring techniques: personal scanning using GHz radiation, new "dose characteristics" in practice, measuring techniques for nuclear danger prevention and emergency hazard control; (4) radiation exposure in medicine: radiation exposure of modern medical techniques, heavy ion radiotherapy, deterministic and stochastic risks of highly conformal photon radiotherapy, the STEMO project - mobile CT for apoplectic stroke patients; (5) radiation exposure in technology: legal control of high-level radioactive sources, technical and public safety using enclosed radioactive sources for materials testing, radiation exposure in aviation, radon in Bavaria, NPP Fukushima-Daiichi - a status report; (6) radiation exposure in nuclear engineering: the Chernobyl accident - historical experience or lasting problem? European standards for radioactive waste disposal, radioactive material disposal in Germany, risk assessment of ionizing and non-ionizing radiation; (7) case studies.

  9. First scientific application of the membrane cryostat technology

    Energy Technology Data Exchange (ETDEWEB)

    Montanari, David; Adamowski, Mark; Baller, Bruce R.; Barger, Robert K.; Chi, Edward C.; Davis, Ronald P.; Johnson, Bryan D.; Kubinski, Bob M.; Najdzion, John J.; Rucinski, Russel A.; Schmitt, Rich L.; Tope, Terry E. [Particle Physics Division, Fermilab, P.O. Box 500, Batavia, IL 60510 (United States); Mahoney, Ryan; Norris, Barry L.; Watkins, Daniel J. [Technical Division, Fermilab, P.O. Box 500, Batavia, IL 60510 (United States); McCluskey, Elaine G. [LBNE Project, Fermilab, P.O. Box 500, Batavia, IL 60510 (United States); Stewart, James [Physics Department, Brookhaven National Laboratory, P.O. Box 5000, Upton, NY 11973 (United States)

    2014-01-29

    We report on the design, fabrication, performance and commissioning of the first membrane cryostat to be used for a scientific application. The Long Baseline Neutrino Experiment (LBNE) has designed and fabricated a membrane cryostat prototype in collaboration with IHI Corporation (IHI). The original goals of the prototype are: to demonstrate the membrane cryostat technology in terms of thermal performance, feasibility for liquid argon, and leak tightness; to demonstrate that we can remove all the impurities from the vessel and achieve the purity requirements in a membrane cryostat without evacuation, using only a controlled gaseous argon purge; and to demonstrate that we can achieve and maintain the purity requirements of the liquid argon during filling, purification, and maintenance mode using molecular sieve and copper filters from the Liquid Argon Purity Demonstrator (LAPD) R and D project. The purity requirement of a large liquid argon detector such as LBNE is a contaminant level below 200 parts per trillion oxygen equivalent. This paper gives the requirements, design, construction, and performance of the LBNE membrane cryostat prototype, with experience and results important to the development of the LBNE detector.

  10. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the points of view of applications, architectures, and tools and methodologies. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  11. Synthesis of functionalized 3D porous graphene using both ionic liquid and SiO2 spheres as "spacers" for high-performance application in supercapacitors.

    Science.gov (United States)

    Li, Tingting; Li, Na; Liu, Jiawei; Cai, Kai; Foda, Mohamed F; Lei, Xiaomin; Han, Heyou

    2015-01-14

    In this work, a high-capacity supercapacitor material based on functionalized three-dimensional (3D) porous graphene was fabricated by low temperature hydrothermal treatment of graphene oxide (GO) using both ionic liquid (IL) and SiO2 spheres as "spacers". In the synthesis, the introduction of dual "spacers" effectively enlarged the interspace between graphene sheets and suppressed their re-stacking. In addition, the IL also acted as a structure-directing agent playing a crucial role in inducing the formation of unique 3D architectures. Consequently, fast electron/ion transport channels were successfully constructed and numerous oxygen-containing groups on graphene sheets were effectively reserved, which had unique advantages in decreasing ion diffusion resistance and providing additional pseudocapacitance. As expected, the obtained material exhibited superior specific capacitance and rate capability compared to single "spacer" designed electrodes and simultaneously maintained excellent cycling stability. In particular, there was nearly no loss of its initial capacitance after 3000 cycles. In addition, we further assembled a symmetric two-electrode device using the material, which showed outstanding flexibility and low equivalent series resistance (ESR). More importantly, it was capable of yielding a maximum power density of about 13.3 kW kg⁻¹ with an energy density of about 7.0 W h kg⁻¹ at a voltage of 1.0 V in 1 M H2SO4 electrolyte. All these impressive results demonstrate that the material obtained by this approach is greatly promising for application in high-performance supercapacitors.
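
    As a back-of-envelope cross-check of the figures quoted above, the standard symmetric-cell relation E = ½CV² can be inverted to see what specific capacitance the reported energy density implies. This is a sketch only: the abstract does not give the cell capacitance, and the factor-of-four electrode/cell conversion is the usual symmetric-device assumption, not a value from the paper.

```python
# Back-of-envelope check of the reported supercapacitor figures of merit,
# assuming the standard symmetric-cell relation E = 1/2 * C * V^2.
# The electrode/cell factor of 4 is the usual symmetric-device assumption.

V = 1.0  # cell voltage, volts (from the abstract)

# Reported specific energy: ~7.0 Wh/kg. Convert to J/g and invert E = 1/2 C V^2.
E_wh_per_kg = 7.0
E_j_per_g = E_wh_per_kg * 3600.0 / 1000.0  # Wh/kg -> J/g
C_cell = 2.0 * E_j_per_g / V**2            # F/g, cell level

# In a symmetric device each electrode sees half the voltage and the two
# electrodes are in series, so the electrode capacitance is ~4x the cell value.
C_electrode = 4.0 * C_cell

print(f"cell capacitance      ~{C_cell:.0f} F/g")
print(f"electrode capacitance ~{C_electrode:.0f} F/g")
```

    The implied electrode-level value (~200 F/g) is in the range typical of functionalized graphene electrodes, which is consistent with the claimed superior specific capacitance, though the paper's own measured values remain authoritative.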

  12. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both the theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and become available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed for use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  13. High Performance Grinding and Advanced Cutting Tools

    CERN Document Server

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered include grinding tool formulation and structure, grinding wheel design and conditioning, and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  14. Advanced I/O for large-scale scientific applications

    International Nuclear Information System (INIS)

    Klasky, Scott; Schwan, Karsten; Oldfield, Ron A.; Lofstead, Gerald F. II

    2010-01-01

    As scientific simulations scale to use petascale machines and beyond, the data volumes generated pose a dual problem. First, with increasing machine sizes, the careful tuning of I/O routines becomes more and more important to keep the time spent in I/O acceptable. It is not uncommon, for instance, for 20% of an application's runtime to be spent performing I/O in a 'tuned' system. Careful management of the I/O routines can move that to 5% or even less in some cases. Second, the data volumes are so large, on the order of 10s to 100s of TB, that trying to discover the scientifically valid contributions requires assistance at runtime to both organize and annotate the data. Waiting for offline processing is not feasible due both to the impact on the I/O system and the time required. To reduce this load and improve the ability of scientists to use the large amounts of data being produced, new techniques for data management are required. First, there is a need for techniques for efficient movement of data from the compute space to storage. These techniques should understand the underlying system infrastructure and adapt to changing system conditions. Technologies include aggregation networks, data staging nodes for closer parity with the I/O subsystem, and autonomic I/O routines that can detect system bottlenecks and choose different approaches, such as splitting the output into multiple targets or staggering output processes. Such methods must be end-to-end, meaning that even with properly managed asynchronous techniques, it is still essential to properly manage the later synchronous interaction with the storage system to maintain acceptable performance. Second, for the data being generated, annotations and other metadata must be incorporated to help the scientist understand the output data for the simulation run as a whole, and to select data and data features without concern for what files or other storage technologies were employed. All of these features should be attained while
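
    The staging/asynchronous-output idea described above can be sketched in miniature: the compute loop hands finished buffers to a writer thread through a bounded queue, so output overlaps computation and a slow sink applies back-pressure rather than stalling every step. This is an illustration of the pattern only, not the API of any of the systems the record describes.

```python
# Minimal sketch of asynchronous, staged output: the compute loop hands
# finished buffers to a writer thread via a queue, so I/O overlaps compute.
# All names here are illustrative, not from any specific HPC I/O library.
import queue
import threading

def writer(q, sink):
    """Drain buffers from the staging queue into the storage target."""
    while True:
        buf = q.get()
        if buf is None:          # sentinel: no more output
            break
        sink.append(buf)         # stands in for a real write() call

staging = queue.Queue(maxsize=4)  # bounded: back-pressure if I/O lags
written = []
t = threading.Thread(target=writer, args=(staging, written))
t.start()

for step in range(8):            # the "simulation" loop
    data = [step] * 3            # stands in for a computed field
    staging.put(data)            # enqueue, then immediately keep computing

staging.put(None)                # signal completion
t.join()
print(f"wrote {len(written)} buffers")
```

    The bounded queue is the key design choice: it decouples compute from storage while capping the memory the staged data may occupy.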

  15. Fabrication of novel high performance ductile poly(lactic acid) nanofiber scaffold coated with poly(vinyl alcohol) for tissue engineering applications

    Energy Technology Data Exchange (ETDEWEB)

    Abdal-hay, Abdalla, E-mail: abda_55@jbnu.ac.kr [Dept of Engineering Materials and Mechanical Design, Faculty of Engineering, South Valley of University, Qena 83523 (Egypt); Hussein, Kamal Hany [Stem Cell Institute and College of Veterinary Medicine, Kangwon National University, Chuncheon, Gangwon 200-701 (Korea, Republic of); Casettari, Luca [Department of Biomolecular Sciences, University of Urbino, Piazza Rinascimento, 6, Urbino, PU 61029 (Italy); Khalil, Khalil Abdelrazek [Dept. of Mechanical Engineering, College of Engineering, King Saud University, 800, Riyadh 11421 (Saudi Arabia); Dept. of Mechanical Engineering, Faculty of Energy Engineering, Aswan University, Aswan (Egypt); Hamdy, Abdel Salam [Dept. of Manufacturing and Industrial Engineering, College of Engineering and Computer Science, University of Texas Rio Grande Valley, 1201 West University Dr., Edinburg, TX 78541-2999 (United States)

    2016-03-01

    Poly(lactic acid) (PLA) nanofiber scaffold has received increasing interest as a promising material for potential application in the field of regenerative medicine. However, its low hydrophilicity and poor ductility restrict its practical application. Integration of a hydrophilic, elastic polymer onto the surface of the nanofiber scaffold may help to overcome the drawbacks of PLA. Herein, we successfully optimized the parameters for in situ deposition of poly(vinyl alcohol) (PVA) onto post-electrospun PLA nanofibers using a simple hydrothermal approach. Our results showed that the average fiber diameter of the coated nanofiber mat is about 1265 ± 222 nm, which is remarkably higher than its pristine counterpart (650 ± 180 nm). The hydrophilicity of the PLA nanofiber scaffold coated with a PVA thin layer improved dramatically (36.11 ± 1.5°) compared to that of the pristine PLA scaffold (119.7 ± 1.5°). Mechanical testing showed that the PLA nanofiber scaffold could be converted from rigid to ductile with enhanced tensile strength, due to maximizing the hydrogen bond interaction during the heat treatment and in the presence of PVA. Cytocompatibility of the pristine and PVA-coated PLA fibers was assessed in vitro through cell attachment and the MTT assay with EA.hy926 human endothelial cells. The cytocompatibility results showed that human cells exhibited more favorable attachment and proliferation behavior on the hydrophilic PLA composite scaffold than on pristine PLA. Hence, the PVA coating resulted in an increase in initial human cell attachment and proliferation. We believe that the novel PVA-coated PLA nanofiber scaffold developed in this study could be a promising high performance biomaterial in regenerative medicine. - Highlights: • Novel PVA-coated PLA nanofibers were prepared by a simple hydrothermal route. • This in situ treatment strategy for PLA fibers induced polymer chain conformation. • Bonding interaction

  16. Fabrication of novel high performance ductile poly(lactic acid) nanofiber scaffold coated with poly(vinyl alcohol) for tissue engineering applications

    International Nuclear Information System (INIS)

    Abdal-hay, Abdalla; Hussein, Kamal Hany; Casettari, Luca; Khalil, Khalil Abdelrazek; Hamdy, Abdel Salam

    2016-01-01

    Poly(lactic acid) (PLA) nanofiber scaffold has received increasing interest as a promising material for potential application in the field of regenerative medicine. However, its low hydrophilicity and poor ductility restrict its practical application. Integration of a hydrophilic, elastic polymer onto the surface of the nanofiber scaffold may help to overcome the drawbacks of PLA. Herein, we successfully optimized the parameters for in situ deposition of poly(vinyl alcohol) (PVA) onto post-electrospun PLA nanofibers using a simple hydrothermal approach. Our results showed that the average fiber diameter of the coated nanofiber mat is about 1265 ± 222 nm, which is remarkably higher than its pristine counterpart (650 ± 180 nm). The hydrophilicity of the PLA nanofiber scaffold coated with a PVA thin layer improved dramatically (36.11 ± 1.5°) compared to that of the pristine PLA scaffold (119.7 ± 1.5°). Mechanical testing showed that the PLA nanofiber scaffold could be converted from rigid to ductile with enhanced tensile strength, due to maximizing the hydrogen bond interaction during the heat treatment and in the presence of PVA. Cytocompatibility of the pristine and PVA-coated PLA fibers was assessed in vitro through cell attachment and the MTT assay with EA.hy926 human endothelial cells. The cytocompatibility results showed that human cells exhibited more favorable attachment and proliferation behavior on the hydrophilic PLA composite scaffold than on pristine PLA. Hence, the PVA coating resulted in an increase in initial human cell attachment and proliferation. We believe that the novel PVA-coated PLA nanofiber scaffold developed in this study could be a promising high performance biomaterial in regenerative medicine. - Highlights: • Novel PVA-coated PLA nanofibers were prepared by a simple hydrothermal route. • This in situ treatment strategy for PLA fibers induced polymer chain conformation. • Bonding interaction

  17. Developing Scientific Thinking Methods and Applications in Islamic Education

    Science.gov (United States)

    Al-Sharaf, Adel

    2013-01-01

    This article traces early and medieval Islamic scholarship's contribution to the development of critical and scientific thinking, and how it led to an Islamic theory of epistemology and scientific thinking education. The article elucidates how the Qur'an and the Sunna of Prophet Muhammad have also contributed to the…

  18. Advanced scientific computational methods and their applications to nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the first issue, presenting an overview of scientific computational methods and an introduction to continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  19. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the fourth issue, presenting an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method and dislocation dynamics. (T. Tanaka)
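
    Of the simulation methods listed, the kinetic Monte Carlo approach is compact enough to sketch: the residence-time algorithm picks one event with probability proportional to its rate and advances the clock by an exponentially distributed increment. The rates below are invented for illustration; they stand in for, e.g., defect migration and recombination rates in an irradiated material.

```python
# Minimal kinetic Monte Carlo (residence-time / BKL) sketch: pick an event
# with probability proportional to its rate and advance time by an
# exponentially distributed increment. Rates are made up for illustration.
import math
import random

def kmc_step(rates, rng):
    """One KMC step: returns (chosen event index, time increment)."""
    total = sum(rates)
    # choose an event proportionally to its rate
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    # advance the clock by an exponential waiting time
    dt = -math.log(rng.random()) / total
    return i, dt

rng = random.Random(42)
rates = [1.0, 0.5, 0.1]   # e.g. vacancy hop, interstitial hop, recombination
t, counts = 0.0, [0, 0, 0]
for _ in range(10000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
print(counts, round(t, 1))
```

    Over many steps the event counts approach the ratio of the rates (here roughly 10 : 5 : 1), and the mean time increment is 1/(sum of rates).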

  20. Rapid synthesis and characterization of hybrid ZnO@Au core–shell nanorods for high performance, low temperature NO₂ gas sensor applications

    Energy Technology Data Exchange (ETDEWEB)

    Ponnuvelu, Dinesh Veeran [Nanosensor Laboratory, PSG Institute of Advanced Studies, Coimbatore 641 004 (India); Pullithadathil, Biji, E-mail: bijuja123@yahoo.co.in [Nanosensor Laboratory, PSG Institute of Advanced Studies, Coimbatore 641 004 (India); Prasad, Arun K.; Dhara, Sandip [Surface and Nanoscience Division, Indira Gandhi Center for Atomic Research, Kalpakkam (India); Ashok, Anuradha [Nanosensor Laboratory, PSG Institute of Advanced Studies, Coimbatore 641 004 (India); Mohamed, Kamruddin; Tyagi, Ashok Kumar [Surface and Nanoscience Division, Indira Gandhi Center for Atomic Research, Kalpakkam (India); Raj, Baldev [Nanosensor Laboratory, PSG Institute of Advanced Studies, Coimbatore 641 004 (India)

    2015-11-15

    Graphical abstract: - Highlights: • Hybrid ZnO@Au core–shell nanorods were developed using a rapid chemical method and can be used as a high performance, low temperature NO₂ gas sensor. • Surface defect analysis (PL and XPS) clearly illustrates the presence of surface oxygen species and Zn interstitials involved in charge transport properties, in turn affecting gas sensing properties. • Hybrid ZnO@Au core–shell nanorods exhibit enhanced gas sensing performance at 150 °C compared to ZnO (300 °C), with a lower detection limit of 500 ppb using conventional electrodes. • The enhanced performance of the ZnO@Au core–shell nanorod based sensor was due to the presence of Au nanoclusters on the surface of the ZnO nanorods, attributed to the formation of Schottky contacts at the interfaces leading to sensitization effects. • The hybrid material was found to be selective toward NO₂ gas and highly stable. - Abstract: A rapid synthesis route for hybrid ZnO@Au core–shell nanorods has been realized for ultrasensitive, trace-level NO₂ gas sensor applications. ZnO nanorods and hybrid ZnO@Au core–shell nanorods were structurally analyzed using X-ray diffraction (XRD), high resolution transmission electron microscopy (HR-TEM) and X-ray photoelectron spectroscopy (XPS). Optical characterization using UV–visible (UV–vis), photoluminescence (PL) and Raman spectroscopies elucidates alterations in the percentage of defects and the charge transport properties of the ZnO@Au core–shell nanorods. The study reveals the accumulation of electrons at metal–semiconductor junctions leading to upward band bending for ZnO, which favors direct electron transfer from ZnO to the Au nanoclusters and mitigates the charge carrier recombination process. The operating temperature of the ZnO@Au core–shell nanorod based sensor significantly decreased to 150 °C compared to alternate NO₂ sensors (300 °C). Moreover, a linear sensor response in the range of 0.5–5

  1. Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes

    DEFF Research Database (Denmark)

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-01-01

    high-performance many-core platforms using the Open Compute Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from predictions of approximate theories and clarify the time-scale of the transfer-process. We investigate the impact of resonantly...
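
    The full HEOM method is beyond a short example, but the underlying object of such calculations, propagation of an exciton wavefunction under a Frenkel Hamiltonian, can be sketched for a closed two-site system. This is a toy stand-in with arbitrary parameters (no bath, ħ = 1), not the LHC II calculation described above.

```python
# Toy two-site Frenkel-exciton propagation (closed system, no bath), as a
# stand-in for the full HEOM treatment described above. Site energies and
# coupling are arbitrary illustrative numbers (units of hbar = 1).
e1, e2, J = 0.0, 0.5, 1.0          # site energies and electronic coupling

def deriv(psi):
    """d|psi>/dt = -i H |psi| for the 2x2 Hamiltonian [[e1, J], [J, e2]]."""
    a, b = psi
    return (-1j * (e1 * a + J * b), -1j * (J * a + e2 * b))

def rk4_step(psi, dt):
    """Classic fourth-order Runge-Kutta step for the Schroedinger equation."""
    def add(p, q, s):  # p + s*q, componentwise
        return tuple(pi + s * qi for pi, qi in zip(p, q))
    k1 = deriv(psi)
    k2 = deriv(add(psi, k1, dt / 2))
    k3 = deriv(add(psi, k2, dt / 2))
    k4 = deriv(add(psi, k3, dt))
    return tuple(p + dt / 6 * (a + 2 * b + 2 * c + d)
                 for p, a, b, c, d in zip(psi, k1, k2, k3, k4))

psi = (1.0 + 0j, 0.0 + 0j)         # excitation starts on site 1
for _ in range(1000):
    psi = rk4_step(psi, 0.01)

norm = abs(psi[0])**2 + abs(psi[1])**2
pop2 = abs(psi[1])**2
print(f"norm={norm:.6f}  site-2 population={pop2:.3f}")
```

    Norm conservation is a quick sanity check on the integrator; in the real open-system problem the HEOM adds a hierarchy of auxiliary matrices to capture the bath, which is what makes many-core acceleration worthwhile.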

  2. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  3. NMR spectrometers. Current status and assessment of demand for high-resolution NMR spectrometers and for high-performance solid-state NMR spectrometers at the scientific colleges and other research institutes in the Federal Republic of Germany. Pt. 1

    International Nuclear Information System (INIS)

    Schmidt, K.

    1989-01-01

    The survey includes high-resolution NMR spectrometers for liquids and solutions with magnetic field intensities of 11.7 Tesla and more (proton frequencies from 500 to 600 MHz) as well as high-performance solid-state NMR spectrometers with field intensities of at least 6.3 Tesla (proton frequencies of 270 MHz and more). The results, which were obtained from manufacturers' documentation, are presented in a way that respects the manufacturers' need for confidentiality. Market shares and sites are not listed. (DG) [de
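
    The field-strength/frequency pairs quoted in the record can be checked against the proton Larmor relation ν = (γ/2π)B, using the standard proton value γ/2π ≈ 42.577 MHz/T:

```python
# Check the field-strength / proton-frequency pairs quoted above using the
# Larmor relation nu = (gamma / 2 pi) * B, with gamma/2pi = 42.577 MHz/T
# for protons (a standard physical constant).
GAMMA_H = 42.577  # proton gyromagnetic ratio / 2pi, in MHz per tesla

for field_T in (11.7, 6.3):
    freq_MHz = GAMMA_H * field_T
    print(f"{field_T:5.1f} T  ->  {freq_MHz:6.1f} MHz proton frequency")
```

    The results (~498 MHz and ~268 MHz) match the quoted 500 MHz and 270 MHz classes, which are rounded nominal frequencies.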

  4. Rapid in situ synthesis of spherical microflower Pt/C catalyst via spray-drying for high performance fuel cell application

    Energy Technology Data Exchange (ETDEWEB)

    Balgis, R.; Ogi, T.; Okuyama, K. [Department of Chemical Engineering, Graduate School of Engineering, Hiroshima University, Higashi Hiroshima, Hiroshima (Japan); Anilkumar, G.M.; Sago, S. [Research and Development Centre, Noritake Co., Ltd., Higashiyama, Miyoshi, Aichi (Japan)

    2012-08-15

    A facile route for the rapid in situ synthesis of platinum nanoparticles on spherical microflower carbon has been developed. An aqueous precursor slurry containing carbon black, polystyrene latex (PSL), polyvinyl alcohol, and platinum salt was spray-dried, followed by calcination to simultaneously reduce the platinum salt and decompose the PSL particles. The prepared Pt/C catalyst showed high-performance electrocatalytic activity with excellent durability. The mass activity and specific activity values were 132.26 mA mg⁻¹ Pt and 207.62 μA cm⁻² Pt, respectively. This work presents a future direction for the production of high-performance Pt/C catalysts on an industrial scale. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
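
    The two reported activity figures imply an electrochemically active surface area via the standard relation ECSA = mass activity / specific activity. A quick unit-conversion check, using only the numbers quoted above (a consistency sketch, not a value reported by the authors):

```python
# Consistency check on the reported catalyst activities: the electrochemically
# active surface area (ECSA) implied by them is mass activity / specific
# activity. This is the standard relation, applied to the quoted numbers.
mass_activity = 132.26e-3      # A per mg Pt  (132.26 mA/mg)
specific_activity = 207.62e-6  # A per cm^2 Pt (207.62 uA/cm^2)

ecsa_cm2_per_mg = mass_activity / specific_activity
ecsa_m2_per_g = ecsa_cm2_per_mg * 1e3 / 1e4   # mg -> g, cm^2 -> m^2

print(f"implied ECSA ~ {ecsa_m2_per_g:.1f} m^2/g Pt")
```

    The implied ~64 m²/g is a plausible surface area for well-dispersed Pt nanoparticles, so the two figures are mutually consistent.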

  5. Application of High-Performance Liquid Chromatography Coupled with Linear Ion Trap Quadrupole Orbitrap Mass Spectrometry for Qualitative and Quantitative Assessment of Shejin-Liyan Granule Supplements

    OpenAIRE

    Jifeng Gu; Weijun Wu; Mengwei Huang; Fen Long; Xinhua Liu; Yizhun Zhu

    2018-01-01

    A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanisms and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloi...

  6. Designing a High Performance Parallel Personal Cluster

    OpenAIRE

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large-scale computing infrastructures. Yet many challenges are still open. The costs of energy consumption and cooling, and competition for resources, have been some of the reasons why the scientifi...

  7. Electrochromism: basis and application of nanomaterials in development of high performance electrodes; Eletrocromismo: fundamentos e a aplicacao de nanomateriais no desenvolvimento de eletrodos de alto desempenho

    Energy Technology Data Exchange (ETDEWEB)

    Quintanilha, Ronaldo C.; Rocha, Igor; Vichessi, Raquel B.; Lucht, Emili; Naidek, Karine; Winnischofer, Herbert; Vidotti, Marcio [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Departamento de Quimica

    2014-07-01

    This review deals with the fundamentals and novel trends in electrochromism, describing the basic aspects and methodologies employed for the construction and analysis of different modified electrodes. The work presents the classic materials used for the construction of electrochromic electrodes, such as WO₃, and a view on the basic concepts of chromaticity as a useful approach for analyzing colorimetric results. The report also addresses how the incorporation of nanomaterials and the consequent novel modification of electrodes have furthered this area of science, producing electrochromic electrodes with high performance, high efficiency and short response times. (author)

  8. Application of software quality assurance to a specific scientific code development task

    International Nuclear Information System (INIS)

    Dronkers, J.J.

    1986-03-01

    This paper describes an application of software quality assurance to a specific scientific code development program. The software quality assurance program consists of three major components: administrative control, configuration management, and user documentation. The program attempts to be consistent with existing local traditions of scientific code development while at the same time providing a controlled process of development.

  9. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.
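
    The JCI tool itself is Java-specific, but the mixed-language pattern it embodies, a high-level language calling into compiled native scientific libraries, can be illustrated in miniature with Python's ctypes. This is an analogy under stated assumptions (a Unix-like system with a loadable C math library), not JCI's API.

```python
# Mixed-language programming in miniature: call the C math library from a
# high-level language. The JCI tool does the analogous job for Java; this
# ctypes sketch assumes a Unix-like system where libm can be located.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_path)

# Declare the C signature so ctypes marshals arguments correctly:
# double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

result = libm.sqrt(9.0)
print(f"sqrt(9.0) from C = {result}")
```

    Declaring `argtypes`/`restype` is the moral equivalent of the interface descriptions a tool like JCI generates: without it, values cross the language boundary with the wrong representation.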

  10. Application of Fourier transform near-infrared spectroscopy combined with high-performance liquid chromatography in rapid and simultaneous determination of essential components in crude Radix Scrophulariae.

    Science.gov (United States)

    Li, Xiaomeng; Fang, Dansi; Cong, Xiaodong; Cao, Gang; Cai, Hao; Cai, Baochang

    2012-12-01

    A method using rapid and sensitive Fourier transform near-infrared spectroscopy combined with high-performance liquid chromatography with diode array detection is described for the simultaneous identification and determination of four bioactive compounds in crude Radix Scrophulariae samples. Partial least squares regression was selected as the analysis type, and multiplicative scatter correction, second derivative, and Savitzky-Golay filtering were adopted for spectral pretreatment. The correlation coefficients (R) of the calibration models were above 0.96 and the root mean square errors of prediction were under 0.028. The developed models were applied to unknown samples with satisfactory results. The established method was validated and can be applied to the intrinsic quality control of crude Radix Scrophulariae.
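
    Of the pretreatments mentioned, multiplicative scatter correction is simple enough to sketch: each spectrum is regressed against the mean spectrum, and the fitted offset and slope are removed. This is a pure-Python sketch with synthetic "spectra", not the authors' implementation:

```python
# Minimal multiplicative scatter correction (MSC): each spectrum x is
# regressed against the mean spectrum ref (x ~ a + b*ref) and corrected
# as (x - a) / b, removing additive and multiplicative scatter effects.

def msc(spectra):
    n = len(spectra[0])
    ref = [sum(col) / len(spectra) for col in zip(*spectra)]
    ref_mean = sum(ref) / n
    corrected = []
    for x in spectra:
        x_mean = sum(x) / n
        # ordinary least-squares fit of x on ref
        b = (sum((xi - x_mean) * (ri - ref_mean) for xi, ri in zip(x, ref))
             / sum((ri - ref_mean) ** 2 for ri in ref))
        a = x_mean - b * ref_mean
        corrected.append([(xi - a) / b for xi in x])
    return corrected

# Two synthetic "spectra" that differ only by scatter (offset + scaling).
base = [0.1, 0.4, 0.9, 0.4, 0.1]
spectra = [[0.05 + 1.2 * v for v in base],
           [0.20 + 0.8 * v for v in base]]
out = msc(spectra)
print([round(v, 3) for v in out[0]])
```

    Because both synthetic spectra are affine transforms of the same base signal, MSC maps them onto the same corrected curve, which is exactly the behavior the pretreatment is meant to provide before PLS calibration.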

  11. Application of denaturing high-performance liquid chromatography (DHPLC) for the identification of fish: a new way to determine the composition of processed food containing multiple species.

    Science.gov (United States)

    Le Fresne, Sophie; Popova, Milena; Le Vacon, Françoise; Carton, Thomas

    2011-12-14

    The identification of fish species in transformed food products is difficult because the existing methods are not adapted to heat-processed products containing more than one species. Using a region of the cytochrome b gene common to all vertebrates, we have developed a denaturing high-performance liquid chromatography (DHPLC) fingerprinting method, which allowed us to identify most of the species in commercial crab sticks. Whole fish and fillets were used for the creation of a library of reference DHPLC profiles. Crab sticks generated complex DHPLC profiles in which the number of fish species present can be estimated from the number of major fluorescence peaks. The identity of some of the species was predicted by comparison of the peaks with the reference profiles, and others were identified after collection of the peak fractions, reamplification, and sequencing. DHPLC appears to be a quick and efficient method to analyze the species composition of complex heat-processed fish products.

  12. Application of dispersive liquid-liquid microextraction for the preconcentration of eight parabens in real samples and their determination by high-performance liquid chromatography.

    Science.gov (United States)

    Shen, Xiong; Liang, Jian; Zheng, Luxia; Lv, Qianzhou; Wang, Hong

    2017-11-01

    A simple and sensitive method for the simultaneous determination of eight parabens in human plasma and urine samples was developed. The samples were preconcentrated using dispersive liquid-liquid microextraction based on the solidification of floating organic drops and determined by high-performance liquid chromatography with ultraviolet detection. The influence of variables affecting the extraction efficiency was investigated and optimized using Plackett-Burman design and Box-Behnken design. The optimized values were: 58 μL of 1-decanol (as extraction solvent), 0.65 mL methanol (as disperser solvent), 1.5% w/v NaCl in 5.0 mL of sample solution, pH 10.6, and 4.0 min centrifugation at 4000 rpm. The extract was injected into the high-performance liquid chromatography system for analysis. Under the optimum conditions, the linear ranges for the eight parabens in plasma and urine were 1.0-1000 ng/mL, with correlation coefficients above 0.994. The limits of detection were 0.2-0.4 and 0.1-0.4 ng/mL for plasma and urine samples, respectively. Relative recoveries were between 80.3 and 110.7%, while relative standard deviations were less than 5.4%. Finally, the method was applied to analyze parabens in samples from 98 primary breast cancer patients. Results showed that parabens were widespread, with at least one paraben detected in 96.9% (95/98) of plasma samples and 98.0% (96/98) of urine samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Sensitive determination of 4-O-methylhonokiol in rabbit plasma by high performance liquid chromatography and application to its pharmacokinetic investigation

    Directory of Open Access Journals (Sweden)

    Ming-Yue Li

    2011-05-01

    Full Text Available A novel high performance liquid chromatography method was developed for the determination of 4-O-methylhonokiol in rabbit plasma and was applied to its pharmacokinetic investigation. Plasma samples were treated with a one-fold volume of methanol and acetonitrile to remove interfering proteins. A reverse phase SHIM-PACK VP-ODS column (150 mm × 4.6 mm, 5.0 μm) was used to separate 4-O-methylhonokiol in the plasma samples. The detection limit of 4-O-methylhonokiol was 0.2 μg/L and the linear range was 0.012 – 1.536 μg/L. Good extraction recoveries were obtained for the spiked samples (84.7%, 89.3% and 87.7% for low, middle and high concentrations of added standards, respectively). The relative standard deviations of intra-day and inter-day precision ranged from 0.6% to 13.5%. A pharmacokinetic study of 4-O-methylhonokiol was carried out, and the plasma concentration-time curve showed a two-compartment open model. This work developed a sensitive, stable and rapid HPLC method for the determination of 4-O-methylhonokiol, and the method has been successfully applied to a pharmacokinetic study of 4-O-methylhonokiol. Keywords: 4-O-methylhonokiol, Cortex Magnoliae Officinalis, high performance liquid chromatography, pharmacokinetics

  14. 75 FR 51439 - Proposed Information Collection; Comment Request; Application and Reports for Scientific Research...

    Science.gov (United States)

    2010-08-20

    ... DEPARTMENT OF COMMERCE National Oceanic and Atmospheric Administration Proposed Information Collection; Comment Request; Application and Reports for Scientific Research and Enhancement Permits Under the Endangered Species Act AGENCY: National Oceanic and Atmospheric Administration (NOAA), Commerce...

  15. Proceeding of the Scientific Meeting and Presentation on Accelerator Technology and its Application

    International Nuclear Information System (INIS)

    Sudjatmoko; Anggraita, P.; Darsono; Sudiyanto; Kusminarto; Karyono

    1999-07-01

    The proceeding contains papers presented at the Scientific Meeting and Presentation on Accelerator Technology and Its Application, held in Yogyakarta, 16 January 1996. The proceeding contains papers on accelerator technology, especially electron beam machines. There are 11 papers indexed individually. (ID)

  16. Musculoskeletal applications of magnetic resonance imaging: Council on Scientific Affairs

    International Nuclear Information System (INIS)

    Harms, S.E.; Fisher, C.F.; Fulmer, J.M.

    1989-01-01

    Magnetic resonance imaging provides superior contrast, resolution, and multiplanar imaging capability, allowing excellent definition of soft-tissue and bone marrow abnormalities. For these reasons, magnetic resonance imaging has become a major diagnostic imaging method for the evaluation of many musculoskeletal disorders. The applications of magnetic resonance imaging for musculoskeletal diagnosis are summarized and examples of common clinical situations are given. General guidelines are suggested for the musculoskeletal applications of magnetic resonance imaging

  17. The scientific research programmes of Lakatos and applications in parasitology

    Directory of Open Access Journals (Sweden)

    Cabaret J.

    2008-09-01

    Full Text Available The methodology of scientific research programmes (MSRP) proposed by Lakatos was in line with the proposals made by Popper. MSRP was intended for constructing and evaluating research programmes, which is unique among philosophers of science. Surprisingly, scientists dedicated to research in mathematics, physics or biology have not made much use of MSRP. This could be because scientists are not aware of the existence of MSRP, or because they find it difficult to apply to their own investigations. That is why we first present the main characteristics of this methodology (the hard core – the group of hypotheses admitted by experts in the field; auxiliary hypotheses – intended to protect and refine the hypotheses of the hard core; and heuristics for amending and evaluating the MSRP) and, secondly, propose an example in helminthology. We think that the methodology of Lakatos is a useful tool, but it cannot encompass the large flexibility of investigation pathways.

  18. RavenDB high performance

    CERN Document Server

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial for developers. This book is for developers & software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  19. Application of high performance liquid chromatography with inductively coupled plasma mass spectrometry (HPLC-ICP-MS) for determination of chromium compounds in the air at the workplace.

    Science.gov (United States)

    Stanislawska, Magdalena; Janasik, Beata; Wasowicz, Wojciech

    2013-12-15

    The toxicity and bioavailability of chromium are highly dependent on its chemical form or species; therefore, determination of total chromium is insufficient for a complete toxicological evaluation and risk assessment. An analytical method for determination of soluble and insoluble Cr (III) and Cr (VI) compounds in welding fume in workplace air has been developed. Total chromium (Cr) was determined by quadrupole inductively coupled plasma mass spectrometry (ICP-MS) equipped with a dynamic reaction cell (DRC(®)). Soluble trivalent and hexavalent chromium compounds were determined by high performance liquid chromatography with inductively coupled plasma mass spectrometry (HPLC-ICP-MS). A high-speed, reversed-phase CR C8 column (PerkinElmer, Inc., Shelton, CT, USA) was used for the speciation of soluble Cr (III) and soluble Cr (VI). The separation was accomplished by interaction of the chromium species with the different components of the mobile phase: Cr (III) formed a complex with EDTA and was thus retained on the column, while Cr (VI) existed in the solutions as dichromate. Alkaline extraction (2% KOH and 3% Na2CO3) and an anion exchange column (PRP-X100, PEEK, Hamilton) were used for the separation of total Cr (VI). The results of the determination of Cr (VI) were confirmed by analysis of the certified reference material BCR CRM 545 (Cr (VI) in welding dust). The results obtained for the certified material (40.2±0.6 g kg(-1)) and the values recorded in the examined samples (40.7±0.6 g kg(-1)) were highly consistent. The analytical method was applied to the determination of chromium in workplace air samples collected onto glass (Whatman, Ø 37 mm) and membrane filters (Sartorius, 0.8 μm, Ø 37 mm). High performance liquid chromatography with inductively coupled plasma mass spectrometry is a remarkably powerful and versatile technique for the determination of chromium species in welding fume in workplace air. Crown Copyright © 2013.

  20. High performance structural ceramics for nuclear industry

    International Nuclear Information System (INIS)

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

    A family of Saint-Gobain structural ceramic materials and products produced by its High Performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing novel non-oxide ceramic based materials, processes and products for applications in the nuclear, chemical, automotive, defense and mining industries.

  1. International Conference on Scientific and Clinical Applications of Magnetic Carriers

    CERN Document Server

    Schütt, Wolfgang; Teller, Joachim; Zborowski, Maciej

    1997-01-01

    The discovery of uniform latex particles by polymer chemists of the Dow Chemical Company nearly 50 years ago opened up new exciting fields for scientists and physicians and established many new biomedical applications. Many in vitro diagnostic tests such as the latex agglutination tests, analytical cell and phagocytosis tests have since become routine. They were all developed on the basis of small particles bound to biological active molecules and fluorescent and radioactive markers. Further developments are ongoing, with the focus now shifted to applications of polymer particles in the controlled and directed transport of drugs in living systems. Four important factors make microspheres interesting for in vivo applications: First, biocompatible polymer particles can be used to transport known amounts of drug and release them in a controlled fashion. Second, particles can be made of materials which biodegrade in living organisms without doing any harm. Third, particles with modified surfaces are a...

  2. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    International Nuclear Information System (INIS)

    Ruebel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat

    2008-01-01

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system

  3. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.
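
    The index/query step described in the record above boils down to evaluating range conditions over large particle tables and histogramming the selected subset. A minimal NumPy sketch of that pattern follows, with boolean masks standing in for the bitmap indexes used in the production system; the variable names and cut values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
# synthetic "particle" table: longitudinal momentum and position
px = rng.normal(0.0, 1.0, n)   # arbitrary units
x = rng.uniform(0.0, 10.0, n)

# range query: select high-momentum particles (a bitmap index would
# answer this without scanning the full array)
mask = px > 2.0

# conditional histogram of the selected subset, the building block of
# a histogram-based parallel-coordinates display
hist, edges = np.histogram(x[mask], bins=20, range=(0.0, 10.0))
```

    The conditional histograms, rather than the raw points, are what get rendered, which is what keeps the display tractable for extremely large data.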

  4. A Wide Linearity Range Method for the Determination of Lenalidomide in Plasma by High-Performance Liquid Chromatography: Application to Pharmacokinetic Studies.

    Science.gov (United States)

    Guglieri-López, Beatriz; Pérez-Pitarch, Alejandro; Martinez-Gómez, Maria Amparo; Porta-Oltra, Begoña; Climente-Martí, Mónica; Merino-Sanjuán, Matilde

    2016-12-01

    A wide linearity range analytical method for the determination of lenalidomide in patients with multiple myeloma for pharmacokinetic studies is required. Plasma samples were ultrasonicated for protein precipitation. A solid-phase extraction was performed. The eluted samples were evaporated to dryness under vacuum, and the solid obtained was diluted and injected into the high-performance liquid chromatography (HPLC) system. Separation of lenalidomide was performed on an Xterra RP C18 (250 mm length × 4.6 mm i.d., 5 µm) using a mobile phase consisting of phosphate buffer/acetonitrile (85:15, v/v, pH 3.2) at a flow rate of 0.5 mL·min(-1). The samples were monitored at a wavelength of 311 nm. A linear relationship with good correlation coefficient (r = 0.997, n = 9) was found between the peak area and lenalidomide concentrations in the range of 100 to 950 ng·mL(-1). The limits of detection and quantitation were 28 and 100 ng·mL(-1), respectively. The intra- and interassay precisions were satisfactory, and the accuracy of the method was proved. In conclusion, the proposed method is suitable for the accurate quantification of lenalidomide in human plasma with a wide linear range, from 100 to 950 ng·mL(-1). This is a valuable method for pharmacokinetic studies of lenalidomide in human subjects. © 2016 Society for Laboratory Automation and Screening.
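
    The abstract reports its figures of merit (r = 0.997; LOD and LOQ of 28 and 100 ng/mL) without stating how they were derived. A common convention (ICH Q2) estimates LOD = 3.3σ/S and LOQ = 10σ/S from the residual standard deviation σ and slope S of the calibration line; the sketch below applies that convention to hypothetical calibration data, not the paper's values.

```python
import numpy as np

# hypothetical calibration points (concentration in ng/mL vs. peak area);
# illustrative values only, not data from the study
conc = np.array([100.0, 200.0, 300.0, 450.0, 600.0, 750.0, 950.0])
area = np.array([1.02, 2.05, 2.96, 4.51, 6.08, 7.44, 9.55])

# least-squares calibration line and its residual standard deviation
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
resid_sd = np.sqrt(((area - pred) ** 2).sum() / (len(conc) - 2))
r = np.corrcoef(conc, area)[0, 1]

lod = 3.3 * resid_sd / slope   # ICH Q2 limit of detection
loq = 10.0 * resid_sd / slope  # ICH Q2 limit of quantitation
```

    Since both limits share σ/S, the LOQ/LOD ratio is fixed at 10/3.3 under this convention regardless of the data.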

  5. Application of ultrasound-assisted emulsification microextraction for simultaneous determination of aminophenol isomers in human urine, hair dye, and water samples using high-performance liquid chromatography.

    Science.gov (United States)

    Asghari, Alireza; Fazl-Karimi, Hamidreza; Barfi, Behruz; Rajabi, Maryam; Daneshfar, Ali

    2014-08-01

    Aminophenol isomers (2-, 3-, and 4-aminophenols) are typically classified as industrial pollutants with genotoxic and mutagenic effects due to their easy penetration through the skin and membranes of humans, animals, and plants. In the present study, a simple and efficient ultrasound-assisted emulsification microextraction procedure coupled with high-performance liquid chromatography with ultraviolet detection was developed for the preconcentration and determination of these compounds in human fluid and environmental water samples. Effective parameters (such as type and volume of extraction solvent, pH and ionic strength of sample, and ultrasonication and centrifuging time) were investigated and optimized. Under optimum conditions (sample volume: 5 mL; extraction solvent: chloroform, 80 µL; pH: 6.5; without salt addition; ultrasonication: 3.5 min; and centrifuging: 3 min at 5000 rpm), the enrichment factors ranged from 42 to 51 and the limits of detection from 0.028 to 0.112 µg mL(-1). Once optimized, the analytical performance of the method was studied in terms of linearity (0.085-157 µg mL(-1), r(2) > 0.998), accuracy (recovery = 88.6-101.7%), and precision (intra-day repeatability). © The Author(s) 2014.

  6. Development, validation, and application of a method for selected avermectin determination in rural waters using high performance liquid chromatography and fluorescence detection.

    Science.gov (United States)

    Lemos, Maria Augusta Travassos; Matos, Camila Alves; de Resende, Michele Fabri; Prado, Rachel Bardy; Donagemma, Raquel Andrade; Netto, Annibal Duarte Pereira

    2016-11-01

    Avermectins (AVM) are macrocyclic lactones used in livestock and agriculture. A quantitative method of high performance liquid chromatography with fluorescence detection for the determination of eprinomectin, abamectin, doramectin and ivermectin in rural water samples was developed and validated. The method was employed to study samples collected in the Pito Aceso River microbasin, located in the Bom Jardim municipality, Rio de Janeiro State, Brazil. Samples were extracted by solid phase extraction using a polymeric stationary phase; the eluted fraction was re-concentrated under a gentle N2 flow and derivatized to allow AVM determination using liquid chromatography with fluorescence detection. The excitation and emission wavelengths of the derivatives were 365 and 470 nm, respectively, and a total chromatographic run of 12 min was achieved. Very low limits of quantification (22-58 ng L(-1)) were found after re-concentration under N2. Recovery values varied from 85.7% to 119.2% with standard deviations between 1.2% and 10.2%. The validated method was applied to the determination of AVM in 15 water samples collected in the Pito Aceso River microbasin; most of them were free of AVM or showed only trace levels of these compounds, except for one sample that contained doramectin (9.11 µg L(-1)). The method is suitable for routine analysis with satisfactory recovery, sensitivity, and selectivity. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Bio-analytical method development and validation of Rasagiline by high performance liquid chromatography tandem mass spectrometry detection and its application to pharmacokinetic study

    Directory of Open Access Journals (Sweden)

    Ravi Kumar Konda

    2012-10-01

    Full Text Available A bio-analytical method based on liquid–liquid extraction has been developed and validated for the quantification of Rasagiline in human plasma. Rasagiline-13C3 mesylate was used as an internal standard for Rasagiline. A Zorbax Eclipse Plus C18 (2.1 mm × 50 mm, 3.5 μm) column provided chromatographic separation of the analyte, followed by detection with mass spectrometry. The method involved a simple isocratic chromatographic condition and mass spectrometric detection in the positive ionization mode using an API-4000 system. The total run time was 3.0 min. The proposed method has been validated with a linear range of 5–12000 pg/mL for Rasagiline. The intra-run and inter-run precision values were within 1.3%–2.9% and 1.6%–2.2%, respectively, for Rasagiline. The overall recovery for Rasagiline and the Rasagiline-13C3 mesylate analog was 96.9% and 96.7%, respectively. This validated method was successfully applied to a bioequivalence and pharmacokinetic study in human volunteers under fasting conditions. Keywords: High performance liquid chromatography, Mass spectrometry, Rasagiline, Liquid–liquid extraction

  8. Direct formation of reduced graphene oxide and 3D lightweight nickel network composite foam by hydrohalic acids and its application for high-performance supercapacitors.

    Science.gov (United States)

    Huang, Haifu; Tang, Yanmei; Xu, Lianqiang; Tang, Shaolong; Du, Youwei

    2014-07-09

    Here, a novel graphene composite foam with a 3D lightweight, continuous and interconnected nickel network was successfully synthesized by hydroiodic (HI) acid reduction using nickel foam as a substrate template. The graphene coated closely onto the backbone of the 3D nickel conductive network during the HI reduction of GO, forming a nickel-network-supported composite foam without any polymeric binder. The nickel conductive network was maintained even with only a small amount of nickel (1.1 mg/cm(2)), replacing the traditional current collector nickel foam (35 mg/cm(2)). In electrochemical measurements, a supercapacitor device based on the 3D nickel network and graphene composite foam exhibited high rate capability of 100 F/g at 0.5 A/g and 86.7 F/g at 62.5 A/g, good cycle stability with capacitance retention of 95% after 2000 cycles, low internal resistance (1.68 Ω), and excellent flexibility. Furthermore, the gravimetric capacitance (calculated using the total mass of the electrode) was as high as 40.9 F/g. Our work not only demonstrates a high-quality graphene/nickel composite foam, but also provides a universal route for the rational design of high-performance supercapacitors.
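
    The rate-capability figures quoted above follow from the standard galvanostatic relation C = I·Δt/(m·ΔV). A minimal sketch of that arithmetic, with illustrative numbers chosen to give a 0.5 A/g rate; the electrode mass, discharge time, and voltage window are assumptions, not values from the paper.

```python
def specific_capacitance(current_a, discharge_s, mass_g, window_v):
    """Gravimetric capacitance from a galvanostatic discharge:
    C = I * dt / (m * dV), in F/g."""
    return current_a * discharge_s / (mass_g * window_v)

# hypothetical cell: 2 mg of active material, 0.8 V window,
# discharged at 1 mA (i.e. 0.5 A/g) over 160 s
c_g = specific_capacitance(current_a=1e-3, discharge_s=160.0,
                           mass_g=2e-3, window_v=0.8)
rate = 1e-3 / 2e-3   # current per unit mass, A/g
```

    Whether m counts only the active material or the whole electrode is exactly the distinction the abstract draws when it quotes 40.9 F/g "using the total mass of the electrode".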

  9. Screening and identification of antioxidants in biological samples using high-performance liquid chromatography-mass spectrometry and its application on Salacca edulis Reinw.

    Science.gov (United States)

    Shui, Guanghou; Leong, Lai Peng

    2005-02-23

    In this study, a new approach was developed for screening and identifying antioxidants in biological samples. The approach was based on significant decreases in the intensities of ion peaks obtained from high-performance liquid chromatography (HPLC) coupled with mass spectrometry (MS) upon reaction with 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) free radicals. HPLC-MS/MS was further applied to elucidate the structures of antioxidant peaks characterized in a spiking test. The new approach could also be used to monitor the reactivity of antioxidants in biological samples with free radicals. The approach was successfully applied to the identification of antioxidants in salak (Salacca edulis Reinw), a tropical fruit reported to be a very good source of natural antioxidants, although it was still not clear which compounds were responsible for its antioxidant property. The antioxidants in salak were identified to be chlorogenic acid, (-)-epicatechin, and singly linked proanthocyanidins that mainly existed as dimers through hexamers of catechin or epicatechin. In salak, chlorogenic acid was identified as an antioxidant of the slow reaction type, as it reacted with free radicals much more slowly than either (-)-epicatechin or the proanthocyanidins. The new approach proved useful for the characterization and identification of antioxidants in biological samples, as a mass detector combined with an HPLC separation system not only serves as an ideal tool to monitor free-radical-active components but also provides their possible chemical structures in a biological sample.

  10. Potential Therapeutic Applications of Mucuna pruriens Peptide Fractions Purified by High-Performance Liquid Chromatography as Angiotensin-Converting Enzyme Inhibitors, Antioxidants, Antithrombotic and Hypocholesterolemic Agents.

    Science.gov (United States)

    Herrera-Chalé, Francisco; Ruiz-Ruiz, Jorge Carlos; Betancur-Ancona, David; Segura-Campos, Maira Rubi

    2016-02-01

    A Mucuna pruriens protein concentrate was hydrolyzed with a digestive (pepsin-pancreatin) enzymatic system. The soluble portion of the hydrolysate was fractionated by ultrafiltration, and the ultrafiltered peptide fraction (PF) with the lowest molecular weight was purified by reversed-phase high-performance liquid chromatography. The PF obtained were evaluated for biological activity in vitro. The fractions inhibited angiotensin-converting enzyme with IC50 values ranging from 2.7 to 6.2 μg/mL. Trolox equivalent antioxidant capacity values ranged from 132.20 to 507.43 mM/mg. The inhibition of human platelet aggregation ranged from 1.59% to 11.11%, and the inhibition of cholesterol micellar solubility ranged from 0.24% to 0.47%. Hydrophobicity, size, and amino acid sequence could be factors determining the biological activity of the peptides contained in the fractions. This is the first report that M. pruriens peptides act as antihypertensives, antioxidants, and inhibitors of human platelet aggregation and cholesterol micellar solubility in vitro.

  11. Sensitive determination of glucose in Dulbecco's modified Eagle medium by high-performance liquid chromatography with 1-phenyl-3-methyl-5-pyrazolone derivatization: application to gluconeogenesis studies.

    Science.gov (United States)

    Ling, Zhaoli; Xu, Ping; Zhong, Zeyu; Wang, Fan; Shu, Nan; Zhang, Ji; Tang, Xiange; Liu, Li; Liu, Xiaodong

    2016-04-01

    A new pre-column derivatization high-performance liquid chromatography (HPLC) method for the determination of d-glucose, with 3-O-methyl-d-glucose (3-OMG) as the internal standard, was developed and validated in order to study gluconeogenesis in HepG2 cells. Samples were derivatized with 1-phenyl-3-methyl-5-pyrazolone at 70°C for 50 min. Glucose and 3-OMG were extracted by liquid-liquid extraction and separated on a YMC-Triart C18 column, with a gradient mobile phase composed of acetonitrile and 20 mM ammonium acetate solution containing 0.09% triethylamine at a flow rate of 1.0 mL/min. The eluate was detected using a UV detector at 250 nm. The assay was linear over the range 0.39-25 μM (R(2) = 0.9997, n = 5) and the lower limit of quantitation was 0.39 μM (0.070 mg/L). Intra- and inter-day precision and accuracy were satisfactory. The method was applied to study gluconeogenesis in Dulbecco's modified Eagle medium (DMEM)-cultured HepG2 cells, in which the glucose concentration was determined to be about 1-2.5 μM. In conclusion, this method has been shown to determine small amounts of glucose in DMEM successfully, with a lower limit of quantitation and better sensitivity than common commercial glucose assay kits. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Simultaneous determination of plasma creatinine, uric acid, kynurenine and tryptophan by high-performance liquid chromatography: method validation and application to the assessment of renal function.

    Science.gov (United States)

    Zhao, Jianxing

    2015-03-01

    A high-performance liquid chromatography with ultraviolet detection method has been developed for the simultaneous determination of a set of reliable markers of renal function, including creatinine, uric acid, kynurenine and tryptophan in plasma. Separation was achieved on an Agilent HC-C18 (2) analytical column. Gradient elution and programmed wavelength detection allowed all of these compounds to be analyzed in a single injection. The total run time was 25 min, with all peaks of interest eluted within 13 min. Good linear responses were found, with correlation coefficients >0.999 for all analytes within the concentration range of the relevant levels. The recovery was: creatinine, 101 ± 1%; uric acid, 94.9 ± 3.7%; kynurenine, 100 ± 2%; and tryptophan, 92.6 ± 2.9%. Within-run and between-run coefficients of variation for all analytes were ≤2.4%. The limits of detection of the method were: creatinine, 0.1 µmol/L; uric acid, 0.05 µmol/L; kynurenine, 0.02 µmol/L; and tryptophan, 1 µmol/L. The developed method could be employed as a useful tool for the detection of chronic kidney disease, even at an early stage. Copyright © 2014 John Wiley & Sons, Ltd.

  13. Study on the interaction between three benzimidazole anthelmintics and eosin Y by high performance liquid chromatography associating with resonance light scattering and its application.

    Science.gov (United States)

    Pan, Ziyu; Peng, Jingdong; Zang, Xu; Lei, Gang; He, Yan; Liu, Di

    2016-07-01

    A novel, highly selective and sensitive resonance light scattering (RLS) detection approach coupled with high performance liquid chromatography (HPLC) was developed for the first time for the simultaneous analysis of three benzimidazole anthelmintics: mebendazole (MBZ), albendazole (ABZ), and fenbendazole (FBZ). In Britton-Robinson buffer medium in the pH range 3.5-3.7, the three anthelmintics, separated by HPLC, reacted with eosin Y (EY) to form 1:1 ion-association complexes, resulting in significantly enhanced RLS signals with the maximum peak located at 335 nm. The enhanced RLS intensity was proportional to the MBZ, ABZ, and FBZ concentrations in the ranges 0.2-25, 0.2-23, and 0.15-20 μg/mL, respectively. The limits of detection were in the range of 0.064-0.16 μg/mL. In addition, the proposed method was validated on spiked and real human urine samples, with satisfactory results obtained by the HPLC-RLS method. Graphical Abstract: mechanism of resonance between emitted and scattered light.

  14. Application of High-Performance Liquid Chromatography Coupled with Linear Ion Trap Quadrupole Orbitrap Mass Spectrometry for Qualitative and Quantitative Assessment of Shejin-Liyan Granule Supplements.

    Science.gov (United States)

    Gu, Jifeng; Wu, Weijun; Huang, Mengwei; Long, Fen; Liu, Xinhua; Zhu, Yizhun

    2018-04-11

    A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanisms and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloids, and one licorice coumarin, were identified or tentatively characterized. In addition, ten of the representative compounds (matrine, galuteolin, tectoridin, iridin, arctiin, tectorigenin, glycyrrhizic acid, irigenin, arctigenin, and irisflorentin) were quantified using the validated HPLC-LTQ-Orbitrap MS method. The method validation showed good linearity, with coefficients of determination (r²) above 0.9914 for all analytes. The intra- and inter-day accuracies of the investigated compounds were 95.0-105.0%, and the precision values were less than 4.89%. The mean recoveries and reproducibilities of each analyte were 95.1-104.8%, with relative standard deviations below 4.91%. The method successfully quantified the ten compounds in Shejin-liyan Granule, and the results show that the method is accurate, sensitive, and reliable.

  15. Ultra-high-performance liquid chromatography tandem mass spectrometry method for the determination of gambogenic acid in dog plasma and its application to a pharmacokinetic study.

    Science.gov (United States)

    Chen, Jin Pei; Wang, Dian Lei; Yang, Li Li; Wang, Chen Yin; Wang, Shan Shan

    2014-12-01

    A highly sensitive and rapid ultra-high-performance liquid chromatography-tandem mass spectrometry method was developed and validated for the determination of gambogenic acid in dog plasma. Gambogic acid was used as an internal standard (IS). After a simple liquid-liquid extraction with ethyl acetate, the analyte and internal standard were separated on an Acquity BEH C18 (100 × 2.1 mm, 1.7 µm; Waters) column at a flow rate of 0.2 mL/min, using 0.1% formic acid-methanol (10:90, v/v) as the mobile phase. An electrospray ionization source was used and operated in the positive ion mode. Multiple reaction monitoring mode with the transitions m/z 631.3 → 507.3 and m/z 629.1 → 573.2 was used to quantify gambogenic acid and the internal standard, respectively. The calibration curves were linear in the range of 5-1000 ng/mL, with a coefficient of determination (r) of 0.999 and good calculated accuracy and precision. The lower limit of quantification was 5 ng/mL. The intra- and inter-day precisions (relative standard deviations) were satisfactory, and the method was successfully applied to a pharmacokinetic study in dogs at a dose of 1 mg/kg. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Application of Silver Ion High-Performance Liquid Chromatography for Quantitative Analysis of Selected n-3 and n-6 PUFA in Oil Supplements.

    Science.gov (United States)

    Czajkowska-Mysłek, Anna; Siekierko, Urszula; Gajewska, Magdalena

    2016-04-01

    The aim of this study was to develop a simple method for the simultaneous determination of selected cis/cis PUFA: LNA (18:2), ALA (18:3), GLA (18:3), EPA (20:5), and DHA (22:6), by silver ion high-performance liquid chromatography coupled to a diode array detector (Ag-HPLC-DAD). The separation was performed on three Luna SCX Silver Loaded columns connected in series, maintained at 10 °C, with isocratic elution by 1% acetonitrile in n-hexane. The applied chromatographic system allowed baseline separation of a standard mixture of n-3 and n-6 fatty acid methyl esters containing LNA, DHA, and EPA, and partial separation of the ALA and GLA positional isomers. The method was validated by means of linearity, precision, stability, and recovery. Limits of detection (LOD) for the PUFA standard solutions ranged from 0.27 to 0.43 mg/L. The developed method was used to evaluate the n-3 and n-6 fatty acid contents in plant and fish softgel oil capsules, and the results were compared with those of a reference GC-FID method.
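    LOD figures such as the 0.27-0.43 mg/L quoted above are commonly obtained from the calibration-based ICH estimate LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation of a low-level calibration line and S its slope. A sketch under that assumption, with invented data rather than the paper's:

```python
import numpy as np

# Assumed low-level calibration for one fatty acid methyl ester:
# concentration (mg/L) versus DAD peak area. Illustrative values only.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([4.1, 8.3, 16.2, 41.0, 81.7, 163.5])

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
# Residual standard deviation; n - 2 degrees of freedom for a fitted line
sigma = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))

lod = 3.3 * sigma / slope   # limit of detection
loq = 10.0 * sigma / slope  # limit of quantification
print(f"LOD = {lod:.2f} mg/L, LOQ = {loq:.2f} mg/L")
```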

  17. Facile fabrication of polyaniline nanotubes using the self-assembly behavior based on the hydrogen bonding: a mechanistic study and application in high-performance electrochemical supercapacitor electrode

    International Nuclear Information System (INIS)

    Wu, Wenling; Pan, Duo; Li, Yanfeng; Zhao, Guanghui; Jing, Lingyun; Chen, Suli

    2015-01-01

    At present, the in situ synthesis of polyaniline (PANI) nanotubes via self-assembly with organic dopant acids is a particularly attractive goal in supercapacitor research. Herein, we report the formation of uniform PANI nanotubes doped with malic acid (MA) and other organic acids, such as propionic acid (PA), succinic acid (SA), tartaric acid (TA) and citric acid (CA), each of which acts simultaneously as a dopant acid and a structure-directing agent. The morphology, structure and thermal stability of the PANI nanotubes were characterized by means of scanning electron microscopy (SEM), transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FTIR), Raman spectroscopy, ultraviolet-visible (UV-vis) spectroscopy, X-ray diffraction (XRD) and thermogravimetric analysis (TGA). Meanwhile, the electrochemical performance of the fabricated electrodes was evaluated by cyclic voltammetry (CV), galvanostatic charge/discharge (GCD), and electrochemical impedance spectroscopy (EIS). Furthermore, the PANI-MA and PANI-CA nanotubes, with an [aniline]/[acid] molar ratio of 4:1, possessed the highest specific capacitances of 658 F/g and 617 F/g at a current density of 0.1 A/g in 1.0 M H2SO4 electrolyte, owing to their unique nanotubular structures. This makes PANI nanotubes a promising electrode material for high-performance supercapacitors.

  18. Application of High-Performance Liquid Chromatography Coupled with Linear Ion Trap Quadrupole Orbitrap Mass Spectrometry for Qualitative and Quantitative Assessment of Shejin-Liyan Granule Supplements

    Directory of Open Access Journals (Sweden)

    Jifeng Gu

    2018-04-01

    A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanism and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloids, and one licorice coumarin, were identified or tentatively characterized. In addition, ten of the representative compounds (matrine, galuteolin, tectoridin, iridin, arctiin, tectorigenin, glycyrrhizic acid, irigenin, arctigenin, and irisflorentin) were quantified using the validated HPLC-LTQ-Orbitrap MS method. The method validation showed good linearity with coefficients of determination (r²) above 0.9914 for all analytes. The accuracy of the intra- and inter-day variation of the investigated compounds was 95.0–105.0%, and the precision values were less than 4.89%. The mean recoveries and reproducibilities of each analyte were 95.1–104.8%, with relative standard deviations below 4.91%. The method successfully quantified the ten compounds in Shejin-liyan Granule, and the results show that the method is accurate, sensitive, and reliable.

  19. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This third issue introduces continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  20. Architecting Web Sites for High Performance

    Directory of Open Access Journals (Sweden)

    Arun Iyengar

    2002-01-01

    Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  1. Engineering of systems for application of scientific computing in industry

    OpenAIRE

    Loeve, W.

    1992-01-01

    Mathematics software is of growing importance for computer simulation in industrial computer-aided engineering. To be applicable in industry, the mathematics software and supporting software must be structured in such a way that functions and performance can be maintained easily. In the present paper, a method is described for developing mathematics software in such a way that this requirement can be met.

  2. Quantification of nimesulide in human plasma by high-performance liquid chromatography with ultraviolet detector (HPLC-UV): application to pharmacokinetic studies in 28 healthy Korean subjects.

    Science.gov (United States)

    Kim, Mi-Sun; Park, Yoo-Sin; Kim, Shin-Hee; Kim, Sang-Yeon; Lee, Min-Ho; Kim, Youn-Hee; Kim, Do-Wan; Yang, Seok-Chul; Kang, Ju-Seop

    2012-05-01

    Nimesulide is a selective COX-2 inhibitor that is as effective as the classical non-acidic nonsteroidal anti-inflammatory drugs in the relief of various pain and inflammatory conditions, but is better tolerated, with lower incidences of adverse effects than other drugs. After an oral dose of 100 mg nimesulide in western subjects, a mean maximal concentration (C(max)) of 2.86 ∼ 6.5 µg/mL was reached at 1.22 ∼ 2.75 h, with a mean t(1/2β) of 1.8 ∼ 4.74 h. This study developed a robust method for the quantification of nimesulide, in order to assess its pharmacokinetics and the suitability of its dosage in Korea in comparison with other populations. Nimesulide and the internal standard were extracted from acidified samples with methyl tert-butyl ether and analyzed by high-performance liquid chromatography with ultraviolet detection (HPLC-UV). The 28 healthy volunteers took 2 tablets of 100 mg nimesulide, and blood concentrations were analyzed during the 24 h post dose. Several pharmacokinetic parameters were obtained: AUC(0-infinity) = 113.0 mg-h/mL, C(max) = 12.06 mg/mL, time to maximal concentration (T(max)) = 3.19 h and t(1/2β) = 4.51 h. These differed from those of western populations as follows: the western AUC was 14.5% and C(max) was 28% of the values in Korean subjects, and T(max) and t(1/2β) were also different. The validated HPLC-UV method was successfully applied to the pharmacokinetic study of nimesulide in Korean subjects. Because the pharmacokinetics of nimesulide differed from those in western populations, its dosage regimen needs to be adjusted for Koreans. © The Author [2012]. Published by Oxford University Press. All rights reserved.
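    The parameters reported above (C(max), T(max), AUC(0-infinity), t(1/2β)) are standard noncompartmental quantities. A sketch of how they are typically derived from a concentration-time profile, using a hypothetical profile rather than the study data:

```python
import numpy as np

# Hypothetical plasma concentration-time profile after an oral dose.
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0])   # h
c = np.array([0.0, 1.8, 4.9, 9.6, 12.1, 11.0, 7.4, 4.9, 2.1, 0.33])  # conc. units

cmax = c.max()        # maximal concentration, C(max)
tmax = t[c.argmax()]  # time of maximal concentration, T(max)

# AUC(0-t) by the linear trapezoidal rule
auc_t = float(np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2.0))

# Terminal elimination rate from log-linear regression of the last points
lam_z = -np.polyfit(t[-4:], np.log(c[-4:]), 1)[0]
t_half = np.log(2) / lam_z  # elimination half-life, t(1/2beta)

# Extrapolation to infinity: AUC(0-inf) = AUC(0-t) + C(last) / lambda_z
auc_inf = auc_t + c[-1] / lam_z

print(cmax, tmax, round(t_half, 2), round(auc_inf, 1))
```

    The choice of terminal points for the log-linear fit is a judgment call in practice; dedicated PK software automates it.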

  3. Simultaneous quantification of the major bile acids in artificial Calculus bovis by high-performance liquid chromatography with precolumn derivatization and its application in quality control.

    Science.gov (United States)

    Shi, Yan; Xiong, Jing; Sun, Dongmei; Liu, Wei; Wei, Feng; Ma, Shuangcheng; Lin, Ruichao

    2015-08-01

    An accurate and sensitive high-performance liquid chromatography method with ultraviolet detection and precolumn derivatization was developed for the simultaneous quantification of the major bile acids in artificial Calculus bovis, including cholic acid, hyodeoxycholic acid, chenodeoxycholic acid, and deoxycholic acid. The extraction, derivatization, chromatographic separation, and detection parameters were fully optimized. The samples were extracted with methanol by ultrasonic extraction. Then, 2-bromo-4'-nitroacetophenone and 18-crown-6 were used for derivatization. The chromatographic separation was performed on an Agilent SB-C18 column (250 × 4.6 mm id, 5 μm) at a column temperature of 30°C and a flow rate of 1.0 mL/min, using water and methanol as the mobile phase with gradient elution. The detection wavelength was 263 nm. The method was extensively validated by evaluating the linearity (r(2) ≥ 0.9980), recovery (94.24-98.91%), limits of detection (0.25-0.31 ng) and limits of quantification (0.83-1.02 ng). Seventeen samples were analyzed using the developed and validated method. Then, the amounts of bile acids were analyzed by hierarchical agglomerative clustering analysis and principal component analysis. The results of the chemometric analysis showed that the contents of these compounds reflect the intrinsic quality of artificial Calculus bovis, and two compounds (hyodeoxycholic acid and chenodeoxycholic acid) were the most important markers for quality evaluation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
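    The chemometric step mentioned above (principal component analysis of the bile-acid contents) can be sketched with a centred SVD; the content matrix below is invented to illustrate the mechanics, not the seventeen real samples:

```python
import numpy as np

# Invented content matrix: rows = samples, columns = contents of
# cholic, hyodeoxycholic, chenodeoxycholic and deoxycholic acid.
X = np.array([
    [12.1, 30.5, 8.2, 4.1],
    [11.8, 29.9, 8.0, 4.3],
    [12.5, 31.2, 8.5, 3.9],
    [ 6.2, 15.1, 3.9, 2.0],
    [ 6.0, 14.8, 4.1, 2.2],
])

# PCA: centre each column, then take the SVD of the data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                        # sample scores on the principal components
explained = s ** 2 / np.sum(s ** 2)   # fraction of variance per component

print(explained.round(3))
```

    Samples with similar bile-acid profiles land close together in the score space, which is how quality grades separate in clustering and PCA plots.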

  4. A validated stability-indicating high performance liquid chromatographic method for moxifloxacin hydrochloride and ketorolac tromethamine eye drops and its application in pH dependent degradation kinetics

    Directory of Open Access Journals (Sweden)

    Jayant B Dave

    2013-01-01

    Background and Aim: A fixed-dose combination of moxifloxacin hydrochloride and ketorolac tromethamine is used in a ratio of 1:1 as eye drops for the reduction of postoperative inflammatory conditions of the eye. A simple, precise, and accurate high-performance liquid chromatographic (HPLC) method was developed and validated for the determination of moxifloxacin hydrochloride and ketorolac tromethamine in eye drops. Materials and Methods: Isocratic HPLC separation was achieved on an ACE C18 column (5 μm, 150 mm × 4.6 mm i.d.) using a mobile phase of 10 mM potassium dihydrogen phosphate buffer pH 4.6-acetonitrile (75:25, v/v) at a flow rate of 1.0 mL/min. The detection was performed at 307 nm. The drugs were subjected to acid, alkali and neutral hydrolysis, oxidation and photodegradation. Moreover, the proposed HPLC method was utilized to investigate the pH-dependent degradation kinetics of moxifloxacin hydrochloride and ketorolac tromethamine in buffer solutions at different pH values (2.0, 6.8 and 9.0). Results and Conclusion: The retention times (tR) of moxifloxacin hydrochloride and ketorolac tromethamine were 3.81±0.01 and 8.82±0.02 min, respectively. The method was linear in the concentration range of 2-20 μg/mL for each of moxifloxacin hydrochloride and ketorolac tromethamine, with correlation coefficients of 0.9996 and 0.9999, respectively. The method was validated for linearity, precision, accuracy, robustness, specificity, limit of detection and limit of quantitation. The drugs could be effectively separated from the different degradation products, and hence the method can be used for stability analysis. Kinetic parameters such as the apparent first-order rate constant, half-life and t90 (time for 90% of the potency to remain) were calculated.

  5. Determination of oxycodone and its major metabolites noroxycodone and oxymorphone by ultra-high-performance liquid chromatography tandem mass spectrometry in plasma and urine: application to real cases.

    Science.gov (United States)

    Pantano, Flaminia; Brauneis, Stefano; Forneris, Alexandre; Pacifici, Roberta; Marinelli, Enrico; Kyriakou, Chrystalla; Pichini, Simona; Busardò, Francesco Paolo

    2017-08-28

    Oxycodone is a narcotic drug widely used to alleviate moderate and severe acute and chronic pain. Variability in analgesic efficacy could be explained by inter-subject variations in plasma concentrations of the parent drug and its active metabolite, oxymorphone. To evaluate patient compliance and to set up therapeutic drug monitoring (TDM), an ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) assay was developed and validated for the parent drug and its major metabolites noroxycodone and oxymorphone. Extraction of analytes from plasma and urine samples was obtained by simple liquid-liquid extraction. The chromatographic separation was achieved with a reversed phase column using a linear gradient elution with two solvents: acetic acid 1% in water and methanol. The separated analytes were detected with a triple quadrupole mass spectrometer operated in multiple reaction monitoring (MRM) mode via positive electrospray ionization (ESI). Separation of analytes was obtained in less than 5 min. Linear calibration curves for all the analytes under investigation in urine and plasma samples showed coefficients of determination (r²) equal to or higher than 0.990. Mean absolute analytical recoveries were always above 86%. Intra- and inter-assay precision (measured as coefficient of variation, CV%) and accuracy (measured as % error) values were always better than 13%. Limits of detection of 0.06 and 0.15 ng/mL and limits of quantification of 0.2 and 0.5 ng/mL for plasma and urine samples, respectively, were adequate for the purpose of the present study. Rapid extraction, identification and quantification of oxycodone and its metabolites in both urine and plasma by the UHPLC-MS/MS assay was tested for its feasibility in clinical samples and provided excellent results for rapid and effective drug testing in patients under oxycodone treatment.

  6. Application and comparison of high-speed countercurrent chromatography and high performance liquid chromatography in preparative enantioseparation of α-substitution mandelic acids.

    Science.gov (United States)

    Tong, Shengqiang; Zhang, Hu; Shen, Mangmang; Ito, Yoichiro; Yan, Jizhong

    2015-04-01

    Preparative enantioseparations of α-cyclopentylmandelic acid and α-methylmandelic acid by high-speed countercurrent chromatography (HSCCC) and high performance liquid chromatography (HPLC) were compared using hydroxypropyl-β-cyclodextrin (HP-β-CD) and sulfobutyl ether-β-cyclodextrin (SBE-β-CD) as chiral mobile phase additives. In preparative HPLC, the enantioseparation was achieved on an ODS C18 reverse phase column with a mobile phase composed of a mixture of acetonitrile and 0.10 mol/L phosphate buffer at pH 2.68 containing 20 mmol/L HP-β-CD for α-cyclopentylmandelic acid and 20 mmol/L SBE-β-CD for α-methylmandelic acid. The maximum sample size for α-cyclopentylmandelic acid and α-methylmandelic acid was only about 10 mg and 5 mg, respectively. In preparative HSCCC, the enantioseparations of these two racemates were performed with two-phase solvent systems composed of n-hexane-methyl tert-butyl ether-0.1 mol/L phosphate buffer solution at pH 2.67 containing 0.1 mol/L HP-β-CD for α-cyclopentylmandelic acid (8.5:1.5:10, v/v/v) and 0.1 mol/L SBE-β-CD for α-methylmandelic acid (3:7:10, v/v/v). Under the optimum separation conditions, a total of 250 mg of racemic α-cyclopentylmandelic acid could be completely enantioseparated by HSCCC with HP-β-CD as a chiral mobile phase additive in a single run, yielding 105-110 mg of enantiomers with 95-98% purity and 85-90% recovery. However, no complete enantioseparation of α-methylmandelic acid was achieved by preparative HSCCC with either of the chiral selectors, owing to their limited enantioselectivity. In this paper, preparative enantioseparation by HSCCC and HPLC is compared from various aspects.

  7. Simultaneous determination of linagliptin and metformin by reverse phase-high performance liquid chromatography method: An application in quantitative analysis of pharmaceutical dosage forms

    Directory of Open Access Journals (Sweden)

    Prathyusha Vemula

    2015-01-01

    To enhance patient compliance toward treatment in diseases like diabetes, a combination of drugs is usually prescribed. Therefore, an anti-diabetic fixed-dose combination of 2.5 mg of linagliptin and 500 mg of metformin was taken for the simultaneous estimation of both drugs by a reverse phase-high performance liquid chromatography (RP-HPLC) method. The present study aimed to develop a simple and sensitive RP-HPLC method for the simultaneous determination of linagliptin and metformin in pharmaceutical dosage forms. The chromatographic separation was designed and evaluated by using linagliptin and metformin working standard and sample solutions in the linearity range. Chromatographic separation was performed on a C18 column using a mobile phase of a 70:30 (v/v) mixture of methanol and 0.05 M potassium dihydrogen orthophosphate (pH adjusted to 4.6 with orthophosphoric acid) delivered at a flow rate of 0.6 mL/min, with UV detection at 267 nm. Linagliptin and metformin showed linearity in the ranges of 2-12 μg/mL and 400-2400 μg/mL, respectively, with correlation coefficients of 0.9996 and 0.9989. The resultant findings were analyzed for standard deviation (SD) and relative standard deviation (RSD) to validate the developed method. The retention times of linagliptin and metformin were found to be 6.3 and 4.6 min, respectively, and separation was complete in <10 min. The method was validated for linearity, accuracy and precision, which were found to be acceptable over the linearity ranges of linagliptin and metformin. The method was found suitable for the routine quantitative analysis of linagliptin and metformin in pharmaceutical dosage forms.

  8. Quantitation of itopride in human serum by high-performance liquid chromatography with fluorescence detection and its application to a bioequivalence study.

    Science.gov (United States)

    Singh, Sonu Sundd; Jain, Manish; Sharma, Kuldeep; Shah, Bhavin; Vyas, Meghna; Thakkar, Purav; Shah, Ruchy; Singh, Shriprakash; Lohray, Brajbhushan

    2005-04-25

    A new method was developed for the determination of itopride in human serum by reversed-phase high-performance liquid chromatography (HPLC) with fluorescence detection (excitation at 291 nm and emission at 342 nm). The method employed one-step extraction of itopride from the serum matrix with a mixture of tert-butyl methyl ether and dichloromethane (70:30, v/v), using etoricoxib as an internal standard. Chromatographic separation was obtained within 12.0 min using a reversed-phase YMC-Pack AM ODS column (250 mm x 4.6 mm, 5 microm) and an isocratic mobile phase consisting of a mixture of 0.05% trifluoroacetic acid in water and acetonitrile (75:25, v/v) at a flow rate of 1.0 ml/min. The method was linear in the range of 14.0 ng/ml to 1000.0 ng/ml. The lower limit of quantitation (LLOQ) was 14.0 ng/ml. Average recoveries of itopride and the internal standard from the biological matrix were more than 66.04 and 64.57%, respectively. The inter-day accuracy of the drug-containing serum samples was more than 97.81% with a precision of 2.31-3.68%. The intra-day accuracy was 96.91% or more with a precision of 5.17-9.50%. Serum samples containing itopride were stable for 180.0 days at -70+/-5 degrees C and for 24.0 h at ambient temperature (25+/-5 degrees C). The method was successfully applied to a bioequivalence study of itopride in healthy male human subjects.
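    Precision and accuracy figures like those quoted above (e.g. inter-day accuracy >97.81% with 2.31-3.68% precision) are computed from replicate quality-control measurements. A minimal sketch with assumed replicate values, not the study's data:

```python
import numpy as np

# Assumed QC replicates (ng/ml) measured at a nominal 500 ng/ml level.
nominal = 500.0
replicates = np.array([489.0, 512.0, 498.0, 505.0, 491.0, 507.0])

mean = replicates.mean()
# Precision as relative standard deviation (CV%); ddof=1 gives the sample SD
rsd = replicates.std(ddof=1) / mean * 100.0
# Accuracy as the mean expressed as a percentage of the nominal value
accuracy = mean / nominal * 100.0

print(f"accuracy = {accuracy:.2f}%, RSD = {rsd:.2f}%")
```

    Intra-day figures use replicates from one run; inter-day figures pool runs from separate days.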

  9. Nickel cobaltite nanograss grown around porous carbon nanotube-wrapped stainless steel wire mesh as a flexible electrode for high-performance supercapacitor application

    International Nuclear Information System (INIS)

    Wu, Mao-Sung; Zheng, Zhi-Bin; Lai, Yu-Sheng; Jow, Jiin-Jiang

    2015-01-01

    Graphical abstract: Nickel cobaltite nanograss with a bimodal pore size distribution is grown around carbon nanotube-wrapped stainless steel wire mesh as a high-capacitance and stable electrode for high-performance, flexible supercapacitors. - Highlights: • NiCo2O4 nanograss with a bimodal pore size distribution is hydrothermally prepared. • Carbon nanotubes (CNTs) wrap around stainless steel (SS) wire mesh as a scaffold. • NiCo2O4 grown on CNT-wrapped SS mesh shows excellent capacitive performance. • The porous CNT layer allows for rapid transport of electrons and electrolyte. - Abstract: Nickel cobaltite nanograss with a bimodal pore size distribution (small and large mesopores) is grown on various electrode substrates by one-pot hydrothermal synthesis. The small pores (<5 nm) in the nanograss of individual nanorods contribute a large surface area, while the large pore channels (>20 nm) between nanorods offer fast transport paths for the electrolyte. Carbon nanotubes (CNTs) with high electrical conductivity are wrapped around stainless steel (SS) wire mesh by electrophoresis as an electrode scaffold for supporting the nickel cobaltite nanograss. This unique electrode configuration turns out to have great benefits for the development of supercapacitors. The specific capacitance of nickel cobaltite grown around CNT-wrapped SS wire mesh reaches 1223 and 1070 F g−1 at current densities of 1 and 50 A g−1, respectively. The CNT-wrapped SS wire mesh affords porous and conductive networks underneath the nanograss for rapid transport of electrons and electrolyte. Flexible CNTs connect the nanorods to mitigate the contact resistance and the volume expansion during cycling tests. Thus, this tailored electrode can significantly reduce the ohmic resistance, charge-transfer resistance, and diffusive impedance, leading to high specific capacitance, prominent rate performance, and good cycle-life stability.
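    Gravimetric capacitances like the 1223 and 1070 F g−1 quoted above follow from the standard galvanostatic discharge relation C = I·Δt/(m·ΔV). A small sketch of the arithmetic; the discharge numbers below are illustrative, chosen only to show the calculation:

```python
# Gravimetric capacitance from a constant-current discharge curve:
#   C (F/g) = I (A) * dt (s) / (m (g) * dV (V))
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance (F/g) from a galvanostatic discharge step."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# e.g. a 2 mg electrode discharged at 2 mA (1 A/g) over a 0.5 V window in 612 s
c_sp = specific_capacitance(0.002, 612.0, 0.002, 0.5)
print(f"{c_sp:.0f} F/g")  # -> 1224 F/g
```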

  10. Development of a new ultra-high performance liquid chromatography - tandem mass spectrometry method for determination of ambroxol hydrochloride in serum with pharmacokinetic application

    Directory of Open Access Journals (Sweden)

    Vujović Maja M.

    2016-01-01

    Ambroxol hydrochloride is an expectorant agent successfully applied in mucolytic therapy for acute and chronic bronchopulmonary diseases. The drug not only regulates mucus secretion but also shows antioxidant, anti-inflammatory and local anesthetic properties. To supplement the pharmacokinetic and toxicological studies of ambroxol, a rapid ultra-high performance liquid chromatography-tandem mass spectrometry method for the quantitation of ambroxol in rabbit serum was developed. Validation of the method was performed as per the ICH guidelines for the validation of bioanalytical methods. The chromatographic separation was achieved on a Kinetex RP-C18 column (2.1 mm x 50 mm, 1.3 μm) using a buffer-free mobile phase. ESI mass spectrometry in the MRM mode was used, with typical transitions m/z 378.9→263.8 for ambroxol and m/z 455.2→165.0 for the IS. Linearity was determined with an average coefficient of determination >0.999 over the dynamic range of 0.5-200 ng/mL, with LOD and LOQ of 0.25 ng/mL and 0.5 ng/mL, respectively. The results of the intra- and inter-day precision and accuracy determined on different days were all found to be within the acceptable limits of ±15%. The present method was successfully applied to a pharmacokinetic study in rabbits after a single oral dose administration. [Project of the Ministry of Science of the Republic of Serbia, no. 175045]

  11. Measurement of surface contamination by certain antineoplastic drugs using high-performance liquid chromatography: applications in occupational hygiene investigations in hospital environments.

    Science.gov (United States)

    Rubino, F M; Floridia, L; Pietropaolo, A M; Tavazzani, M; Colombi, A

    1999-01-01

    Within the context of continuing interest in the occupational hygiene of hospitals as workplaces, the authors report the results of a preliminary study on surface contamination by certain antineoplastic drugs (ANDs), recently performed in eight cancer departments of two large general hospitals in Milan, Italy. Since reliable quantitative information on the exposure levels to individual drugs is mandatory to establish a strong interpretative framework for correctly assessing the health risks associated with the manipulation of ANDs and to rationally advise intervention priorities for exposure abatement, two automated analytical methods were set up using reverse-phase high-performance liquid chromatography for the measurement of contamination by 1) methotrexate (MTX) and 2) the three most important nucleoside-analogue antineoplastic drugs (5-fluorouracil, 5FU; cytarabine, CYA; gemcitabine, GCA) on surfaces such as those of preparation hoods and work-benches in the pharmacies of cancer wards. The methods are characterized by a short analysis time (7 min) under isocratic conditions, by the use of a mobile phase with a minimal content of organic solvent, and by high sensitivity, adequate to detect surface contamination in the 5-10 micrograms/m2 range. To exemplify the performance of the analytical methods in the assessment of contamination levels from the target analyte ANDs, data are reported on the contamination levels measured on various surfaces (such as handles, floor surfaces and window panes, even far from the preparation hood). Analyte amounts corresponding to 0.8-1.5 micrograms of 5FU were measured on telephones, 0.85-28 micrograms/m2 of CYA were measured on tables, and 1.2-1150 micrograms/m2 of GCA on furniture and floors. Spillage fractions of between 1 and 5% of the used ANDs (daily use: 5FU 7-13 g; CYA 0.1-7.1 g; GCA 0.2-5 g) were measured on the disposable polythene-backed paper cover sheet of the preparation hood.

  12. High performance bio-integrated devices

    Science.gov (United States)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications in particular have attracted much attention with the rise of smartphones, because the coupling of such devices and smartphones enables continuous health monitoring in patients' daily lives. In particular, it is expected that high performance biomedical electronics integrated with the human body can open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  13. Facile synthesis of nanosheet-like CuO film and its potential application as a high-performance pseudocapacitor electrode

    CSIR Research Space (South Africa)

    Nwanya, AC

    2016-04-01

    We describe the chemical synthesis of binderless and surfactant-free CuO films for pseudocapacitive applications. Nanosheet-like and nanorod-like CuO films are deposited on indium tin oxide (ITO) substrates using the successive ionic layer...

  14. The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    Directory of Open Access Journals (Sweden)

    Wojtek James Goscinski

    2014-03-01

    The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) is a national imaging and visualisation facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

  15. INL High Performance Building Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource-efficient structures that minimize their impact on the environment by using less energy and water, reducing solid waste and pollutants, and limiting the depletion of natural resources, while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design (LEED) rating system.

  16. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  17. Soft Robotics: from scientific challenges to technological applications

    Science.gov (United States)

    Laschi, C.

    2016-05-01

    Soft robotics is a recent and rapidly growing field of research which aims at unveiling the principles for building robots that include soft materials and compliance in their interaction with the environment, so as to exploit so-called embodied intelligence and negotiate natural environments more effectively. Using soft materials for building robots poses new technological challenges: the technologies for actuating soft materials, for embedding sensors into soft robot parts, and for controlling soft robots are among the main ones. This is stimulating research in many disciplines and many countries, such that a wide community is gathering around initiatives like the IEEE RAS TC on Soft Robotics and the RoboSoft CA - A Coordination Action for Soft Robotics, funded by the European Commission. Though still in its early stages of development, soft robotics is finding its way into a variety of applications where safe contact is a main issue: in the biomedical field, as well as in exploration tasks and in the manufacturing industry. And though the development of the enabling technologies is still a priority, a fruitful loop is growing between basic research and application-oriented research in soft robotics.

  18. ForistomApp a Web application for scientific and technological information management of Forsitom foundation

    Science.gov (United States)

    Saavedra-Duarte, L. A.; Angarita-Jerardino, A.; Ruiz, P. A.; Dulce-Moreno, H. J.; Vera-Rivera, F. H.; V-Niño, E. D.

    2017-12-01

    Information and Communication Technologies (ICT) are essential in the transfer of knowledge, and Web tools, as part of ICT, are important for institutions seeking greater visibility for the products developed by their researchers. For this reason, we implemented an application that supports information management for the FORISTOM Foundation (Foundation of Researchers in Science and Technology of Materials). The application shows a detailed description not only of all its members but also of all the scientific production they carry out, such as technological developments, research projects, articles, and presentations, among others. This application can be adopted by other entities committed to scientific dissemination and the transfer of technology and knowledge.

  19. 77 FR 9896 - Proposed Information Collection; Comment Request; Application and Reports for Scientific Research...

    Science.gov (United States)

    2012-02-21

    ... Collection; Comment Request; Application and Reports for Scientific Research and Enhancement Permits Under... allows permits authorizing the taking of endangered species for research/enhancement purposes. The... sets of information collections: (1) Applications for research/enhancement permits, and (2) reporting...

  20. 1st Kassel user forum: Double-layer capacitors for high-performance applications. Proceedings '99; 1. Kasseler Anwenderforum: Doppelschichtkondensatoren fuer hohe Leistung. Tagungsband '99

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    Double-layer capacitors ('Super Caps') are stores for electrical energy and, thanks to their properties, have excellent prospects as dynamic short-term storage. They have a long service life of more than 200000 cycles, immunity to full discharge, and are maintenance-free. Apart from their high short-term power density, they also have an energy density that is extremely high for capacitors, which opens up numerous fields of application. Current areas of investigation include automotive engineering, stand-alone grids, power quality, and solar applications.

  1. General scientific guidance for stakeholders on health claim applications

    DEFF Research Database (Denmark)

    Sjödin, Anders Mikael

    2016-01-01

    The European Food Safety Authority (EFSA) asked the Panel on Dietetic Products Nutrition and Allergies (NDA) to update the General guidance for stakeholders on the evaluation of Article 13.1, 13.5 and 14 health claims published in March 2011. Since then, the NDA Panel has completed the evaluation of Article 13.1 claims except for claims put on hold by the European Commission, and has evaluated additional health claim applications submitted pursuant to Articles 13.5, 14 and also 19. In addition, comments received from stakeholders indicate that general issues that are common to all health claims need to be addressed. This guidance is based on the experience gained to date with the evaluation of health claims, and it may be further updated, as appropriate, when additional issues are addressed.

  2. Ethics in scientific results application: Gene and life forms patenting

    Directory of Open Access Journals (Sweden)

    Konstantinov Kosana

    2010-01-01

    Full Text Available The remarkable development and application of new genetic technologies over the past decades has been accompanied by profound changes in the way research is commercialized in the life sciences. As a result, new varieties of commercially grown crops with improved or new traits have been developed. Many thousands of patents asserting rights over DNA sequences have been granted to researchers across the public and private sectors. The effects of many of these patents are extensive, because inventors who assert rights over DNA sequences obtain protection on all uses of the sequences. Extremely valuable to breeders in the national agricultural research system is the ability to genotype their collections to get a clear picture of their diversity and of how that diversity could be enhanced through sharing and access to global collections. The issue of the eligibility of DNA sequences for patenting needs to be reopened. Patents that assert rights over DNA sequences and their uses are, in some cases, supportable, but in others should be treated with great caution. Rights over DNA sequences as research tools should be discouraged, and the best way to discourage the award of such patents is stringent application of the criteria for patenting, particularly utility. A more equitable, ethically based food and agricultural system must incorporate concern for three accepted global goals: improved well-being, protection of the environment, and improved public health (a particular point being food from GMOs). To mitigate conflict, one approach is the ethical and truthful labeling of GM food, because consumers have a right to choose whether or not to eat genetically modified foods. Interesting examples and risks arising from the free availability of genetic resources, their transformation, the patenting of the 'new' organism, and its sale back to the genetic resource owner are presented. Society has obligations to raise levels of nutrition and

  3. 78 FR 13860 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2013-03-01

    ... Scientific Instruments Pursuant to Section 6(c) of the Educational, Scientific and Cultural Materials... invite comments on the question of whether instruments of equivalent scientific value, for the purposes... conformational change of assemblies involved in biological processes such as ATP production, signal transduction...

  4. 76 FR 56156 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2011-09-12

    ... Scientific Instruments Pursuant to Section 6(c) of the Educational, Scientific and Cultural Materials... invite comments on the question of whether instruments of equivalent scientific value, for the purposes... materials for energy production. The experiments will involve structural and chemical analyses of materials...

  5. High Performance Bulk Thermoelectric Materials

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over the past 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  6. Optimal Design of Fixed-Point and Floating-Point Arithmetic Units for Scientific Applications

    OpenAIRE

    Pongyupinpanich, Surapong

    2012-01-01

    The challenge in designing a floating-point arithmetic co-processor/processor for scientific and engineering applications is to improve the performance, efficiency, and computational accuracy of the arithmetic unit. The arithmetic unit should efficiently support several mathematical functions corresponding to scientific and engineering computation demands. Moreover, the computations should be performed as fast as possible with a high degree of accuracy. Thus, this thesis proposes algorithm, d...

  7. The origins of scientific cinematography and early medical applications.

    Science.gov (United States)

    Barboi, Alexandru C; Goetz, Christopher G; Musetoiu, Radu

    2004-06-08

    To examine the neurologic cinematographic contributions of Gheorghe Marinescu. Near the end of the 19th century, cinematography developed and was immediately recognized as a new technique applicable to medical documentation. After studying with several prominent European neurologists and deeply influenced by Jean-Martin Charcot, Marinescu returned to Bucharest in 1897 and applied moving picture techniques to the study of neurologic patients. The Romanian State Archives were researched for original Marinescu films, and related publications were translated from Romanian and French. Between 1899 and 1902, Marinescu perfected the use of cinematography as a research method in neurosciences and published five articles based on cinematographic documents. He focused his studies particularly on organic gait disorders, locomotor ataxia, and hysteria. He adapted Charcot's method of lining up several patients with the same disorder and showing them together to permit appreciation of archetypes and formes frustes. He decomposed the moving pictures into sequential tracings for publication. He documented treatment results with cases filmed before and after therapy. Processed and digitized excerpts of these films accompany this manuscript. Marinescu's cinematographic studies led to several original contributions in clinical neurology. Remaining film archives include examples of many neurologic diseases, his examination techniques, and the working medical environment of the young founder of the Romanian school of neurology.

  8. Improving UV Resistance of High Performance Fibers

    Science.gov (United States)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut-resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel, and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, to maintain the key advantage of high performance fibers: their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest to be protected from UV is a braid of PBO. The first approach is extruding a sheath of low density polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometers as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  9. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders — from Optical Triangulation to the Automotive Field

    Directory of Open Access Journals (Sweden)

    Joe-Air Jiang

    2008-03-01

    Full Text Available With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments was conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.
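    The simple triangulation such range finders rely on reduces to similar triangles: for lens focal length f, source-to-sensor baseline b, and spot displacement x on the image plane, the distance is d = f·b/x. A minimal sketch of this relation; all names and numeric values are illustrative assumptions, not taken from the paper:

```python
def triangulation_distance(focal_mm, baseline_mm, spot_shift_mm):
    """Active triangulation: d = f * b / x (similar triangles).

    focal_mm: lens focal length; baseline_mm: separation between the
    light source and the sensor axis; spot_shift_mm: displacement of
    the reflected spot on the image plane. All values illustrative.
    """
    if spot_shift_mm <= 0:
        raise ValueError("spot shift must be positive")
    return focal_mm * baseline_mm / spot_shift_mm

# Example: 8 mm lens, 60 mm baseline, 0.06 mm spot shift -> about 8000 mm (8 m)
print(triangulation_distance(8.0, 60.0, 0.06))
```

Note the inverse relationship: the spot shift shrinks with distance, which is why resolution degrades toward the far end of the measurement range.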

  10. Technology breakthroughs in high performance metal-oxide-semiconductor devices for ultra-high density, low power non-volatile memory applications

    Science.gov (United States)

    Hong, Augustin Jinwoo

    Non-volatile memory devices have attracted much attention because data can be retained without power consumption for more than a decade. Therefore, non-volatile memory devices are essential to mobile electronic applications. Among state-of-the-art non-volatile memory devices, NAND flash memory has earned the highest attention because of its ultra-high scalability and therefore its ultra-high storage capacity. However, consumer demand as well as market competition require not only larger storage capacity but also lower power consumption for longer battery life. One way to meet this demand and extend the benefits of NAND flash memory is finding new materials for the storage layer inside the flash memory, called the floating gate in state-of-the-art flash memory devices. In this dissertation, we study new materials for the floating gate that can lower power consumption and increase storage capacity at the same time. To this end, we employ various materials such as metal nanodots, metal thin films and graphene, using complementary metal-oxide-semiconductor (CMOS) compatible processes. Experimental results show excellent memory effects at relatively low operating voltages. Detailed physics and analysis of the experimental results are discussed. These new materials for data storage are promising candidates for future non-volatile memory applications beyond state-of-the-art flash technologies.

  11. Development of Low-Cost DDGS-Based Activated Carbons and Their Applications in Environmental Remediation and High-Performance Electrodes for Supercapacitors

    KAUST Repository

    Wang, Yong

    2015-08-28

    Abstract: A one-step, facile method to produce 3-dimensional porous activated carbons (ACs) from corn residual dried distillers grains with solubles (DDGS) by microwave-assisted chemical activation was developed. The ACs’ application potential in dye removal and supercapacitor electrodes was also demonstrated. The porous structure and surface properties of the ACs were characterized by N2 adsorption/desorption isotherms and scanning electron microscopy. The results showed that the surface area of the as-prepared ACs was up to 1000 m2/g. In the dye removal tests, these DDGS-based ACs exhibited a maximum adsorption capacity of 477 mg/g for methylene blue. In electric double layer capacitors, electrochemical tests indicated that the ACs had ideal capacitive and reversible behaviors and exhibited excellent electrochemical performance. The specific capacitance varied between 120 and 210 F/g under different scan rates and current densities. In addition, the capacitors showed excellent stability even after one thousand charge–discharge cycles. The specific capacitance was further increased up to 300 F/g by in situ synthesis of MnO2 particles in the ACs to induce pseudo-capacitance. This research showed that the DDGS-based ACs have great potential in environmental remediation and energy storage applications. © 2015 Springer Science+Business Media New York

  12. Platinum-TM (TM = Fe, Co) alloy nanoparticles dispersed nitrogen doped (reduced graphene oxide-multiwalled carbon nanotube) hybrid structure cathode electrocatalysts for high performance PEMFC applications.

    Science.gov (United States)

    Vinayan, B P; Ramaprabhu, S

    2013-06-07

    The efforts to push proton exchange membrane fuel cells (PEMFC) for commercial applications are being undertaken globally. In PEMFC, the sluggish kinetics of oxygen reduction reactions (ORR) at the cathode can be improved by the alloying of platinum with 3d-transition metals (TM = Fe, Co, etc.) and with nitrogen doping, and in the present work we have combined both of these aspects. We describe a facile method for the synthesis of a nitrogen doped (reduced graphene oxide (rGO)-multiwalled carbon nanotubes (MWNTs)) hybrid structure (N-(G-MWNTs)) by the uniform coating of a nitrogen containing polymer over the surface of the hybrid structure (positively surface charged rGO-negatively surface charged MWNTs) followed by the pyrolysis of these (rGO-MWNTs) hybrid structure-polymer composites. The N-(G-MWNTs) hybrid structure is used as a catalyst support for the dispersion of platinum (Pt), platinum-iron (Pt3Fe) and platinum-cobalt (Pt3Co) alloy nanoparticles. The PEMFC performances of Pt-TM alloy nanoparticle dispersed N-(G-MWNTs) hybrid structure electrocatalysts are 5.0 times higher than that of commercial Pt-C electrocatalysts along with very good stability under acidic environment conditions. This work demonstrates a considerable improvement in performance compared to existing cathode electrocatalysts being used in PEMFC and can be extended to the synthesis of metal, metal oxides or metal alloy nanoparticle decorated nitrogen doped carbon nanostructures for various electrochemical energy applications.

  13. Training transfer: scientific background and insights for practical application.

    Science.gov (United States)

    Issurin, Vladimir B

    2013-08-01

    Training transfer as an enduring, multilateral, and practically important problem encompasses a large body of research findings and experience, which characterize the process by which improving performance in certain exercises/tasks can affect the performance in alternative exercises or motor tasks. This problem is of paramount importance for the theory of training and for all aspects of its application in practice. Ultimately, training transfer determines how useful or useless each given exercise is for the targeted athletic performance. The methodological background of training transfer encompasses basic concepts related to transfer modality, i.e., positive, neutral, and negative; the generalization of training responses and their persistence over time; factors affecting training transfer such as personality, motivation, social environment, etc. Training transfer in sport is clearly differentiated with regard to the enhancement of motor skills and the development of motor abilities. The studies of bilateral skill transfer have shown cross-transfer effects following one-limb training associated with neural adaptations at cortical, subcortical, spinal, and segmental levels. Implementation of advanced sport technologies such as motor imagery, biofeedback, and exercising in artificial environments can facilitate and reinforce training transfer from appropriate motor tasks to targeted athletic performance. Training transfer of motor abilities has been studied with regard to contralateral effects following one limb training, cross-transfer induced by arm or leg training, the impact of strength/power training on the preparedness of endurance athletes, and the impact of endurance workloads on strength/power performance. The extensive research findings characterizing the interactions of these workloads have shown positive transfer, or its absence, depending on whether the combinations conform to sport-specific demands and physiological adaptations. Finally, cross

  14. Strategy Guideline. High Performance Residential Lighting

    Energy Technology Data Exchange (ETDEWEB)

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  15. CuCo2O4 flowers/Ni-foam architecture as a battery type positive electrode for high performance hybrid supercapacitor applications

    International Nuclear Information System (INIS)

    Vijayakumar, Subbukalai; Nagamuthu, Sadayappan; Ryu, Kwang-Sun

    2017-01-01

    Graphical abstract: The Ni-foam supported CuCo2O4 flowers exhibit a high specific capacity with superior long-term cyclic stability. - Highlights: • This paper reports the hydrothermal preparation of CuCo2O4 flowers on Ni-foam. • The CuCo2O4 flowers exhibit a maximum specific capacity of 645.1 C g−1. • After 2000 cycles, 109% of the initial specific capacity was retained. - Abstract: The battery type CuCo2O4 electrode was evaluated as a positive electrode material for hybrid supercapacitor applications. CuCo2O4 flowers were prepared on Ni-foam through a simple hydrothermal process and a post-calcination treatment. The structure and morphology of the CuCo2O4 flowers/Ni-foam were characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM) and high resolution transmission electron microscopy. FESEM clearly revealed the flower-like morphology, which was composed of a large number of petals. The length and width of the petals ranged from approximately 5–8 μm and approximately 50–150 nm, respectively. The CuCo2O4 flowers/Ni-foam electrode was subjected to electrochemical characterization for hybrid supercapacitor applications. The specific capacity of the CuCo2O4 flower-like electrode was 692.4 C g−1 (192.3 mA h g−1) at a scan rate of 5 mV s−1. The flower-like CuCo2O4 electrode exhibited a maximum specific capacity of 645.1 C g−1 (179.2 mA h g−1) at a specific current of 1 A g−1 and good long-term cyclic stability. The high specific capacity, good cyclic stability, and low internal and charge transfer resistance of the CuCo2O4 flowers/Ni-foam electrode confirm the suitability of the prepared material as a positive electrode for hybrid supercapacitor applications.
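    The paired capacity figures quoted for battery-type electrodes like this one are related by unit conversion alone: 1 mAh = 3.6 C, so a specific capacity in C g−1 divided by 3.6 gives mAh g−1. A quick sketch checking the values reported in this record:

```python
def c_per_g_to_mah_per_g(capacity_c_per_g):
    """Convert specific capacity from C/g to mAh/g (1 mAh = 3.6 C)."""
    return capacity_c_per_g / 3.6

# Reported pairs from the abstract:
print(round(c_per_g_to_mah_per_g(692.4), 1))  # 192.3 mAh/g, at 5 mV/s
print(round(c_per_g_to_mah_per_g(645.1), 1))  # 179.2 mAh/g, at 1 A/g
```

Both reported C g−1 / mAh g−1 pairs are consistent under this conversion.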

  16. HiPTI - High Performance Thermal Insulation, Annex 39 to IEA/ECBCS-Implementing Agreement. Vacuum insulation in the building sector. Systems and applications

    Energy Technology Data Exchange (ETDEWEB)

    Binz, A.; Moosmann, A.; Steinke, G.; Schonhardt, U.; Fregnan, F. [Fachhochschule Nordwestschweiz (FHNW), Muttenz (Switzerland); Simmler, H.; Brunner, S.; Ghazi, K.; Bundi, R. [Swiss Federal Laboratories for Materials Testing and Research (EMPA), Duebendorf (Switzerland); Heinemann, U.; Schwab, H. [ZAE Bayern, Wuerzburg (Germany); Cauberg, H.; Tenpierik, M. [Delft University of Technology, Delft (Netherlands); Johannesson, G.; Thorsell, T. [Royal Institute of Technology (KTH), Stockholm (Sweden); Erb, M.; Nussbaumer, B. [Dr. Eicher und Pauli AG, Basel and Bern (Switzerland)

    2005-07-01

    This final report on vacuum insulation panels (VIP) presents and discusses the work done under IEA/Energy Conservation in Buildings and Community Systems (ECBCS) Annex 39, subtask B on the basis of a wide selection of reports from practice. The report shows how the building trade deals with this new material today, the experience gained and the conclusions drawn from this work. As well as presenting recommendations for the practical use of VIP, the report also addresses questions regarding the effective insulation values to be expected with current VIP, whose insulation performance is stated as being a factor of five to eight times better than conventional insulation. The introduction of this novel material in the building trade is discussed. Open questions and risks are examined. The fundamentals of vacuum insulation panels are discussed and the prerequisites, risks and optimal application of these materials in the building trade are examined.

  17. Facile Synthesis of Nanosheet-like CuO Film and its Potential Application as a High-Performance Pseudocapacitor Electrode

    International Nuclear Information System (INIS)

    Nwanya, Assumpta C.; Obi, Daniel; Ozoemena, Kenneth I.; Osuji, Rose U.; Awada, Chawki; Ruediger, Andreas; Maaza, Malik

    2016-01-01

    We describe the chemical synthesis of binderless and surfactant-free CuO films for pseudocapacitive applications. Nanosheet-like and nanorod-like CuO films are deposited on indium tin oxide (ITO) substrates using the successive ionic layer adsorption and reaction (SILAR) approach. The nanostructured CuO shows uniform surface morphology and uniform pore distribution, with average grain sizes in the range 30–50 nm and average pore sizes of 12.0 and 12.5 nm for 10 and 40 cycles, respectively, as estimated from AFM imaging. The electrochemical properties are characterized by cyclic voltammetry (CV), galvanostatic charge-discharge (GCD) and electrochemical impedance spectroscopy (EIS). The highest specific capacitance of 566.33 F g−1 is obtained for the 10-cycle film at a scan rate of 5 mV s−1. Long term stability tests by continuous GCD indicate that there is no degradation after 1000 cycles, with the film yielding 100% coulombic efficiency. This indicates the high stability of the synthesized CuO films. Hence, the developed nanostructured CuO film electrodes exhibit excellent properties for use in supercapacitors.

  18. Facile Synthesis of MnPO4·H2O Nanowire/Graphene Oxide Composite Material and Its Application as Electrode Material for High Performance Supercapacitors

    Directory of Open Access Journals (Sweden)

    Bo Yan

    2016-12-01

    Full Text Available In this work, we report a facile one-pot hydrothermal method to synthesize a MnPO4·H2O nanowire/graphene oxide composite material with coated graphene oxide. Transmission electron microscopy and scanning electron microscopy were employed to study its morphology, and X-ray diffraction was used to study the phase and structure of the material. Additionally, X-ray photoelectron spectroscopy was used to study the elemental composition. To measure the electrochemical performance of the electrode materials and the symmetric cell, cyclic voltammetry, chronopotentiometry and electrochemical impedance spectrometry were conducted on an electrochemical workstation using a 3 M KOH electrolyte. Importantly, the electrochemical results showed that the as-prepared MnPO4·H2O nanowire/graphene oxide composite material exhibited high specific capacitance (287.9 F·g−1 at 0.625 A·g−1) and specific power (1.5 × 10^5 W·kg−1 at 2.271 Wh·kg−1), and is therefore expected to have promising applications as a supercapacitor electrode material.
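    Chronopotentiometry (galvanostatic charge-discharge) yields specific capacitance via the standard relation C_sp = I·Δt/(m·ΔV). A minimal sketch of that calculation; the discharge time and voltage window below are illustrative values back-calculated to match the reported capacitance, not figures from the paper:

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """C_sp = I * dt / (m * dV), in F/g, from a galvanostatic discharge curve."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Illustrative: 0.625 A per gram of active material, 460.64 s discharge, 1.0 V window
print(round(specific_capacitance(0.625, 460.64, 1.0, 1.0), 1))  # 287.9 F/g
```

The same relation is why capacitance figures must always be quoted together with the current density: a slower discharge at lower current generally yields a higher apparent C_sp.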

  19. Application of high-performance liquid chromatography-tandem mass spectrometry with a quadrupole/linear ion trap instrument for the analysis of pesticide residues in olive oil.

    Science.gov (United States)

    Hernando, M D; Ferrer, C; Ulaszewska, M; García-Reyes, J F; Molina-Díaz, A; Fernández-Alba, A R

    2007-11-01

    This article describes the development of an enhanced liquid chromatography-mass spectrometry (LC-MS) method for the analysis of pesticides in olive oil. One hundred pesticides belonging to different classes that are currently used in agriculture were included in this method. The LC-MS method was developed using a hybrid quadrupole/linear ion trap (QqQ(LIT)) analyzer. Key features of this technique are the rapid scan acquisition times, high specificity and high sensitivity it enables when the multiple reaction monitoring (MRM) mode or the linear ion trap operational mode is employed. The application of 5 ms dwell times using a linearly accelerating (LINAC) high-pressure collision cell enabled the analysis of a high number of pesticides, with enough data points acquired for optimal peak definition in MRM operation mode and for satisfactory quantitative determinations to be made. The method quantifies over a linear dynamic range from the LOQs (0.03-10 microg kg(-1)) up to 500 microg kg(-1). Matrix effects were evaluated by comparing the slopes of matrix-matched and solvent-based calibration curves; weak suppression or enhancement of signals was observed. For confirmatory purposes, enhanced product ion (EPI) and MS3 experiments were also developed.
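    The dwell-time arithmetic behind such multi-residue MRM methods is simple: the number of monitored transitions times (dwell time + interscan pause) fixes the cycle time, which in turn fixes how many data points land on each chromatographic peak. The abstract reports the 5 ms dwell time; the transitions per compound and the pause below are illustrative assumptions:

```python
def mrm_cycle_time_s(n_compounds, transitions_per_compound, dwell_ms, pause_ms):
    """Time for one sweep over all MRM transitions, in seconds."""
    n_transitions = n_compounds * transitions_per_compound
    return n_transitions * (dwell_ms + pause_ms) / 1000.0

# 100 pesticides; assuming 2 transitions each, 5 ms dwell, 3 ms interscan pause
cycle = mrm_cycle_time_s(100, 2, 5.0, 3.0)
print(cycle)                    # 1.6 s per full cycle
print(round(15.0 / cycle, 1))   # data points across an assumed 15 s wide peak
```

Short dwell times are what make 200 concurrent transitions feasible while keeping roughly ten points per peak, the usual minimum for reliable quantitation.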

  20. Synergistic Effect between Ultra-Small Nickel Hydroxide Nanoparticles and Reduced Graphene Oxide sheets for the Application in High-Performance Asymmetric Supercapacitor.

    Science.gov (United States)

    Liu, Yonghuan; Wang, Rutao; Yan, Xingbin

    2015-06-08

    Nanoscale electrode materials including metal oxide nanoparticles and two-dimensional graphene have been employed for designing supercapacitors. However, the inevitable agglomeration of nanoparticles and layer stacking of graphene largely hamper their practical applications. Here we demonstrate efficient coordination and a synergistic effect between ultra-small Ni(OH)2 nanoparticles and reduced graphene oxide (RGO) sheets for synthesizing ideal electrode materials. On one hand, to make the ultra-small Ni(OH)2 nanoparticles work at full capacity as an ideal pseudocapacitive material, RGO sheets are employed as a suitable substrate to anchor these nanoparticles against agglomeration. As a consequence, an ultrahigh specific capacitance of 1717 F g(-1) at 0.5 A g(-1) is achieved. On the other hand, to further facilitate ion transfer within RGO sheets as an ideal electrical double layer capacitor material, the ultra-small Ni(OH)2 nanoparticles are introduced among the RGO sheets as a recyclable sacrificial spacer to prevent stacking. The resulting RGO sheets exhibit superior rate capability with a high capacitance of 182 F g(-1) at 100 A g(-1). On this basis, an asymmetric supercapacitor is assembled using the two materials, delivering a superior energy density of 75 Wh kg(-1) and an ultrahigh power density of 40 000 W kg(-1).

  1. High performance in software development

    CERN Multimedia

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever attempted. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or under other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from distributed storage and the large-scale organization of computation and data down to the lowest level of processor and data bus behavior. Integrating performance behavior across these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  2. Identifying High Performance ERP Projects

    OpenAIRE

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment ...

  3. High performance light water reactor

    International Nuclear Information System (INIS)

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high-efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners, together with the University of Tokyo, is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advances achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements:
    - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project.
    - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design' developed by the University of Tokyo was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo.
    - A benchmark problem, based on the 'reference design', was defined for neutronics calculations, and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition helped to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly.
    - Preliminary selection was made for the HPLWR scale

  4. Simultaneous estimation of lisofylline and pentoxifylline in rat plasma by high performance liquid chromatography-photodiode array detector and its application to pharmacokinetics in rat.

    Science.gov (United States)

    Italiya, Kishan S; Sharma, Saurabh; Kothari, Ishit; Chitkara, Deepak; Mittal, Anupama

    2017-09-01

    Lisofylline (LSF) is an anti-inflammatory and immunomodulatory agent with proven activity in serious infections associated with cancer chemotherapy, hyperoxia-induced acute lung injury, autoimmune disorders including type-1 diabetes (T1DM), and islet rejection after islet transplantation. It is also an active metabolite of another anti-inflammatory agent, pentoxifylline (PTX). LSF bears immense therapeutic potential across multiple pharmacological activities, and hence appropriate and accurate quantification of LSF is very important. Although a number of analytical methods for the quantification of LSF and PTX have been reported for pharmacokinetic and metabolic studies, each of these has certain limitations in terms of the large sample volume required, complex extraction procedures and/or the use of highly sophisticated instruments like LC-MS/MS. The aim of the current study was to develop a simple reversed-phase HPLC method for the simultaneous determination of LSF and PTX in rat plasma, with the major objectives of minimal sample volume, ease of extraction, economy of analysis and selectivity, while avoiding instruments like LC-MS/MS, so as to ensure widespread applicability of the method. A simple liquid-liquid extraction procedure using methylene chloride as the extracting solvent was used to extract LSF and PTX from rat plasma (200 μL). Samples were then evaporated, reconstituted with mobile phase and injected into an HPLC system coupled with a photodiode array (PDA) detector. LSF, PTX and 3-isobutyl-1-methylxanthine (IBMX, internal standard) were separated on an Inertsil® ODS (C18) column (250 × 4.6 mm, 5 μm) with a mobile phase consisting of methanol-water (50:50, v/v) run in isocratic mode at a flow rate of 1 mL/min for 15 min, with detection at 273 nm. The method showed linearity in the concentration range of 50-5000 ng/mL with an LOD of 10 ng/mL and an LLOQ of 50 ng/mL for both LSF and PTX. Weighted linear regression analysis was also performed on the calibration data. The mean absolute recoveries were found to be 80
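
    The weighted linear regression step on the calibration data can be sketched as follows. The concentrations and responses below are hypothetical, and the 1/x weighting is an assumption; the abstract does not state the weighting scheme used:

```python
def weighted_linfit(x, y, w):
    """Weighted least-squares fit of y = a*x + b, minimizing
    sum(w_i * (y_i - a*x_i - b)^2). Returns (a, b)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted mean of x
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw   # weighted mean of y
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration standards over the stated 50-5000 ng/mL range
conc = [50, 100, 500, 1000, 2500, 5000]          # ng/mL
peak_area = [0.9, 2.1, 10.2, 19.8, 50.5, 99.0]   # arbitrary units
weights = [1.0 / c for c in conc]                # 1/x weighting (assumed)
slope, intercept = weighted_linfit(conc, peak_area, weights)
```

    Down-weighting high concentrations (1/x or 1/x²) is the usual way to keep the fit accurate near the LLOQ, where bioanalytical acceptance criteria are tightest.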

  5. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    Cordes Ben

    2009-01-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to providing large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.
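
    The structure of time-domain backprojection can be sketched as follows: for each image pixel, accumulate the range-compressed sample at the round-trip delay to each platform position. This is a magnitude-only toy (no phase compensation or interpolation, which a real SAR processor needs), and all parameter values are illustrative:

```python
import math

def backproject(pulses, positions, grid, c=3e8, fs=1e8):
    """Minimal time-domain backprojection sketch. pulses[p] is the
    range-compressed return for platform position positions[p]=(x,y,z);
    grid is a 2-D list of (x, y) ground-pixel coordinates."""
    image = [[0.0] * len(grid[0]) for _ in grid]
    for pulse, (px, py, pz) in zip(pulses, positions):
        for i, row in enumerate(grid):
            for j, (x, y) in enumerate(row):
                r = math.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
                k = int(round(2 * r / c * fs))  # sample at round-trip delay
                if 0 <= k < len(pulse):
                    image[i][j] += pulse[k]
    return image
```

    Every pixel is computed independently from the same pulse data, which is why the algorithm maps so well onto FPGA kernels and scales near-linearly across cluster nodes, as the abstract reports.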

  7. High-performance mass storage system for workstations

    Science.gov (United States)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive
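
    The hierarchical storage policy described above (hot files on fast magnetic disk, cold files demoted to the archive tier) can be modeled with a small LRU-based sketch; the class and tier names are illustrative, not part of the Loral system:

```python
from collections import OrderedDict

class HierarchicalStore:
    """Toy model of hierarchical storage: frequently used files stay on
    the fast tier (magnetic disk); least-recently-used files are demoted
    to the archive tier (optical disk / tape)."""

    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = OrderedDict()   # file -> data, kept in LRU order
        self.archive = {}

    def write(self, name, data):
        self.fast[name] = data
        self.fast.move_to_end(name)   # mark as most recently used
        self._demote()

    def read(self, name):
        if name in self.fast:                  # fast-tier hit
            self.fast.move_to_end(name)
            return self.fast[name]
        data = self.archive.pop(name)          # stage back from archive
        self.write(name, data)
        return data

    def _demote(self):
        while len(self.fast) > self.fast_capacity:
            old, data = self.fast.popitem(last=False)  # evict LRU file
            self.archive[old] = data
```

    The same recency-based demotion idea is what lets a hierarchy of disk, optical, and tape keep per-byte cost low while commonly used files remain quickly retrievable.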

  8. EDITORIAL: High performance under pressure

    Science.gov (United States)

    Demming, Anna

    2011-11-01

    The accumulation of charge in certain materials in response to an applied mechanical stress was first discovered in 1880 by Pierre Curie and his brother Paul-Jacques. The effect, piezoelectricity, forms the basis of today's microphones, quartz watches, and electronic components, and constitutes an awesome scientific legacy. Research continues to develop further applications in a range of fields including imaging [1, 2], sensing [3] and, as reported in this issue of Nanotechnology, energy harvesting [4]. Piezoelectricity in biological tissue was first reported in 1941 [5]. More recently Majid Minary-Jolandan and Min-Feng Yu at the University of Illinois at Urbana-Champaign in the USA have studied the piezoelectric properties of collagen I [1]. Their observations support the nanoscale origin of piezoelectricity in bone and tendons and also imply the potential importance of the shear load transfer mechanism in mechanoelectric transduction in bone. Shear load transfer has been the principal basis of the nanoscale mechanics model of collagen. The piezoelectric effect in quartz causes a shift in the resonant frequency in response to a force gradient. This has been exploited for sensing forces in scanning probe microscopes that do not need optical readout. Recently researchers in Spain explored the dynamics of a double-pronged quartz tuning fork [2]. They observed thermal noise spectra in agreement with a coupled-oscillators model, providing important insights into the system's behaviour. Nano-electromechanical systems are increasingly exploiting piezoresistivity for motion detection. Observations of the change in a material's resistance in response to applied stress pre-date the discovery of the piezoelectric effect and were first reported in 1856 by Lord Kelvin. Researchers at Caltech recently demonstrated that a bridge configuration of piezoresistive nanowires can be used to detect in-plane motion; the approach is CMOS-based and fully compatible with future very-large-scale integration of

  9. 78 FR 27186 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2013-05-09

    ... Scientific Instruments Pursuant to Section 6(c) of the Educational, Scientific and Cultural Materials...: New Mexico Institute of Mining and Technology, 801 Leroy Place, Socorro, NM 87801. Instrument: Delay... dimensions. The experiments depend on this fast 3D scanning to capture sufficient data from the dendrites of...

  10. Scientific and technical guidance for the preparation and presentation of a health claim application (Revision 2)

    DEFF Research Database (Denmark)

    Sjödin, Anders Mikael

    2017-01-01

    EFSA asked the Panel on Dietetic Products, Nutrition and Allergies (NDA) to update the scientific and technical guidance for the preparation and presentation of an application for authorisation of a health claim published in 2011. Since then, the NDA Panel has gained considerable experience...... developments in this area. This guidance document presents a common format for the organisation of information for the preparation of a well-structured application for authorisation of health claims which fall under Articles 13(5), 14 and 19 of Regulation (EC) No 1924/2006. This guidance outlines...... the information and scientific data which must be included in the application, the hierarchy of different types of data and study designs, and the key issues which should be addressed in the application to substantiate the health claim....

  11. Back-End of the web application for the scientific journal Studia Kinanthropologica

    OpenAIRE

    ŠIMÁK, Lubomír

    2017-01-01

    The bachelor thesis deals with the creation of the server-side part of the web application for the peer-reviewed scientific journal Studia Kinanthropologica, which will be used to manage the review of articles for publication. The thesis describes the development of this system and the solutions to the problems that arose during the work.

  12. 15 CFR 301.3 - Application for duty-free entry of scientific instruments.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Application for duty-free entry of scientific instruments. 301.3 Section 301.3 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE MISCELLANEOUS...

  13. The sixth Nordic conference on the application of scientific methods in archaeology

    International Nuclear Information System (INIS)

    1993-01-01

    The Sixth Nordic Conference on the Application of Scientific Methods in Archaeology, with 73 participants, was convened in Esbjerg (Denmark), 19-23 September 1993. Topics discussed included isotope dating of archaeological, paleoecological and geochronological objects, neutron activation and XRF analytical methods, magnetometry, thermoluminescence, etc. The program included excursions to archaeological sites and a poster session with 12 posters. (EG)

  14. Proceedings of the Scientific Meeting on Research and Development of Isotopes Application and Radiation

    International Nuclear Information System (INIS)

    Singgih Sutrisno; Sofyan Yatim; Pattiradjawane, EIsje L.; Ismachin, Moch; Mugiono; Marga Utama; Komaruddin Idris

    2004-02-01

    The Scientific Meeting on Research and Development of Isotopes Application and Radiation was held on February 17-18, 2004, in Jakarta. The aim of the Meeting was to disseminate the results of research on the application of nuclear techniques in agriculture, animal science, industry, hydrology and the environment. There were 4 invited papers and 38 papers from participants within BATAN as well as outside. The articles are indexed separately. (PPIN)

  15. Proceeding on the scientific meeting and presentation on accelerator technology and its applications: physics, nuclear reactor

    International Nuclear Information System (INIS)

    Pramudita Anggraita; Sudjatmoko; Darsono; Tri Marji Atmono; Tjipto Sujitno; Wahini Nurhayati

    2012-01-01

    The scientific meeting and presentation on accelerator technology and its applications was held by PTAPB BATAN on 13 December 2011. The meeting aimed to promote accelerator technology and its applications among accelerator scientists, academics, researchers and technology users, and to present accelerator-based research conducted by researchers within and outside BATAN. This proceeding contains 23 papers on physics and nuclear reactors. (PPIKSN)

  16. Proceeding on the Scientific Meeting and Presentation on Accelerator Technology and Its Applications

    International Nuclear Information System (INIS)

    Susilo Widodo; Darsono; Slamet Santosa; Sudjatmoko; Tjipto Sujitno; Pramudita Anggraita; Wahini Nurhayati

    2015-11-01

    The scientific meeting and presentation on accelerator technology and its applications was held by PSTA BATAN on 30 November 2015. The meeting aimed to promote accelerator technology and its applications among accelerator scientists, academics, researchers and technology users, and to present accelerator-based research conducted by researchers within and outside BATAN. This proceeding contains 20 papers on physics and nuclear reactors. (PPIKSN)

  17. Application of Text Analytics to Extract and Analyze Material–Application Pairs from a Large Scientific Corpus

    Directory of Open Access Journals (Sweden)

    Nikhil Kalathil

    2018-01-01

    Full Text Available When assessing the importance of materials (or other components) to a given set of applications, machine analysis of a very large corpus of scientific abstracts can provide an analyst with a base of insights to develop further. The use of text analytics reduces the time required to conduct an evaluation, while allowing analysts to experiment with a multitude of different hypotheses. Because the scope and quantity of metadata analyzed can, and should, be large, any divergence between what a human analyst determines and what the text analysis shows provides a prompt for the human analyst to reassess any preliminary findings. In this work, we have successfully extracted material–application pairs and ranked them by their importance. This method provides a novel way to map scientific advances in a particular material to the application for which it is used. Approximately 438,000 titles and abstracts of scientific papers published from 1992 to 2011 were used to examine 16 materials. This analysis used co-clustering text analysis to associate individual materials with specific clean energy applications, evaluate the importance of materials to specific applications, and assess their importance to clean energy overall. Our analysis reproduced the judgments of experts in assigning material importance to applications. The validated methods were then used to map the replacement of one material with another material in a specific application (batteries).
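
    The core idea of associating materials with applications by how often they co-occur in abstracts can be illustrated with a crude count-based proxy. The real work uses co-clustering over roughly 438,000 abstracts; the terms and abstracts below are hypothetical:

```python
from collections import Counter
from itertools import product

def rank_pairs(abstracts, materials, applications):
    """Rank material-application pairs by the number of abstracts
    mentioning both terms -- a simple proxy for the co-clustering
    association used in the paper."""
    counts = Counter()
    for text in abstracts:
        t = text.lower()
        for m, a in product(materials, applications):
            if m in t and a in t:
                counts[(m, a)] += 1
    return counts.most_common()

# Hypothetical mini-corpus
abstracts = [
    "lithium cobalt oxide cathodes for battery storage",
    "graphite anodes improve battery cycle life",
    "platinum catalysts in fuel cell electrodes",
]
pairs = rank_pairs(abstracts, ["lithium", "graphite", "platinum"],
                   ["battery", "fuel cell"])
```

    Co-clustering goes further by grouping terms that share context even without literal co-occurrence, but the ranked-pair output has the same shape as this sketch.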

  18. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    International Nuclear Information System (INIS)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; Buluc, Aydin; Shao, Meiyue

    2017-01-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve SpMM and its transposed operation (SpMM^T) by using the compressed sparse blocks (CSB) format. We achieve 3-4x speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15x speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4x to 1.8x speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
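
    The central SpMM kernel — a sparse matrix times a block of dense vectors — in a minimal pure-Python CSR form (the paper's CSB format and cache-blocking optimizations are not reproduced here):

```python
def spmm(indptr, indices, data, X):
    """CSR sparse matrix times a block of dense vectors: Y = A @ X.
    X is stored row-wise; the multiple right-hand-side vectors are its
    columns, so each nonzero of A is reused across all vectors -- the
    data reuse that makes SpMM faster than repeated SpMV calls."""
    nrows = len(indptr) - 1
    ncols = len(X[0])
    Y = [[0.0] * ncols for _ in range(nrows)]
    for i in range(nrows):
        for k in range(indptr[i], indptr[i + 1]):   # nonzeros of row i
            j, v = indices[k], data[k]
            for c in range(ncols):                  # reuse v for every vector
                Y[i][c] += v * X[j][c]
    return Y

# A = [[2, 0], [1, 3]] in CSR form; X is the 2x2 identity, so Y == A
Y = spmm([0, 1, 3], [0, 0, 1], [2.0, 1.0, 3.0], [[1.0, 0.0], [0.0, 1.0]])
# Y == [[2.0, 0.0], [1.0, 3.0]]
```

    Block eigensolvers such as LOBPCG call exactly this kernel on a tall-skinny block of iterate vectors, which is why optimizing SpMM (and SpMM^T) dominates the solver's performance.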

  19. Predicting environmental aspects of CCSR leachates through the application of scientifically valid leaching protocols

    International Nuclear Information System (INIS)

    Hassett, D.J.

    1993-01-01

    The disposal of solid wastes from energy production, particularly solid wastes from coal conversion processes, requires a thorough understanding of the waste material as well as of the disposal environment. Many coal conversion solid residues (CCSRs) have chemical, mineralogical, and physical properties advantageous for use as engineering construction materials and in other industrial applications. If disposal is to be the final disposition of CCSRs from any source, the very properties that can make ash useful also contribute to behavior that must be understood for scientifically logical and environmentally responsible disposal. This paper describes the application of scientifically valid leaching and characterization tests designed to predict field phenomena. The key to proper characterization of these unique materials is the recognition of, and compensation for, the hydration reactions that can occur during long-term leaching. Many of these reactions, such as the formation of the mineral ettringite, can have a profound effect on the concentration of potentially problematic trace elements such as boron, chromium, and selenium. The mobility of these elements, which may be concentrated in CCSRs due to the conversion process, must be properly evaluated to enable informed and scientifically sound decisions regarding safe disposal. Groundwater is an extremely important and relatively scarce resource. Contamination of this resource is a threat to the life that depends on it, so management of materials that can impact groundwater must be carefully planned and executed. The application of scientifically valid leaching protocols and complete testing are critical to proper waste management.

  20. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is presented for global task scheduling in the cloud environment. The experiment was carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution, more suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
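
    The GBLCA metaheuristic itself is too involved for a short sketch, but the MinMin baseline it is compared against is simple: repeatedly schedule the (task, machine) pair with the smallest completion time. The task/VM execution times below are hypothetical:

```python
def min_min(task_times):
    """MinMin heuristic. task_times[t][m] is the execution time of task t
    on machine m. Returns (assignment dict task -> machine, makespan)."""
    unscheduled = set(range(len(task_times)))
    nmach = len(task_times[0])
    ready = [0.0] * nmach              # earliest free time per machine
    assign = {}
    while unscheduled:
        # Pick the pair with the minimum completion time over all
        # unscheduled tasks and all machines.
        t, m, finish = min(
            ((t, m, ready[m] + task_times[t][m])
             for t in unscheduled for m in range(nmach)),
            key=lambda x: x[2])
        assign[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return assign, max(ready)

# Hypothetical times: 3 tasks on 2 heterogeneous machines
assign, makespan = min_min([[4, 2], [3, 5], [1, 6]])
```

    Metaheuristics like GBLCA search beyond such greedy choices, which is how they obtain the 14-46% makespan improvements reported over MinMin and the other baselines.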

  2. High Performance Proactive Digital Forensics

    International Nuclear Information System (INIS)

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident when investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
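
    The abstract does not spell out the iterative z algorithm, but the building block such detectors iterate on is the plain z-score outlier test, which can be sketched as follows (the threshold and event counts are illustrative assumptions):

```python
import statistics

def z_outliers(values, threshold=3.0):
    """Flag indices whose z-score exceeds the threshold -- the basic
    statistical-outlier test that iterative and parallel variants build
    on. (The paper's actual 'iterative z algorithm' is not specified in
    the abstract; this is the conventional single pass.)"""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:
        return []   # no variation, nothing to flag
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Hypothetical file-access counts per monitored process;
# the process at index 5 is anomalously active.
events = [10, 12, 11, 9, 10, 500, 11, 10]
suspects = z_outliers(events, threshold=2.0)
```

    Iterative variants typically remove flagged points and recompute the statistics, and the per-target independence of the test is what makes it easy to parallelize across a large event stream.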

  3. Manuscript Architect: a Web application for scientific writing in virtual interdisciplinary groups

    Directory of Open Access Journals (Sweden)

    Menezes Andreia P

    2005-06-01

    Full Text Available Abstract Background Although scientific writing plays a central role in the communication of clinical research findings and consumes a significant amount of time from clinical researchers, few Web applications have been designed to systematically improve the writing process. This application had as its main objective the separation of the multiple tasks associated with scientific writing into smaller components. It also aimed to provide a mechanism whereby sections of the manuscript (text blocks) could be assigned to different specialists. Manuscript Architect was built using the Java language in conjunction with the classic lifecycle development method. The interface was designed for simplicity and economy of movements. Manuscripts are divided into multiple text blocks that can be assigned to different co-authors by the first author. Each text block contains notes to guide co-authors regarding the central focus of each text block, previous examples, and an additional field for translation when the initial text is written in a language different from the one used by the target journal. Usability was evaluated using formal usability tests and field observations. Results The application presented excellent usability and integration with the regular writing habits of experienced researchers. Workshops were developed to train novice researchers, producing an accelerated learning curve. The application has been used in over 20 different scientific articles and grant proposals. Conclusion The current version of Manuscript Architect has proven very useful in the writing of multiple scientific texts, suggesting that virtual writing by interdisciplinary groups is an effective manner of scientific writing when interdisciplinary work is required.

  4. Porting of Scientific Applications to Grid Computing on GridWay

    Directory of Open Access Journals (Sweden)

    J. Herrera

    2005-01-01

    Full Text Available The expansion and adoption of Grid technologies is hindered by the lack of a standard programming paradigm to port existing applications among different environments. The Distributed Resource Management Application API (DRMAA) has been proposed to aid the rapid development and distribution of these applications across different Distributed Resource Management Systems. In this paper we describe an implementation of the DRMAA standard on a Globus-based testbed, and show its suitability to express typical scientific applications, like High-Throughput and Master-Worker applications. The DRMAA routines are supported by the functionality offered by the GridWay framework, which provides the runtime mechanisms needed for transparently executing jobs on a dynamic Grid environment based on Globus. As case studies, we consider the implementation with DRMAA of a bioinformatics application, a genetic algorithm and the NAS Grid Benchmarks.
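
    The DRMAA programming pattern the paper builds on is a submit/wait job lifecycle. The sketch below mimics that lifecycle with a toy local stand-in (subprocess-based); the class and method names are illustrative only, not the real DRMAA bindings or the GridWay runtime:

```python
import subprocess
import sys

class LocalDrmaaLikeSession:
    """Toy stand-in for a DRMAA session: same submit/wait lifecycle the
    DRMAA standard defines, but 'jobs' run locally via subprocess."""

    def __init__(self):
        self._procs = {}
        self._next_id = 0

    def run_job(self, command, args):
        """Submit a job (cf. DRMAA's runJob on a job template)."""
        job_id = str(self._next_id)
        self._next_id += 1
        self._procs[job_id] = subprocess.Popen(
            [command, *args], stdout=subprocess.PIPE, text=True)
        return job_id

    def wait(self, job_id):
        """Block until the job finishes and return (exit code, stdout),
        cf. DRMAA's wait."""
        proc = self._procs.pop(job_id)
        out, _ = proc.communicate()
        return proc.returncode, out

# High-Throughput style: submit independent jobs, then collect results.
session = LocalDrmaaLikeSession()
job_ids = [session.run_job(sys.executable, ["-c", f"print('task-{i}')"])
           for i in range(3)]
results = [session.wait(j) for j in job_ids]
```

    Because the application only sees this submit/wait interface, the same code can be retargeted from a local runner to GridWay/Globus or any other DRM system — which is exactly the portability argument the paper makes for DRMAA.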

  5. High Performance Electronics on Flexible Silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer-based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in-depth studies on the electrical, mechanical, and thermal properties of the fabricated devices.

  6. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  7. Playa: High-Performance Programmable Linear Algebra

    Directory of Open Access Journals (Sweden)

    Victoria E. Howle

    2012-01-01

    Full Text Available This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.
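    Playa's overloaded operators are implemented with C++ expression templates. As a hedged Python analogue of the underlying idea only (operator overloading that builds a deferred expression instead of allocating intermediate vectors; the `Vec` and `Expr` class names are invented for illustration, not Playa's API):

    ```python
    class Expr:
        """Deferred elementwise expression; nothing is computed until
        evaluate() runs a single fused loop."""
        def __init__(self, fn):
            self.fn = fn

        def __getitem__(self, i):
            return self.fn(i)

        def __add__(self, other):
            return Expr(lambda i: self[i] + other[i])

        def evaluate(self, n):
            # One loop evaluates the whole expression tree elementwise,
            # with no temporary vectors allocated along the way.
            return Vec(self[i] for i in range(n))

    class Vec:
        def __init__(self, data):
            self.data = list(data)

        def __getitem__(self, i):
            return self.data[i]

        def __add__(self, other):
            # Overloading returns a deferred expression, not a new vector.
            return Expr(lambda i: self[i] + other[i])

    a, b, c = Vec([1, 2]), Vec([3, 4]), Vec([5, 6])
    r = (a + b + c).evaluate(2)
    print(r.data)  # -> [9, 12]
    ```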

  8. An integrated high performance fastbus slave interface

    International Nuclear Information System (INIS)

    Christiansen, J.; Ljuslin, C.

    1992-01-01

    A high performance Fastbus slave interface ASIC is presented. The Fastbus slave integrated circuit (FASIC) is a programmable device, enabling its direct use in many different applications. The FASIC acts as an interface between Fastbus and a 'standard' processor/memory bus. It can work stand-alone or together with a microprocessor. A set of address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/s to Fastbus can be obtained using an internal FIFO buffer in the FASIC. (orig.)
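    The address-mapping-window mechanism described can be sketched as follows; the window layout here is hypothetical and is not the actual FASIC register map:

    ```python
    def map_address(fastbus_addr, windows):
        """Map a Fastbus address through the first matching window.

        Each window is (fastbus_base, size, local_base). Returning None
        means no window matched, i.e. the address is not decoded, so the
        windows double as address-decoding logic.
        """
        for base, size, local_base in windows:
            if base <= fastbus_addr < base + size:
                return local_base + (fastbus_addr - base)
        return None

    # Hypothetical mapping: two windows onto a local memory bus.
    windows = [(0x1000, 0x100, 0x0), (0x8000, 0x400, 0x200)]
    print(hex(map_address(0x1010, windows)))  # -> 0x10
    print(map_address(0x5000, windows))       # -> None
    ```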

  9. High performance soft magnetic materials

    CERN Document Server

    2017-01-01

    This book provides comprehensive coverage of the current state-of-the-art in soft magnetic materials and related applications, with particular focus on amorphous and nanocrystalline magnetic wires and ribbons and sensor applications. Expert chapters cover preparation, processing, tuning of magnetic properties, modeling, and applications. Cost-effective soft magnetic materials are required in a range of industrial sectors, such as magnetic sensors and actuators, microelectronics, cell phones, security, automobiles, medicine, health monitoring, aerospace, informatics, and electrical engineering. This book presents both fundamentals and applications to enable academic and industry researchers to pursue further developments of these key materials. This highly interdisciplinary volume represents essential reading for researchers in materials science, magnetism, electrodynamics, and modeling who are interested in working with soft magnets. Covers magnetic microwires, sensor applications, amorphous and nanocrystalli...

  10. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation
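    One of the kernels named above, graph coloring, is commonly built on the greedy heuristic; a minimal sequential sketch is below (the CSCAPES work parallelizes variants of this idea, which this illustration does not attempt):

    ```python
    def greedy_color(adj):
        """Greedy vertex coloring: each vertex takes the smallest color
        not already used by a colored neighbor."""
        color = {}
        for v in adj:
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        return color

    # A 4-cycle: two colors suffice.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    coloring = greedy_color(adj)
    # Verify the coloring is proper: no edge joins same-colored vertices.
    assert all(coloring[u] != coloring[v] for u in adj for v in adj[u])
    print(coloring)  # -> {0: 0, 1: 1, 2: 0, 3: 1}
    ```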

  11. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  12. Development of high performance cladding

    International Nuclear Information System (INIS)

    Kiuchi, Kiyoshi

    2003-01-01

    The development of superior next-generation light water reactors is requested from general viewpoints such as improvement of safety and economics, reduction of radioactive waste, and effective utilization of plutonium, by 2030, when conventional reactor plants should be renovated. Improvement of stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, development of manufacturing technology for the reduced moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and research on water-materials interaction in the supercritical-pressure water cooled reactor are carried out at the Japan Atomic Energy Research Institute. A stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR). The austenitic stainless steel is superior in anti-irradiation properties, corrosion resistance and mechanical strength. A hard neutron energy spectrum, above 0.1 MeV, occurs in the core of the reduced moderation light water reactor, as in the liquid metal fast breeder reactor (LMFBR). High performance cladding for the RMWR fuel elements is likewise required to provide anti-irradiation properties, corrosion resistance and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are carried out to study stress corrosion cracking (SCC). Irradiation tests in an LMFBR are intended to obtain data on irradiation damage to the cladding materials. (M. Suetake)

  13. High performance fuel technology development

    Energy Technology Data Exchange (ETDEWEB)

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    ○ Development of High Plasticity and Annular Pellet
      - Development of strong candidates of ultra high burn-up fuel pellets for a PCI remedy
      - Development of fabrication technology of annular fuel pellet
    ○ Development of High Performance Cladding Materials
      - Irradiation test of HANA claddings in Halden research reactor and the evaluation of the in-pile performance
      - Development of the final candidates for the next generation cladding materials
      - Development of the manufacturing technology for the dual-cooled fuel cladding tubes
    ○ Irradiated Fuel Performance Evaluation Technology Development
      - Development of performance analysis code system for the dual-cooled fuel
      - Development of fuel performance-proving technology
    ○ Feasibility Studies on Dual-Cooled Annular Fuel Core
      - Analysis on the property of a reactor core with dual-cooled fuel
      - Feasibility evaluation on the dual-cooled fuel core
    ○ Development of Design Technology for Dual-Cooled Fuel Structure
      - Definition of technical issues and invention of concept for dual-cooled fuel structure
      - Basic design and development of main structure components for dual-cooled fuel
      - Basic design of a dual-cooled fuel rod

  14. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  15. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
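    As a hedged illustration of a bandwidth-contention runtime model of the general kind described (a toy roofline-style formula with invented numbers, not the paper's actual model):

    ```python
    def predicted_time(flops, bytes_moved, flop_rate, node_bw, cores):
        """Toy model: per-core sustained bandwidth shrinks as cores
        contend for the node's memory bus, and runtime is bounded by the
        slower of compute and memory traffic."""
        bw_per_core = node_bw / cores  # crude contention assumption
        return max(flops / flop_rate, bytes_moved / bw_per_core)

    # Hypothetical per-core workload: 1e9 flops, 8e8 bytes moved,
    # 2 Gflop/s per core, 25.6 GB/s node bandwidth shared by 16 cores.
    t = predicted_time(1e9, 8e8, 2e9, 2.56e10, 16)
    print(t)  # -> 0.5
    ```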

  16. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  17. 77 FR 39682 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2012-07-05

    ... invite comments on the question of whether instruments of equivalent scientific value, for the purposes... components with increased reliability, performance, reduction of cost, and improved safety, using technology... reliability investigations on the nanometer scale, to identify porosity, fracture surface features, fiber...

  18. 76 FR 52314 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2011-08-22

    ... invite comments on the question of whether instruments of equivalent scientific value, for the purposes...: Projekt Messtechnik, Germany. Intended Use: The SPSx will be used to monitor the water-solid interaction... instrument monitors water-solid interactions by taking gravimetric measurement of samples continuously using...

  19. Future Translational Applications From the Contemporary Genomics Era: A Scientific Statement From the American Heart Association

    OpenAIRE

    Fox, Caroline S.; Hall, Jennifer L.; Arnett, Donna K.; Ashley, Euan A.; Delles, Christian; Engler, Mary B.; Freeman, Mason W.; Johnson, Julie A.; Lanfear, David E.; Liggett, Stephen B.; Lusis, Aldons J.; Loscalzo, Joseph; MacRae, Calum A.; Musunuru, Kiran; Newby, L. Kristin

    2015-01-01

    The field of genetics and genomics has advanced considerably with the achievement of recent milestones encompassing the identification of many loci for cardiovascular disease and variable drug responses. Despite this achievement, a gap exists in the understanding and advancement to meaningful translation that directly affects disease prevention and clinical care. The purpose of this scientific statement is to address the gap between genetic discoveries and their practical application to cardi...

  20. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply sophisticated statistical tools and data mining methods to the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from a genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
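    The ingest-and-extract step can be illustrated with a minimal sketch that parses timestamped task events and computes per-task durations to flag stragglers. The log format here is an assumption for illustration, not the actual NERSC log schema:

    ```python
    import re
    from datetime import datetime

    LINE = re.compile(r"(\S+ \S+) task=(\w+) event=(start|end)")

    def task_durations(lines):
        """Extract per-task wall-clock durations from start/end events."""
        start, dur = {}, {}
        for line in lines:
            m = LINE.search(line)
            if not m:
                continue  # skip lines that are not task events
            ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
            task, event = m.group(2), m.group(3)
            if event == "start":
                start[task] = ts
            elif task in start:
                dur[task] = (ts - start[task]).total_seconds()
        return dur

    # Hypothetical log excerpt: task b takes four times as long as a.
    logs = [
        "2016-09-17 10:00:00 task=a event=start",
        "2016-09-17 10:00:05 task=b event=start",
        "2016-09-17 10:00:30 task=a event=end",
        "2016-09-17 10:02:05 task=b event=end",
    ]
    print(task_durations(logs))  # -> {'a': 30.0, 'b': 120.0}
    ```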

  1. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the framework of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  2. High-Performance Matrix-Vector Multiplication on the GPU

    DEFF Research Database (Denmark)

    Sørensen, Hans Henrik Brandenborg

    2012-01-01

    In this paper, we develop a high-performance GPU kernel for one of the most popular dense linear algebra operations, the matrix-vector multiplication. The target hardware is the most recent Nvidia Tesla 20-series (Fermi architecture), which is designed from the ground up for scientific computing...
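    A hedged pure-Python sketch of the row-blocked decomposition such GPU kernels use: each block of rows is an independent unit of work, as a thread block would be on the Fermi hardware. This illustrates the decomposition only and is not the paper's actual kernel:

    ```python
    def matvec_blocked(A, x, block=2):
        """Row-blocked matrix-vector product y = A*x: each band of rows
        is an independent unit of work, mirroring how a GPU kernel maps
        row blocks onto thread blocks."""
        n = len(A)
        y = [0.0] * n
        for r0 in range(0, n, block):  # one "thread block" per band
            for i in range(r0, min(r0 + block, n)):
                for j in range(len(x)):
                    y[i] += A[i][j] * x[j]
        return y

    A = [[1, 2], [3, 4], [5, 6]]
    x = [1, 1]
    print(matvec_blocked(A, x))  # -> [3.0, 7.0, 11.0]
    ```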

  3. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  4. A framework for integration of scientific applications into the OpenTopography workflow

    Science.gov (United States)

    Nandigam, V.; Crosby, C.; Baru, C.

    2012-12-01

    The NSF-funded OpenTopography facility provides online access to Earth science-oriented high-resolution LIDAR topography data, online processing tools, and derivative products. The underlying cyberinfrastructure employs a multi-tier service-oriented architecture comprised of an infrastructure tier, a processing services tier, and an application tier. The infrastructure tier consists of storage and compute resources as well as supporting databases. The services tier consists of the set of processing routines, each deployed as a Web service. The applications tier provides client interfaces to the system (e.g. a portal). We propose a "pluggable" infrastructure design that will allow new scientific algorithms and processing routines, developed and maintained by the community, to be integrated into the OpenTopography system so that the wider earth science community can benefit from their availability. All core components in OpenTopography are available as Web services using a customized open-source Opal toolkit. The Opal toolkit provides mechanisms to manage and track job submissions with the help of a back-end database, and allows monitoring of job and system status through charting tools. All core components in OpenTopography have been developed, maintained and wrapped as Web services using Opal by OpenTopography developers. However, as the scientific community develops new processing and analysis approaches, this integration approach does not scale efficiently. Most of the new scientific applications will have their own active development teams performing regular updates, maintenance and other improvements. It would be optimal to have each application co-located where its developers can continue to actively work on it while still making it accessible within the OpenTopography workflow for processing capabilities. We will utilize a software framework for remote integration of these scientific applications into the OpenTopography system.
This will be accomplished by

  5. Results of data base management system parameterized performance testing related to GSFC scientific applications

    Science.gov (United States)

    Carchedi, C. H.; Gough, T. L.; Huston, H. A.

    1983-01-01

    The results of a variety of tests designed to demonstrate and evaluate the performance of several commercially available data base management system (DBMS) products compatible with the Digital Equipment Corporation VAX 11/780 computer system are summarized. The tests were performed on the INGRES, ORACLE, and SEED DBMS products, employing applications similar to scientific applications under development by NASA. The objectives of this testing included determining the strengths and weaknesses of the candidate systems, the performance trade-offs of various design alternatives, and the impact of some installation and environmental (computer-related) influences.

  6. High-performance composite chocolate

    Science.gov (United States)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.
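    The selection process the practical teaches can be sketched as a merit-index calculation; the index (specific stiffness per unit cost) and the candidate filler values below are invented purely for illustration, not taken from the paper:

    ```python
    def merit_index(stiffness_gpa, density, cost_per_kg):
        """Toy selection index: stiffness per unit mass per unit cost.
        Higher is better; the criterion itself is illustrative only."""
        return stiffness_gpa / (density * cost_per_kg)

    # Hypothetical candidate fillers for a chocolate composite.
    candidates = {
        "plain": merit_index(0.1, 1.3, 1.0),
        "rice":  merit_index(0.5, 1.1, 1.2),
        "nuts":  merit_index(0.3, 1.2, 2.0),
    }
    best = max(candidates, key=candidates.get)
    print(best)  # prints "rice"
    ```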

  7. Scientific Services on the Cloud

    Science.gov (United States)

    Chapman, David; Joshi, Karuna P.; Yesha, Yelena; Halem, Milt; Yesha, Yaacov; Nguyen, Phuong

    Scientific computing was one of the first ever applications for parallel and distributed computation. To this day, scientific applications remain some of the most compute-intensive, and have inspired the creation of petaflop compute infrastructure such as the Oak Ridge Jaguar and Los Alamos RoadRunner. Large dedicated hardware infrastructure has become both a blessing and a curse to the scientific community. Scientists are interested in cloud computing for much the same reasons as businesses and other professionals. The hardware is provided, maintained, and administrated by a third party. Software abstraction and virtualization provide reliability and fault tolerance. Graduated fees allow for multi-scale prototyping and execution. Cloud computing resources are only a few clicks away, and by far the easiest high performance distributed platform to gain access to. There may still be dedicated infrastructure for ultra-scale science, but the cloud can easily play a major part in the scientific computing initiative.

  8. Intel Xeon Phi coprocessor high performance programming

    CERN Document Server

    Jeffers, James

    2013-01-01

    Authors Jim Jeffers and James Reinders spent two years helping educate customers about the prototype and pre-production hardware before Intel introduced the first Intel Xeon Phi coprocessor. They have distilled their own experiences coupled with insights from many expert customers, Intel Field Engineers, Application Engineers and Technical Consulting Engineers, to create this authoritative first book on the essentials of programming for this new architecture and these new products. This book is useful even before you ever touch a system with an Intel Xeon Phi coprocessor. To ensure that your applications run at maximum efficiency, the authors emphasize key techniques for programming any modern parallel computing system whether based on Intel Xeon processors, Intel Xeon Phi coprocessors, or other high performance microprocessors. Applying these techniques will generally increase your program performance on any system, and better prepare you for Intel Xeon Phi coprocessors and the Intel MIC architecture. It off...

  9. High performance nano-composite technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nano composites, depending on the polymer matrix and filler materials, range from the semiconductor to the medical field. In spite of these merits, nano-composite studies are confined to a few special materials at laboratory scale, because a few technical difficulties are still unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  11. High performance nano-composite technology development

    International Nuclear Information System (INIS)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D.; Kim, E. K.; Jung, S. Y.; Ryu, H. J.; Hwang, S. S.; Kim, J. K.; Hong, S. M.; Chea, Y. B.; Choi, C. H.; Kim, S. D.; Cho, B. G.; Lee, S. H.

    1999-06-01

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great impact on various industrial areas. Depending on the polymer matrix and filler materials, nano-composites find applications ranging from semiconductors to the medical field. Despite these merits, nano-composite studies have been confined to a few special materials at laboratory scale because several technical difficulties remain unresolved. Therefore, the purpose of this study is to establish systematic planning for carrying out next-generation projects, in order to compete with other countries and overcome the protective policies of advanced countries, by surveying overseas development trends and our present status. (author).

  12. 77 FR 26507 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2012-05-04

    ... States. Application accepted by Commissioner of Customs: March 29, 2012. Docket Number: 12-018. Applicant... general category manufactured in the United States. Application accepted by Commissioner of Customs: March...: The instrument will be used to investigate the genes and proteins that underlie normal and pathologic...

  13. Decal electronics for printed high performance cmos electronic systems

    KAUST Repository

    Hussain, Muhammad Mustafa; Sevilla, Galo Torres; Cordero, Marlon Diaz; Kutbee, Arwa T.

    2017-01-01

    High performance complementary metal oxide semiconductor (CMOS) electronics are critical for any full-fledged electronic system. However, state-of-the-art CMOS electronics are rigid and bulky making them unusable for flexible electronic applications

  14. High Performance Thin-Film Composite Forward Osmosis Membrane

    KAUST Repository

    Yip, Ngai Yin; Tiraferri, Alberto; Phillip, William A.; Schiffman, Jessica D.; Elimelech, Menachem

    2010-01-01

    obstacle hindering further advancements of this technology. This work presents the development of a high performance thin-film composite membrane for forward osmosis applications. The membrane consists of a selective polyamide active layer formed

  15. High-Performance Composite Chocolate

    Science.gov (United States)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  16. High-performance non-enzymatic catalysts based on 3D hierarchical hollow porous Co3O4 nanododecahedras in situ decorated on carbon nanotubes for glucose detection and biofuel cell application.

    Science.gov (United States)

    Wang, Shiyue; Zhang, Xiaohua; Huang, Junlin; Chen, Jinhua

    2018-03-01

    In this work, high-performance non-enzymatic catalysts based on 3D hierarchical hollow porous Co3O4 nanododecahedra in situ decorated on carbon nanotubes (3D Co3O4-HPND/CNTs) were successfully prepared via direct carbonization of metal-organic framework-67 grown in situ on carbon nanotubes. The morphology, microstructure, and composition of 3D Co3O4-HPND/CNTs were characterized by scanning electron microscopy, transmission electron microscopy, a micropore and chemisorption analyzer, and X-ray diffraction. The electrochemical characterizations indicated that 3D Co3O4-HPND/CNTs exhibit considerable catalytic activity toward glucose oxidation and are promising for constructing high-performance electrochemical non-enzymatic glucose sensors and glucose/O2 biofuel cells. When used for non-enzymatic glucose detection, the 3D Co3O4-HPND/CNTs modified glassy carbon electrode (3D Co3O4-HPND/CNTs/GCE) exhibited excellent analytical performance with high sensitivity (22.21 mA mM-1 cm-2), a low detection limit of 0.35 μM (S/N = 3), fast response (less than 5 s), and good stability. On the other hand, when the 3D Co3O4-HPND/CNTs/GCE served as the anode of a biofuel cell, a maximum power density of 210 μW cm-2 at 0.15 V was obtained, and the open-circuit potential was 0.68 V. The attractive 3D hierarchical porous structural features, the large surface area, and the excellent conductivity arising from the continuous and effective electron transport network endow 3D Co3O4-HPND/CNTs with enhanced electrochemical performance and promising applications in electrochemical sensing, biofuel cells, and other energy storage and conversion devices such as supercapacitors. Graphical abstract: High-performance non-enzymatic catalysts for enzymeless glucose sensing and biofuel cells, based on 3D hierarchical hollow porous Co3O4 nanododecahedra anchored on carbon nanotubes, were successfully prepared via direct carbonization

  17. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  18. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  19. High performance electromagnetic simulation tools

    Science.gov (United States)

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm and a parallel planar generalized Yee algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study, and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.
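The FDTD method named in this record advances Maxwell's equations in time on a staggered (Yee) grid. As a minimal illustration only, and not the parallel planar generalized Yee code the record describes, a 1D FDTD update in normalized units (grid size, Courant number 0.5, and the soft Gaussian source are all illustrative assumptions) might look like:

```python
import numpy as np

def fdtd_1d(nx=200, nt=300, source_pos=100):
    """Minimal 1D FDTD sketch: leapfrog updates of E and H on a staggered grid.

    Normalized units with Courant number 0.5 (stable for S <= 1 in 1D).
    Returns the electric field after nt time steps.
    """
    ez = np.zeros(nx)  # electric field at integer grid points
    hy = np.zeros(nx)  # magnetic field at half-integer grid points
    for t in range(nt):
        # update H from the spatial difference (curl) of E
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # update E from the spatial difference (curl) of H
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # inject a soft Gaussian pulse at the source location
        ez[source_pos] += np.exp(-((t - 30) / 10.0) ** 2)
    return ez
```

A parallel version, as in the record, would partition the grid across processors and exchange only the boundary field values of each subdomain at every step.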

  20. High-Performance Phylogeny Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics--one of the most important and exciting disciplines in computational biology. A phylogeny depicts the evolutionary relationships among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree-space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly and accurately analyze large amounts of data will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, it summarizes my scientific, community, and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.
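The tree-space figure quoted above ("over 13 billion possible trees for 13 organisms") follows from the standard count of distinct unrooted binary tree topologies on n labeled taxa, (2n-5)!! = 1 · 3 · 5 · ... · (2n-5). A quick sketch (function name is ours, not the report's) verifies it:

```python
def num_unrooted_trees(n):
    """Number of distinct unrooted binary tree topologies on n labeled taxa.

    For n >= 3 this is the double factorial (2n-5)!!; each new taxon can be
    attached to any of the 2k-5 edges of the tree on k-1 taxa.
    """
    count = 1
    for k in range(3, n + 1):
        count *= 2 * k - 5
    return count

# num_unrooted_trees(13) == 13_749_310_575, i.e. over 13 billion trees
```

This super-exponential growth is exactly why exhaustive search is hopeless and the bounded heuristic searches mentioned above are needed.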

  1. Teleconference versus Face-to-Face Scientific Peer Review of Grant Application: Effects on Review Outcomes

    Science.gov (United States)

    Gallo, Stephen A.; Carpenter, Afton S.; Glisson, Scott R.

    2013-01-01

    Teleconferencing as a setting for scientific peer review is an attractive option for funding agencies, given the substantial environmental and cost savings. Despite this, there is a paucity of published data validating teleconference-based peer review compared to the face-to-face process. Our aim was to conduct a retrospective analysis of scientific peer review data to investigate whether review setting has an effect on review process and outcome measures. We analyzed reviewer scoring data from a research program that had recently modified the review setting from face-to-face to a teleconference format with minimal changes to the overall review procedures. This analysis included approximately 1600 applications over a 4-year period: two years of face-to-face panel meetings compared to two years of teleconference meetings. The average overall scientific merit scores, score distribution, standard deviations and reviewer inter-rater reliability statistics were measured, as well as reviewer demographics and length of time discussing applications. The data indicate that few differences are evident between face-to-face and teleconference settings with regard to average overall scientific merit score, scoring distribution, standard deviation, reviewer demographics or inter-rater reliability. However, some difference was found in the discussion time. These findings suggest that most review outcome measures are unaffected by review setting, which would support the trend of using teleconference reviews rather than face-to-face meetings. However, further studies are needed to assess any correlations among discussion time, application funding and the productivity of funded research projects. PMID:23951223
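Inter-rater reliability of the kind analyzed in this study is commonly summarized with an intraclass correlation coefficient. As a hedged sketch only (the record does not state which statistic the authors used), a one-way random-effects ICC(1,1) over a targets-by-raters score matrix can be computed as:

```python
import numpy as np

def icc_1_1(scores):
    """One-way random-effects ICC(1,1) over an (n_targets, k_raters) matrix.

    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB is the between-target
    mean square and MSW the within-target (between-rater) mean square.
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

With perfect rater agreement the statistic is 1; values near 0 (or negative) indicate that raters disagree as much within an application as across applications.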

  2. Teleconference versus face-to-face scientific peer review of grant application: effects on review outcomes.

    Directory of Open Access Journals (Sweden)

    Stephen A Gallo

    Full Text Available Teleconferencing as a setting for scientific peer review is an attractive option for funding agencies, given the substantial environmental and cost savings. Despite this, there is a paucity of published data validating teleconference-based peer review compared to the face-to-face process. Our aim was to conduct a retrospective analysis of scientific peer review data to investigate whether review setting has an effect on review process and outcome measures. We analyzed reviewer scoring data from a research program that had recently modified the review setting from face-to-face to a teleconference format with minimal changes to the overall review procedures. This analysis included approximately 1600 applications over a 4-year period: two years of face-to-face panel meetings compared to two years of teleconference meetings. The average overall scientific merit scores, score distribution, standard deviations and reviewer inter-rater reliability statistics were measured, as well as reviewer demographics and length of time discussing applications. The data indicate that few differences are evident between face-to-face and teleconference settings with regard to average overall scientific merit score, scoring distribution, standard deviation, reviewer demographics or inter-rater reliability. However, some difference was found in the discussion time. These findings suggest that most review outcome measures are unaffected by review setting, which would support the trend of using teleconference reviews rather than face-to-face meetings. However, further studies are needed to assess any correlations among discussion time, application funding and the productivity of funded research projects.

  3. 75 FR 51239 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2010-08-19

    ... operation, i.e., scanning tunneling microscopy and atomic force microscopy. Justification for Duty-Free... Microscope System for Application in High Magnetic Fields. Manufacturer: Omicron Nanotechnology, Germany...

  4. 75 FR 34096 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2010-06-16

    ... University Department of Chemistry, 3501 Laclede Ave., St. Louis, MO 63103. Instrument: Electron Microscope..., enhanced thermal conductivity (lubricants), thermally stable, light-weight materials for space applications...

  5. High Performance JavaScript

    CERN Document Server

    Zakas, Nicholas

    2010-01-01

    If you're like most developers, you rely heavily on JavaScript to build interactive and quick-responding web applications. The problem is that all of those lines of JavaScript code can slow down your apps. This book reveals techniques and strategies to help you eliminate performance bottlenecks during development. You'll learn how to improve execution time, downloading, interaction with the DOM, page life cycle, and more. Yahoo! frontend engineer Nicholas C. Zakas and five other JavaScript experts -- Ross Harmes, Julien Lecomte, Steven Levithan, Stoyan Stefanov, and Matt Sweeney -- demonstra

  6. Learning Apache Solr high performance

    CERN Document Server

    Mohan, Surendra

    2014-01-01

    This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a clear, user-friendly flow, from search optimization using Solr to the deployment of Zookeeper applications. This book is ideal for Apache Solr developers who want to learn different techniques to optimize Solr performance with utmost efficiency, and to effectively troubleshoot the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.

  7. Future translational applications from the contemporary genomics era: a scientific statement from the American Heart Association.

    Science.gov (United States)

    Fox, Caroline S; Hall, Jennifer L; Arnett, Donna K; Ashley, Euan A; Delles, Christian; Engler, Mary B; Freeman, Mason W; Johnson, Julie A; Lanfear, David E; Liggett, Stephen B; Lusis, Aldons J; Loscalzo, Joseph; MacRae, Calum A; Musunuru, Kiran; Newby, L Kristin; O'Donnell, Christopher J; Rich, Stephen S; Terzic, Andre

    2015-05-12

    The field of genetics and genomics has advanced considerably with the achievement of recent milestones encompassing the identification of many loci for cardiovascular disease and variable drug responses. Despite this achievement, a gap exists in the understanding and advancement to meaningful translation that directly affects disease prevention and clinical care. The purpose of this scientific statement is to address the gap between genetic discoveries and their practical application to cardiovascular clinical care. In brief, this scientific statement assesses the current timeline for effective translation of basic discoveries to clinical advances, highlighting past successes. Current discoveries in the area of genetics and genomics are covered next, followed by future expectations, tools, and competencies for achieving the goal of improving clinical care. © 2015 American Heart Association, Inc.

  8. Application of quality assurance to scientific activities at Westinghouse Hanford Company

    International Nuclear Information System (INIS)

    Delvin, W.L.; Farwick, D.G.

    1988-01-01

    The application of quality assurance to scientific activities has been an ongoing subject of review, discussion, interpretation, and evaluation within the nuclear community for the past several years. This paper provides a discussion on the natures of science and quality assurance and presents suggestions for integrating the two successfully. The paper shows how those actions were used at the Westinghouse Hanford Company to successfully apply quality assurance to experimental studies and materials testing and evaluation activities that supported a major project. An important factor in developing and implementing the quality assurance program was the close working relationship that existed between the assigned quality engineers and the scientists. The quality engineers, who had had working experience in the scientific disciplines involved, were able to bridge across from the scientists to the more traditional quality assurance personnel who had overall responsibility for the project's quality assurance program

  9. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    Efficient parallel implementation of scientific applications on multi-core CPUs with accelerators such as GPUs and Xeon Phis is challenging. This requires exploiting the data-parallel architecture of the accelerator along with the vector pipelines of modern x86 CPU architectures, load balancing, and efficient memory transfer between different devices. It is relatively easy to meet these requirements for highly structured scientific applications. In contrast, a number of scientific and engineering applications are unstructured. Getting performance on accelerators for these applications is extremely challenging because many of them employ irregular algorithms which exhibit data-dependent control-flow and irregular memory accesses. Furthermore, these applications are often iterative with dependencies between steps, making it hard to parallelize across steps. As a result, parallelism in these applications is often limited to a single step. Numerical simulation of charged particle beam dynamics is one such application, where the distribution of work and the memory access pattern at each time step are irregular. Applications with these properties tend to present significant branch and memory divergence, load imbalance between processor cores, and poor compute and memory utilization. Prior research on parallelizing such irregular applications has focused on optimizing the irregular, data-dependent memory accesses and control-flow during a single step of the application, independent of the other steps, with the assumption that these patterns are completely unpredictable. We observed that the structure of computation leading to control-flow divergence and irregular memory accesses in one step is similar to that in the next step. It is possible to predict this structure in the current step by observing the computation structure of previous steps.
In this dissertation, we present novel machine learning based optimization techniques to address
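The prediction idea the abstract describes, using the computation structure of previous steps to anticipate the next step's irregular work distribution, can be illustrated with a toy cost model. The function names, the exponential-moving-average predictor, and the greedy rebalancer below are illustrative assumptions, not the dissertation's actual techniques:

```python
def predict_costs(history, alpha=0.5):
    """Predict each task's next-step cost via an exponential moving average
    over its observed per-step costs (history: list of {task: cost} dicts)."""
    pred = {}
    for step in history:
        for task, cost in step.items():
            pred[task] = cost if task not in pred else alpha * cost + (1 - alpha) * pred[task]
    return pred

def rebalance(task_costs, n_workers):
    """Greedy longest-processing-time assignment: sort tasks by predicted
    cost (descending) and give each to the currently least-loaded worker."""
    loads = [0.0] * n_workers
    assignment = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        w = loads.index(min(loads))
        assignment[task] = w
        loads[w] += cost
    return assignment, loads
```

The point of the sketch is the feedback loop: because step-to-step structure is similar, costs observed in earlier steps are a usable signal for partitioning the next step, instead of treating the irregularity as unpredictable.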

  10. 76 FR 50997 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2011-08-17

    ... DEPARTMENT OF COMMERCE International Trade Administration Application(s) for Duty-Free Entry of..., School of Earth Sciences, 275 Mendenhall Laboratory, 125 South Oval Mall, Columbus, OH 43210. Instrument... and high-contrast images, a stage that is easy to move, a focus that does not change with changing...

  11. Management issues for high performance storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Louis, S. [Lawrence Livermore National Lab., CA (United States); Burris, R. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development, including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  12. High-performance vertical organic transistors.

    Science.gov (United States)

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood and VOTFTs often require complex patterning techniques using self-assembly processes which impedes a future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographical patterning directly and strongly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. A Linux Workstation for High Performance Graphics

    Science.gov (United States)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  14. Indoor Air Quality in High Performance Schools

    Science.gov (United States)

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  15. Energy Efficient Graphene Based High Performance Capacitors.

    Science.gov (United States)

    Bae, Joonwon; Kwon, Oh Seok; Lee, Chang-Soo

    2017-07-10

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research has been performed on the diverse properties of GRP. Incorporating this elegant material can be very lucrative for practical applications in energy storage/conversion systems. Among those systems, high-performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP in capacitors is described succinctly. In particular, a concise summary of previous research activities on GRP-based capacitors is also covered extensively. It was revealed that many secondary materials, such as polymers and metal oxides, have been introduced to improve performance. Diverse devices have also been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP-based capacitors are also introduced briefly. This article can provide essential information for future study. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  16. Carpet Aids Learning in High Performance Schools

    Science.gov (United States)

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  17. 76 FR 20953 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2011-04-14

    ... the U.S. Department of Commerce in Room 3720. Docket Number: 11-023. Applicant: UChicago Argonne, LLC... Ltd., Switzerland. Intended Use: The instrument will be used for resonant inelastic x-ray scattering...

  18. 77 FR 20360 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2012-04-04

    ... malfunctioning in diseases such as diabetes, cancer and heart disease, and understanding how the proteins are.... Docket Number: 12-012. Applicant: Alliance for Sustainable Energy, 1617 Cole Blvd. Golden, CO 80401-3305...

  19. 77 FR 32942 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2012-06-04

    ... Statutory Import Programs Staff, Room 3720, U.S. Department of Commerce, Washington, DC 20230. Applications..., College Station, TX 77843-3123. Instrument: Arc melting system. Manufacturer: Edmund Beuhler GmbH, Germany...

  20. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high-energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction.

  1. Simultaneous measurement of proguanil and its metabolites in human plasma and urine by reversed-phase high-performance liquid chromatography, and its preliminary application in relation to genetically determined S-mephenytoin 4'-hydroxylation status.

    Science.gov (United States)

    Kusaka, M; Setiabudy, R; Chiba, K; Ishizaki, T

    1996-02-01

    A simple high-performance liquid chromatographic (HPLC) assay method was developed for the measurement of proguanil (PG) and its major metabolites, cycloguanil (CG) and 4-chlorophenyl-biguanide (CPB), in human plasma and urine. The assay allowed the simultaneous determination of all analytes in 1 ml of plasma or 0.1 ml of urine. The detection limits of PG, CG, and CPB, defined at a signal-to-noise ratio of 3, were 1 and 5 ng/ml for plasma and urine samples, respectively. Recoveries of the analytes and the internal standard (pyrimethamine) were > 62% from plasma and > 77% from urine. Intra-assay and interassay coefficients of variation for all analytes in plasma and urine were below 10%, except for CG and CPB, which ranged from 10% to 15% at one or two of the 4-5 concentrations studied. The clinical applicability of the method was assessed in a preliminary pharmacokinetic study of PG, CG, and CPB in six healthy volunteers with individually known phenotypes (extensive and poor metabolizers) of S-mephenytoin 4'-hydroxylation, suggesting that individuals with the poor metabolizer phenotype of S-mephenytoin have a much lower capacity to bioactivate PG to CG than extensive metabolizers.
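The detection limits above follow the common S/N = 3 convention: the detection limit is the concentration whose signal equals three times the baseline noise. As a small sketch (function name and arguments are illustrative, not from the paper):

```python
def detection_limit(noise_sd, slope, snr=3.0):
    """Estimated limit of detection for a linear calibration.

    noise_sd: standard deviation of the blank/baseline signal
    slope:    calibration slope (signal per unit concentration)
    snr:      required signal-to-noise ratio (3 is the usual LOD convention)
    """
    return snr * noise_sd / slope
```

For example, halving the baseline noise or doubling the calibration slope each halves the achievable detection limit, which is why sample cleanup and detector choice dominate trace-level HPLC assays.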

  2. Eeonomer 200F®: A High-Performance Nanofiller for Polymer Reinforcement—Investigation of the Structure, Morphology and Dielectric Properties of Polyvinyl Alcohol/Eeonomer-200F® Nanocomposites for Embedded Capacitor Applications

    Science.gov (United States)

    Deshmukh, Kalim; Ahamed, M. Basheer; Deshmukh, Rajendra R.; Sadasivuni, Kishor Kumar; Ponnamma, Deepalekshmi; Pasha, S. K. Khadheer; AlMaadeed, Mariam Al-Ali; Polu, Anji Reddy; Chidambaram, K.

    2017-04-01

In the present study, Eeonomer 200F® was used as a high-performance nanofiller to prepare polyvinyl alcohol (PVA)-based nanocomposite films using a simple and eco-friendly solution casting technique. The prepared PVA/Eeonomer nanocomposite films were further investigated using various techniques including Fourier transform infrared spectroscopy, x-ray diffraction, thermogravimetric analysis, polarized optical microscopy, scanning electron microscopy and mechanical testing. The dielectric behavior of the nanocomposites was examined over a broad frequency range from 50 Hz to 20 MHz and temperatures ranging from 40°C to 150°C. A notable improvement in the thermal stability of the PVA was observed with the incorporation of Eeonomer. The nanocomposites also demonstrated improved mechanical properties due to the fine dispersion of the Eeonomer, and good compatibility and strong interaction between the Eeonomer and the PVA matrix. A significant improvement was observed in the dielectric properties of the PVA upon the addition of Eeonomer. The nanocomposites containing 5 wt.% Eeonomer exhibited a dielectric constant of about 222.65 (50 Hz, 150°C), about 18 times the dielectric constant (12.33) of the neat PVA film under the same experimental conditions. These results thus indicate that PVA/Eeonomer nanocomposites can be used as a flexible high-k dielectric material for embedded capacitor applications.
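For reference, the dielectric constant of a film sample is normally extracted from its measured capacitance via the parallel-plate relation eps_r = C·d / (eps0·A); a sketch with a hypothetical sample geometry (the paper's measurement details are not reproduced here):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relative_permittivity(capacitance_f: float, thickness_m: float, area_m2: float) -> float:
    """Dielectric constant of a parallel-plate film sample: eps_r = C*d / (eps0*A)."""
    return capacitance_f * thickness_m / (EPS0 * area_m2)

# Hypothetical film: 100 um thick, 1 cm^2 electrodes, 197 pF measured capacitance.
print(relative_permittivity(197e-12, 100e-6, 1e-4))

# The ~18x enhancement quoted above is just the ratio of the two measured constants:
print(round(222.65 / 12.33, 1))  # 18.1
```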

  3. State of the art of parallel scientific visualization applications on PC clusters

    International Nuclear Information System (INIS)

    Juliachs, M.

    2004-01-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform consisting of an eight-node PC cluster under Linux and a tiled display was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  4. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  5. High performance MEAs. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-15

The aim of the present project is, through modeling, material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. This project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials to be used in PEMFC, as well as the computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve fundamental understanding of the multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence will also result in a reduced catalyst loading to achieve the same performance. The consortium has obtained significant research results and progress for new catalyst materials and substrates with promising enhanced performance, and has fabricated the materials using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, demonstrated for LT-PEM, DMFC and HT-PEM applications. The novel approaches and the progress of the modelling activities have been extremely satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  6. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany, a forum to discuss the latest advances in parallel tools.

  7. The Realist Paradigm Of Energy Diplomacy In The Russian Scientific Tradition And Its Practical Applicability

    Directory of Open Access Journals (Sweden)

    R. О. Reinhardt

    2018-01-01

Full Text Available Nowadays energy diplomacy tends to be one of the most relevant and important fields of applied research in International Relations. It is characterized by an interdisciplinary approach, being an intersection of political and economic theory, international law, energetics, the theory of diplomacy, and other fields. Still, numerous research works in the given area, both in Russia and abroad, are characterized by a number of controversies, such as the absence of a common theoretical and methodological basis and conventional terminology, as well as a lack of consistency in the choice of scientific paradigms, which leads to divergence of research results and hinders their comparability. Along with that, in terms of scientific policy it is worth mentioning the absence of a common scientific space in the above field of research, which tends to be shaped by national research cultures and traditions. Throughout the 2000s-2010s, representatives of the MGIMO scientific school have accumulated experience in dealing with problems of energy diplomacy. However, most of the existing works do not specify the selected political theory paradigms, such as, for instance, realism, liberalism or constructivism. With no intention to conduct a comparative analysis of the aforementioned concepts, the authors of the article outline the key theoretical findings of political realism as the most suitable paradigm for explaining, analyzing and eventually forecasting recent trends and phenomena given the current geopolitical and economic juncture. They prove the applicability of the proposed model to the OPEC case study and demonstrate its potential practical usefulness for policy-makers in foreign affairs and international energy relations.

  8. Final Report for 'Center for Technology for Advanced Scientific Component Software'

    International Nuclear Information System (INIS)

    Shasharina, Svetlana

    2010-01-01

The goal of the Center for Technology for Advanced Scientific Component Software (TASCS) is to fundamentally change the way scientific software is developed and used by bringing component-based software development technologies to high-performance scientific and engineering computing. The role of Tech-X's work in the TASCS project is to provide outreach to accelerator physics and fusion applications by introducing TASCS tools into applications, testing the tools in those applications, and modifying the tools to be more usable.

  9. 76 FR 48803 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2011-08-09

    .... Manufacturer: FEI Company, The Netherlands. Intended Use: The instrument will be used for NIH-funded basic... Applied Life Sciences, Austria. Intended Use: The instrument is a highly specialized system for studying a wide range of materials used in very high cycle, high temperature applications, such as light metals...

  10. 78 FR 20614 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2013-04-05

    ... compositions of electronic materials, advanced ceramics for medical applications, advanced Ni-based Superalloys... will be used to help understand how the human body functions normally, such as in learning, memory or... normal functional changes in cells of living organisms such as nerve cells or neurons of the brain, as...

  11. 75 FR 3895 - Application(s) for Duty-Free Entry of Scientific Instruments

    Science.gov (United States)

    2010-01-25

    ... ultrastructurally the plasticity of the brain and auditory pathway, in particular, different models of hearing loss... by Commissioner of Customs: December 28, 2009. Docket Number: 09-070. Applicant: Haverford College..., 2009. Dated: January 19, 2010. Christopher Cassel, Director, IA Subsidies Enforcement Office. [FR Doc...

  12. ACCTuner: OpenACC Auto-Tuner For Accelerated Scientific Applications

    KAUST Repository

    Alzayer, Fatemah

    2015-05-17

We optimize parameters in OpenACC clauses for a stencil evaluation kernel executed on Graphical Processing Units (GPUs) using a variety of machine learning and optimization search algorithms, individually and in hybrid combinations, and compare execution time performance to the best possible obtained from brute force search. Several auto-tuning techniques – historic learning, random walk, simulated annealing, Nelder-Mead, and genetic algorithms – are evaluated over a large two-dimensional parameter space not satisfactorily addressed to date by OpenACC compilers, consisting of gang size and vector length. A hybrid of historic learning and Nelder-Mead delivers the best balance of high performance and low tuning effort. GPUs are employed over an increasing range of applications due to the performance available from their large number of cores, as well as their energy efficiency. However, writing code that takes advantage of their massive fine-grained parallelism requires deep knowledge of the hardware, and is generally a complex task involving program transformation and the selection of many parameters. To improve programmer productivity, the directive-based programming model OpenACC was announced as an industry standard in 2011. Various compilers have been developed to support this model, the most notable being those by Cray, CAPS, and PGI. While the architecture and number of cores have evolved rapidly, the compilers have failed to keep up at configuring the parallel program to run most efficiently on the hardware. Following successful approaches to obtain high performance in kernels for cache-based processors using auto-tuning, we approach this compiler-hardware gap in GPUs by employing auto-tuning for the key parameters "gang" and "vector" in OpenACC clauses. We demonstrate results for a stencil evaluation kernel typical of seismic imaging over a variety of realistically sized three-dimensional grid configurations, with different truncation error orders.
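The search techniques listed above can be illustrated with a toy version of the tuning loop: a greedy hill-climb with random restarts over a discrete (gang, vector) grid, using a mock cost model in place of real kernel timings (illustrative only, not the ACCTuner implementation):

```python
import random

def autotune(time_of, gangs, vectors, seed=0, restarts=5):
    """Greedy hill-climb with random restarts over the discrete (gang, vector)
    grid; `time_of(g, v)` plays the role of a measured kernel execution time."""
    rng = random.Random(seed)
    best_cfg, best_t = None, float("inf")
    for _ in range(restarts):
        gi, vi = rng.randrange(len(gangs)), rng.randrange(len(vectors))
        while True:
            # current point plus its axis neighbours on the grid
            cand = [(gi, vi)]
            for dg, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = gi + dg, vi + dv
                if 0 <= ni < len(gangs) and 0 <= nj < len(vectors):
                    cand.append((ni, nj))
            ni, nj = min(cand, key=lambda p: time_of(gangs[p[0]], vectors[p[1]]))
            if (ni, nj) == (gi, vi):
                break                          # local optimum reached
            gi, vi = ni, nj
        t = time_of(gangs[gi], vectors[vi])
        if t < best_t:
            best_cfg, best_t = (gangs[gi], vectors[vi]), t
    return best_cfg

# Mock cost model with a single best point at gang=256, vector=128 (illustrative).
gangs = [32, 64, 128, 256, 512]
vectors = [32, 64, 128, 256]
mock = lambda g, v: abs(g - 256) + abs(v - 128)
print(autotune(mock, gangs, vectors))  # (256, 128)
```

On real hardware, `time_of` would launch the OpenACC kernel with `num_gangs`/`vector_length` set to the candidate values and return the measured wall time.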

  13. Scientific basis of development and application of nanotechnologies in oil industry

    International Nuclear Information System (INIS)

    Mirzajanzadeh, A.; Maharramov, A.; Abdullayev, R.; Yuzifzadeh, Kh.; Shahbazov, E.; Qurbanov, R.; Akhmadov, S.; Kazimov, E; Ramazanov, M.; Shafiyev, Sh.; Hajizadeh, N.

    2010-01-01

Development and introduction of nanotechnologies in the oil industry is one of the most pressing issues of the present time. For the first time in world practice, a scientific-methodological basis and application practice of nanotechnologies in the oil industry has been developed on the basis of a uniform, scientifically proven approach taking into account the specificities of the oil and gas industry. The application system of such nanotechnologies was developed in oil and gas production. Mathematical models of nanotechnological processes, i.e. "chaos regulation" and hyper-accidental processes, were offered. The nanomedium and nanoimpact on the "well-layer" system were studied. Wide application results of nanotechnologies in SOCAR's production fields in oil and gas production are shown. Research results of "NANOSAA" on the basis of the "NANO + NANO" effect are described in the development. For the first time in world practice, "NANOOIL", "NANOBITUMEN", "NANOGUDRON" and "NANOMAY" systems on the basis of machine waste oil in the drilling mud were developed for application in oil and gas drilling. An original property, the "effect of super small concentrations" and "nanomemory" in the "NANOOIL" and "NANOBITUMEN" systems, was discovered. By applying the "NANOOIL", "NANOBITUMEN" and "NANOMAY" systems in the drilling process, the following were observed: an increase of linear speed, early turbulence, a decrease of the hydraulic resistance coefficient, and economy in energy consumption. A hyper-accidental evaluation of the mathematical expectation of the general sum of the values of the surface strain on the sample data is spelled out for various experiment conditions. The estimated hyper-accidental value of the mathematical expectation allows us to offer practical recommendations for the development of new nanotechnologies on the basis of rheological parameters of oil.

  14. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  15. A model of "integrated scientific method" and its application for the analysis of instruction

    Science.gov (United States)

    Rusbult, Craig Francis

A model of 'integrated scientific method' (ISM) was constructed as a framework for describing the process of science in terms of activities (formulating a research problem, and inventing and evaluating actions--such as selecting and inventing theories, evaluating theories, designing experiments, and doing experiments--intended to solve the problem) and evaluation criteria (empirical, conceptual, and cultural-personal). Instead of trying to define the scientific method, ISM is intended to serve as a flexible framework that--by varying the characteristics of its components, their integrated relationships, and their relative importance--can be used to describe a variety of scientific methods, and a variety of perspectives about what constitutes an accurate portrayal of scientific methods. This framework is outlined visually and verbally, followed by an elaboration of the framework and my own views about science, and an evaluation of whether ISM can serve as a relatively neutral framework for describing a wide range of science practices and science interpretations. ISM was used to analyze an innovative, guided inquiry classroom (taught by Susan Johnson, using Genetics Construction Kit software) in which students do simulated scientific research by solving classical genetics problems that require effect-to-cause reasoning and theory revision. The immediate goal of analysis was to examine the 'science experiences' of students, to determine how the 'structure of instruction' provides opportunities for these experiences. Another goal was to test and improve the descriptive and analytical utility of ISM. In developing ISM, a major objective was to make ISM educationally useful. A concluding discussion includes controversies about "the nature of science" and how to teach it, how instruction can expand opportunities for student experience, and how goal-oriented intentional learning (using ISM) might improve the learning, retention, and transfer of thinking skills. Potential

  16. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applications.

  17. High-Performance Liquid Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Vestal, Marvin L.

    1984-01-01

    Reviews techniques for online coupling of high-performance liquid chromatography with mass spectrometry, emphasizing those suitable for application to nonvolatile samples. Also summarizes the present status, strengths, and weaknesses of various techniques and discusses potential applications of recently developed techniques for combined liquid…

  18. An integrated high performance Fastbus slave interface

    International Nuclear Information System (INIS)

    Christiansen, J.; Ljuslin, C.

    1993-01-01

A high-performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960-1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock-synchronous processor/memory bus. It can work stand-alone or together with a 32-bit microprocessor. The FASIC is a programmable device, enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy-back sub-card interface, including level conversion between ECL and TTL signal levels, has been implemented using surface-mount components and the 208-pin FASIC chip.
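The programmable address-mapping windows described above amount to a base/size compare followed by an offset translation. A behavioural sketch in Python (the window tuple layout here is hypothetical, not the FASIC register map):

```python
def map_fastbus_address(addr, windows):
    """Decode a 32-bit Fastbus address against programmable mapping windows.
    Each window is (base, size, local_base); the first hit wins and the
    address is translated into the local processor/memory bus space.
    Returns None when no window matches (the slave does not respond)."""
    for base, size, local_base in windows:
        if base <= addr < base + size:
            return local_base + (addr - base)
    return None

# Hypothetical setup: one 64 KB data window and one 4 KB CSR window.
windows = [(0x1000_0000, 0x1_0000, 0x0000_0000),
           (0x2000_0000, 0x1000, 0x8000_0000)]
print(hex(map_fastbus_address(0x1000_0040, windows)))  # 0x40
```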

  19. High Performance Graphene Oxide Based Rubber Composites

    Science.gov (United States)

    Mao, Yingyan; Wen, Shipeng; Chen, Yulong; Zhang, Fazhong; Panine, Pierre; Chan, Tung W.; Zhang, Liqun; Liang, Yongri; Liu, Li

    2013-01-01

In this paper, graphene oxide/styrene-butadiene rubber (GO/SBR) composites with complete exfoliation of GO sheets were prepared by aqueous-phase mixing of GO colloid with SBR latex and a small loading of butadiene-styrene-vinyl-pyridine rubber (VPR) latex, followed by their co-coagulation. During co-coagulation, VPR not only plays a key role in the prevention of aggregation of GO sheets but also acts as an interface-bridge between GO and SBR. The results demonstrated that the mechanical properties of the GO/SBR composite with 2.0 vol.% GO are comparable with those of the SBR composite reinforced with 13.1 vol.% of carbon black (CB), with a low mass density and a good gas barrier ability to boot. The present work also showed that the GO-silica/SBR composite exhibited outstanding wear resistance and low rolling resistance, which make GO-silica/SBR very competitive for the green tire application, opening up enormous opportunities to prepare high performance rubber composites for future engineering applications. PMID:23974435

  20. The Application of Ultra-High-Performance Liquid Chromatography Coupled with a LTQ-Orbitrap Mass Technique to Reveal the Dynamic Accumulation of Secondary Metabolites in Licorice under ABA Stress.

    Science.gov (United States)

    Li, Da; Xu, Guojie; Ren, Guangxi; Sun, Yufeng; Huang, Ying; Liu, Chunsheng

    2017-10-20

    The traditional medicine licorice is the most widely consumed herbal product in the world. Although much research work on studying the changes in the active compounds of licorice has been reported, there are still many areas, such as the dynamic accumulation of secondary metabolites in licorice, that need to be further studied. In this study, the secondary metabolites from licorice under two different methods of stress were investigated by ultra-high-performance liquid chromatography coupled with hybrid linear ion trap-Orbitrap mass spectrometry (UHPLC-LTQ-Orbitrap-MS). A complex continuous coordination of flavonoids and triterpenoids in a network was modulated by different methods of stress during growth. The results showed that a total of 51 secondary metabolites were identified in licorice under ABA stress. The partial least squares-discriminate analysis (PLS-DA) revealed the distinction of obvious compounds among stress-specific districts relative to ABA stress. The targeted results showed that there were significant differences in the accumulation patterns of the deeply targeted 41 flavonoids and 10 triterpenoids compounds by PCA and PLS-DA analyses. To survey the effects of flavonoid and triterpenoid metabolism under ABA stress, we inspected the stress-specific metabolic changes. Our study testified that the majority of flavonoids and triterpenoids were elevated in licorice under ABA stress, while the signature metabolite affecting the dynamic accumulation of secondary metabolites was detected. Taken together, our results suggest that ABA-specific metabolite profiling dynamically changed in terms of the biosynthesis of flavonoids and triterpenoids, which may offer new trains of thought on the regular pattern of dynamic accumulation of secondary metabolites in licorice at the metabolite level. Our results also provide a reference for clinical applications and directional planting and licorice breeding.

  1. [Investigation of concentration levels of chromium(VI) in bottled mineral and spring waters by high performance ion chromatography technique with application of postcolumn reaction with 1,5-diphenylcarbazide and VIS detection].

    Science.gov (United States)

    Swiecicka, Dorota; Garboś, Sławomir

    2008-01-01

The aim of this work was optimization and validation of a method for the determination of Cr(VI), existing in the form of chromate(VI), in mineral and spring waters by the High Performance Ion Chromatography (HPIC) technique with application of a postcolumn reaction with 1,5-diphenylcarbazide and VIS detection. Optimization of the method, performed starting from the apparatus parameters and chromatographic conditions of Method 218.6, allowed the detection limit for Cr(VI) to be lowered from 400 ng/l to 2 ng/l. Thanks to the very low detection limit achieved, it was possible to determine Cr(VI) concentrations in 25 mineral and spring waters present on the Polish market. In four of the mineral and spring waters analyzed, the determined Cr(VI) concentrations were below the quantification limit; in the remaining waters the concentrations of chromium(VI) were determined in the range of 5.6-1281 ng/l. The existence of different Cr(VI) concentrations in the investigated waters could be connected with secondary contamination of the mineral and spring waters by chromium coming from metal installations and fittings. It should be underlined that even the highest determined concentration of chromium(VI) was below the maximum admissible concentration of total chromium given in the Polish Decree of the Minister of Health of April 29th, 2004. Therefore, taking into account the Cr(VI) concentrations determined in this work, the consumption of all the waters analyzed in this study does not pose an essential human health risk.

  2. The Application of Ultra-High-Performance Liquid Chromatography Coupled with a LTQ-Orbitrap Mass Technique to Reveal the Dynamic Accumulation of Secondary Metabolites in Licorice under ABA Stress

    Directory of Open Access Journals (Sweden)

    Da Li

    2017-10-01

Full Text Available The traditional medicine licorice is the most widely consumed herbal product in the world. Although much research work on studying the changes in the active compounds of licorice has been reported, there are still many areas, such as the dynamic accumulation of secondary metabolites in licorice, that need to be further studied. In this study, the secondary metabolites from licorice under two different methods of stress were investigated by ultra-high-performance liquid chromatography coupled with hybrid linear ion trap–Orbitrap mass spectrometry (UHPLC-LTQ-Orbitrap-MS). A complex continuous coordination of flavonoids and triterpenoids in a network was modulated by different methods of stress during growth. The results showed that a total of 51 secondary metabolites were identified in licorice under ABA stress. The partial least squares–discriminate analysis (PLS-DA) revealed the distinction of obvious compounds among stress-specific districts relative to ABA stress. The targeted results showed that there were significant differences in the accumulation patterns of the deeply targeted 41 flavonoids and 10 triterpenoids compounds by PCA and PLS-DA analyses. To survey the effects of flavonoid and triterpenoid metabolism under ABA stress, we inspected the stress-specific metabolic changes. Our study testified that the majority of flavonoids and triterpenoids were elevated in licorice under ABA stress, while the signature metabolite affecting the dynamic accumulation of secondary metabolites was detected. Taken together, our results suggest that ABA-specific metabolite profiling dynamically changed in terms of the biosynthesis of flavonoids and triterpenoids, which may offer new trains of thought on the regular pattern of dynamic accumulation of secondary metabolites in licorice at the metabolite level. Our results also provide a reference for clinical applications and directional planting and licorice breeding.

  3. Withholding answers during hands-on scientific investigations? Comparing effects on developing students' scientific knowledge, reasoning, and application

    Science.gov (United States)

    Zhang, Lin

    2018-03-01

As more concerns have been raised about withholding answers during science teaching, this article argues for a need to detach 'withholding answers' from 'hands-on' investigation tasks. The present study examined students' learning of light-related content through three conditions: 'hands-on' + no 'withholding' (hands-on only: HO), 'hands-on' + 'withholding' (hands-on investigation with answers withheld: HOW), and no 'hands-on' + no 'withholding' (direct instruction: DI). Students were assessed in terms of how well they (1) knew the content taught in class; (2) reasoned with the learned content; and (3) applied the learned content to real-life situations. Nine classes of students at 4th and 5th grades, N = 136 in total, were randomly assigned to one of the three conditions. ANCOVA results showed that students in the hands-on only condition reasoned significantly better than those in the other two conditions. Students in this condition also seemed to know the content somewhat better, although the advantage was not significant. Students in all three conditions did not show a statistically significant difference in their ability to apply the learned content to real-life situations. The findings from this study provide important contributions regarding issues relating to withholding answers during guided scientific inquiry.

  4. High performance carbon nanocomposites for ultracapacitors

    Science.gov (United States)

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  5. Engineering and Scientific Applications: Using MatLab(Registered Trademark) for Data Processing and Visualization

    Science.gov (United States)

    Sen, Syamal K.; Shaykhian, Gholam Ali

    2011-01-01

MatLab(TradeMark)(MATrix LABoratory) is a numerical computation and simulation tool used by thousands of scientists and engineers in many countries. MatLab performs purely numerical calculations and can be used as a glorified calculator or as an interpreted programming language; its real strength is in matrix manipulations. Computer algebra functionality is achieved within the MatLab environment using the "symbolic" toolbox. This feature is similar to the computer algebra programs provided by Maple or Mathematica, which calculate with mathematical equations using symbolic operations. In its interpreted programming language form (command interface), MatLab is similar to well-known programming languages such as C/C++, and supports data structures and cell arrays to define classes in object-oriented programming. As such, MatLab is equipped with most of the essential constructs of a higher-level programming language. MatLab is packaged with an editor and debugging functionality useful for analyzing large MatLab programs and finding errors. We believe there are many ways to approach real-world problems; prescribed methods that ensure the foregoing solutions are incorporated in the design and analysis of data processing and visualization can benefit engineers and scientists in gaining wider insight into the actual implementation of their respective experiments. This presentation will focus on the data processing and visualization aspects of engineering and scientific applications. Specifically, it will discuss methods and techniques for intermediate-level data processing covering engineering and scientific problems. MatLab programming techniques will be discussed, including reading various data file formats, producing customized publication-quality graphics, importing engineering and/or scientific data, organizing data in tabular format, exporting data for use by other software programs such as Microsoft Excel, and data presentation and visualization.
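The workflow the presentation outlines (read a data file, organize it as a table, summarize, export in an Excel-readable format) can be sketched compactly; shown here in Python rather than MatLab so the example is self-contained and runnable (the file contents are made up):

```python
import csv
import io
import statistics

# A small in-memory stand-in for an instrument data file (illustrative values).
raw = "time_s,temp_C\n0,20.1\n1,20.8\n2,21.4\n"

rows = list(csv.DictReader(io.StringIO(raw)))          # organize as a table
temps = [float(r["temp_C"]) for r in rows]
summary = {"n": len(temps), "mean": round(statistics.mean(temps), 2)}

out = io.StringIO()                                    # export (Excel-readable CSV)
writer = csv.writer(out)
writer.writerow(["n", "mean_temp_C"])
writer.writerow([summary["n"], summary["mean"]])
print(summary)  # {'n': 3, 'mean': 20.77}
```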

  6. Brazilian academic search filter: application to the scientific literature on physical activity.

    Science.gov (United States)

    Sanz-Valero, Javier; Ferreira, Marcos Santos; Castiel, Luis David; Wanden-Berghe, Carmina; Guilam, Maria Cristina Rodrigues

    2010-10-01

To develop a search filter to retrieve scientific publications on physical activity from Brazilian academic institutions. The academic search filter consisted of the descriptor "exercise" combined, through the operator AND, with the names of the respective academic institutions, which were themselves connected by the operator OR. The MEDLINE search was performed with PubMed on 11/16/2008. The institutions were selected according to the classification of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for interuniversity agreements. A total of 407 references were retrieved, corresponding to about 0.9% of all articles about physical activity and 0.5% of the Brazilian academic publications indexed in MEDLINE on the search date. When compared with a manual search, the search filter (descriptor + institutional filter) showed a sensitivity of 99% and a specificity of 100%. The institutional search filter showed high sensitivity and specificity and is applicable to other areas of knowledge in the health sciences. It is desirable that every Brazilian academic institution establish its "standard name/brand" in order to efficiently retrieve its scientific literature.
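
The boolean construction described (a descriptor ANDed with an OR-list of institution names) can be sketched as follows; the institution names and the `[Affiliation]` field tag are illustrative assumptions, not the study's actual list:

```python
# Build a PubMed-style query string: descriptor AND (inst1 OR inst2 OR ...).
# The institutions below are examples, not the study's full list.
descriptor = "exercise"
institutions = [
    "Universidade de Sao Paulo",
    "Universidade Federal do Rio de Janeiro",
    "Universidade Estadual de Campinas",
]

institution_filter = " OR ".join(f'"{name}"[Affiliation]' for name in institutions)
query = f"{descriptor} AND ({institution_filter})"
print(query)
```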

  7. Historic Learning Approach for Auto-tuning OpenACC Accelerated Scientific Applications

    KAUST Repository

    Siddiqui, Shahzeb

    2015-04-17

The performance optimization of scientific applications usually requires in-depth knowledge of the hardware and software. A performance tuning mechanism is suggested to automatically tune OpenACC parameters to adapt to the execution environment of a given system. A historic-learning-based methodology is suggested to prune the parameter search space for a more efficient auto-tuning process. This approach is applied to tune the OpenACC gang and vector clauses for a better mapping of the compute kernels onto the underlying architecture. Our experiments show a significant performance improvement over the default compiler parameters and a drastic reduction in tuning time compared to a brute-force search-based approach.
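
The pruning idea can be illustrated with a small sketch; the parameter values, the similarity rule and the history below are invented assumptions, not the paper's actual mechanics:

```python
# Sketch of historic-learning pruning: keep only gang/vector candidates
# close to configurations that performed well on previously tuned kernels.
from itertools import product

gangs = [64, 128, 256, 512, 1024]
vectors = [32, 64, 128, 256]
full_space = list(product(gangs, vectors))

# Hypothetical history: best (gang, vector) pairs from earlier runs.
history_best = [(256, 128), (512, 128)]

def near(candidate, best, factor=2):
    """A candidate survives if each parameter lies within a factor of
    some historically good value (an assumed similarity rule)."""
    g, v = candidate
    return any(bg / factor <= g <= bg * factor and
               bv / factor <= v <= bv * factor
               for bg, bv in best)

pruned = [c for c in full_space if near(c, history_best)]
print(len(full_space), "->", len(pruned))
```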

  8. The graphics future in scientific applications-trends and developments in computer graphics

    CERN Document Server

    Enderle, G

    1982-01-01

Computer graphics methods and tools are used to a great extent in scientific research. Future development in this area will be influenced both by new hardware developments and by software advances. In the hardware sector, the development of raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D workstations will appear on the marketplace. One of the main activities in the software sector is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education.

  9. New developments in laser-based photoemission spectroscopy and its scientific applications: a key issues review

    Science.gov (United States)

    Zhou, Xingjiang; He, Shaolong; Liu, Guodong; Zhao, Lin; Yu, Li; Zhang, Wentao

    2018-06-01

The significant progress in angle-resolved photoemission spectroscopy (ARPES) in the last three decades has elevated it from a traditional band-mapping tool to a precise probe of many-body interactions and the dynamics of quasiparticles in complex quantum systems. The recent developments of deep ultraviolet (DUV, including ultraviolet and vacuum ultraviolet) laser-based ARPES have further pushed this technique to a new level. In this paper, we review some of the latest developments in DUV laser-based photoemission systems, including super-high energy and momentum resolution ARPES, spin-resolved ARPES, time-of-flight ARPES, and time-resolved ARPES. We also highlight some scientific applications of these state-of-the-art DUV laser-based ARPES systems in the study of the electronic structure of unconventional superconductors and topological materials. Finally, we provide our perspectives on future directions in the development of laser-based photoemission systems.

  10. Mono or 3D video production for scientific dissemination of nuclear energy applications

    International Nuclear Information System (INIS)

    Freitas, Victor Goncalves G.; Mol, Antonio Carlos A.; Biermann, Bruna; Jorge, Carlos Alexandre F.; Araujo, Tawein

    2011-01-01

This work presents results of the development of educational videos, mono or stereo, for scientific dissemination of nuclear energy applications. Nuclear energy spans many important applications for society, ranging from electrical power generation to nuclear medicine, among others. Thus, the purpose is to disseminate this information to the general public and especially to students. Educational videos are a good approach for this purpose, because they involve the public more than text, oral exposition, or even the presentation of static images. Stereo videos involve the public even more and add immersion, the latter due to the realism that 3D views provide. The video developed in this work explains electrical power generation, including nuclear reactor operation, shows the percentage of nuclear sources in power generation all over the world, and also explains nuclear energy applications in medicine. It is expected that all these characteristics provided by the use of video and virtual reality techniques will achieve the purpose of disseminating such important information regarding the benefits of nuclear energy to society. (author)

  11. Mono or 3D video production for scientific dissemination of nuclear energy applications

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, Victor Goncalves G.; Mol, Antonio Carlos A.; Biermann, Bruna; Jorge, Carlos Alexandre F., E-mail: mol@ien.gov.b, E-mail: vgoncalves@ien.gov.b, E-mail: calexandre@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Araujo, Tawein [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Escola de Belas Artes; Legey, Ana Paula [Universidade Gama Filho (UGF), Rio de Janeiro, RJ (Brazil)

    2011-07-01

This work presents results of the development of educational videos, mono or stereo, for scientific dissemination of nuclear energy applications. Nuclear energy spans many important applications for society, ranging from electrical power generation to nuclear medicine, among others. Thus, the purpose is to disseminate this information to the general public and especially to students. Educational videos are a good approach for this purpose, because they involve the public more than text, oral exposition, or even the presentation of static images. Stereo videos involve the public even more and add immersion, the latter due to the realism that 3D views provide. The video developed in this work explains electrical power generation, including nuclear reactor operation, shows the percentage of nuclear sources in power generation all over the world, and also explains nuclear energy applications in medicine. It is expected that all these characteristics provided by the use of video and virtual reality techniques will achieve the purpose of disseminating such important information regarding the benefits of nuclear energy to society. (author)

  12. Automatic recognition of conceptualization zones in scientific articles and two life science applications.

    Science.gov (United States)

    Liakata, Maria; Saha, Shyamasree; Dobnik, Simon; Batchelor, Colin; Rebholz-Schuhmann, Dietrich

    2012-04-01

Scholarly biomedical publications report on the findings of a research investigation. Scientists use a well-established discourse structure to relate their work to the state of the art, express their own motivation and hypotheses and report on their methods, results and conclusions. In previous work, we have proposed ways to explicitly annotate the structure of scientific investigations in scholarly publications. Here we present the means to facilitate automatic access to the scientific discourse of articles by automating the recognition of 11 categories at the sentence level, which we call Core Scientific Concepts (CoreSCs). These include: Hypothesis, Motivation, Goal, Object, Background, Method, Experiment, Model, Observation, Result and Conclusion. CoreSCs provide the structure and context for all statements and relations within an article, and their automatic recognition can greatly facilitate biomedical information extraction by characterizing the different types of facts, hypotheses and evidence available in a scientific publication. We have trained and compared machine learning classifiers (support vector machines and conditional random fields) on a corpus of 265 full articles in biochemistry and chemistry to automatically recognize CoreSCs. We have evaluated our automatic classifications against a manually annotated gold standard, and have achieved promising accuracies, with 'Experiment', 'Background' and 'Model' being the categories with the highest F1-scores (76%, 62% and 53%, respectively). We have analysed the task of CoreSC annotation both from a sentence-classification and a sequence-labelling perspective, and we present a detailed feature evaluation. The most discriminative features are local sentence features such as unigrams, bigrams and grammatical dependencies, while features encoding the document structure, such as section headings, also play an important role for some of the categories.
We discuss the usefulness of automatically generated Core
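
As a minimal illustration of the local sentence features reported as most discriminative (unigrams and bigrams), the sketch below extracts them from an invented sentence; it is not the study's actual pipeline, which trained SVM and CRF classifiers on such features:

```python
# Extract the unigram and bigram features of a sentence, the kind of
# local features fed to a sentence-level classifier. Sentence is invented.
def sentence_features(sentence):
    tokens = sentence.lower().split()
    unigrams = set(tokens)
    bigrams = {f"{a}_{b}" for a, b in zip(tokens, tokens[1:])}
    return unigrams | bigrams

feats = sentence_features("We measured the reaction rate")
print(sorted(feats))
```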

  13. Delivering high performance BWR fuel reliably

    International Nuclear Information System (INIS)

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  14. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    International Nuclear Information System (INIS)

    Kneringer, G.; Roedhammer, P.; Wildner, H.

    2001-01-01

The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. The ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy are documented there in more than 2000 contributions covering some 30,000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high-temperature applications. (author)

  15. Powder metallurgical high performance materials. Proceedings. Volume 1: high performance P/M metals

    Energy Technology Data Exchange (ETDEWEB)

Kneringer, G; Roedhammer, P; Wildner, H [eds.]

    2001-07-01

The proceedings of this sequence of seminars form an impressive chronicle of the continued progress in the understanding of refractory metals and cemented carbides and in their manufacture and application. The ingenuity and assiduous work of thousands of scientists and engineers striving for progress in the field of powder metallurgy are documented there in more than 2000 contributions covering some 30,000 pages. The 15th Plansee Seminar was convened under the general theme 'Powder Metallurgical High Performance Materials'. Under this broadened perspective the seminar will strive to look beyond the refractory metals and cemented carbides, which remain at its focus, to novel classes of materials, such as intermetallic compounds, with potential for high-temperature applications. (author)

  16. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  17. The StratusLab cloud distribution: Use-cases and support for scientific applications

    Science.gov (United States)

    Floros, E.

    2012-04-01

The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely Computing (life-cycle management of virtual machines), Storage, Appliance management and Networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. In this direction, we have developed and currently provide support for setting up general-purpose computing solutions like Hadoop, MPI and Torque clusters. As concerns scientific applications, the project is collaborating closely with the bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. 
In a similar manner additional scientific disciplines like Earth Science can take

  18. A high performance architecture for accelerator controls

    International Nuclear Information System (INIS)

    Allen, M.; Hunt, S.M; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-01-01

    The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of < 100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost

  19. A high performance architecture for accelerator controls

    International Nuclear Information System (INIS)

    Allen, M.; Hunt, S.M.; Lue, H.; Saltmarsh, C.G.; Parker, C.R.C.B.

    1991-03-01

The demands placed on the Superconducting Super Collider (SSC) control system due to large distances, high bandwidth and fast response time required for operation will require a fresh approach to the data communications architecture of the accelerator. The prototype design effort aims at providing deterministic communication across the accelerator complex with a response time of <100 ms and total bandwidth of 2 Gbits/sec. It will offer a consistent interface for a large number of equipment types, from vacuum pumps to beam position monitors, providing appropriate communications performance for each equipment type. It will consist of highly parallel links to all equipment: those with computing resources, non-intelligent direct control interfaces, and data concentrators. This system will give each piece of equipment a dedicated link of fixed bandwidth to the control system. Application programs will have access to all accelerator devices, which will be memory mapped into a global virtual addressing scheme. Links to devices in the same geographical area will be multiplexed using commercial Time Division Multiplexing equipment. Low-level access will use reflective memory techniques, eliminating the processing overhead and complexity of traditional data communication protocols. The use of commercial standards and equipment will enable a high performance system to be built at low cost. 1 fig

  20. Monitoring of IaaS and scientific applications on the Cloud using the Elasticsearch ecosystem

    Science.gov (United States)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-05-01

The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at the LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Besides keeping track of the usage, the automation of dynamic allocation of resources to tenants requires detailed monitoring and accounting of resource usage. As a first investigation towards this, we set up a monitoring system to inspect the site activities both in terms of IaaS and of applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through an ad hoc RESTful web service, which is also used for other accounting purposes. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now starting to consider dismissing the intermediate level provided by the SQL database and are evaluating a NoSQL option as a unique central database for all the monitoring information. We set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. 
In this way we have achieved a uniform monitoring
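
The step of feeding heterogeneous accounting records into Elasticsearch can be illustrated offline by building a bulk-API request body; the index names and fields below are hypothetical, not the site's actual schema:

```python
import json

# Each source (IaaS metering, analysis facility, Zabbix) is indexed
# separately; here, one record per hypothetical index.
records = [
    ("iaas-accounting", {"vm_id": "one-42", "cpu_hours": 3.5}),
    ("proof-accounting", {"user": "alice", "query_time_s": 12.4}),
]

def bulk_body(records):
    """Build an Elasticsearch bulk-API body: an action line followed by
    the document, one pair per record, newline-delimited."""
    lines = []
    for index, doc in records:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

print(bulk_body(records))
```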

  1. High performance liquid chromatography in pharmaceutical analyses

    Directory of Open Access Journals (Sweden)

    Branko Nikolin

    2004-05-01

In the testing of pre-sale procedures, the marketing of drugs and their control over the last ten years, high performance liquid chromatography has replaced numerous spectroscopic methods and gas chromatography in quantitative and qualitative analysis. In the first period of HPLC application it was thought that it would become a complementary method to gas chromatography; today, however, it has nearly completely replaced gas chromatography in pharmaceutical analysis. The application of a liquid mobile phase, with the possibility of modifying its polarity during chromatography and of all other modifications of the mobile phase depending upon the characteristics of the substance being tested, is a great advantage in the separation process in comparison to other methods. The wide choice of stationary phases is another factor that enables good separation. The separation column is connected to specific and sensitive detector systems (spectrofluorimeter, diode-array detector, electrochemical detector) and other hyphenated systems such as HPLC-MS and HPLC-NMR; these are the basic elements on which the wide and effective application of the HPLC method is based. The purpose of high performance liquid chromatography (HPLC) analysis of any drug is to confirm the identity of the drug and provide quantitative results, and also to monitor the progress of therapy of a disease.1 The measurement presented in Fig. 1 is a chromatogram obtained for the plasma of depressed patients 12 h before oral administration of dexamethasone. HPLC may also be used to further our understanding of normal and disease processes in the human body through biomedical and therapeutic research during investigations before drug registration. The analysis of drugs and metabolites in biological fluids, particularly plasma, serum or urine, is one of the most demanding but also one of the most common uses of high performance liquid chromatography. Blood, plasma or
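
Quantitative HPLC results of the kind mentioned above are commonly obtained via an external-standard calibration curve relating peak area to concentration; the following sketch (with invented numbers) shows the least-squares fit and its inversion for an unknown sample:

```python
# External-standard calibration: fit area = slope*conc + intercept from
# standards, then invert for an unknown peak area. Numbers are invented.
standards = [(1.0, 152.0), (2.0, 305.0), (5.0, 760.0)]  # (conc ug/mL, peak area)

n = len(standards)
sx = sum(c for c, _ in standards)
sy = sum(a for _, a in standards)
sxx = sum(c * c for c, _ in standards)
sxy = sum(c * a for c, a in standards)

# Ordinary least-squares line through the calibration points.
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Invert the calibration for an unknown sample's peak area.
unknown_area = 450.0
conc = (unknown_area - intercept) / slope
print(f"estimated concentration: {conc:.2f} ug/mL")
```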

  2. SISYPHUS: A high performance seismic inversion factory

    Science.gov (United States)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; the work is in progress for interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).

  3. Scientific Advances with Aspergillus Species that Are Used for Food and Biotech Applications.

    Science.gov (United States)

    Biesebeke, Rob Te; Record, Erik

    2008-01-01

    Yeast and filamentous fungi have been used for centuries in diverse biotechnological processes. Fungal fermentation technology is traditionally used in relation to food production, such as for bread, beer, cheese, sake and soy sauce. Last century, the industrial application of yeast and filamentous fungi expanded rapidly, with excellent examples such as purified enzymes and secondary metabolites (e.g. antibiotics), which are used in a wide range of food as well as non-food industries. Research on protein and/or metabolite secretion by fungal species has focused on identifying bottlenecks in (post-) transcriptional regulation of protein production, metabolic rerouting, morphology and the transit of proteins through the secretion pathway. In past years, genome sequencing of some fungi (e.g. Aspergillus oryzae, Aspergillus niger) has been completed. The available genome sequences have enabled identification of genes and functionally important regions of the genome. This has directed research to focus on a post-genomics era in which transcriptomics, proteomics and metabolomics methodologies will help to explore the scientific relevance and industrial application of fungal genome sequences.

  4. High-performance ceramics. Fabrication, structure, properties

    International Nuclear Information System (INIS)

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

The program ''Ceramic High-performance Materials'' pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders; comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing; and leads to issues of materials testing and of design appropriate to the material. The program ''Ceramic High-performance Materials'' has resulted in contributions to the understanding of fundamental interrelationships in materials science, which are summarized in the present volume, broken down into eight special aspects. (orig./RHM)

  5. Scientific and technical guidance for the preparation and presentation of an application for authorisation of a health claim (revision 1)

    DEFF Research Database (Denmark)

    Tetens, Inge

    2011-01-01

The scientific and technical guidance of the EFSA Panel on Dietetic Products, Nutrition and Allergies for the preparation and presentation of an application for authorisation of a health claim presents a common format for the organisation of information in the preparation of a well-structured application for authorisation of health claims which fall under Article 14 (referring to children's development and health, and to disease risk reduction claims) or Article 13(5) (which are based on newly developed scientific evidence and/or which include a request for the protection of proprietary data), or for the modification of an existing authorisation in accordance with Article 19 of Regulation (EC) No 1924/2006 on nutrition and health claims made on foods. This guidance outlines: the information and scientific data which must be included in the application, the hierarchy of different types of data and study designs...

  6. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  7. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  8. High-Performance Linear Algebra Processor using FPGA

    National Research Council Canada - National Science Library

    Johnson, J

    2004-01-01

    With recent advances in FPGA (Field Programmable Gate Array) technology it is now feasible to use these devices to build special purpose processors for floating point intensive applications that arise in scientific computing...

  9. Analog circuit design designing high performance amplifiers

    CERN Document Server

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  10. Strategies and Experiences Using High Performance Fortran

    National Research Council Canada - National Science Library

    Shires, Dale

    2001-01-01

.... High performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient high-level Fortran parallel programming language for the latest generation of parallel machines, but its success has been debatable...

  11. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  12. Gradient High Performance Liquid Chromatography Method ...

    African Journals Online (AJOL)

Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid ..... nimesulide, phenylephrine hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form. Acta Pol.

  13. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. With the use of virtual computing clusters, a runtime environment for high performance computing can also be efficiently implemented in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  14. Carbon nanomaterials for high-performance supercapacitors

    OpenAIRE

    Tao Chen; Liming Dai

    2013-01-01

Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area and excellent electrical and mechanical properties. This article summarizes recent progress on the development of high-performance supercapacitors bas...

  15. Delivering high performance BWR fuel reliably

    Energy Technology Data Exchange (ETDEWEB)

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel that can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high-performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  16. HPTA: High-Performance Text Analytics

    OpenAIRE

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...
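    The abstract above describes mapping textual data to a dense numeric representation so it can be processed more efficiently. As an illustration only (the function names below are hypothetical, not the actual HPTA API), such a mapping can be sketched with a shared vocabulary of integer token ids and per-document count vectors:

    ```python
    # Minimal sketch of mapping text to a dense numeric representation,
    # in the spirit of the HPTA abstract. Names are illustrative only.
    from collections import Counter

    def build_vocabulary(documents):
        """Assign each distinct token a dense integer id."""
        vocab = {}
        for doc in documents:
            for token in doc.split():
                vocab.setdefault(token, len(vocab))
        return vocab

    def to_dense_counts(doc, vocab):
        """Encode one document as a dense vector of token counts."""
        vec = [0] * len(vocab)
        for token, count in Counter(doc.split()).items():
            if token in vocab:
                vec[vocab[token]] = count
        return vec

    docs = ["high performance text analytics", "text analytics at scale"]
    vocab = build_vocabulary(docs)
    print(to_dense_counts(docs[1], vocab))  # prints [0, 0, 1, 1, 1, 1]
    ```

    A real library such as HPTA would add the memory-management and parallelism optimizations the abstract lists; this sketch only shows the text-to-numbers mapping itself.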

  17. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
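    Of the parallel programming techniques the overview lists, data parallelism is the easiest to sketch: the same operation is applied to disjoint slices of the data, and partial results are combined. A minimal illustration using only Python's standard library (not tied to any system from the overview):

    ```python
    # Data-parallel sketch: split the input across workers, compute
    # partial sums in parallel, then reduce them to one result.
    from multiprocessing import Pool

    def norm_squared(chunk):
        # Each worker computes a partial sum over its slice of the data.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000))
        chunks = [data[i::4] for i in range(4)]  # 4 disjoint slices
        with Pool(4) as pool:
            partials = pool.map(norm_squared, chunks)
        print(sum(partials))  # equals the serial sum of squares
    ```

    Message passing (e.g. MPI) and shared-memory parallelism (e.g. OpenMP) express the same split-compute-reduce pattern with explicit communication or shared state, respectively.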

  18. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  19. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  20. Developments on HNF based high performance and green solid propellants

    NARCIS (Netherlands)

    Keizers, H.L.J.; Heijden, A.E.D.M. van der; Vliet, L.D. van; Welland-Veltmans, W.H.M.; Ciucci, A.

    2001-01-01

    Worldwide, developments are ongoing to create new and more energetic composite solid propellant formulations for space transportation and military applications. Since the 1990s, the use of HNF as a new high-performance oxidiser has been reinvestigated. Within European development programmes,