WorldWideScience

Sample records for supercomputer center systems

  1. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high-performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid-nitrogen-cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high-density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library, and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  2. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that, contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  3. Research Center Juelich to install Germany's most powerful supercomputer: new IBM system for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  4. INTEL: Intel-based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  5. The ETA Systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems ETA 10 is a Class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high-performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid-nitrogen-cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high-density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library, and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed

  6. Integration of PanDA Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
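
    As an illustration of the light-weight MPI wrapper idea described above, the sketch below launches one independent single-threaded payload per MPI rank. The executable name and arguments are hypothetical placeholders, and this is a minimal sketch rather than the actual PanDA pilot code.

```python
# Minimal sketch of a light-weight MPI wrapper: each MPI rank runs one
# independent single-threaded workload, so a whole multi-core node is
# filled with serial payloads launched by a single batch job.
# "run_workload.sh" and its arguments are hypothetical placeholders.
import subprocess
import sys

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank picks its own seed and output file so workloads do not collide.
cmd = ["./run_workload.sh", f"--seed={rank}", f"--output=out_{rank:05d}.root"]

ret = subprocess.call(cmd)
print(f"rank {rank}/{size} finished with exit code {ret}", flush=True)

# Fail the whole MPI step if any rank reported a non-zero exit code.
comm.Barrier()
if comm.allreduce(ret, op=MPI.MAX) != 0 and rank == 0:
    sys.exit(1)
```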

  7. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  8. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  9. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.
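
    As a rough cross-check of the quoted 222 teraflops, the arithmetic below uses the commonly cited Blue Gene/P figures (4 cores per node at 850 MHz, 4 flops per cycle per core, 1024 nodes per rack); these per-node figures are assumptions not stated in the record itself.

```latex
% Back-of-the-envelope check of the 222 TFLOPS figure for a 16-rack
% Blue Gene/P, assuming the commonly cited per-node peak.
\[
  4 \times 0.85\,\text{GHz} \times 4\,\tfrac{\text{flops}}{\text{cycle}}
  = 13.6\,\text{GFLOPS per node}
\]
\[
  13.6\,\text{GFLOPS} \times 1024\,\tfrac{\text{nodes}}{\text{rack}}
  \times 16\,\text{racks} \approx 222.8\,\text{TFLOPS}
\]
```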

  10. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  11. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington]; Jha, S [Rutgers University]; Klimentov, A [Brookhaven National Laboratory (BNL)]; Maeno, T [Brookhaven National Laboratory (BNL)]; Nilsson, P [Brookhaven National Laboratory (BNL)]; Oleynik, D [University of Texas at Arlington]; Panitkin, S [Brookhaven National Laboratory (BNL)]; Wells, Jack C [ORNL]; Wenaus, T [Brookhaven National Laboratory (BNL)]

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation

  12. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  13. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  14. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  15. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data-taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  16. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within a complex-networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including the organization of distributed calculations, crawling the data in social networks, and results visualization. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks, and epidemic spreading.

  17. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources: the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (the Little Iron) and staff (the Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys don't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of the visualization support staff: How big should a visualization program be, that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  18. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which makes it possible to identify different typical classes of programs, to explore the structure of the supercomputer job flow, and to track overall trends in the supercomputer's behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
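
    A minimal illustration of the third approach (flagging abnormally behaving jobs from monitoring data) is sketched below, assuming invented job records and a simple z-score rule; the actual analysis used at the Supercomputer Center of Moscow State University is more sophisticated than this.

```python
# Illustrative sketch of flagging "abnormal" jobs from monitoring data:
# jobs whose average CPU load deviates strongly from the bulk of the job
# flow are reported. Field names, sample values and the threshold are
# assumptions for illustration only.
import numpy as np

jobs = [  # (job_id, mean CPU load in %, mean memory bandwidth in GB/s)
    ("job-001", 92.0, 35.0),
    ("job-002", 88.5, 40.2),
    ("job-003", 3.1, 0.4),   # suspiciously idle job
    ("job-004", 90.7, 38.9),
]

loads = np.array([cpu for _, cpu, _ in jobs])
mean, std = loads.mean(), loads.std()

for (job_id, cpu, _), z in zip(jobs, (loads - mean) / std):
    if abs(z) > 1.5:  # threshold chosen purely for illustration
        print(f"{job_id}: mean CPU load {cpu:.1f}% looks abnormal (z = {z:.2f})")
```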

  19. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that uses the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  20. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-based production-rule systems are developed for the symbolic simulation of complex engineering systems on a CRAY X-MP supercomputer. The fault-tree and event-tree analysis methodologies from systems analysis are used for problem representation and are coupled to the rule-based system paradigm from knowledge engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-rule analysis system, HAL-1986, that uses both backward chaining and forward chaining. The inference engine uses an induction-deduction-oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate modifications and additions to the knowledge base. The methodologies are demonstrated using a model for the identification of faults and subsequent recovery from abnormal situations in nuclear reactor safety analysis. The use of these methodologies for the prognostication of future device responses under operational and accident conditions, using coupled symbolic and procedural programming, is discussed
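
    A minimal sketch of forward chaining over antecedent-consequent production rules, in the spirit of the system described above, is shown below; the rules and facts are invented for illustration, and HAL-1986 itself is written in Portable Standard Lisp with both forward and backward chaining.

```python
# Minimal sketch of forward chaining: rules fire whenever all of their
# antecedent facts are present, adding their consequent to the fact base,
# until a fixed point is reached. Rules and facts are invented examples.
rules = [
    ({"low coolant flow", "high core temperature"}, "possible pump fault"),
    ({"possible pump fault", "pump breaker open"}, "pump trip confirmed"),
    ({"pump trip confirmed"}, "start auxiliary feedwater"),
]

facts = {"low coolant flow", "high core temperature", "pump breaker open"}

changed = True
while changed:                      # keep firing rules until nothing new is derived
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            print(f"fired: {sorted(antecedents)} -> {consequent}")
            changed = True
```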

  1. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running standard ATLAS production jobs on unused Titan resources (backfill). The system has already allowed ATLAS to collect millions of core-hours per month on Titan and to execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We discuss the details of the implementation, current experience with running the system, and future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to

  2. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  3. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States)]; Summers, B.G. [Oak Ridge National Lab., TN (United States)]

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  4. What is supercomputing?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term ''supercomputing'' have spread during the last ten years. The performance of the main computers installed so far at the Japan Atomic Energy Research Institute is compared. There are two methods of increasing computing speed using existing circuit elements: the parallel processor system and the vector processor system. CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents are divided into the increase of computing speed in existing simulation calculations and the acceleration of new technical development in atomic energy. Examples of supercomputing in the Japan Atomic Energy Research Institute are reported. (K.I.)

  5. Integration Of PanDA Workload Management System With Supercomputers

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Maeno, Tadashi; Mashinistov, Ruslan; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Read, Kenneth; Ryabinkin, Evgeny; Wenaus, Torre

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 co...

  6. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

    In research and development in various fields of nuclear energy, visualization of calculated data is especially useful for understanding the results of simulations in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reducing visualization processing time as well as making efficient use of the JAEA network has become necessary. As a solution, we introduced a remote visualization system which is able to utilize parallel processors on the supercomputer and to reduce the usage of network resources by transferring data from an intermediate stage of the visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of visualization data, and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also shown. (author)

  7. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. The performed studies have created a basis for the development of a new research area — Economics of Quality. Its tools make it possible to use model simulation for the construction of mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities of the functioning of complex socio-economic systems. Extensive application and development of models, as well as system modeling with the use of supercomputer technologies, will, in our firm belief, bring the research of socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the model simulation of multi-agent social systems and, no less important, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all regarding the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the increase in computer power, it has become possible to describe the behavior of many separate fragments of a difficult system, as socio-economic systems are. The article also deals with the experience of foreign scientists and practitioners in running AFM on supercomputers, and with the example of an AFM developed in CEMI RAS; the stages and methods of effectively mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer are analyzed. Experiments based on model simulation to forecast the population of St. Petersburg according to three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  8. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  9. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  10. Monte Carlo simulations of quantum systems on massively parallel supercomputers

    International Nuclear Information System (INIS)

    Ding, H.Q.

    1993-01-01

    A large class of quantum physics applications uses operator representations that are discrete integers by nature. This class includes magnetic properties of solids, interacting bosons modeling superfluids and Cooper pairs in superconductors, and Hubbard models for strongly correlated electron systems. This kind of application typically uses integer data representations and the resulting algorithms are dominated entirely by integer operations. The authors implemented an efficient algorithm for one such application on the Intel Touchstone Delta and iPSC/860. The algorithm uses a multispin coding technique which allows significant data compactification and efficient vectorization of Monte Carlo updates. The algorithm regularly switches between two data decompositions, corresponding naturally to different Monte Carlo updating processes and observable measurements, such that only nearest-neighbor communications are needed within a given decomposition. On 128 nodes of the Intel Delta, this algorithm updates 183 million spins per second (compared to 21 million on the CM-2 and 6.2 million on a Cray Y-MP). A systematic performance analysis shows a better than 90% efficiency in the parallel implementation
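
    A minimal sketch of the multispin-coding idea follows, assuming a 1-D Ising chain with 64 independent replicas packed one spin per bit of a 64-bit word; it illustrates only the bitwise data compactification, not the full Monte Carlo update or the two data decompositions described above.

```python
# Minimal sketch of multispin coding: 64 independent Ising replicas of a
# 1-D chain are packed one spin per bit of a 64-bit word, so a single XOR
# compares the same bond in all 64 replicas at once. Illustration only;
# the full Metropolis update in the paper is considerably more involved.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 1000

# chain[i] holds spin i of all 64 replicas (bit b = spin of replica b).
chain = np.frombuffer(rng.bytes(n_sites * 8), dtype=np.uint64)

# XOR with the right neighbour: a set bit marks an antiparallel bond.
broken = chain ^ np.roll(chain, -1)

# Count antiparallel bonds per replica by summing bit b over all sites.
broken_per_replica = [
    int(((broken >> np.uint64(b)) & np.uint64(1)).sum()) for b in range(64)
]
print("antiparallel bonds in replica 0:", broken_per_replica[0])
```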

  11. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  12. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released. MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  13. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
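
    A minimal sketch of the backfill-aware job shaping described above follows, with a hypothetical query_backfill() helper standing in for the real-time information PanDA collects about unused worker nodes; it is not the actual PanDA or OLCF interface.

```python
# Illustrative sketch of backfill-aware job sizing: query how many worker
# nodes are currently free and for how long, then shape the submitted job
# to fit inside that window. query_backfill() and all numbers below are
# hypothetical placeholders, not real batch-system or PanDA interfaces.
from dataclasses import dataclass


@dataclass
class BackfillSlot:
    free_nodes: int
    available_minutes: int


def query_backfill() -> BackfillSlot:
    """Stand-in for a real query of the batch system's backfill window."""
    return BackfillSlot(free_nodes=300, available_minutes=90)


def shape_job(slot: BackfillSlot, max_nodes: int = 1000,
              safety_margin_min: int = 10) -> dict:
    """Pick a node count and walltime that fit the free window."""
    nodes = min(slot.free_nodes, max_nodes)
    walltime = max(slot.available_minutes - safety_margin_min, 0)
    return {"nodes": nodes, "walltime_minutes": walltime}


if __name__ == "__main__":
    slot = query_backfill()
    job = shape_job(slot)
    print(f"submit {job['nodes']} nodes for {job['walltime_minutes']} minutes")
```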

  14. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  15. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  16. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnect is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed-memory parallel computer systems. It is needless to say that highly tuned software for new architectures like many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K computer' system. We have developed two versions, the standard version (eigen_s) and the enhanced performance version (eigen_sx), which were developed on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS with a matrix of dimension two hundred thousand. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY representation. Even though this version is still at a test stage, it shows 4.7 TFLOPS with a matrix of the same dimension as eigen_s. (author)
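
    For reference, the Householder reflector at the heart of the tridiagonalization step mentioned above takes the standard textbook form below; this is general background, not notation taken from the eigen_s implementation itself.

```latex
% Standard Householder reflector used in tridiagonalization:
% H is orthogonal and symmetric, and H x maps x onto a multiple of e_1.
\[
  H = I - 2\,\frac{v v^{T}}{v^{T} v},
  \qquad
  v = x \pm \lVert x \rVert_{2}\, e_{1},
  \qquad
  H x = \mp \lVert x \rVert_{2}\, e_{1}.
\]
```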

  17. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks

  18. Palacios and Kitten: high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  19. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience related to the implementation of MRBT, an atmospheric dispersion model for short-duration releases, is reported. This model was implemented on a QUADRICS-Q1 supercomputer. First, a description of the MRBT model is given. It is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced, and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered
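
    For illustration, the standard Gaussian "puff" solution of the advection-diffusion equation for an instantaneous point release has the form below; this is the generic textbook form, and the exact MRBT formulation may differ in detail (for example in the treatment of ground reflection).

```latex
% Standard Gaussian puff solution for an instantaneous point release of
% mass Q at the origin, advected by a wind u along x. Shown only as an
% illustration of the "Gaussian-like" solution mentioned above.
\[
  C(x,y,z,t) =
  \frac{Q}{(2\pi)^{3/2}\,\sigma_x \sigma_y \sigma_z}\,
  \exp\!\left[
    -\frac{(x-ut)^{2}}{2\sigma_x^{2}}
    -\frac{y^{2}}{2\sigma_y^{2}}
    -\frac{z^{2}}{2\sigma_z^{2}}
  \right]
\]
```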

  20. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecast, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described

  1. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  2. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  3. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  4. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.
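
    As a generic, textbook-style illustration of why good lookahead matters for the conservative synchronization mentioned above (this is not code from the benchmarked simulator), a logical process may only execute events whose timestamps do not exceed a bound derived from its neighbours' clocks plus their promised lookahead:

    ```python
    def safe_time(neighbor_clocks, lookaheads):
        """Lower bound on the timestamp of any future incoming event.

        neighbor_clocks[i] is the current simulation time of neighbour i and
        lookaheads[i] its promised minimum delay before it can affect us;
        events with timestamp <= this bound can be processed without risking
        a causality violation.
        """
        return min(c + la for c, la in zip(neighbor_clocks, lookaheads))

    def process_safe_events(event_queue, neighbor_clocks, lookaheads):
        """Pop and return all events that are currently safe to execute.

        event_queue is a list of (timestamp, event) pairs sorted by timestamp.
        """
        bound = safe_time(neighbor_clocks, lookaheads)
        safe = []
        while event_queue and event_queue[0][0] <= bound:
            safe.append(event_queue.pop(0))
        return safe

    # Two neighbours at times 100 and 120 with lookaheads 15 and 5 allow this
    # process to execute everything up to timestamp 115.
    print(safe_time([100.0, 120.0], [15.0, 5.0]))   # -> 115.0
    ```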

  5. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
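
    The Octotron model itself is not shown in the abstract, but the general idea can be sketched as follows: raw neighbour records (as might be gathered from switches via LLDP or SNMP) are turned into a graph of computing nodes and switches. The data layout and device names below are assumptions for illustration only.

    ```python
    import networkx as nx

    # Hypothetical neighbour tables, e.g. as harvested from switches via LLDP/SNMP.
    # Each entry: (device, port) -> (peer device, peer port).
    neighbor_tables = {
        ("switch-1", "eth1"): ("node-001", "eth0"),
        ("switch-1", "eth2"): ("node-002", "eth0"),
        ("switch-1", "uplink"): ("switch-core", "port12"),
        ("switch-2", "eth1"): ("node-003", "eth0"),
        ("switch-2", "uplink"): ("switch-core", "port13"),
    }

    def build_topology(tables):
        """Turn raw neighbour records into an undirected graph of the Ethernet fabric."""
        g = nx.Graph()
        for (dev, port), (peer, peer_port) in tables.items():
            g.add_node(dev, kind="switch" if dev.startswith("switch") else "node")
            g.add_node(peer, kind="switch" if peer.startswith("switch") else "node")
            g.add_edge(dev, peer, ports=(port, peer_port))
        return g

    topology = build_topology(neighbor_tables)
    print(sorted(topology.neighbors("switch-core")))   # -> ['switch-1', 'switch-2']
    ```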

  6. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

    Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies are springing up everywhere with the intent of either competing directly in the supercomputer arena or providing entry-level systems from which to graduate to supercomputers. Even well-founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that both from the hardware and software points of view they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  7. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    Full Text Available This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013) held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was: "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics [77], [79], [80], and Visualization by Supercomputing Data Mining [81]. ______________________ [11.] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75.] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76.] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77.] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by

  8. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  9. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops", or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  10. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
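
    As a small, generic illustration of the torus interconnect mentioned above (a sketch of the wrap-around neighbour rule, not the machine's actual routing logic):

    ```python
    def torus_neighbors(coord, dims):
        """Return the six nearest neighbours of `coord` in a 3-D torus of shape `dims`.

        Each dimension wraps around, so every node has exactly two neighbours
        per dimension regardless of where it sits in the mesh.
        """
        x, y, z = coord
        nx_, ny, nz = dims
        return [
            ((x - 1) % nx_, y, z), ((x + 1) % nx_, y, z),
            (x, (y - 1) % ny, z), (x, (y + 1) % ny, z),
            (x, y, (z - 1) % nz), (x, y, (z + 1) % nz),
        ]

    # A corner node of an 8x8x8 torus still has six neighbours, thanks to the wrap-around links.
    print(torus_neighbors((0, 0, 0), (8, 8, 8)))
    ```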

  11. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  12. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  13. Installation of the CDC 7600 supercomputer system in the computer centre in 1972

    CERN Multimedia

    Nettz, William

    1972-01-01

    The CDC 7600 was installed in 1972 in the newly built computer centre. It was said to be the largest and most powerful computer system in Europe at that time and remained the fastest machine at CERN for 9 years. It was replaced after 12 years. Dr. Julian Blake (CERN), Dr. Tor Bloch (CERN), Erwin Gasser (Control Data Corporation), Jean-Marie LaPorte (Control Data Corporation), Peter McWilliam (Control Data Corporation), Hans Oeshlein (Control Data Corporation), and Peter Warn (Control Data Corporation) were heavily involved in this project and may appear on the pictures. William Nettz (who took the pictures) was in charge of the installation. Excerpt from CERN annual report 1972: 'Data handling and evaluation is becoming an increasingly important part of physics experiments. In order to meet these requirements a new central computer system, CDC 7600/6400, has been acquired and it was brought into more or less regular service during the year. Some initial hardware problems have disappeared but work has still to...

  14. LDRD final report : a lightweight operating system for multi-core capability class supercomputers.

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Hudson, Trammell B. (OS Research); Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.; Brightwell, Ronald Brian

    2010-09-01

    The two primary objectives of this LDRD project were to create a lightweight kernel (LWK) operating system(OS) designed to take maximum advantage of multi-core processors, and to leverage the virtualization capabilities in modern multi-core processors to create a more flexible and adaptable LWK environment. The most significant technical accomplishments of this project were the development of the Kitten lightweight kernel, the co-development of the SMARTMAP intra-node memory mapping technique, and the development and demonstration of a scalable virtualization environment for HPC. Each of these topics is presented in this report by the inclusion of a published or submitted research paper. The results of this project are being leveraged by several ongoing and new research projects.

  15. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
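
    The VPS 32 implementation is not reproduced here, but the point cyclic reduction idea that VCR adapts can be sketched for the tridiagonal case as below (a generic, textbook-style sketch assuming well-conditioned pivots). The level-by-level even-odd elimination is what makes the method vectorizable: every equation on a level can be reduced independently.

    ```python
    import numpy as np

    def cyclic_reduction_tridiag(a, b, c, d):
        """Solve a tridiagonal system by (point) cyclic reduction.

        a, b, c, d are the sub-, main and super-diagonals and the right-hand
        side (length n); a[0] and c[-1] are ignored.
        """
        n = len(b)
        a = np.asarray(a, float).copy(); a[0] = 0.0
        b = np.asarray(b, float).copy()
        c = np.asarray(c, float).copy(); c[-1] = 0.0
        d = np.asarray(d, float).copy()
        x = np.zeros(n)

        # Forward reduction: eliminate the odd-level unknowns, doubling the stride each level.
        stride = 1
        while 2 * stride <= n:
            for i in range(2 * stride - 1, n, 2 * stride):
                lo, hi = i - stride, i + stride
                alpha = -a[i] / b[lo]
                beta = -c[i] / b[hi] if hi < n else 0.0
                b[i] += alpha * c[lo] + (beta * a[hi] if hi < n else 0.0)
                d[i] += alpha * d[lo] + (beta * d[hi] if hi < n else 0.0)
                a[i] = alpha * a[lo]
                c[i] = beta * c[hi] if hi < n else 0.0
            stride *= 2

        # Back substitution, coarsest level first.
        while stride >= 1:
            for i in range(stride - 1, n, 2 * stride):
                lo, hi = i - stride, i + stride
                x[i] = (d[i]
                        - (a[i] * x[lo] if lo >= 0 else 0.0)
                        - (c[i] * x[hi] if hi < n else 0.0)) / b[i]
            stride //= 2
        return x

    # Check against a dense solve on a small diagonally dominant system.
    n = 15
    rng = np.random.default_rng(0)
    a, c = rng.random(n), rng.random(n)
    b = 4.0 + rng.random(n)
    d = rng.random(n)
    full = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(cyclic_reduction_tridiag(a, b, c, d), np.linalg.solve(full, d)))
    ```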

  16. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  17. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  18. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur

  19. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 participants, physicists and astronomers, attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to survey the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among numerical experimentalists working on supercomputing techniques. The subjects of the presented papers, covering hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity, are all stimulating. In fact, these numerical calculations have become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  20. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  1. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers at the Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research which use supercomputers frequently and the contents of their computations are outlined. What vectorization is, is briefly explained, and nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the parallelism inherent in atomic energy computations such as fluid dynamics, algorithms suited to vector treatment, and the speedups obtained by vectorization are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation have changed from criticality computations around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new types of reactors and reactor safety assessment at present. The method of using computers has likewise advanced from batch processing to time-sharing processing, from one-dimensional to three-dimensional computation, from steady, linear to unsteady, nonlinear computation, and from experimental analysis to numerical simulation. (K.I.)

  2. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown by the fact that about 20 percent of the papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  3. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown by the fact that about 20 percent of the papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  4. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As about 20 percent of the papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  5. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  6. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  7. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  8. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  9. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  10. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers, represented by the CRAY-1, has been recognized, and they are utilized in various countries. The high speed of supercomputers rests on their vector computation capability. The authors have investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of the vector computation capability of supercomputers to atomic energy codes, problems regarding their utilization, and future prospects are explained. The adaptability of individual calculation codes to vector computation is largely dependent on the algorithm and program structure used for the codes. The speedup achieved by pipelined vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
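
    As a present-day illustration of the kind of gain the survey reports (this is not one of the original FORTRAN codes, and the measured factor is machine-dependent), replacing an explicit loop by a vectorized array operation typically yields a several-fold speedup:

    ```python
    import time
    import numpy as np

    def axpy_loop(alpha, x, y):
        """Scalar loop: one multiply-add per iteration."""
        out = np.empty_like(x)
        for i in range(len(x)):
            out[i] = alpha * x[i] + y[i]
        return out

    def axpy_vector(alpha, x, y):
        """Vectorized form: the whole array is processed by one pipelined operation."""
        return alpha * x + y

    x = np.random.rand(1_000_000)
    y = np.random.rand(1_000_000)

    t0 = time.perf_counter(); axpy_loop(2.5, x, y); t1 = time.perf_counter()
    axpy_vector(2.5, x, y); t2 = time.perf_counter()
    print(f"speedup ~ {(t1 - t0) / (t2 - t1):.1f}x")   # value is machine-dependent
    ```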

  11. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.

  12. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest supercomputers, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days. It is a 20-node supercomputer built on proven 512-processor nodes and the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512) and the largest shared-memory environment (2048), and with 88% efficiency it tops the scalar systems on the Top500 list.
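
    The headline figures quoted above are mutually consistent; a quick back-of-the-envelope check, assuming 1.5 GHz Itanium 2 processors retiring 4 floating-point operations per cycle (an assumption for illustration, not a figure from the presentation):

    ```python
    nodes, cpus_per_node = 20, 512
    cpus = nodes * cpus_per_node                     # 10,240 processors ("over 10,000")
    ghz, flops_per_cycle = 1.5, 4                    # assumed Itanium 2 characteristics
    peak_tflops = cpus * ghz * flops_per_cycle / 1e3
    print(cpus, round(peak_tflops, 1))               # 10240, ~61.4 TFLOPs
    ```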

  13. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de

  14. Heuristic simulation of nuclear systems on a supercomputer using the HAL-1987 general-purpose production-rule analysis system

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1987-01-01

    HAL-1987 is a general-purpose tool for the construction of production-rule analysis systems. It uses the rule-based paradigm from the part of artificial intelligence concerned with knowledge engineering. It uses backward-chaining and forward-chaining in an antecedent-consequent logic, and is programmed in Portable Standard Lisp (PSL). The inference engine is flexible and accommodates general additions and modifications to the knowledge base. The system is used in coupled symbolic-procedural programming adaptive methodologies for stochastic simulations. In Monte Carlo simulations of particle transport, the system considers the pre-processing of the input data to the simulation and adaptively controls the variance reduction process as the simulation progresses. This is accomplished through the use of a knowledge base of rules which encompass the user's expertise in the variance reduction process. It is also applied to the construction of model-based systems for monitoring, fault-diagnosis and crisis-alert in engineering devices, particularly in the field of nuclear reactor safety analysis
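
    HAL-1987 itself is written in Portable Standard Lisp and is not reproduced here; the following is only a toy sketch of the forward-chaining, antecedent-consequent style of inference the abstract describes, with made-up facts and rules loosely inspired by the variance-reduction use case:

    ```python
    def forward_chain(facts, rules):
        """Repeatedly fire rules whose antecedents are all satisfied.

        `facts` is a set of strings; each rule is (antecedents, consequent).
        Firing stops when no rule adds a new fact (a fixed point is reached).
        """
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedents, consequent in rules:
                if consequent not in facts and all(a in facts for a in antecedents):
                    facts.add(consequent)
                    changed = True
        return facts

    # Toy variance-reduction-style knowledge base (illustrative only).
    rules = [
        ({"deep-penetration", "low-scoring-rate"}, "increase-splitting"),
        ({"increase-splitting"}, "rebalance-weight-windows"),
    ]
    print(forward_chain({"deep-penetration", "low-scoring-rate"}, rules))
    ```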

  15. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  16. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: Distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  17. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted into a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  18. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  19. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  20. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  1. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from petascale supercomputer Mistral, which is ranked as 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  2. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  3. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  4. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs

  5. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Weingarten, D.

    1986-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics.
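
    The aggregate figures quoted follow directly from the per-processor numbers (the 1.125 Gbytes assumes 1 Gbyte = 1024 Mbytes):

    ```python
    processors = 576
    peak_gflops = processors * 20 / 1000      # 20 Mflops each -> 11.52 Gflops
    total_gbytes = processors * 2 / 1024      # 2 Mbytes each  -> 1.125 Gbytes
    print(peak_gflops, total_gbytes)          # 11.52 1.125
    ```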

  6. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable nonblocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics.

  7. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  8. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 2 Mbytes of memory and is capable of 20 MFLOPS, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 GFLOPS. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions

  9. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer with 50 million operations per second is suggested. Its realization allows one to solve JINR data analysis problems for large spectrometers (in particular for the DELPHY collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS. The processors are combined by means of VME standard buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II peripherals. The users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port and all JINR users get access to the suggested system.

  10. Learning System Center App Controller

    CERN Document Server

    Naeem, Nasir

    2015-01-01

    This book is intended for IT professionals working with Hyper-V, Azure cloud, VMM, and private cloud technologies who are looking for a quick way to get up and running with System Center 2012 R2 App Controller. To get the most out of this book, you should be familiar with Microsoft Hyper-V technology. Knowledge of Virtual Machine Manager is helpful but not mandatory.

  11. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  12. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  13. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
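
    The throughput statement above implies a machine allocation of roughly 20,000 cores for the shallow experiment; a back-of-the-envelope check, assuming about 200,000 Kepler target stars in total (the star count is an assumption for illustration, not a number from the abstract):

    ```python
    injections_per_core_hour = 16
    injections_per_star = 2000
    target_stars = 200_000            # assumed total; the abstract quotes only the 16% fraction
    stars_covered = 0.16 * target_stars
    core_hours = stars_covered * injections_per_star / injections_per_core_hour
    cores_needed = core_hours / 200   # to finish in ~200 wall-clock hours
    print(int(core_hours), int(cores_needed))   # 4,000,000 core-hours, ~20,000 cores
    ```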

  14. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  15. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  16. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massively parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the usage of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry, or oil-spill simulation on the sea surface.

  17. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provide global barrier and notification functions. Integrated into the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that also improves the soft error rate, and supports DMA functionality allowing for parallel message passing.

  18. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  19. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  20. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
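
    As a rough illustration of what the STREAM part of such a benchmark measures, the sketch below times a triad kernel (a = b + s*c) with NumPy and reports the implied memory bandwidth. This is not the HPCC/STREAM code used in the record above; the array size and the bytes-moved formula are the usual illustrative choices.

```python
# Minimal STREAM-triad-style bandwidth estimate (illustrative only; the record
# above used the official HPCC/STREAM benchmarks, not this Python sketch).
import time
import numpy as np

N = 50_000_000            # array length; chosen to exceed typical cache sizes
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)
scalar = 3.0

t0 = time.perf_counter()
a[:] = b + scalar * c     # triad kernel: a = b + s*c
t1 = time.perf_counter()

bytes_moved = 3 * N * a.itemsize   # read b, read c, write a
print(f"triad bandwidth ~ {bytes_moved / (t1 - t0) / 1e9:.1f} GB/s")
```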

  1. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  2. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  3. Records Center Program Billing System

    Data.gov (United States)

    National Archives and Records Administration — RCPBS supports the Records center programs (RCP) in producing invoices for the storage (NARS-5) and servicing of National Archives and Records Administration’s...

  4. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applications.

  5. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions. MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  6. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed.

  7. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem

  8. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed.

  9. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
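
    The abstract does not spell out the clustering algorithm itself; the sketch below shows one common simplification of the idea, grouping log lines by a syntactic template obtained by masking variable tokens (numbers, hex IDs). The sample log lines are made up for illustration.

```python
# Simplified sketch of template-based grouping of log messages; the paper's
# online clustering algorithm is not reproduced here.
import re
from collections import defaultdict

def template(line: str) -> str:
    """Mask variable tokens so syntactically similar messages collapse together."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

logs = [
    "node 17 link error on port 3",
    "node 254 link error on port 1",
    "job 8812 started by user alice",
]

groups = defaultdict(list)
for line in logs:
    groups[template(line)].append(line)

for tmpl, members in groups.items():
    print(f"{len(members):3d}  {tmpl}")
```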

  10. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
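
    As a toy illustration of the linearized inversion step described above (a tomographic matrix connecting model adjustments to travel-time residuals, regularized and solved), the sketch below solves a damped least-squares system with NumPy. The matrix, residuals and damping value are random placeholders, not data or parameters from the work itself.

```python
# Toy damped least-squares update for linearized travel-time tomography:
# solve (G^T G + damping^2 I) dm = G^T dt for model adjustments dm.
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_cells = 200, 50
G = rng.random((n_rays, n_cells))   # placeholder ray-path sensitivity matrix
dt = rng.normal(size=n_rays)        # placeholder travel-time residuals
damping = 0.1                       # Tikhonov regularization weight

A = G.T @ G + damping**2 * np.eye(n_cells)
dm = np.linalg.solve(A, G.T @ dt)
print("model adjustment norm:", np.linalg.norm(dm))
```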

  11. Holistic Approach to Data Center Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-18

    This presentation discusses NREL's Energy Systems Integration Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm-water liquid-cooled supercomputer; waste heat reuse in the data center; demonstrated PUE and ERE; and lessons learned during four years of operation.
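
    For readers unfamiliar with the two metrics named above, PUE and ERE have standard definitions (total facility energy over IT energy, and total minus reused energy over IT energy, respectively). The snippet below encodes only those definitions; the numbers are placeholders, not NREL's measured values.

```python
# Standard data-center efficiency metrics (definitions only; example numbers
# are placeholders, not the values demonstrated at NREL).
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_kwh

def ere(total_facility_kwh: float, reused_kwh: float, it_kwh: float) -> float:
    """Energy Reuse Effectiveness = (total facility energy - reused energy) / IT energy."""
    return (total_facility_kwh - reused_kwh) / it_kwh

print(pue(1_060_000, 1_000_000))           # 1.06
print(ere(1_060_000, 200_000, 1_000_000))  # 0.86
```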

  12. Army Continuing Education System Centers

    Science.gov (United States)

    1979-05-01

    Fragments from the document include an example plan (Scheme C) for an education center serving 21,000 military ...; a requirement that information to supplement construction completion records shall be prepared to instruct the installation on how to gain the most benefit from such ...; and a description of the High School Completion Program (HSCP), which gives soldiers a chance to earn a high school diploma or a State-issued high school equivalency certificate.

  13. Analytic reducibility of nondegenerate centers: Cherkas systems

    Directory of Open Access Journals (Sweden)

    Jaume Giné

    2016-07-01

    where $P_i(x)$ are polynomials of degree $n$, $P_0(0)=0$ and $P_0'(0)<0$. Computing the focal values, we find the center conditions for such systems of degree $3$ and, using modular arithmetic, of degree $4$. Finally, we state a conjecture about the center conditions for Cherkas polynomial differential systems of degree $n$.
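
    The display equation that originally preceded the "where ..." clause was lost in extraction. For orientation only, Cherkas polynomial differential systems are commonly written in the form below; this is an assumption about the lost display, not a quotation from the paper.

    \[
      \dot{x} = y, \qquad \dot{y} = P_0(x) + P_1(x)\,y + P_2(x)\,y^{2},
    \]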

  14. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved by the usage of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network enables noise maps to be updated automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.
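
    The grid service's modified psychoacoustic model is not given in the abstract. As a minimal, generic building block of such noise-dose estimates, the sketch below computes an equivalent continuous sound level (Leq) from equally spaced level samples; the sample values are hypothetical.

```python
# Minimal equivalent continuous sound level (Leq) calculation, a generic
# building block of noise-dose assessment; the service's psychoacoustic
# hearing model is not reproduced here.
import math

def leq(levels_db: list[float]) -> float:
    """Energy average of equally spaced sound-level samples, in dB."""
    mean_energy = sum(10 ** (L / 10) for L in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

samples = [62.0, 65.5, 71.0, 68.2, 80.1]   # hypothetical 1-second samples
print(f"Leq = {leq(samples):.1f} dB")
```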

  15. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  16. Microsoft System Center 2012 Orchestrator cookbook

    CERN Document Server

    Erskine, Samuel

    2013-01-01

    This book is written in a practical, cookbook style with numerous chapters and recipes focusing on creating runbooks to automate mission-critical and everyday administration tasks. System Center 2012 Orchestrator is for administrators who wish to simplify the process of automating systems administration tasks. This book assumes that you have a basic knowledge of Windows Server 2008 Administration, Active Directory, Network Systems, and Microsoft System Center technologies.

  17. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows the performance levels and usability of a variety of supercomputer architectures to be compared. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  18. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  19. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  20. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes- and reactive Euler solvers that has been developed on vector- and parallel supercomputers at the research center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As a sample of proof tests, the special tools have been tested for specific tasks, based on the comparison of experimental and numerical results, which are in reasonable agreement. (author)

  1. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.
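
    To give a flavor of the point-to-point measurements in a suite like IMB, the sketch below is a simple MPI ping-pong written with mpi4py; it is not the benchmark code used in the study, and the message size and repetition count are arbitrary.

```python
# Rough ping-pong latency/bandwidth sketch in the spirit of IMB point-to-point
# tests (illustrative only).  Run with:  mpirun -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes = 1 << 20                       # 1 MiB message
buf = np.zeros(nbytes, dtype=np.uint8)
reps = 100

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    rtt = (t1 - t0) / reps
    print(f"round trip {rtt * 1e6:.1f} us, bandwidth {2 * nbytes / rtt / 1e6:.1f} MB/s")
```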

  2. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  3. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  4. Current state and future direction of computer systems at NASA Langley Research Center

    Science.gov (United States)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  5. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB parallel NAND Flash disk array (the Fusion-io). The Fusion system specs are as follows
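
    The report's out-of-core implementation is not reproduced here, but the access pattern behind the level-set expansion benchmark can be sketched in memory as a breadth-first frontier expansion; a real out-of-core version would stream adjacency lists from disk instead. The small graph below is made up.

```python
# In-memory sketch of level-set (frontier) expansion on a graph; an out-of-core
# version would stream adjacency lists from storage rather than hold them in RAM.
def level_sets(adj: dict[int, list[int]], seed: int) -> list[set[int]]:
    visited = {seed}
    frontier = {seed}
    levels = [frontier]
    while frontier:
        nxt = set()
        for u in frontier:
            for v in adj.get(u, []):
                if v not in visited:
                    visited.add(v)
                    nxt.add(v)
        if nxt:
            levels.append(nxt)
        frontier = nxt
    return levels

adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(level_sets(adj, 0))   # [{0}, {1, 2}, {3, 4}, {5}]
```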

  6. Data Center Equipment Location and Monitoring System

    DEFF Research Database (Denmark)

    2013-01-01

    Data center equipment location systems include hardware and software to provide information on the location, monitoring, and security of servers and other equipment in equipment racks. The systems provide a wired alternative to the wireless RFID tag system by using electronic ID tags connected to each piece of equipment, each electronic ID tag connected directly by wires to an equipment rack controller on the equipment rack. The equipment rack controllers link to a central control computer that provides an operator ...

  7. Resonances in the two centers Coulomb system

    Energy Technology Data Exchange (ETDEWEB)

    Seri, Marcello

    2012-09-14

    In this work we investigate the existence of resonances for two-centers Coulomb systems with arbitrary charges in two and three dimensions, defining them in terms of generalized complex eigenvalues of a non-selfadjoint deformation of the two-center Schroedinger operator. After giving a description of the bifurcation of the classical system for positive energies, we construct the resolvent kernel of the operators and we prove that they can be extended analytically to the second Riemann sheet. The resonances are then defined and studied with numerical methods and perturbation theory.

  8. Resonances in the two centers Coulomb system

    International Nuclear Information System (INIS)

    Seri, Marcello

    2012-01-01

    In this work we investigate the existence of resonances for two-centers Coulomb systems with arbitrary charges in two and three dimensions, defining them in terms of generalized complex eigenvalues of a non-selfadjoint deformation of the two-center Schroedinger operator. After giving a description of the bifurcation of the classical system for positive energies, we construct the resolvent kernel of the operators and we prove that they can be extended analytically to the second Riemann sheet. The resonances are then defined and studied with numerical methods and perturbation theory.

  9. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declaratory software for the modelling of complex systems

  10. The Regional Test Center Data Transfer System

    Energy Technology Data Exchange (ETDEWEB)

    Riley, Daniel M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Dept.; Stein, Joshua S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Dept.

    2016-09-01

    The Regional Test Centers are a group of several sites around the US for testing photovoltaic systems and components related to photovoltaic systems. The RTCs are managed by Sandia National Laboratories. The data collected by the RTCs must be transmitted to Sandia for storage, analysis, and reporting. This document describes the methods that transfer the data between remote sites and Sandia as well as data movement within Sandia’s network. The methods described are in force as of September, 2016.

  11. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  12. The current state of the development of the supercomputer system in plasma science and nuclear fusion research in the case of Japan Atomic Energy Research Institute

    International Nuclear Information System (INIS)

    Azumi, Masafumi

    2004-01-01

    The progress of the large-scale scientific simulation environment in JAERI is briefly described. The expansion of fusion simulation science has played a key role in the increasing performance of supercomputers and the computer network system in JAERI. Both scalar-parallel and vector-parallel computer systems are now working at the Naka and Tokai sites, respectively, and particle and fluid simulation codes developed under the fusion simulation project, NEXT, are running on each system. The storage grid system has also been successfully developed for effective visualization analysis by remote users. Fusion research is going to enter the new phase of ITER, and the need for supercomputer systems with higher performance is increasing more than ever, along with the development of reliable simulation models. (author)

  13. NASA Langley Research Center tethered balloon systems

    Science.gov (United States)

    Owens, Thomas L.; Storey, Richard W.; Youngbluth, Otto

    1987-01-01

    The NASA Langley Research Center tethered balloon system operations are covered in this report for the period of 1979 through 1983. Meteorological data, ozone concentrations, and other data were obtained from in situ measurements. The large tethered balloon had a lifting capability of 30 kilograms to 2500 meters. The report includes descriptions of the various components of the balloon systems such as the balloons, the sensors, the electronics, and the hardware. Several photographs of the system are included as well as a list of projects including the types of data gathered.

  14. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or has a long time scale, we need a cluster computer or supercomputer. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, once used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from around 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU because of the difficult programming techniques needed to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) for the programming environment for NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for Monte Carlo simulation.

  15. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or has a long time scale, we need a cluster computer or supercomputer. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, once used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from around 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU because of the difficult programming techniques needed to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) for the programming environment for NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for Monte Carlo simulation.
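
    The CUDA kernels benchmarked in the record are not reproduced here; the sketch below only illustrates, on the CPU with NumPy, the kind of embarrassingly parallel Monte Carlo workload (here a pi estimate) that maps naturally onto a GPU.

```python
# CPU-side NumPy sketch of an embarrassingly parallel Monte Carlo workload;
# the record's CUDA implementation is not shown here.
import numpy as np

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    inside = np.count_nonzero(x * x + y * y <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi(10_000_000))   # approximately 3.1416
```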

  16. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for distributing applications depending on the problem type are discussed. The methods currently available at the University of Stuttgart Computer Center (RUS) for distributing applications are then explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN, which fits well into the line of developments at RUS, are explained. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  17. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  18. Mantle Convection on Modern Supercomputers

    Science.gov (United States)

    Weismüller, J.; Gmeiner, B.; Huber, M.; John, L.; Mohr, M.; Rüde, U.; Wohlmuth, B.; Bunge, H. P.

    2015-12-01

    Mantle convection is the cause for plate tectonics, the formation of mountains and oceans, and the main driving mechanism behind earthquakes. The convection process is modeled by a system of partial differential equations describing the conservation of mass, momentum and energy. Characteristic to mantle flow is the vast disparity of length scales from global to microscopic, turning mantle convection simulations into a challenging application for high-performance computing. As system size and technical complexity of the simulations continue to increase, design and implementation of simulation models for next generation large-scale architectures is handled successfully only in an interdisciplinary context. A new priority program, named SPPEXA, by the German Research Foundation (DFG) addresses this issue, and brings together computer scientists, mathematicians and application scientists around grand challenges in HPC. Here we report from the TERRA-NEO project, which is part of the high visibility SPPEXA program, and a joint effort of four research groups. TERRA-NEO develops algorithms for future HPC infrastructures, focusing on high computational efficiency and resilience in next generation mantle convection models. We present software that can resolve the Earth's mantle with up to 10^12 grid points and scales efficiently to massively parallel hardware with more than 50,000 processors. We use our simulations to explore the dynamic regime of mantle convection and assess the impact of small scale processes on global mantle flow.
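
    The abstract refers to conservation of mass, momentum and energy without writing the equations out. A commonly used incompressible, infinite-Prandtl-number form is given below for orientation; TERRA-NEO's exact formulation may differ, so treat this as an assumption rather than the project's model.

    \[
      \nabla\cdot\mathbf{u} = 0, \qquad
      -\nabla p + \nabla\cdot\bigl(\eta\,(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf{T}})\bigr) + \rho(T)\,\mathbf{g} = 0, \qquad
      \frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \kappa\,\nabla^{2} T + H.
    \]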

  19. Joint Logistics Systems Center Reporting of Systems Development Costs

    National Research Council Canada - National Science Library

    1998-01-01

    ...." The Joint Logistics Systems Center (JLSC) was organized in FY 1992 to accomplish Corporate Information Management goals for the depot maintenance and supply management business areas of the DoD Working Capital Funds...

  20. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and the USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128^3 and ...^3 grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation, with 1024^3 grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology, the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  1. Mastering System Center 2012 Configuration Manager

    CERN Document Server

    Rachui, Steve; Martinez, Santos; Daalmans, Peter

    2012-01-01

    Expert coverage of Microsoft's highly anticipated network software deployment tool. The latest version of System Center Configuration Manager (SCCM) is a dramatic update of its predecessor Configuration Manager 2007, and this book offers intermediate-to-advanced coverage of how the new SCCM boasts a simplified hierarchy, role-based security, a new console, flexible application deployment, and mobile management. You'll explore planning and installation, migrating from SCCM 2007, deploying software and operating systems, security, monitoring and troubleshooting, and automating and customizing SCCM.

  2. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  3. Optical multicast system for data center networks.

    Science.gov (United States)

    Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren

    2015-08-24

    We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate of splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast, its electronic counterpart, optical multicast performs with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overheads. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
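
    The abstract states that the control plane includes a resource allocation algorithm but does not specify it. The sketch below is a hypothetical greedy assignment that pairs each multicast flow with the smallest free splitter having enough output ports, falling back to the packet-switched network otherwise; the function and variable names are invented for illustration.

```python
# Hypothetical greedy splitter-to-flow assignment; the paper's actual
# resource-allocation algorithm is not given in the abstract.
def assign_splitters(flows, splitter_ports):
    """flows: list of (flow_id, n_receivers); splitter_ports: list of port counts."""
    free = sorted(range(len(splitter_ports)), key=lambda i: splitter_ports[i])
    assignment = {}
    for flow_id, n_recv in sorted(flows, key=lambda f: -f[1]):   # big flows first
        for idx in free:
            if splitter_ports[idx] >= n_recv:
                assignment[flow_id] = idx
                free.remove(idx)
                break
        else:
            assignment[flow_id] = None   # fall back to the EPS network
    return assignment

flows = [("A", 4), ("B", 16), ("C", 7)]
print(assign_splitters(flows, splitter_ports=[8, 8, 16, 32]))
# {'B': 2, 'C': 0, 'A': 1}
```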

  4. Data center equipment location and monitoring system

    DEFF Research Database (Denmark)

    2011-01-01

    A data center equipment location system includes both hardware and software to provide for location, monitoring, security and identification of servers and other equipment in equipment racks. The system provides a wired alternative to the wireless RFID tag system by using electronic ID tags connected to each piece of equipment, each electronic ID tag connected directly by wires to an equipment rack controller on the equipment rack. The equipment rack controllers then link over a local area network to a central control computer. The central control computer provides an operator interface, and runs a software application program that communicates with the equipment rack controllers. The software application program of the central control computer stores IDs of the equipment rack controllers and each of its connected electronic ID tags in a database. The software application program ...

  5. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection, such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy for achieving good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  6. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  7. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  8. Grid Integration Science, NREL Power Systems Engineering Center

    Energy Technology Data Exchange (ETDEWEB)

    Kroposki, Benjamin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-04-25

    This report highlights journal articles published in 2016 by researchers in the Power Systems Engineering Center. NREL's Power Systems Engineering Center published 47 journal and magazine articles in the past year, highlighting recent research in grid modernization.

  9. Lectures in Supercomputational Neurosciences: Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  10. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
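    For readers unfamiliar with the programming model being analyzed, the following is a minimal hybrid MPI/OpenMP skeleton in C; it is purely illustrative and is not taken from GTC or from the benchmarks used in the paper:

```c
/* Minimal hybrid MPI/OpenMP skeleton (illustrative only). Each rank keeps a
 * fixed workload (weak scaling) and spawns an OpenMP team before a single
 * MPI reduction. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 1000000;              /* work items per rank */
    double local = 0.0;

    /* Intra-node thread parallelism over this rank's share of the work. */
    #pragma omp parallel for reduction(+:local)
    for (long i = 0; i < n; ++i)
        local += 1.0 / (1.0 + (double)(rank * n + i));

    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d sum=%f\n",
               size, omp_get_max_threads(), global);

    MPI_Finalize();
    return 0;
}
```

    Each MPI process owns a fixed amount of work and an OpenMP thread team, so both the inter-node communication cost and the intra-node memory bandwidth contention captured by the authors' model are present even in this toy example.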

  11. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  12. Remote Sensing/Geographic Information Systems Center

    Data.gov (United States)

    Federal Laboratory Consortium — The RS/GIS Center, located at ERDC's Cold Regions Research and Engineering Laboratory, in Hanover, New Hampshire, is the Corps of Engineers Center of Expertise for...

  13. Center conditions and limit cycles for BiLiénard systems

    Directory of Open Access Journals (Sweden)

    Jaume Giné

    2017-03-01

    Full Text Available In this article we study the center problem for polynomial BiLiénard systems of degree n. Computing the focal values and using Gröbner bases we find the center conditions for such systems for n=6. We also establish a conjecture about the center conditions for polynomial BiLiénard systems of arbitrary degree.

  14. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  15. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine with executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
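    The paper's log schema and correlation criteria are not reproduced in the record. As a hedged sketch of the cross-correlation step (hypothetical structures and timestamps, not Titan's actual log format), one can match each failure time against the execution windows of the jobs that were running:

```c
/* Illustrative sketch of cross-correlating failure events with executing
 * jobs. The structures and timestamps are hypothetical; Titan's actual log
 * schema and the experts' classification criteria are not reproduced here. */
#include <stdio.h>
#include <time.h>

typedef struct { time_t start, end; int job_id; } Job;
typedef struct { time_t when; const char *category; } Failure;

/* For each failure, count the jobs whose execution window contains it. */
static void correlate(const Job *jobs, int njobs, const Failure *f, int nf)
{
    for (int i = 0; i < nf; ++i) {
        int affected = 0;
        for (int j = 0; j < njobs; ++j)
            if (f[i].when >= jobs[j].start && f[i].when <= jobs[j].end)
                ++affected;
        printf("failure %d (%s): %d job(s) potentially affected\n",
               i, f[i].category, affected);
    }
}

int main(void)
{
    Job jobs[]      = { {1000, 2000, 42}, {1500, 3000, 43} };
    Failure fails[] = { {1600, "GPU"}, {2500, "file system"} };
    correlate(jobs, 2, fails, 2);
    return 0;
}
```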

  16. Center for Efficiency in Sustainable Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, Martin [Youngstown State Univ., OH (United States)

    2016-01-31

    The main goal of the Center for Efficiency in Sustainable Energy Systems is to produce a methodology that evaluates a variety of energy systems. Task I. Improved Energy Efficiency for Industrial Processes: This task, completed in partnership with area manufacturers, will (1) analyze the operation of complex manufacturing facilities to provide flexibilities that allow them to improve active-mode power efficiency, lower standby-mode power consumption, and use low-cost energy resources to control energy costs in meeting their economic incentives; (2) identify devices for the efficient transformation of instantaneous or continuous power to different devices and sections of industrial plants; and (3) use these manufacturing sites to demonstrate and validate general principles of power management. Task II. Analysis of a solid oxide fuel cell operating on landfill gas: This task consists of (1) analysis of a typical landfill gas; (2) establishment of a comprehensive design of the fuel cell system (including the SOFC stack and BOP), including durability analysis; (3) development of suitable reforming methods and catalysts that are tailored to the specific SOFC system concept; and (4) SOFC stack fabrication with testing to demonstrate the salient operational characteristics of the stack, including an analysis of the overall energy conversion efficiency of the system. Task III. Demonstration of an urban wind turbine system: This task consists of (1) design and construction of two side-by-side wind turbine systems on the YSU campus, integrated through power control systems with grid power; (2) preliminary testing of aerodynamic control effectors (provided by a small business partner) to demonstrate improved power control, and evaluation of the system performance, including economic estimates of viability in an urban environment; and (3) computational analysis of the wind turbine system as an enabling activity for development of smart rotor blades that contain integrated sensor

  17. Supercomputer and cluster performance modeling and analysis efforts: 2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  18. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Física de Cantabria (IFCA) entered operation in summer 2012. Its latest-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  19. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
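    The deployment details are specific to Cray's Compute Node Linux, but the libvirt/QEMU provisioning step can be illustrated generically. The following C sketch uses a generic KVM domain definition, not the configuration from the paper, and assumes the libvirt development headers are available (link with -lvirt):

```c
/* Generic libvirt/QEMU provisioning sketch: boots one transient KVM guest on
 * the local hypervisor. The domain XML is a minimal generic example, not the
 * Cray XC configuration from the paper. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *xml =
        "<domain type='kvm'>"
        "  <name>vcluster-node0</name>"
        "  <memory unit='MiB'>4096</memory>"
        "  <vcpu>4</vcpu>"
        "  <os><type arch='x86_64'>hvm</type></os>"
        "  <devices>"
        "    <disk type='file' device='disk'>"
        "      <source file='/var/lib/libvirt/images/node0.qcow2'/>"
        "      <target dev='vda' bus='virtio'/>"
        "    </disk>"
        "  </devices>"
        "</domain>";

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn) { fprintf(stderr, "failed to connect to hypervisor\n"); return 1; }

    /* Create a transient VM directly on this compute node. */
    virDomainPtr dom = virDomainCreateXML(conn, xml, 0);
    if (!dom) {
        fprintf(stderr, "failed to start domain\n");
        virConnectClose(conn);
        return 1;
    }

    printf("started domain: %s\n", virDomainGetName(dom));
    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}
```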

  20. Distribution control centers in the Croatian power system with particular consideration of the Zagreb distribution control center

    International Nuclear Information System (INIS)

    Cupin, N.

    2000-01-01

    Discussions about the control of the Croatian power system in view of the forthcoming free electricity market have so far not included the distribution level. With this article we would like to clarify the role of distribution control centers, pointing out the importance of the Zagreb distribution control center, which controls one third of Croatian (HEP) consumption. (author)

  1. On the evolution of stellar systems with a massive center

    International Nuclear Information System (INIS)

    Gurzadyan, V.G.; Kocharyan, A.A.

    1986-01-01

    The evolution of stellar systems with a massive center is investigated within the framework of dynamical systems theory. Open dissipative systems, for which the Liouville theorem on the preservation of phase volume does not hold, are considered. Equations determining the time variation of the main physical parameters of the system have been derived and studied. The results of the investigation show that it is in principle possible to determine the evolution path of stellar systems with massive centers depending on their physical parameters

  2. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called Information Technology Based Laboratory (ITBL), have been conducted, leading to the construction of an intelligent infrastructure for atomic energy research called Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer-Aided Science (CASC) to promote Atomic Energy Research and Investigation (AERI). This article reviews the achievements in R and D of grid computing technology obtained so far. (T. Tanaka)

  3. National Finance Center Personnel/Payroll System

    Data.gov (United States)

    US Agency for International Development — The NFC system is an USDA system used for processing transactions for payroll/personnel systems. Personnel processing is done through EPIC/HCUP, which is web-based....

  4. Microsoft System Center 2012 R2 compliance management cookbook

    CERN Document Server

    Baumgarten, Andreas; Roesner, Susan

    2014-01-01

    Whether you are an IT manager, an administrator, or security professional who wants to learn how Microsoft Security Compliance Manager and Microsoft System Center can help fulfil compliance and security requirements, this is the book for you. Prior knowledge of Microsoft System Center is required.

  5. FY17 Transportation and Hydrogen Systems Center Journal Publication Highlights

    Energy Technology Data Exchange (ETDEWEB)

    2017-12-08

    NREL's Transportation and Hydrogen Systems Center published 39 journal articles in fiscal year 2017 highlighting recent research in advanced vehicle technology, alternative fuels, and hydrogen systems.

  6. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  7. Study on the climate system and mass transport by a climate model

    International Nuclear Information System (INIS)

    Numaguti, A.; Sugata, S.; Takahashi, M.; Nakajima, T.; Sumi, A.

    1997-01-01

    The Center for Global Environmental Research (CGER), an organ of the National Institute for Environmental Studies of the Environment Agency of Japan, was established in October 1990 to contribute broadly to the scientific understanding of global change, and to the elucidation of and solutions for our pressing environmental problems. CGER conducts environmental research from interdisciplinary, multiagency, and international perspectives, provides research support facilities such as a supercomputer and databases, and offers its own data from long-term monitoring of the global environment. In March 1992, CGER installed a supercomputer system (NEC SX-3, Model 14) to facilitate research on global change. The system is open to environmental researchers worldwide. Proposed research programs are evaluated by the Supercomputer Steering Committee, which consists of leading scientists in climate modeling, atmospheric chemistry, oceanic circulation, and computer science. After project approval, authorization for system usage is provided. In 1995 and 1996, several research proposals were designated as priority research and allocated larger shares of computer resources. The CGER supercomputer monograph report Vol. 3 is a report on priority research carried out on CGER's supercomputer. The report covers the description of the CCSR-NIES atmospheric general circulation model, Lagrangian general circulation based on the time scale of particle motion, and the ability of the CCSR-NIES atmospheric general circulation model in the stratosphere. The results obtained from these three studies are described in three chapters. We hope this report provides you with useful information on the global environmental research conducted on our supercomputer

  8. A Comparison of Organization-Centered and Agent-Centered Multi-Agent Systems

    DEFF Research Database (Denmark)

    Jensen, Andreas Schmidt; Villadsen, Jørgen

    2013-01-01

    Whereas most classical multi-agent systems have the agent in center, there has recently been a development towards focusing more on the organization of the system, thereby allowing the designer to focus on what the system goals are, without considering how the goals should be fulfilled. We have d...

  9. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects to use PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month, even when running it on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run in a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks, which are processed separately on different nodes as separate inputs for PALEOMIX, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total wall time thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
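    The chunking itself is simple to illustrate. The following C sketch is a toy example, not PanDA or PALEOMIX code, and the chunk size of 400,000 lines is an arbitrary assumption; it splits an input file into fixed-size line chunks that could each be submitted as an independent pipeline job, with the outputs concatenated afterwards:

```c
/* Toy split step (not PanDA or PALEOMIX code): cut an input text file into
 * fixed-size line chunks, each of which could be submitted as an independent
 * pipeline job whose output is concatenated with the others afterwards. */
#include <stdio.h>

/* Split `path` into files chunk_0000.txt, chunk_0001.txt, ... and return the
 * number of chunks written, or -1 on error. */
static int split_file(const char *path, long lines_per_chunk)
{
    FILE *in = fopen(path, "r");
    if (!in) { perror(path); return -1; }

    char line[4096], name[256];
    long count = 0;
    int chunk = 0;
    FILE *out = NULL;

    while (fgets(line, sizeof line, in)) {
        if (count % lines_per_chunk == 0) {        /* start a new chunk */
            if (out) fclose(out);
            snprintf(name, sizeof name, "chunk_%04d.txt", chunk++);
            out = fopen(name, "w");
            if (!out) { perror(name); fclose(in); return -1; }
        }
        fputs(line, out);
        ++count;
    }
    if (out) fclose(out);
    fclose(in);
    return chunk;
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <input>\n", argv[0]); return 1; }
    int n = split_file(argv[1], 400000);   /* e.g. 100,000 four-line FASTQ records */
    if (n < 0) return 1;
    printf("wrote %d chunk(s); submit one job per chunk, then merge the outputs\n", n);
    return 0;
}
```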

  10. VR system CompleXcope programming guide

    International Nuclear Information System (INIS)

    Kageyama, Akira; Sato, Tetsuya

    1998-09-01

    A CAVE virtual reality system, CompleXcope, is installed in the Theory and Computer Center of the National Institute for Fusion Science for the interactive analysis and visualization of 3-dimensional complex data from supercomputer simulations. This guide explains how to build a CompleXcope application with OpenGL and the CAVE library. (author)

  11. Training Center for Industrial Control Systems

    Directory of Open Access Journals (Sweden)

    V. D. Yezhov

    2013-01-01

    Full Text Available We consider the application of embedded microcontrollers and industrial controllers with built-in operating systems. With the development of embedded operating systems and the technology of the open IEC 61131-3 standard, product developers can write their own control programs and support staff can modernize the existing control programs.

  12. Development and Testing of the Glenn Research Center Visitor's Center Grid-Tied Photovoltaic Power System

    Science.gov (United States)

    Eichenberg, Dennis J.

    2009-01-01

    The NASA Glenn Research Center (GRC) has developed, installed, and tested a 12 kW DC grid-tied photovoltaic (PV) power system at the GRC Visitor's Center. This system utilizes a unique ballast-type roof mount for installing the photovoltaic panels on the roof of the Visitor's Center with no alterations or penetrations to the roof. The PV system has generated in excess of 15,000 kWh since operation commenced in August 2008. The PV system is providing power to the GRC grid for use by all. Operation of the GRC Visitor's Center PV system has been completely trouble free. A grid-tied PV power system is connected directly to the utility distribution grid. Facility power can be obtained from the utility system as normal. The PV system is synchronized with the utility system to provide power for the facility, and excess power is provided to the utility. The project transfers space technology to terrestrial use via nontraditional partners. GRC personnel glean valuable experience with PV power systems that is directly applicable to various space power systems, and the project provides valuable space program test data. PV power systems help to reduce harmful emissions and reduce the Nation's dependence on fossil fuels. Power generated by the PV system reduces the GRC utility demand, and the surplus power aids the community. Present global energy concerns reinforce the need for the development of alternative energy systems. Modern PV panels are readily available, reliable, efficient, and economical, with a life expectancy of at least 25 years. Modern electronics has been the enabling technology behind grid-tied power systems, making them safe, reliable, efficient, and economical, with a life expectancy of at least 25 years. Based upon the success of the GRC Visitor's Center PV system, additional PV power system expansion at GRC is under consideration. The GRC Visitor's Center grid-tied PV power system was successfully designed and developed, which served to validate the basic principles

  13. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
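    As a minimal illustration of the loop-level threading style discussed (not MPAS-Ocean source; the array shapes and the update itself are invented), the horizontal loop over grid cells can be shared among OpenMP threads while the vertical loop stays inside each thread:

```c
/* Illustrative loop-level OpenMP threading over ocean grid cells. This is not
 * MPAS-Ocean code; the array shapes and the update itself are invented. */
#include <omp.h>
#include <stdio.h>

#define NCELLS  100000   /* horizontal cells */
#define NLEVELS 60       /* vertical levels  */

static double temp[NCELLS][NLEVELS];
static double tend[NCELLS][NLEVELS];

int main(void)
{
    /* Threads share the horizontal loop; each thread sweeps the vertical
     * column of its cells so per-cell data stays in cache. */
    #pragma omp parallel for schedule(static)
    for (int c = 0; c < NCELLS; ++c)
        for (int k = 0; k < NLEVELS; ++k)
            temp[c][k] += 0.5 * tend[c][k];

    printf("updated %d cells using up to %d threads\n",
           NCELLS, omp_get_max_threads());
    return 0;
}
```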

  14. Effects of Quality Improvement System for Child Care Centers

    Science.gov (United States)

    Ma, Xin; Shen, Jianping; Kavanaugh, Amy; Lu, Xuejin; Brandi, Karen; Goodman, Jeff; Till, Lance; Watson, Grace

    2011-01-01

    Using multiple years of data collected from about 100 child care centers in Palm Beach County, Florida, the authors studied whether the Quality Improvement System (QIS) made a significant impact on quality of child care centers. Based on a pre- and postresearch design spanning a period of 13 months, QIS appeared to be effective in improving…

  15. Microsoft System Center Data Protection Manager 2012

    CERN Document Server

    Buchannan, Steve; Gomaa, Islam

    2013-01-01

    This book is a Packt tutorial, walking the administrator through the steps needed to create real solutions to the problems and tasks faced when ensuring that their data is protected. This book is for network administrators, system administrators, backup administrators, or IT consultants who are looking to expand their knowledge on how to utilize DPM to protect their organization's data.

  16. Center for Information Systems Research Research Briefings 2002

    OpenAIRE

    ROSS, JEANNE W.

    2003-01-01

    This paper is comprised of research briefings from the MIT Sloan School of Management's Center for Information Systems Research (CISR). CISR's mission is to perform practical empirical research on how firms generate business value from IT.

  17. Microsoft System Center 2012 R2 Operations Manager cookbook

    CERN Document Server

    Beaumont (MVP), Steve; Odika, Chiyo; Ryan, Robert

    2015-01-01

    If you are tasked with monitoring the IT infrastructure within your organization, this book demonstrates how System Center 2012 R2 Operations Manager offers a radical and exciting solution to modern administration.

  18. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using the metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
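    For reference, Amdahl's law, which frames the abstract's argument, bounds the speedup on N processors when a fraction alpha of the work is effectively sequential:

```latex
% Amdahl's law: speedup on N processors when a fraction \alpha of the work
% is effectively sequential and the remaining (1-\alpha) parallelizes perfectly.
\[
  S(N) = \frac{1}{\alpha + \frac{1-\alpha}{N}},
  \qquad
  \lim_{N \to \infty} S(N) = \frac{1}{\alpha}.
\]
```

    For example, with alpha = 0.01 no machine, however many processors it has, can exceed a 100-fold speedup, which is why minimizing the effective sequential fraction dominates the discussion for machines with millions of cores such as the Sunway TaihuLight.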

  19. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  20. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  1. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  2. Marshall Space Flight Center Ground Systems Development and Integration

    Science.gov (United States)

    Wade, Gina

    2016-01-01

    Ground Systems Development and Integration performs a variety of tasks in support of the Mission Operations Laboratory (MOL) and other Center and Agency projects. These tasks include various systems engineering processes such as performing system requirements development, system architecture design, integration, verification and validation, software development, and sustaining engineering of mission operations systems that has evolved the Huntsville Operations Support Center (HOSC) into a leader in remote operations for current and future NASA space projects. The group is also responsible for developing and managing telemetry and command configuration and calibration databases. Personnel are responsible for maintaining and enhancing their disciplinary skills in the areas of project management, software engineering, software development, software process improvement, telecommunications, networking, and systems management. Domain expertise in the ground systems area is also maintained and includes detailed proficiency in the areas of real-time telemetry systems, command systems, voice, video, data networks, and mission planning systems.

  3. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  4. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  5. A Dynamic and Interactive Monitoring System of Data Center Resources

    Directory of Open Access Journals (Sweden)

    Yu Ling-Fei

    2016-01-01

    Full Text Available To maximize the utilization and effectiveness of resources, it is necessary to have a well-suited management system for modern data centers. Traditional approaches to resource provisioning and service requests have proven to be ill suited for virtualization and cloud computing. The manual handoffs between technology teams were also highly inefficient and poorly documented. In this paper, a dynamic and interactive monitoring system for data center resources, ResourceView, is presented. By consolidating all data center management functionality into a single interface, ResourceView shares a common view of the timeline metric status, while providing comprehensive, centralized monitoring of data center physical and virtual IT assets including power, cooling, physical space and VMs, so as to improve availability and efficiency. In addition, servers and VMs can be monitored from several viewpoints such as clusters, racks and projects, which is very convenient for users.

  6. NASA Center for Climate Simulation (NCCS) Presentation

    Science.gov (United States)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  7. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  8. BASIN-CENTERED GAS SYSTEMS OF THE U.S.

    Energy Technology Data Exchange (ETDEWEB)

    Marin A. Popov; Vito F. Nuccio; Thaddeus S. Dyman; Timothy A. Gognat; Ronald C. Johnson; James W. Schmoker; Michael S. Wilson; Charles Bartberger

    2000-11-01

    The USGS is re-evaluating the resource potential of basin-centered gas accumulations in the U.S. because of changing perceptions of the geology of these accumulations, and the availability of new data since the USGS 1995 National Assessment of United States oil and gas resources (Gautier et al., 1996). To attain these objectives, this project used knowledge of basin-centered gas systems and procedures such as stratigraphic analysis, organic geochemistry, modeling of basin thermal dynamics, reservoir characterization, and pressure analysis. This project proceeded in two phases which had the following objectives: Phase I (4/1998 through 5/1999): Identify and describe the geologic and geographic distribution of potential basin-centered gas systems, and Phase II (6/1999 through 11/2000): For selected systems, estimate the location of those basin-centered gas resources that are likely to be produced over the next 30 years. In Phase I, we characterize thirty-three (33) potential basin-centered gas systems (or accumulations) based on information published in the literature or acquired from internal computerized well and reservoir data files. These newly defined potential accumulations vary from low to high risk and may or may not survive the rigorous geologic scrutiny leading towards full assessment by the USGS. For logistical reasons, not all basins received the level of detail desired or required.

  9. Block Fusion Systems and the Center of the Group Ring

    DEFF Research Database (Denmark)

    Jacobsen, Martin Wedel

    This thesis develops some aspects of the theory of block fusion systems. Chapter 1 contains a brief introduction to the group algebra and some simple results about algebras over a field of positive characteristic. In chapter 2 we define the concept of a fusion system and the fundamental property of saturation. We also define block fusion systems and prove that they are saturated. Chapter 3 develops some tools for relating block fusion systems to the structure of the center of the group algebra. In particular, it is proven that a block has trivial defect group if and only if the center of the block algebra is one-dimensional. Chapter 4 consists of a proof that block fusion systems of symmetric groups are always group fusion systems of symmetric groups, and an analogous result holds for the alternating groups.

  10. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  11. DOE Heat Pump Centered Integrated Community Energy Systems Project

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J. M.

    1979-01-01

    The Heat Pump Centered Integrated Community Energy Systems (HP-ICES) Project is a multiphase undertaking seeking to demonstrate one or more operational HP-ICES by the end of 1983. The seven phases include System Development, Demonstration Design, Design Completion, HP-ICES Construction, Operation and Data Acquisition, HP-ICES Evaluation, and Upgraded Continuation. This project is sponsored by the Community Systems Branch, Office of Buildings and Community Systems, Assistant Secretary for Conservation and Solar Applications, U.S. Department of Energy (DOE). It is part of the Community Systems Program and is managed by the Energy and Environmental Systems Division of Argonne National Laboratory.

  12. Building the Teraflops/Petabytes Production Computing Center

    International Nuclear Information System (INIS)

    Kramer, William T.C.; Lucas, Don; Simon, Horst D.

    1999-01-01

    In just one decade, the 1990s, supercomputer centers have undergone two fundamental transitions which require rethinking their operation and their role in high performance computing. The first transition in the early to mid-1990s resulted from a technology change in high performance computing architecture. Highly parallel distributed memory machines built from commodity parts increased the operational complexity of the supercomputer center, and required the introduction of intellectual services as equally important components of the center. The second transition is happening in the late 1990s as centers are introducing loosely coupled clusters of SMPs as their premier high performance computing platforms, while dealing with an ever-increasing volume of data. In addition, increasing network bandwidth enables new modes of use of a supercomputer center, in particular, computational grid applications. In this paper we describe what steps NERSC is taking to address these issues and stay at the leading edge of supercomputing centers.

  13. University of Rhode Island Regional Earth Systems Center

    Energy Technology Data Exchange (ETDEWEB)

    Rothstein, Lewis [Univ. of Rhode Island, Kingston, RI (United States); Cornillon, P. [Univ. of Rhode Island, Kingston, RI (United States)

    2017-02-06

    The primary objective of this program was to establish the URI Regional Earth System Center (“Center”) that would enhance overall societal wellbeing (health, financial, environmental) by utilizing the best scientific information and technology to achieve optimal policy decisions with maximum stakeholder commitment for energy development, coastal environmental management, water resources protection and human health protection, while accelerating regional economic growth. The Center was to serve to integrate existing URI institutional strengths in energy, coastal environmental management, water resources, and human wellbeing. This integrated research, educational and public/private sector outreach Center was to focus on local, state and regional resources. The centerpiece activity of the Center was in the development and implementation of integrated assessment models (IAMs) that both ‘downscaled’ global observations and interpolated/extrapolated regional observations for analyzing the complexity of interactions among humans and the natural climate system to further our understanding and, ultimately, to predict the future state of our regional earth system. The Center was to begin by first ‘downscaling’ existing global earth systems management tools for studying the causes of local, state and regional climate change and potential social and environmental consequences, with a focus on the regional resources identified above. The Center would ultimately need to address the full feedbacks inherent in the nonlinear earth systems by quantifying the “upscaled” impacts of those regional changes on the global earth system. Through an interacting suite of computer simulations that are informed by observations from the nation’s evolving climate observatories, the Center activities integrates climate science, technology, economics, and social policy into forecasts that will inform solutions to pressing issues in regional climate change science,

  14. Annual report of R and D activities in Center for Promotion of Computational Science and Engineering and Center for Computational Science and e-Systems from April 1, 2005 to March 31, 2006

    International Nuclear Information System (INIS)

    2007-03-01

    This report provides an overview of research and development activities in the Center for Computational Science and Engineering (CCSE), JAERI, in the former half of the fiscal year 2005 (April 1, 2005 - Sep. 30, 2005) and those in the Center for Computational Science and e-Systems (CCSE), JAEA, in the latter half of the fiscal year 2005 (Oct. 1, 2005 - March 31, 2006). In the former half term, the activities were performed by 5 research groups: Research Group for Computational Science in Atomic Energy, Research Group for Computational Material Science in Atomic Energy, R and D Group for Computer Science, R and D Group for Numerical Experiments, and Quantum Bioinformatics Group in CCSE. At the beginning of the latter half term, these 5 groups were integrated into two offices, Simulation Technology Research and Development Office and Computer Science Research and Development Office, at the moment of the unification of JNC (Japan Nuclear Cycle Development Institute) and JAERI (Japan Atomic Energy Research Institute), and the latter-half activities were operated by these two offices. A big project, the ITBL (Information Technology Based Laboratory) project, together with fundamental computational research for atomic energy plants, was performed mainly by two groups, the R and D Group for Computer Science and the Research Group for Computational Science in Atomic Energy, in the former half term, and by their integrated office, Computer Science Research and Development Office, in the latter half term. The main result was the verification, using structural analysis of a real plant executable in the Grid environment, which received an Honorable Mention in the Analytics Challenge at the Supercomputing (SC05) conference. The materials science and bioinformatics work in the atomic energy research field was carried out by three groups: Research Group for Computational Material Science in Atomic Energy, R and D Group for Computer Science, R and D Group for Numerical Experiments, and Quantum Bioinformatics

  15. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  16. Data Mining Supercomputing with SAS JMP® Genomics

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2011-02-01

    Full Text Available JMP® Genomics is statistical discovery software that can uncover meaningful patterns in high-throughput genomics and proteomics data. JMP® Genomics is designed for biologists, biostatisticians, statistical geneticists, and those engaged in analyzing the vast stores of data that are common in genomic research (SAS, 2009). Data mining was performed using JMP® Genomics on two collections of microarray databases available from the National Center for Biotechnology Information (NCBI) for lung cancer and breast cancer. The Gene Expression Omnibus (GEO) of NCBI serves as a public repository for a wide range of high-throughput experimental data, including the two collections of lung cancer and breast cancer data that were used for this research. The results of applying data mining using the JMP® Genomics software are shown in this paper with numerous screen shots.

  17. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
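    The record does not spell out the specific algorithm, so only the general setting is restated here: the equation being integrated and the unitarity property that a stable propagation scheme must preserve at every time step are

```latex
% Time-dependent Schroedinger equation and the unitarity property that a
% stable, norm-preserving propagation scheme must satisfy at every step.
\[
  i\hbar \frac{\partial \psi(\mathbf{r},t)}{\partial t} = \hat{H}\,\psi(\mathbf{r},t),
  \qquad
  \psi(t+\Delta t) = \hat{U}(\Delta t)\,\psi(t),
  \quad
  \hat{U}^{\dagger}\hat{U} = I ,
\]
\[
  \text{so that } \int \lvert\psi(\mathbf{r},t)\rvert^{2}\,\mathrm{d}^{3}r
  \text{ is conserved from one time step to the next.}
\]
```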

  18. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software for the automatic investigation and solution of computational mathematics tasks with approximate data of different structures was designed. Applied software providing mathematical modeling of problems in construction, welding, and filtration processes was implemented.

  19. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
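
    The efficiency argument above rests on locality and uniformity: every cell updates from its immediate neighborhood with the same rule. The sketch below shows that structure with the simplest possible cellular automaton (Conway's Life); it is a generic illustration, not a lattice-gas fluid model.

```python
import numpy as np

# Generic cellular-automaton update (Conway's Life) illustrating the locality
# and uniformity argument: every cell applies the same rule to its immediate
# neighborhood, which is what makes dedicated CA hardware so efficient.  This
# is not a lattice-gas fluid model, just the simplest possible CA example.
rng = np.random.default_rng(4)
grid = rng.integers(0, 2, size=(64, 64))

def step(g):
    # Sum of the 8 neighbors via periodic shifts -- a purely local, uniform rule.
    nbrs = sum(np.roll(np.roll(g, i, axis=0), j, axis=1)
               for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((nbrs == 3) | ((g == 1) & (nbrs == 2))).astype(g.dtype)

for _ in range(100):
    grid = step(grid)
print("live cells after 100 steps:", int(grid.sum()))
```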

  20. World Key Information Service System Designed For EPCOT Center

    Science.gov (United States)

    Kelsey, J. A.

    1984-03-01

    An advanced electronic information retrieval system, designed by Bell Laboratories and Western Electric, utilizing the latest Information Age technologies and a fiber optic transmission system, is featured at the Walt Disney World Resort's newest theme park - The Experimental Prototype Community of Tomorrow (EPCOT Center). The project is an interactive audio, video and text information system that is deployed at key locations within the park. The touch-sensitive terminals, using the ARIEL (Automatic Retrieval of Information Electronically) system, are interconnected by a Western Electric designed and manufactured lightwave transmission system.

  1. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
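
    The sub-jobs idea, stripped of the Swift and Cobalt specifics, is simply to run many small, independent application invocations inside one large allocation rather than submitting each to the scheduler. A generic sketch of that pattern is shown below; the executable name and input files are hypothetical, and this is not the paper's implementation.

```python
from concurrent.futures import ProcessPoolExecutor
import subprocess

# Illustrative many-task pattern: execute many small, independent application
# runs inside a single large allocation instead of submitting each one to the
# scheduler separately.  Generic sketch only; not the Swift/Cobalt sub-jobs
# implementation described in the record above.
TASKS = [["./my_app", "--input", f"case_{i}.dat"] for i in range(256)]  # hypothetical app and inputs

def run_task(cmd):
    # Each task is an ordinary executable run on a slice of the allocation.
    return subprocess.run(cmd, capture_output=True).returncode

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=32) as pool:   # 32 concurrent "sub-jobs"
        codes = list(pool.map(run_task, TASKS))
    print(sum(c == 0 for c in codes), "of", len(TASKS), "tasks succeeded")
```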

  2. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. ''Power users'' were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  3. A supercomputer for parallel data analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    The project of a powerful multiprocessor system is proposed. The main purpose of the project is to develop a low-cost computer system with a processing rate of a few tens of millions of operations per second. The system solves many problems of data analysis from high-energy physics spectrometers. It includes about 70 powerful MOTOROLA-68020 based slave microprocessor boards linked through VME crates to a host VAX microcomputer. Each single microprocessor board performs the same algorithm requiring large computing time. The host computer distributes data over the microprocessor boards, then collects and combines the obtained results. The architecture of the system easily allows one to use it in real-time mode.

  4. MOD control center automated information systems security evolution

    Science.gov (United States)

    Owen, Rich

    1991-01-01

    The role of the technology infusion process in future Control Center Automated Information Systems (AIS) is highlighted. The following subject areas are presented in the form of the viewgraphs: goals, background, threat, MOD's AISS program, TQM, SDLC integration, payback, future challenges, and bottom line.

  5. Performance evaluation of a center pivot variable rate irrigation system

    Science.gov (United States)

    Variable Rate Irrigation (VRI) for center pivots offers potential to match specific application rates to non-uniform soil conditions along the length of the lateral. The benefit of such systems is influenced by the areal extent of these variations and the smallest scale to which the irrigation syste...

  6. Microsoft System Center Data Protection Manager 2012 R2 cookbook

    CERN Document Server

    Hedblom, Robert

    2015-01-01

    If you are a DPM administrator, this book will help you verify your knowledge and provide you with everything you need to know about the 2012 R2 release. No prior knowledge about System Center DPM is required, however some experience of running backups will come in handy.

  7. System security in the space flight operations center

    Science.gov (United States)

    Wagner, David A.

    1988-01-01

    The Space Flight Operations Center is a networked system of workstation-class computers that will provide ground support for NASA's next generation of deep-space missions. The author recounts the development of the SFOC system security policy and discusses the various management and technology issues involved. Particular attention is given to risk assessment, security plan development, security implications of design requirements, automatic safeguards, and procedural safeguards.

  8. Lewis Research Center space station electric power system test facilities

    Science.gov (United States)

    Birchenough, Arthur G.; Martin, Donald F.

    1988-01-01

    NASA Lewis Research Center facilities were developed to support testing of the Space Station Electric Power System. The capabilities and plans for these facilities are described. The three facilities which are required in the Phase C/D testing, the Power Systems Facility, the Space Power Facility, and the EPS Simulation Lab, are described in detail. The responsibilities of NASA Lewis and outside groups in conducting tests are also discussed.

  9. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  10. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Running large physical simulations requires powerful computers, which effectively splits the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.
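
    To make the idea of sampling a diagrammatic (series) expansion concrete, the toy sketch below estimates a simple power series by drawing the expansion order from a proposal distribution and importance-weighting each sampled term. It is a generic illustration with made-up parameters, not the thesis code.

```python
import math
import numpy as np

# Toy illustration of sampling a series expansion term by term, in the spirit
# of diagrammatic Monte Carlo: estimate S = sum_n a_n by drawing the order n
# from a proposal p(n) and averaging a_n / p(n).  Generic sketch only; not the
# QPACE 2 simulation code of the thesis above.
rng = np.random.default_rng(0)
x = 1.3                                              # toy expansion parameter

def term(n):                                         # "diagram weight" of order n
    return x ** n / math.factorial(n)

q = 0.5                                              # geometric proposal p(n) = (1 - q) * q**n
orders = rng.geometric(1.0 - q, size=200_000) - 1    # NumPy's geometric starts at 1
estimate = np.mean([term(int(n)) / ((1.0 - q) * q ** int(n)) for n in orders])

print("Monte Carlo estimate:", estimate)
print("exact value exp(x):  ", np.exp(x))
```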

  11. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne platforms, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies, like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for large-scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
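
    The core of such a k-means-based clustering is data-parallel: each rank assigns its shard of observations to the nearest center, and the per-cluster sums are reduced globally. The mpi4py sketch below shows only that skeleton; the data, cluster count, and iteration count are made up, and the actual MSTC code adds CUDA/OpenACC acceleration, convergence tests, and much more.

```python
import numpy as np
from mpi4py import MPI

# Minimal data-parallel k-means skeleton in the spirit of the method above:
# every rank owns a shard of the observations, and per-cluster sums/counts are
# combined with Allreduce.  Illustrative only -- the real MSTC code adds
# CUDA/OpenACC acceleration, convergence tests, and empty-cluster handling.
comm = MPI.COMM_WORLD
rng = np.random.default_rng(comm.rank)
local = rng.normal(size=(50_000, 4))                 # this rank's shard of the data
k = 8
centers = np.tile(np.linspace(-2.0, 2.0, k)[:, None], (1, 4))

for _ in range(20):
    # Assign each local observation to its nearest center.
    dist = ((local[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    label = dist.argmin(axis=1)

    # Accumulate per-cluster sums and counts, then reduce across all ranks.
    sums = np.zeros((k, 4))
    counts = np.zeros(k)
    for c in range(k):
        members = local[label == c]
        sums[c] = members.sum(axis=0)
        counts[c] = members.shape[0]
    comm.Allreduce(MPI.IN_PLACE, sums, op=MPI.SUM)
    comm.Allreduce(MPI.IN_PLACE, counts, op=MPI.SUM)
    centers = sums / np.maximum(counts, 1.0)[:, None]

if comm.rank == 0:
    print("final cluster centers:\n", centers)
```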

  12. Paramagnetic centers in nanocrystalline TiC/C system

    International Nuclear Information System (INIS)

    Guskos, N.; Bodziony, T.; Maryniak, M.; Typek, J.; Biedunkiewicz, A.

    2008-01-01

    Electron paramagnetic resonance is applied to study the defect centers in nanocrystalline titanium carbide dispersed in a carbon matrix (TiCx/C) synthesized by the non-hydrolytic sol-gel process. The presence of Ti3+ paramagnetic centers is identified below 120 K along with a minor contribution from localized defect spins coupled with the conduction electron system in the carbon matrix. The temperature dependence of the resonance intensity of the latter signal indicates weak antiferromagnetic interactions. The presence of paramagnetic centers connected with trivalent titanium is suggested to be the result of chemical disorder, which can be further related to the observed anomalous behavior of conductivity, hardness, and corrosion resistance of nanocrystalline TiCx/C.

  13. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
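
    As a toy illustration of the transform-then-quantize idea, the sketch below performs one level of a 1-D Haar subband split and vector-quantizes the detail subband with a small k-means codebook. The block size and codebook size are arbitrary choices for illustration, not the application-specific quantizers designed in this work.

```python
import numpy as np

# Toy sketch of the wavelet + vector-quantization idea: one level of a 1-D Haar
# transform splits the signal into coarse and detail subbands, and the detail
# coefficients are vector-quantized with a tiny k-means codebook.  Parameters
# (block size 4, codebook size 16) are illustrative only.
rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(size=4096))            # smooth-ish test signal

# One-level Haar analysis.
even, odd = signal[0::2], signal[1::2]
coarse = (even + odd) / np.sqrt(2)
detail = (even - odd) / np.sqrt(2)

# Vector-quantize the detail subband: group into 4-vectors, train a tiny codebook.
vecs = detail.reshape(-1, 4)
codebook = vecs[rng.choice(len(vecs), 16, replace=False)]
for _ in range(25):
    idx = ((vecs[:, None, :] - codebook[None]) ** 2).sum(axis=2).argmin(axis=1)
    for c in range(16):
        if np.any(idx == c):
            codebook[c] = vecs[idx == c].mean(axis=0)

detail_hat = codebook[idx].ravel()                   # quantized detail coefficients
recon = np.empty_like(signal)                        # Haar synthesis
recon[0::2] = (coarse + detail_hat) / np.sqrt(2)
recon[1::2] = (coarse - detail_hat) / np.sqrt(2)
print("reconstruction RMS error:", np.sqrt(np.mean((recon - signal) ** 2)))
```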

  14. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  15. Trends in supercomputers and computational physics

    International Nuclear Information System (INIS)

    Bloch, T.

    1985-01-01

    Today, scientists using numerical models explore the basic mechanisms of semiconductors, apply global circulation models to climatic and oceanographic problems, probe into the behaviour of galaxies and try to verify basic theories of matter, such as Quantum Chromo Dynamics by simulating the constitution of elementary particles. Chemists, crystallographers and molecular dynamics researchers develop models for chemical reactions, formation of crystals and try to deduce the chemical properties of molecules as a function of the shapes of their states. Chaotic systems are studied extensively in turbulence (combustion included) and the design of the next generation of controlled fusion devices relies heavily on computational physics. (orig./HSI)

  16. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D'Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
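
    Spectral reordering, one of the methods listed above, can be sketched in a few lines: build the Laplacian of the measured communication graph and order ranks by its Fiedler vector so that heavily communicating ranks land near each other in the new linear ordering. The traffic matrix below is random and purely illustrative; this is not the mpiAproxy tooling itself.

```python
import numpy as np

# Sketch of spectral reordering for communication-aware task mapping: order MPI
# ranks by the Fiedler vector of the (measured) communication graph so that
# heavily communicating ranks end up close together.  The traffic matrix is
# random and purely illustrative.
rng = np.random.default_rng(2)
n_ranks = 16
traffic = rng.random((n_ranks, n_ranks))
traffic = (traffic + traffic.T) / 2.0
np.fill_diagonal(traffic, 0.0)

laplacian = np.diag(traffic.sum(axis=1)) - traffic
eigvals, eigvecs = np.linalg.eigh(laplacian)       # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                            # vector of the 2nd-smallest eigenvalue

new_order = np.argsort(fiedler)                    # suggested placement order of ranks
print("suggested rank ordering:", new_order)
```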

  17. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
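
    In simplified form, the two indicators are easy to state: HHI is the sum of squared volume shares across venues, and VPIN averages the absolute order-flow imbalance over equal-volume buckets. The numbers in the sketch below are invented solely to show the arithmetic; the paper's actual estimators are more involved.

```python
import numpy as np

# Worked example of the two indicators, in simplified form.
# HHI: sum of squared market shares across trading venues.
# VPIN (simplified): average order-flow imbalance over equal-volume buckets.
# All volumes and buy/sell classifications below are made up for illustration.
venue_volume = np.array([420.0, 310.0, 150.0, 80.0, 40.0])
shares = venue_volume / venue_volume.sum()
hhi = np.sum(shares ** 2)                       # 1/5 = perfectly even, 1 = single venue

buy = np.array([60.0, 80.0, 30.0, 95.0, 55.0])  # buy volume per equal-volume bucket
sell = np.array([40.0, 20.0, 70.0, 5.0, 45.0])  # sell volume per bucket (bucket size 100)
vpin = np.mean(np.abs(buy - sell) / (buy + sell))

print(f"HHI  = {hhi:.3f}")
print(f"VPIN = {vpin:.3f}")                     # values near 1 signal one-sided, toxic flow
```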

  18. System Center 2012 R2 Virtual Machine Manager cookbook

    CERN Document Server

    Cardoso, Edvaldo Alessandro

    2014-01-01

    This book is a step-by-step guide packed with recipes that cover architecture design and planning. The book is also full of deployment tips, techniques, and solutions. If you are a solutions architect, technical consultant, administrator, or any other virtualization enthusiast who needs to use Microsoft System Center Virtual Machine Manager in a real-world environment, then this is the book for you. We assume that you have previous experience with Windows 2012 R2 and Hyper-V.

  19. The Center-TRACON Automation System: Simulation and field testing

    Science.gov (United States)

    Denery, Dallas G.; Erzberger, Heinz

    1995-01-01

    A new concept for air traffic management in the terminal area, implemented as the Center-TRACON Automation System, has been under development at NASA Ames in a cooperative program with the FAA since 1991. The development has been strongly influenced by concurrent simulation and field site evaluations. The role of simulation and field activities in the development process will be discussed. Results of recent simulation and field tests will be presented.

  20. Space and Missile Systems Center Standard: Space Flight Pressurized Systems

    Science.gov (United States)

    2015-02-28

    Only fragments of this standard were captured: "... as an adhesive, as dictated by the application." [4.3.3.1-2] "The effects of fabrication process, temperature/humidity, load spectra, and other ..." [5.2.1-1] "System connections for incompatible propellants shall be keyed, sized, or located so that it is physically impossible to interconnect them."

  1. New computer system for the Japan Tier-2 center

    CERN Multimedia

    Hiroyuki Matsunaga

    2007-01-01

    The ICEPP (International Center for Elementary Particle Physics) of the University of Tokyo has been operating an LCG Tier-2 center dedicated to the ATLAS experiment, and is going to switch over to the new production system which has been recently installed. The system will be of great help to the exciting physics analyses for coming years. The new computer system includes brand-new blade servers, RAID disks, a tape library system and Ethernet switches. The blade server is DELL PowerEdge 1955 which contains two Intel dual-core Xeon (WoodCrest) CPUs running at 3GHz, and a total of 650 servers will be used as compute nodes. Each of the RAID disks is configured to be RAID-6 with 16 Serial ATA HDDs. The equipment as well as the cooling system is placed in a new large computer room, and both are hooked up to UPS (uninterruptible power supply) units for stable operation. As a whole, the system has been built with redundant configuration in a cost-effective way. The next major upgrade will take place in thre...

  2. Engineering system dynamics a unified graph-centered approach

    CERN Document Server

    Brown, Forbes T

    2006-01-01

    For today's students, learning to model the dynamics of complex systems is increasingly important across nearly all engineering disciplines. First published in 2001, Forbes T. Brown's Engineering System Dynamics: A Unified Graph-Centered Approach introduced students to a unique and highly successful approach to modeling system dynamics using bond graphs. Updated with nearly one-third new material, this second edition expands this approach to an even broader range of topics. What's New in the Second Edition? In addition to new material, this edition was restructured to build students' competence in traditional linear mathematical methods before they have gone too far into the modeling that still plays a pivotal role. New topics include magnetic circuits and motors including simulation with magnetic hysteresis; extensive new material on the modeling, analysis, and simulation of distributed-parameter systems; kinetic energy in thermodynamic systems; and Lagrangian and Hamiltonian methods. MATLAB(R) figures promi...

  3. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  4. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860 microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
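
    The record does not spell out the algorithm, so the sketch below shows one classic style of cheap, very-long-period generator, an additive lagged-Fibonacci generator with lags (24, 55). It is a generic stand-in for illustration, not the i860-optimized generator of the paper.

```python
import numpy as np

# Classic additive lagged-Fibonacci generator, x_n = (x_{n-24} + x_{n-55}) mod 2^32.
# Shown only as an example of a cheap, long-period generator; it is a generic
# stand-in, not the generator described in the record above.
class LaggedFibonacci:
    def __init__(self, seed=12345, short_lag=24, long_lag=55):
        seeder = np.random.default_rng(seed)
        self.state = [int(v) for v in seeder.integers(0, 2**32, size=long_lag)]
        self.short_lag, self.long_lag = short_lag, long_lag

    def next_uint32(self):
        new = (self.state[-self.short_lag] + self.state[-self.long_lag]) % (1 << 32)
        self.state.append(new)
        self.state.pop(0)
        return new

    def next_real(self):
        return self.next_uint32() / float(1 << 32)   # uniform in [0, 1)

gen = LaggedFibonacci()
print([round(gen.next_real(), 4) for _ in range(5)])
```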

  5. The Center for Space Telemetering and Telecommunications Systems

    Science.gov (United States)

    Horan, S.; DeLeon, P.; Borah, D.; Lyman, R.

    2003-01-01

    This report comprises the final technical report for the research grant 'Center for Space Telemetering and Telecommunications Systems' sponsored by the National Aeronautics and Space Administration's Goddard Space Flight Center. The grant activities are broken down into the following technology areas: (1) Space Protocol Testing; (2) Autonomous Reconfiguration of Ground Station Receivers; (3) Satellite Cluster Communications; and (4) Bandwidth Efficient Modulation. The grant activity produced a number of technical reports and papers that were communicated to NASA as they were generated. This final report contains the final summary papers or final technical report conclusions for each of the project areas. Additionally, the grant supported students who made progress towards their degrees while working on the research.

  6. Systems analysis support to the waste management technology center

    International Nuclear Information System (INIS)

    Rivera, A.L.; Osborne-Lee, I.W.; DePaoli, S.M.

    1988-01-01

    This paper describes a systems analysis concept being developed in support of waste management planning and analysis activities for Martin Marietta Energy Systems, Inc. (Energy Systems), sites. This integrated systems model serves as a focus for the accumulation and documentation of technical and economic information from current waste management practices, improved operations projects, remedial actions, and new system development activities. The approach is generic and could be applied to a larger group of sites. This integrated model is a source of technical support to waste management groups in the Energy Systems complex for integrated waste management planning and related technology assessment activities. This problem-solving methodology for low-level waste (LLW) management is being developed through the Waste Management Technology Center (WMTC) for the Low-Level Waste Disposal, Development, and Demonstration (LLWDDD) Program. In support of long-range planning activities, this capability will include the development of management support tools such as specialized systems models, data bases, and information systems. These management support tools will provide continuing support in the identification and definition of technical and economic uncertainties to be addressed by technology demonstration programs. Technical planning activities and current efforts in the development of this system analysis capability for the LLWDDD Program are presented in this paper

  7. System Engineering Processes at Kennedy Space Center for Development of SLS and Orion Launch Systems

    Science.gov (United States)

    Schafer, Eric; Stambolian, Damon; Henderson, Gena

    2013-01-01

    There are over 40 subsystems being developed for the future SLS and Orion Launch Systems at Kennedy Space Center. These subsystems are developed at the Kennedy Space Center Engineering Directorate. The Engineering Directorate at Kennedy Space Center follows a comprehensive design process which requires several different product deliverables during each phase of each of the subsystems. This presentation describes the process with examples of where it has been applied.

  8. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms

  9. New data acquisition system for the lujan center

    International Nuclear Information System (INIS)

    Nelson, R.; Bowling, P.S.; Cooper, G.M.; Kozlowski, T.

    2001-01-01

    To meet the data acquisition requirements for six new neutron scattering instruments at the Los Alamos Neutron Science Center (LANSCE), we are building systems using Web tools, commercial hardware and software, software developed by the controls community, and custom hardware developed by the neutron scattering community. To service these new instruments as well as seven existing instruments, our data acquisition system needs common software and hardware core capabilities and the means to flexibly integrate them while differentiating the needs of the diverse instrument suite. Neutron events are captured and processed in VXI modules while controls for sample environment and beam line setup are processed with PCs. Typically users access the system through web browsers. (author)

  10. Neuromorphic cognitive systems a learning and memory centered approach

    CERN Document Server

    Yu, Qiang; Hu, Jun; Tan Chen, Kay

    2017-01-01

    This book presents neuromorphic cognitive systems from a learning and memory-centered perspective. It illustrates how to build a system network of neurons to perform spike-based information processing, computing, and high-level cognitive tasks. It is beneficial to a wide spectrum of readers, including undergraduate and postgraduate students and researchers who are interested in neuromorphic computing and neuromorphic engineering, as well as engineers and professionals in industry who are involved in the design and applications of neuromorphic cognitive systems, neuromorphic sensors and processors, and cognitive robotics. The book formulates a systematic framework, from the basic mathematical and computational methods in spike-based neural encoding, learning in both single and multi-layered networks, to a near cognitive level composed of memory and cognition. Since the mechanisms by which spiking neurons integrate to form cognitive functions as in the brain are little understood, studies of neuromo...

  11. CCSDS telemetry systems experience at the Goddard Space Flight Center

    Science.gov (United States)

    Carper, Richard D.; Stallings, William H., III

    1990-01-01

    NASA Goddard Space Flight Center (GSFC) designs, builds, manages, and operates science and applications spacecraft in near-earth orbit, and provides data capture, data processing, and flight control services for these spacecraft. In addition, GSFC has the responsibility of providing space-ground and ground-ground communications for near-earth orbiting spacecraft, including those of the manned spaceflight programs. The goal of reducing both the developmental and operating costs of the end-to-end information system has led the GSFC to support and participate in the standardization activities of the Consultative Committee for Space Data Systems (CCSDS), including those for packet telemetry. The environment in which such systems function is described, and the GSFC experience with CCSDS packet telemetry in the context of the Gamma-Ray Observatory project is discussed.

  12. New data acquisition system for the lujan center

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.; Bowling, P.S.; Cooper, G.M.; Kozlowski, T. [Los Alamos National Loboratory, Los Alamos, NM (United States)

    2001-03-01

    To meet the data acquisition requirements for six new neutron scattering instruments at the Los Alamos Neutron Science Center (LANSCE), we are building systems using Web tools, commercial hardware and software, software developed by the controls community, and custom hardware developed by the neutron scattering community. To service these new instruments as well as seven existing instruments, our data acquisition system needs common software and hardware core capabilities and the means to flexibly integrate them while differentiating the needs of the diverse instrument suite. Neutron events are captured and processed in VXI modules while controls for sample environment and beam line setup are processed with PCs. Typically users access the system through web browsers. (author)

  13. NREL Receives Editors' Choice Awards for Supercomputer Research

    Science.gov (United States)

    [Only fragments of this web news item were captured.] NREL received Editors' Choice Awards for the Peregrine high-performance computer and the groundbreaking research it made possible. Other captured fragments mention NREL's high-performance data center, high-bay labs, and office space, and note that NREL's Martha Symko-Davies, Director of Partnerships for Energy Systems, was honored as one of the successful women working in the energy field.

  14. System Engineering Processes at Kennedy Space Center for Development of the SLS and Orion Launch Systems

    Science.gov (United States)

    Schafer, Eric J.

    2012-01-01

    There are over 40 subsystems being developed for the future SLS and Orion Launch Systems at Kennedy Space Center. These subsystems, developed at the Kennedy Space Center Engineering Directorate, follow a comprehensive design process which requires several different product deliverables during each phase of each of the subsystems. This paper describes this process and gives an example of where the process has been applied.

  15. Assessment of the energy system of the Puerto Escondido Crude Collection Center

    International Nuclear Information System (INIS)

    Rodríguez Sosa, Yadier; Morón Álvarez, Carlos J.; Gozá León, Osvaldo

    2015-01-01

    This paper presents the results of the evaluation of the energy system of the Puerto Escondido Crude Collection Center in the first half of 2014. By applying the overall strategy of Process Analysis, an energy assessment procedure was developed and implemented that made it possible to characterize the current plant conditions and to propose a number of measures and recommendations leading to improved energy use and reduced environmental impact. The computational tools used are also presented, both for process simulation (Hysys v 3.2) and for the technical, economic, and environmental analysis (Microsoft Excel).

  16. KfK-seminar series on supercomputing und visualization from May till September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    From May 1992 to September 1992 a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP)

  17. Heat-pump-centered integrated community energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Schaetzle, W.J.; Brett, C.E.; Seppanen, M.S.

    1979-12-01

    The heat-pump-centered integrated community energy system (HP-ICES) supplies district heating and cooling using heat pumps and a thermal energy storage system which is provided by nature in underground porous formations filled with water, i.e., aquifers. The energy is transported by a two-pipe system, one for warm water and one for cool water, between the aquifers and the controlled environments. Each energy module contains the controlled environments, an aquifer, wells for access to the aquifer, the two-pipe water distribution system, and water-source heat pumps. The heat pumps upgrade the energy in the distribution system for use in the controlled environments. Economically, the system shows improvements in both energy usage and capital costs. The system saves over 60% of the energy required for resistance heating; saves over 30% of the energy required for most air-source heat pumps; and saves over 60% of the energy required for gas, coal, or oil heating, when compared on the basis of the energy input required at the power plant for heat pump usage. The proposed system has been analyzed as demonstration projects for a downtown portion of Louisville, Kentucky, and a section of Fort Rucker, Alabama. The downtown Louisville demonstration project is tied directly to major buildings while the Fort Rucker demonstration project is tied to a dispersed subdivision of homes. The Louisville project shows a payback of approximately 3 y, while Fort Rucker's is approximately 30 y. The primary difference is that at Fort Rucker new heat pumps are charged to the system. In Louisville, either new construction requiring heating and cooling systems or existing chillers are utilized. (LCL)

  18. Model-Based Systems Engineering in Concurrent Engineering Centers

    Science.gov (United States)

    Iwata, Curtis; Infeld, Samantha; Bracken, Jennifer Medlin; McGuire, Melissa; McQuirk, Christina; Kisdi, Aron; Murphy, Jonathan; Cole, Bjorn; Zarifian, Pezhman

    2015-01-01

    Concurrent Engineering Centers (CECs) are specialized facilities with a goal of generating and maturing engineering designs by enabling rapid design iterations. This is accomplished by co-locating a team of experts (either physically or virtually) in a room with a narrow design goal and a limited timeline of a week or less. The systems engineer uses a model of the system to capture the relevant interfaces and manage the overall architecture. A single model that integrates other design information and modeling allows the entire team to visualize the concurrent activity and identify conflicts more efficiently, potentially resulting in a systems model that will continue to be used throughout the project lifecycle. Performing systems engineering using such a system model is the definition of model-based systems engineering (MBSE); therefore, CECs evolving their approach to incorporate advances in MBSE are more successful in reducing time and cost needed to meet study goals. This paper surveys space mission CECs that are in the middle of this evolution, and the authors share their experiences in order to promote discussion within the community.

  19. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and
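
    The basic measurement underlying such noise-source imaging is the cross-correlation of two stations' noise records. The sketch below computes one correlation via FFT for a synthetic pair with a known delay; real processing adds whitening, temporal normalization, and stacking over many windows, and the sampling rate and delay here are invented.

```python
import numpy as np

# Cross-correlation of two synthetic "station" noise records via FFT, the core
# measurement behind ambient-noise source imaging.  Real processing adds
# spectral whitening, temporal normalization, and stacking; the sampling rate
# and travel-time delay below are invented for illustration.
fs = 20.0                                    # samples per second
rng = np.random.default_rng(3)
src = rng.normal(size=20_000)
sta_a = src + 0.1 * rng.normal(size=src.size)
sta_b = np.roll(src, 37) + 0.1 * rng.normal(size=src.size)   # station B lags by 37 samples

n = 2 * src.size                             # zero-pad to suppress circular wrap-around
spec = np.fft.rfft(sta_a, n) * np.conj(np.fft.rfft(sta_b, n))
xc = np.fft.irfft(spec, n)

lags = np.arange(-(src.size - 1), src.size)  # lag axis in samples
xc = np.concatenate((xc[-(src.size - 1):], xc[:src.size]))

print("estimated delay:", lags[np.argmax(xc)] / fs, "s")   # about -37 / 20 = -1.85 s
```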

  20. Quality management system of Saraykoy Nuclear Research and Training center

    International Nuclear Information System (INIS)

    Gurellier, R.; Akchay, S.; Zararsiz, S.

    2014-01-01

    Technical competence and national/international acceptance of the independence of laboratories are ensured through accreditation. Accreditation reduces the risk of a slowdown in international trade due to unnecessary repetition of testing and analyses. It also eliminates the cost of additional experiments and analyses. Saraykoy Nuclear Research and Training Center (SANAEM) has performed intensive studies to establish an effective and well-functioning QMS (Quality Management System) in full accordance with the requirements of ISO/IEC 17025 since the beginning of 2006. Laboratories, especially those serving public health studies and important trade duties, require urgent accreditation. In this regard, SANAEM has established a quality management system and performed accreditation studies.

  1. Implementation of a virtual link between power system testbeds at Marshall Spaceflight Center and Lewis Research Center

    Science.gov (United States)

    Doreswamy, Rajiv

    1990-01-01

    The Marshall Space Flight Center (MSFC) owns and operates a space station module power management and distribution (SSM-PMAD) testbed. This system, managed by expert systems, is used to analyze and develop power system automation techniques for Space Station Freedom. The Lewis Research Center (LeRC), Cleveland, Ohio, has developed and implemented a space station electrical power system (EPS) testbed. This system and its power management controller are representative of the overall Space Station Freedom power system. A virtual link is being implemented between the testbeds at MSFC and LeRC. This link would enable configuration of SSM-PMAD as a load center for the EPS testbed at LeRC. This connection will add to the versatility of both systems, and provide an environment of enhanced realism for operation of both testbeds.

  2. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.
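
    For orientation, the operator whose evaluation dominates the cost of hybrid-functional calculations is the exact (Fock) exchange. Up to spin and normalization conventions, and quoted here from standard DFT practice rather than from the paper itself, its energy over occupied orbitals psi_i is

```latex
E_x^{\mathrm{exact}}
  = -\frac{1}{2}\sum_{i,j}^{\mathrm{occ}}
    \iint
    \frac{\psi_i^{*}(\mathbf{r})\,\psi_j(\mathbf{r})\,
          \psi_j^{*}(\mathbf{r}')\,\psi_i(\mathbf{r}')}
         {\lvert \mathbf{r}-\mathbf{r}'\rvert}
    \,\mathrm{d}\mathbf{r}\,\mathrm{d}\mathbf{r}'
```

    This term couples every pair of occupied orbitals, which is why its evaluation is the natural target for the multi-GPU acceleration described above.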

  3. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  4. Development of a computer system at La Hague center

    International Nuclear Information System (INIS)

    Mimaud, Robert; Malet, Georges; Ollivier, Francis; Fabre, J.-C.; Valois, Philippe; Desgranges, Patrick; Anfossi, Gilbert; Gentizon, Michel; Serpollet, Roger.

    1977-01-01

    The U.P.2 plant, built at the La Hague Center, is intended mainly for the reprocessing of spent fuels coming from graphite-gas reactors (as metal) and from light-water, heavy-water and breeder reactors (as oxide). In each of the five large nuclear units, the digital processing of measurements was handled until 1974 by CAE 3030 data processors. During 1974-1975 a modern industrial computer system was set up. This system, equipped with T 2000/20 hardware from the Telemecanique company, consists of five measurement acquisition devices (for a total of 1500 lines processed) and two central processing units (CPUs). The connection of these two CPUs (hardware and software) enables the system to be switched automatically to either the first or the second CPU. The system covers, at present, data processing, threshold monitoring, alarm systems, display devices, periodic listings, and specific calculations concerning the process (balances, etc.), and, at a later stage, automatic control of certain units of the process.

  5. Center for Advanced Biofuel Systems (CABS) Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Kutchan, Toni M. [Donald Danforth Plant Science Center, St. Louis, MO (United States)

    2015-12-02

    One of the great challenges facing current and future generations is how to meet growing energy demands in an environmentally sustainable manner. Renewable energy sources, including wind, geothermal, solar, hydroelectric, and biofuel energy systems, are rapidly being developed as sustainable alternatives to fossil fuels. Biofuels are particularly attractive to the U.S., given its vast agricultural resources. The first generation of biofuel systems was based on fermentation of sugars to produce ethanol, typically from food crops. Subsequent generations of biofuel systems, including those included in the CABS project, will build upon the experiences learned from those early research results and will have improved production efficiencies, reduced environmental impacts and decreased reliance on food crops. Thermodynamic models predict that the next generations of biofuel systems will yield three- to five-fold more recoverable energy products. To address the technological challenges necessary to develop enhanced biofuel systems, greater understanding of the non-equilibrium processes involved in solar energy conversion and the channeling of reduced carbon into biofuel products must be developed. The objective of the proposed Center for Advanced Biofuel Systems (CABS) was to increase the thermodynamic and kinetic efficiency of select plant- and algal-based fuel production systems using rational metabolic engineering approaches grounded in modern systems biology. The overall strategy was to increase the efficiency of solar energy conversion into oils and other specialty biofuel components by channeling metabolic flux toward products using advanced catalysts and sensible design:1) employing novel protein catalysts that increase the thermodynamic and kinetic efficiencies of photosynthesis and oil biosynthesis; 2) engineering metabolic networks to enhance acetyl-CoA production and its channeling towards lipid synthesis; and 3) engineering new metabolic networks for the

  6. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  7. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor design is often an intuition-driven process in which designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, the complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so the task becomes increasingly challenging. In the context of the renewal of reactor designs, first-realization projects are often run in parallel with advanced design work, although they depend strongly on the final options. As a consequence, tools are needed to globally assess and optimize reactor core features, with the accuracy of the on-going design methods, within reasonable simulation time and without requiring advanced computer skills at the project management scale. These tools should also be able to easily accommodate modeling progress in each discipline throughout the project lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the Data Analysis Framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks, ...) and optimization techniques (genetic algorithms). Database management and visualization are also made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermal-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics is presented. The flexibility of the URANIE tool is also illustrated by presenting several approaches to improve the quality of the Pareto front. (author)
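
    As a small, generic illustration of one of the sampling techniques named above (Latin hypercube sampling), the sketch below draws a space-filling design over two hypothetical core-design parameters and scores it with a made-up figure of merit; it is not URANIE code.

```python
import numpy as np

# Generic Latin hypercube sampling (LHS) sketch: each parameter range is split
# into n_samples equal-probability bins, and every bin is used exactly once per
# dimension.  The "core design" parameters and figure of merit are entirely
# hypothetical -- this shows only the sampling idea, not URANIE itself.
rng = np.random.default_rng(5)

def latin_hypercube(n_samples, n_dims, rng):
    u = rng.random((n_samples, n_dims))                        # jitter inside each bin
    bins = np.column_stack([rng.permutation(n_samples) for _ in range(n_dims)])
    return (bins + u) / n_samples                              # points in [0, 1)^n_dims

# Map unit samples to hypothetical design ranges: fuel pin radius [cm], enrichment [%].
lo, hi = np.array([0.35, 2.0]), np.array([0.55, 5.0])
designs = lo + latin_hypercube(200, 2, rng) * (hi - lo)

score = -(designs[:, 0] - 0.45) ** 2 - 0.1 * (designs[:, 1] - 3.5) ** 2   # toy figure of merit
best = designs[np.argmax(score)]
print("best sampled design (radius, enrichment):", best)
```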

  8. Information System Success Model for Customer Relationship Management System in Health Promotion Centers

    Science.gov (United States)

    Choi, Wona; Rho, Mi Jung; Park, Jiyun; Kim, Kwang-Jum; Kwon, Young Dae

    2013-01-01

    Objectives Intensified competitiveness in the healthcare industry has increased the number of healthcare centers and propelled the introduction of customer relationship management (CRM) systems to meet diverse customer demands. This study aimed to develop the information system success model of the CRM system by investigating previously proposed indicators within the model. Methods The evaluation areas of the CRM system includes three areas: the system characteristics area (system quality, information quality, and service quality), the user area (perceived usefulness and user satisfaction), and the performance area (personal performance and organizational performance). Detailed evaluation criteria of the three areas were developed, and its validity was verified by a survey administered to CRM system users in 13 nationwide health promotion centers. The survey data were analyzed by the structural equation modeling method, and the results confirmed that the model is feasible. Results Information quality and service quality showed a statistically significant relationship with perceived usefulness and user satisfaction. Consequently, the perceived usefulness and user satisfaction had significant influence on individual performance as well as an indirect influence on organizational performance. Conclusions This study extends the research area on information success from general information systems to CRM systems in health promotion centers applying a previous information success model. This lays a foundation for evaluating health promotion center systems and provides a useful guide for successful implementation of hospital CRM systems. PMID:23882416

  9. Information system success model for customer relationship management system in health promotion centers.

    Science.gov (United States)

    Choi, Wona; Rho, Mi Jung; Park, Jiyun; Kim, Kwang-Jum; Kwon, Young Dae; Choi, In Young

    2013-06-01

    Intensified competitiveness in the healthcare industry has increased the number of healthcare centers and propelled the introduction of customer relationship management (CRM) systems to meet diverse customer demands. This study aimed to develop the information system success model of the CRM system by investigating previously proposed indicators within the model. The evaluation of the CRM system covers three areas: the system characteristics area (system quality, information quality, and service quality), the user area (perceived usefulness and user satisfaction), and the performance area (personal performance and organizational performance). Detailed evaluation criteria for the three areas were developed, and their validity was verified by a survey administered to CRM system users in 13 nationwide health promotion centers. The survey data were analyzed by the structural equation modeling method, and the results confirmed that the model is feasible. Information quality and service quality showed a statistically significant relationship with perceived usefulness and user satisfaction. Consequently, perceived usefulness and user satisfaction had a significant influence on individual performance as well as an indirect influence on organizational performance. This study extends the research area on information system success from general information systems to CRM systems in health promotion centers, applying a previous information success model. This lays a foundation for evaluating health promotion center systems and provides a useful guide for successful implementation of hospital CRM systems.

  10. Development of a component centered fault monitoring and diagnosis knowledge based system for space power system

    Science.gov (United States)

    Lee, S. C.; Lollar, Louis F.

    1988-01-01

    The overall approach currently being taken in the development of AMPERES (Autonomously Managed Power System Extendable Real-time Expert System), a knowledge-based expert system for fault monitoring and diagnosis of space power systems, is discussed. The system architecture, knowledge representation, and fault monitoring and diagnosis strategy are examined. A 'component-centered' approach developed in this project is described. Critical issues requiring further study are identified.
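
    As a rough illustration of what a 'component-centered' knowledge organization can look like (diagnostic knowledge attached to each component rather than to global fault rules), the following Python toy is a sketch under that assumption, not AMPERES itself; component names, sensors, thresholds, and hints are invented.

    ```python
    # Toy component-centered monitor: each component carries its own
    # expected behavior and diagnosis hint (all values hypothetical).
    components = {
        "battery_bus": {"sensor": "bus_voltage", "low": 26.0, "high": 32.0,
                        "hint": "check charger output and load shedding"},
        "solar_array": {"sensor": "array_current", "low": 4.0, "high": 12.0,
                        "hint": "check sun pointing and string continuity"},
    }

    def diagnose(telemetry):
        """Return a list of (component, message) for out-of-range readings."""
        findings = []
        for name, spec in components.items():
            value = telemetry.get(spec["sensor"])
            if value is None:
                findings.append((name, "missing telemetry for " + spec["sensor"]))
            elif not (spec["low"] <= value <= spec["high"]):
                findings.append((name, f"{spec['sensor']}={value} out of range; " + spec["hint"]))
        return findings

    print(diagnose({"bus_voltage": 24.5, "array_current": 7.2}))
    ```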

  11. Operating The Central Process Systems At Glenn Research Center

    Science.gov (United States)

    Weiler, Carly P.

    2004-01-01

    As a research facility, the Glenn Research Center (GRC) trusts and expects all the systems controlling its facilities to run properly and efficiently in order for its research and operations to proceed proficiently and on time. While there are many systems necessary for operations at GRC, one of the most vital is the Central Process Systems (CPS). The CPS controls operations used by GRC's wind tunnels, propulsion systems lab, engine components research lab, and compressor, turbine and combustor test cells. Used widely throughout the lab, it operates equipment such as exhausters, chillers, cooling towers, compressors, dehydrators, and other such equipment. Through parameters such as pressure, temperature, speed, and flow, it performs its primary operations on the major systems of Electrical Dispatch (ED), Central Air Dispatch (CAD), Central Air Equipment Building (CAEB), and Engine Research Building (ERB). In order for the CPS to continue its operations at Glenn, a new contract must be awarded. Consequently, one of my primary responsibilities was assisting the Source Evaluation Board (SEB) with the process of awarding the recertification contract for the CPS. The job of the SEB was to evaluate the proposals of the contract bidders and then to present their findings to the Source Selecting Official (SSO). Before the evaluations began, the Center Director established the level of the competition. For this contract, the competition was limited to those companies classified as a small, disadvantaged business. After an industry briefing that explained the CPS and the type of work required to qualified companies, each interested company submitted proposals addressing three components: Mission Suitability, Cost, and Past Performance. These proposals were based on the Statement of Work (SOW) written by the SEB. After companies submitted their proposals, the SEB reviewed all three components and then presented their results to the SSO. While the

  12. Summaries of research and development activities by using JAERI computer system in FY2003. April 1, 2003 - March 31, 2004

    International Nuclear Information System (INIS)

    2005-03-01

    Center for Promotion of Computational Science and Engineering (CCSE) of Japan Atomic Energy Research Institute (JAERI) installed large computer systems including super-computers in order to support research and development activities in JAERI. CCSE operates and manages the computer system and network system. This report presents usage records of the JAERI computer system and the big users' research and development activities by using the computer system in FY2003 (April 1, 2003 - March 31, 2004). (author)

  13. Summaries of research and development activities by using JAEA computer system in FY2005. April 1, 2005 - March 31, 2006

    International Nuclear Information System (INIS)

    2006-10-01

    Center for Promotion of Computational Science and Engineering (CCSE) of Japan Atomic Energy Agency (JAEA) installed large computer systems including super-computers in order to support research and development activities in JAEA. CCSE operates and manages the computer system and network system. This report presents usage records of the JAEA computer system and the big users' research and development activities by using the computer system in FY2005 (April 1, 2005 - March 31, 2006). (author)

  14. Summaries of research and development activities by using JAERI computer system in FY2004 (April 1, 2004 - March 31, 2005)

    International Nuclear Information System (INIS)

    2005-08-01

    Center for Promotion of Computational Science and Engineering (CCSE) of Japan Atomic Energy Research Institute (JAERI) installed large computer systems including super-computers in order to support research and development activities in JAERI. CCSE operates and manages the computer system and network system. This report presents usage records of the JAERI computer system and the big users' research and development activities by using the computer system in FY2004 (April 1, 2004 - March 31, 2005). (author)

  15. Summaries of research and development activities by using JAEA computer system in FY2006. April 1, 2006 - March 31, 2007

    International Nuclear Information System (INIS)

    2008-02-01

    Center for Promotion of Computational Science and Engineering (CCSE) of Japan Atomic Energy Agency (JAEA) installed large computer systems including super-computers in order to support research and development activities in JAEA. CCSE operates and manages the computer system and network system. This report presents usage records of the JAEA computer system and the big users' research and development activities by using the computer system in FY2006 (April 1, 2006 - March 31, 2007). (author)

  16. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    Science.gov (United States)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    coupling algorithmics are abstracted and incorporated in MaMiCo. Once an algorithm is set up in MaMiCo, it can be used and extended, even if other solvers are used (as soon as the respective interfaces are implemented/available). Reasons for the new version: We have incorporated a new algorithm to simulate transient molecular-continuum systems and to automatically sample data over multiple MD runs that can be executed simultaneously (on, e.g., a compute cluster). MaMiCo has further been extended by an interface to incorporate boundary forcing to account for open molecular dynamics boundaries. Besides support for coupling with various MD and CFD frameworks, the new version contains a test case that allows molecular-continuum Couette flow simulations to be run out-of-the-box. No external tools or simulation codes are required anymore. However, the user is free to switch from the included MD simulation package to LAMMPS. For details on how to run the transient Couette problem, see the file README in the folder coupling/tests (remark on MaMiCo V1.1). Summary of revisions: open boundary forcing; multi-instance MD sampling; support for transient molecular-continuum systems. Restrictions: Currently, only single-centered systems are supported. For access to the LAMMPS-based implementation of DPD boundary forcing, please contact Xin Bian, xin.bian@tum.de. Additional comments: Please see file license_mamico.txt for further details regarding distribution and advertising of this software.
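
    The benefit of multi-instance MD sampling, averaging the coupled quantities over several simultaneously running MD instances to suppress thermal noise, can be sketched as follows. This Python toy is not MaMiCo; the "MD instance" is replaced by a noisy stand-in so only the averaging idea is shown.

    ```python
    import numpy as np

    def run_md_instance(n_steps, seed):
        """Stand-in for one MD instance: a noisy velocity sample per coupling
        step (true value 1.0 plus thermal-like noise)."""
        rng = np.random.default_rng(seed)
        return 1.0 + rng.normal(0.0, 0.5, size=n_steps)

    def multi_instance_average(n_instances, n_steps):
        """Average the sampled quantity over independent MD instances,
        as done for transient molecular-continuum coupling."""
        samples = np.stack([run_md_instance(n_steps, seed) for seed in range(n_instances)])
        return samples.mean(axis=0)   # one averaged value per coupling step

    single = run_md_instance(100, seed=0).std()
    ensemble = multi_instance_average(32, 100).std()
    print(f"noise, 1 instance: {single:.3f}  vs  32 instances: {ensemble:.3f}")
    ```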

  17. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is now a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain can vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
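
    A minimal serial sketch of the kind of iterative, residual-based finite difference scheme described above (a 2-D Laplace problem driven to steady state by damped pseudo-time updates) is given below. It is an illustration only, not the authors' GPU/MPI solver; grid size, damping, and tolerance are arbitrary.

    ```python
    import numpy as np

    # 2-D Laplace problem solved by pseudo-transient iteration on the residual.
    nx, ny, dx = 128, 128, 1.0 / 127
    p = np.zeros((nx, ny))
    p[0, :], p[-1, :] = 1.0, 0.0          # fixed boundary values
    dtau, tol = 0.2 * dx**2, 1e-6         # pseudo-time step and tolerance

    for it in range(100_000):
        # residual of the discrete Laplacian on interior points
        res = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
               - 4.0 * p[1:-1, 1:-1]) / dx**2
        p[1:-1, 1:-1] += dtau * res       # damped update toward steady state
        if np.max(np.abs(res)) * dx**2 < tol:
            break

    print(f"converged after {it} iterations")
    ```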

  18. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Data from modeling, on the supercomputer system SKIF, of the technological process of mold filling by means of the computer system 'ProLIT-lc', and also data from modeling of the steel pouring process by means of 'ProNRS-lc', are presented. The influence of the number of processors of the multinuclear computer system SKIF on the acceleration and time of modeling of technological processes connected with the production of castings and slugs is shown.

  19. Small Radioisotope Power System Testing at NASA Glenn Research Center

    Science.gov (United States)

    Dugala, Gina; Bell, Mark; Oriti, Salvatore; Fraeman, Martin; Frankford, David; Duven, Dennis

    2013-01-01

    In April 2009, NASA Glenn Research Center (GRC) formed an integrated product team (IPT) to develop a Small Radioisotope Power System (SRPS) utilizing a single Advanced Stirling Convertor (ASC) with passive balancer. A single ASC produces approximately 80 We, making this system advantageous for small distributed lunar science stations. The IPT consists of Sunpower, Inc., to provide the single ASC with a passive balancer, The Johns Hopkins University Applied Physics Laboratory (JHUAPL) to design an engineering model Single Convertor Controller (SCC) for an ASC with a passive balancer, and NASA GRC to provide technical support to these tasks and to develop a simulated lunar lander test stand. The single ASC with a passive balancer, simulated lunar lander test stand, and SCC were delivered to GRC and were tested as a system. The testing sequence at GRC included SCC fault tolerance, integration, electromagnetic interference (EMI), vibration, and extended operation testing. The SCC fault tolerance test characterized the SCC's ability to handle various fault conditions, including high or low bus power consumption, total open load or short circuit, and replacing a failed SCC card while the backup maintains control of the ASC. The integrated test characterized the behavior of the system across a range of operating conditions, including variations in cold-end temperature and piston amplitude, as well as the vibration emitted to both the sensors on the lunar lander and the lunar surface. The EMI test characterized the AC and DC magnetic and electric fields emitted by the SCC and single ASC. The vibration test confirmed the SCC's ability to control the single ASC during launch. The extended operation test allows data to be collected over a period of thousands of hours to obtain long-term performance data of the ASC with a passive balancer and the SCC. This paper will discuss the results of each of these tests.

  20. Joint Space Operations Center (JSpOC) Mission System (JMS)

    Science.gov (United States)

    Morton, M.; Roberts, T.

    2011-09-01

    US space capabilities benefit the economy, national security, international relationships, scientific discovery, and our quality of life. Meeting these space responsibilities is challenging, not only because the space domain is increasingly congested, contested, and competitive, but also because the legacy space situational awareness (SSA) systems are approaching end of life and cannot provide the breadth of SSA and command and control (C2) of space forces required in this challenging domain. JMS will provide the capabilities to effectively employ space forces in this domain. Requirements for JMS were developed based on regular, on-going engagement with the warfighter. The use of DoD Architecture Framework (DoDAF) products facilitated requirements scoping and understanding, and transferred directly to defining and documenting the requirements in the approved Capability Development Document (CDD). As part of the risk reduction efforts, the Electronic Systems Center (ESC) JMS System Program Office (SPO) fielded JMS Capability Package (CP) 0, which includes an initial service oriented architecture (SOA) and user defined operational picture (UDOP) along with force status, sensor management, and analysis tools. Development efforts are planned to leverage and integrate prototypes and other research projects from the Defense Advanced Research Projects Agency, Air Force Research Laboratories, Space Innovation and Development Center, and Massachusetts Institute of Technology/Lincoln Laboratories. JMS provides a number of benefits to the space community: a reduction in operational “transaction time” to accomplish key activities and processes; the ability to process the increased volume of metric observations from new sensors (e.g., SBSS, SST, Space Fence) as well as owner/operator ephemerides, thus enhancing the high-accuracy near-real-time catalog; and greater automation of SSA data sharing supporting collaboration with government, civil, commercial, and foreign

  1. Scalable DDoS Mitigation System for Data Centers

    Directory of Open Access Journals (Sweden)

    Zdenek Martinasek

    2015-01-01

    Distributed Denial of Service (DDoS) attacks have been used by attackers for over two decades because of their effectiveness. This type of cyber-attack is one of the most destructive on the Internet. In recent years, the intensity of DDoS attacks has been rapidly increasing, and attackers more often combine different DDoS techniques to bypass protection. Therefore, the main goal of our research is to propose a DDoS solution that allows the filtering capacity to be increased linearly and protects against combinations of attacks. The main idea is to develop the DDoS defense system in the form of a portable software image that can be installed on reserve hardware capacity. During a DDoS attack, these servers are used as filters against the attack. Our solution is suitable for data centers and addresses some shortcomings of commercial solutions. The system employs modular DDoS filters in the form of special grids containing specific protocol parameters and conditions.
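
    The modular filters with protocol parameters and conditions mentioned above can be pictured with a small Python sketch. The packet fields, thresholds, and rules below are hypothetical and greatly simplified relative to the system described in the paper.

    ```python
    # Toy modular filter chain: each filter is a predicate over packet metadata.
    # A packet (represented as a dict) is dropped if any filter matches it.

    def syn_flood_filter(pkt, state):
        """Drop TCP SYNs from sources exceeding a per-source count (hypothetical limit)."""
        if pkt["proto"] == "tcp" and pkt.get("flags") == "SYN":
            state[pkt["src"]] = state.get(pkt["src"], 0) + 1
            return state[pkt["src"]] > 100          # more than 100 SYNs from this source
        return False

    def udp_amplification_filter(pkt, state):
        """Drop large UDP responses from well-known amplification ports."""
        return pkt["proto"] == "udp" and pkt.get("sport") in (53, 123, 1900) and pkt["size"] > 1000

    FILTERS = [syn_flood_filter, udp_amplification_filter]

    def process(packets):
        state, passed = {}, []
        for pkt in packets:
            if not any(f(pkt, state) for f in FILTERS):
                passed.append(pkt)
        return passed

    traffic = [{"proto": "udp", "src": "198.51.100.7", "sport": 53, "size": 4096},
               {"proto": "tcp", "src": "203.0.113.5", "flags": "SYN", "size": 60}]
    print(len(process(traffic)), "packet(s) passed")
    ```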

  2. Centering Pregnancy in Missouri: A System Level Analysis

    Directory of Open Access Journals (Sweden)

    Pamela K. Xaverius

    2014-01-01

    Background. Centering Pregnancy (CP) is an effective method of delivering prenatal care, yet providers have been slow to adopt the CP model. Our main hypothesis is that a site's adoption of CP is contingent upon knowledge of CP, characteristics of health care personnel, anticipated patient impact, and system readiness. Methods. Using a matched, pretest-posttest, observational design, 223 people completed pretest and posttest surveys. Our analysis included the effect of the seminar on the groups' knowledge of CP essential elements, barriers to prenatal care, and perceived value of CP to the patients and to the system of care. Results. Before the CP Seminar only 34% of respondents were aware of the model, while knowledge increased significantly after the Seminar. The three greatest improvements were in understanding that the group is conducted in a circle, that the health assessment occurs in the group space, and that a facilitative leadership style is used. Child care, transportation, and language issues were the top three barriers. The greatest improvements reported for patients included improvements in timeliness, patient-centeredness and efficiency, although readiness for adoption was influenced by costs, resources, and expertise. Discussion. Readiness to adopt CP will require support for the start-up and sustainability of this model.

  3. Summaries of research and development activities by using JAEA computer system in FY2007. April 1, 2007 - March 31, 2008

    International Nuclear Information System (INIS)

    2008-11-01

    Center for Computational Science and e-Systems (CCSE) of Japan Atomic Energy Agency (JAEA) installed large computer systems including super-computers in order to support research and development activities in JAEA. This report presents usage records of the JAEA computer system and the big users' research and development activities by using the computer system in FY2007 (April 1, 2007 - March 31, 2008). (author)

  4. Summaries of research and development activities by using JAEA computer system in FY2009. April 1, 2009 - March 31, 2010

    International Nuclear Information System (INIS)

    2010-11-01

    Center for Computational Science and e-Systems (CCSE) of Japan Atomic Energy Agency (JAEA) installed large computer systems including super-computers in order to support research and development activities in JAEA. This report presents usage records of the JAEA computer system and the big users' research and development activities by using the computer system in FY2009 (April 1, 2009 - March 31, 2010). (author)

  5. The advantages of reliability centered maintenance for standby safety systems

    International Nuclear Information System (INIS)

    Dam, R.F.; Ayazzudin, S.; Nickerson, J.H.; DeLong, A.I.

    2002-01-01

    Full text: For standby safety systems, nuclear plants have to balance the requirement of demonstrating the reliability of each system against maintaining system and plant availability. With the goal of demonstrating statistical reliability, these systems have extensive testing programs, which often make the system unavailable, and this can impact plant capacity. The inputs to the process are often safety and regulatory related, resulting in programs that provide a high level of scrutiny of the systems being considered. In such cases, the value of applying a maintenance optimization strategy, such as Reliability Centered Maintenance (RCM), is questioned. Part of the question stems from the use of the word 'Reliability' in RCM, which implies a level of redundancy when applied to a system maintenance program driven by reliability requirements. A deeper look at the RCM process, however, shows that RCM has the goal of ensuring that the system operates 'reliably' through the application of an integrated maintenance strategy. This is a subtle, but important distinction. Although the system reliability requirements are an important part of the strategy evaluation, RCM provides a broader context where testing is only one part of an overall strategy focused on ensuring that component function is maintained through a combination of monitoring technologies (including testing), predictive techniques, and intrusive maintenance strategies. Each strategy is targeted to identify known component degradation mechanisms. The conclusion is that a maintenance program driven by reliability requirements will tend to have testing defined at a frequency intended to support the needed statistics. The testing demonstrates that the desired function is available today. Maintenance driven by functional requirements and known failure causes, as developed through an RCM assessment, will have frequencies tied to industry experience with components and rely on a higher degree of

  6. Development of the real time monitor system

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Katsumi [Research Organization for Information Science and Technology, Tokai, Ibaraki (Japan); Watanabe, Tadashi; Kaburaki, Hideo

    1996-10-01

    Large-scale simulation techniques are studied at the Center for Promotion of Computational Science and Engineering (CCSE) for computational science research in nuclear fields. Visualization and animation processing techniques are studied and developed for efficient understanding of simulation results. The real-time monitor system, in which on-going simulation results are transferred from a supercomputer or workstation to a graphic workstation and are visualized and recorded, is described in this report. The system is composed of the graphic workstation and the video equipment connected to the network. The control shell programs are the job-execution shell for simulations on supercomputers, the file-transfer shell for output files for visualization, and the shell for starting visualization tools. Special image processing techniques and hardware are not necessary in this system, and the standard visualization tool AVS and UNIX commands are used, so that this system can be implemented and applied in various computer environments. (author)
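
    The workflow described (a running simulation writes output files, a transfer step moves them to a graphic workstation, and a visualization tool is started there) could be scripted roughly as below. This is a generic Python sketch, not the reported shell programs; all paths, host names, and the viewer command are hypothetical.

    ```python
    import subprocess
    import time
    from pathlib import Path

    # Hypothetical locations; the original system used shell scripts, file
    # transfer to a graphics workstation, and the AVS visualization tool.
    RESULT_DIR = Path("/scratch/simulation/output")     # written by the running job
    VIS_HOST = "graphics-ws"                            # graphics workstation
    VIS_CMD = ["viewer", "--load"]                      # stand-in for the real tool

    def monitor(poll_seconds=30):
        seen = set()
        while True:
            for f in sorted(RESULT_DIR.glob("step_*.dat")):
                if f.name in seen:
                    continue
                # transfer the new output file to the graphics workstation
                subprocess.run(["scp", str(f), f"{VIS_HOST}:/data/incoming/"], check=True)
                # start (or refresh) the visualization for that step
                subprocess.run(["ssh", VIS_HOST] + VIS_CMD + [f"/data/incoming/{f.name}"], check=True)
                seen.add(f.name)
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        monitor()
    ```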

  7. 23 CFR 752.8 - Privately operated information centers and systems.

    Science.gov (United States)

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Privately operated information centers and systems. 752... may permit privately operated information centers and systems which conform with the standards of this... AND ENVIRONMENT LANDSCAPE AND ROADSIDE DEVELOPMENT § 752.8 Privately operated information centers and...

  8. Space and Missile Systems Center Standard: Systems Engineering Requirements and Products

    Science.gov (United States)

    2013-07-01

    Space and Missile Systems Center, Air Force Space Command, 483 N. Aviation Blvd., El Segundo, CA 90245. This standard has been approved for use on all Space... Any RF receiver with a burnout level of less than 30 dBm (1 mW). A summary of all significant areas is addressed in the EMC Control Plan... Preparing Activity: Space and Missile Systems Center, Air Force Space Command, 483 N. Aviation Blvd., El Segundo, CA 91245, Attention: SMC/EN. February 2013

  9. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  10. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    /MD simulation on a Grid consisting of 6 supercomputer centers in the US and Japan (a total of 150 thousand processor-hours), in which the number of processors changes dynamically on demand and resources are allocated and migrated dynamically in response to faults. Furthermore, performance portability has been demonstrated on a wide range of platforms such as BlueGene/L, Altix 3000, and AMD Opteron-based Linux clusters.

  11. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Selected projects that support economic development are referenced. In particular, we propose the use of supercomputers to run artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  12. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to reduce the computational time of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing the simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.
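
    For readers unfamiliar with the Discrete Element Method, the kernel that dominates such codes is a pairwise contact-force loop of the following shape. This toy NumPy version (2-D, linear spring contacts, no damping or friction, O(n^2) pair search) only illustrates the method and is unrelated to Trubal/TPM.

    ```python
    import numpy as np

    def dem_step(pos, vel, radius, mass, k=1e4, dt=1e-4):
        """One explicit DEM step: linear-spring normal contacts between equal disks."""
        n = len(pos)
        forces = np.zeros_like(pos)
        for i in range(n):               # O(n^2) pair loop; real codes use neighbor lists
            for j in range(i + 1, n):
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d)
                overlap = 2 * radius - dist
                if overlap > 0:          # particles in contact
                    f = k * overlap * (d / dist)
                    forces[i] -= f
                    forces[j] += f
        vel += dt * forces / mass
        pos += dt * vel
        return pos, vel

    rng = np.random.default_rng(0)
    pos = rng.random((50, 2)) * 0.1      # 50 particles in a 0.1 x 0.1 box (meters)
    vel = np.zeros_like(pos)
    for _ in range(1000):
        pos, vel = dem_step(pos, vel, radius=0.004, mass=1e-3)
    print("mean speed after 1000 steps:", np.linalg.norm(vel, axis=1).mean())
    ```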

  13. 20 CFR 670.530 - Are Job Corps centers required to maintain a student accountability system?

    Science.gov (United States)

    2010-04-01

    ... student accountability system? 670.530 Section 670.530 Employees' Benefits EMPLOYMENT AND TRAINING... accountability system? Yes, each Job Corps center must establish and operate an effective system to account for... student absence. Each center must operate its student accountability system according to requirements and...

  14. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
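
    The computational core, massive cross-correlation of noise records between station pairs, reduces to an operation like the one below for each pair. The NumPy sketch uses two synthetic traces and omits the preprocessing, spectral whitening, and stacking that a production system needs.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    src = rng.normal(size=2000)                              # common noise source signal
    sta_a = src + 0.3 * rng.normal(size=2000)                # record at station A
    sta_b = np.roll(src, 25) + 0.3 * rng.normal(size=2000)   # station B, delayed by 25 samples

    # full cross-correlation; index 0 corresponds to lag -(len(sta_b) - 1)
    cc = np.correlate(sta_a, sta_b, mode="full")
    lags = np.arange(-(len(sta_b) - 1), len(sta_a))
    lag = lags[np.argmax(cc)]
    print("lag of maximum correlation:", lag, "samples")     # about -25: B lags A by 25 samples
    ```

    In the actual service this pair operation is repeated for every station pair and time window, which is what makes the workload computationally intensive and well suited to GPU acceleration.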

  15. Large space antenna communications systems: Integrated Langley Research Center/Jet Propulsion Laboratory development activities. 2: Langley Research Center activities

    Science.gov (United States)

    Cambell, T. G.; Bailey, M. C.; Cockrell, C. R.; Beck, F. B.

    1983-01-01

    The electromagnetic analysis activities at the Langley Research Center are resulting in efficient and accurate analytical methods for predicting both far- and near-field radiation characteristics of large offset multiple-beam multiple-aperture mesh reflector antennas. The utilization of aperture integration augmented with Geometrical Theory of Diffraction in analyzing the large reflector antenna system is emphasized.

  16. Specification for Visual Requirements of Work-Centered Software Systems

    National Research Council Canada - National Science Library

    Knapp, James R; Chung, Soon M; Schmidt, Vincent A

    2006-01-01

    .... In order to ensure the coherent development and delivery of work-centered software products, WCSS visual requirements must be specified to capture the cognitive aspects of the user interface design...

  17. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q including a comparison with failure data from Q
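
    The arithmetic behind such accelerated testing is worth making explicit: an upset cross-section is measured under the intense LANSCE beam and then scaled by the much lower natural neutron flux and the number of devices in the machine. The numbers below are purely illustrative and are not data from Q.

    ```python
    # Illustrative accelerated-test scaling (all numbers hypothetical).
    beam_fluence = 1.0e11        # neutrons/cm^2 delivered to the part under test
    upsets_observed = 20         # soft errors counted during the exposure
    cross_section = upsets_observed / beam_fluence          # cm^2 per device

    natural_flux = 13.0 / 3600.0 # roughly 13 n/cm^2/h (>10 MeV) at sea level -> n/cm^2/s
    devices_in_system = 8192     # e.g., processors/memory modules in a large machine

    system_upset_rate = cross_section * natural_flux * devices_in_system   # upsets/s
    hours_between_upsets = 1.0 / (system_upset_rate * 3600.0)
    print(f"cross-section: {cross_section:.2e} cm^2/device")
    print(f"expected time between soft errors: {hours_between_upsets:.1f} h")
    ```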

  18. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.
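
    A minimal way to probe the communication side of this trade-off is a per-rank-pair ping-pong microbenchmark. The mpi4py sketch below is a generic illustration, not the Cray MPI features evaluated in the paper; message size and repetition count are arbitrary, and it assumes an even number of ranks.

    ```python
    import time
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    assert size % 2 == 0, "run with an even number of ranks, e.g. mpirun -n 2"

    partner = rank + 1 if rank % 2 == 0 else rank - 1
    buf = np.zeros(1 << 20, dtype=np.uint8)          # 1 MiB message
    reps = 100

    comm.Barrier()
    t0 = time.perf_counter()
    for _ in range(reps):
        if rank % 2 == 0:
            comm.Send(buf, dest=partner)
            comm.Recv(buf, source=partner)
        else:
            comm.Recv(buf, source=partner)
            comm.Send(buf, dest=partner)
    elapsed = time.perf_counter() - t0

    if rank == 0:
        bw = 2 * reps * buf.nbytes / elapsed / 1e9   # GB/s moved by one rank pair
        print(f"ping-pong bandwidth per rank pair: {bw:.2f} GB/s")
    ```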

  19. Activity report of Computing Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1997-07-01

    In April 1997, the National Laboratory for High Energy Physics (KEK), the Institute for Nuclear Study, University of Tokyo (INS), and the Meson Science Laboratory, Faculty of Science, University of Tokyo were reorganized into the High Energy Accelerator Research Organization, aiming at further development of a wide field of accelerator science using high energy accelerators. In this Research Organization, the Applied Research Laboratory is composed of four Centers that support research activities common to the whole Organization and carry out related research and development (R and D), integrating the four previous centers and their related sections in Tanashi. The expected support of research activities covers not only general assistance but also the preparation and R and D of the systems required for the promotion and future plans of the research. Computer technology is essential to the development of the research and can be shared across the various research programs in the Research Organization. In response to such expectations, the new Computing Research Center is required to carry out its duties by working and cooperating with researchers in areas ranging from R and D on data analysis of various experiments to computational physics driven by powerful computing capacity such as supercomputers. This report describes the work and present state of the Data Processing Center of KEK in the first chapter and of the computer room of INS in the second chapter, as well as future problems for the Computing Research Center. (G.K.)

  20. 20 CFR 670.535 - Are Job Corps centers required to establish behavior management systems?

    Science.gov (United States)

    2010-04-01

    ... behavior management systems? 670.535 Section 670.535 Employees' Benefits EMPLOYMENT AND TRAINING... systems? (a) Yes, each Job Corps center must establish and maintain its own student incentives system to encourage and reward students' accomplishments. (b) The Job Corps center must establish and maintain a...

  1. It is time to talk about people: a human-centered healthcare system

    Directory of Open Access Journals (Sweden)

    Borgi Lea

    2010-11-01

    Examining vulnerabilities within our current healthcare system, we propose borrowing two tools from the fields of engineering and design: (a) Reason's system approach [1] and (b) user-centered design [2,3]. Both approaches are human-centered in that they consider common patterns of human behavior when analyzing systems to identify problems and generate solutions. This paper examines these two human-centered approaches in the context of healthcare. We argue that maintaining a human-centered orientation in clinical care, research, training, and governance is critical to the evolution of an effective and sustainable healthcare system.

  2. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. The WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  3. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. The WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  4. Human-centered design of the human-system interfaces of medical equipment: thyroid uptake system

    International Nuclear Information System (INIS)

    Monteiro, Jonathan K.R.; Farias, Marcos S.; Santos, Isaac J.A. Luquetti; Monteiro, Beany G.

    2013-01-01

    Technology plays an important role in modern medical centers, making healthcare increasingly complex and reliant on complex technical equipment. This technical complexity is particularly noticeable in nuclear medicine. Poorly designed human-system interfaces can increase the risk of human error. The human-centered approach emphasizes developing the equipment with a deep understanding of the users' activities, current work practices, and the needs and abilities of the users. An important concept of human-centered design is that the ease of use of the equipment can be ensured only if users are actively incorporated in all phases of the life cycle of the design process. Representative groups of users are exposed to the equipment at various stages in development, in a variety of testing, evaluation and interviewing situations. The user feedback obtained is then used to refine the design, with the result serving as input to the next iteration of the design process. A limit of the approach is that users cannot address particular future needs without prior experience or knowledge of the equipment's operation. The aim of this paper is to present a methodological framework that contributes to the design of human-system interfaces through an approach centered on the users and their activities. A case study is described in which the methodological framework is being applied in the development of new human-system interfaces for the thyroid uptake system. (author)

  5. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  6. Plant Resources Center and the Vietnamese genebank system

    Science.gov (United States)

    The highly diverse floristic composition of Vietnam has been recognized as a center of angiosperm expansion and crop biodiversity. The broad range of climatic environments includes habitats ranging from tropical and subtropical to temperate and alpine flora. The human component of the country includes 54 et...

  7. Buying Program of the Standard Automated Materiel Management System. Automated Small Purchase System: Defense Supply Center Philadelphia

    National Research Council Canada - National Science Library

    2001-01-01

    The Standard Automated Materiel Management System Automated Small Purchase System is a fully automated micro-purchase system used by the General and Industrial Directorate at the Defense Supply Center Philadelphia...

  8. System of automatic control over data Acquisition and Transmission to IGR NNC RK Data Center

    International Nuclear Information System (INIS)

    Komarov, I.I.; Gordienko, D.D.; Kunakov, A.V.

    2005-01-01

    An automated system for seismic and acoustic data acquisition and transmission in real time was established in the Data Center of IGR NNC RK, and it functions very successfully. The system monitors the quality and volume of acquired information and also controls the status of the system and communication channels. Statistical data on system operation are accumulated in the created database. Information on system status is displayed on the Center's Web page. (author)

  9. Space Operations Center system analysis. Volume 3, book 2: SOC system definition report, revision A

    Science.gov (United States)

    1982-01-01

    The Space Operations Center (SOC) orbital space station program operations are described. A work breakdown structure for the general purpose support equipment, construction and transportation support, and resupply and logistics support systems is given. The basis for the design of each element is presented, and a mass estimate for each element supplied. The SOC build-up operation, construction, flight support, and satellite servicing operations are described. Detailed programmatics and cost analysis are presented.

  10. [Outline and effectiveness of support system in the surgical center by supply, processing and distribution center (SPD)].

    Science.gov (United States)

    Ito, Nobuko; Chinzei, Mieko; Fujiwara, Haruko; Usui, Hisako; Hanaoka, Kazuo; Saitoh, Eisho

    2006-04-01

    The Supply, Processing and Distribution (SPD) system has been used in the surgical center of the University of Tokyo Hospital since October 2002. This system has reduced the stock of medicines and materials and dramatically decreased medical costs. We designed kits of therapeutic drugs related to anesthesia, prepared for general anesthesia, epidural and spinal anesthesia, and cardiovascular anesthesia, respectively. One kit is used for one patient, and new kits are prepared in the anesthesia preparation room by pharmaceutical department staff. Equipment for general anesthesia as well as epidural and spinal anesthesia, and central catheter sets, were also designed and provided for each patient by the SPD system. According to a questionnaire given to anesthesia residents before and after introduction of the SPD system, the time spent on anesthesia preparation was reduced, and 92.3% of residents answered that preparation for anesthesia on the previous day had become easier. Most of the anesthesia residents were less stressed after introduction of the SPD system. Besides the dramatic economic effect, coordination between the SPD system and the pharmaceutical department reduced anesthesia preparation time and staff stress. Introduction of the SPD support system to the surgical center is important for safe and effective management of operating rooms.

  11. Evaluating energy saving system of data centers based on AHP and fuzzy comprehensive evaluation model

    Science.gov (United States)

    Jiang, Yingni

    2018-03-01

    Due to the high energy consumption of communications, energy saving in data centers must be enforced. However, the lack of evaluation mechanisms has restrained progress in the energy-saving construction of data centers. In this paper, an energy-saving evaluation index system for data centers was constructed after clarifying the influencing factors. Based on the evaluation index system, the analytic hierarchy process (AHP) was used to determine the weights of the evaluation indexes. Subsequently, a three-grade fuzzy comprehensive evaluation model was constructed to evaluate the energy saving system of data centers.
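
    To make the method concrete, the sketch below derives AHP weights from a pairwise comparison matrix (principal eigenvector plus a consistency check) and combines them with a fuzzy membership matrix in a single-level synthesis. The indexes, judgments, and membership degrees are invented for illustration; the paper's model applies this structure over three grades.

    ```python
    import numpy as np

    # Pairwise comparison of three hypothetical indexes:
    # cooling efficiency, IT equipment efficiency, power distribution loss.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    # AHP weights: principal eigenvector of A, normalized to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A)
    w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    w = w / w.sum()

    # Consistency check (random index RI = 0.58 for a 3x3 matrix).
    lam_max = np.real(eigvals).max()
    CI = (lam_max - len(A)) / (len(A) - 1)
    CR = CI / 0.58
    print("weights:", np.round(w, 3), " consistency ratio:", round(CR, 3))

    # Fuzzy evaluation matrix R: membership of each index in the grades
    # {excellent, good, fair, poor}, e.g. from expert scoring (hypothetical).
    R = np.array([[0.5, 0.3, 0.2, 0.0],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.1, 0.4, 0.3, 0.2]])

    # Weighted-average fuzzy synthesis B = w . R, then pick the top grade.
    B = w @ R
    grades = ["excellent", "good", "fair", "poor"]
    print("membership:", np.round(B, 3), "->", grades[int(np.argmax(B))])
    ```

    A consistency ratio below about 0.1 is conventionally taken as acceptable before the weights are used.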

  12. Staff roster for 1979: National Center for Analysis of Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    This publication is a compilation of resumes from the current staff of the National Center for Analysis of Energy Systems. The Center, founded in January 1976, is one of four areas within the Department of Energy and Environment at Brookhaven National Laboratory. The emphasis of programs at the Center is on energy policy and planning studies at the regional, national, and international levels, involving quantitative, interdisciplinary studies of the technological, economic, social, and environmental aspects of energy systems. To perform these studies the Center has assembled a staff of experts in the areas of science, technology, economics planning, health and safety, information systems, and quantitative analysis.

  13. The Patient-Centered Medical Home Neighbor: A Critical Concept for a Redesigned Healthcare Delivery System

    Science.gov (United States)

    2011-01-25

    Sharing Knowledge: Achieving Breakthrough Performance, 2010 Military Health System Conference. The Patient-Centered Medical Home Neighbor: A Critical Concept for a Redesigned Healthcare Delivery System. Report date: 25 Jan 2011; dates covered: 00-00-2011 to 00-00-2011. What is the Patient-Centered Medical Home? …a vision of health care as it should be; …a framework for organizing systems of care at both the

  14. The Design of HVAC System in the Conventional Facility of Proton Accelerator Research Center

    International Nuclear Information System (INIS)

    Jeon, G. P.; Kim, J. Y.; Choi, B. H.

    2007-01-01

    The HVAC systems for the conventional facility of the Proton Accelerator Research Center consist of three systems: the accelerator building HVAC system, the beam application building HVAC system, and miscellaneous HVAC systems. We designed the accelerator building HVAC system and the beam application research area HVAC system in the conventional facilities of the Proton Accelerator Research Center. The accelerator building HVAC system is divided into the accelerator tunnel area, klystron area, klystron gallery area, and accelerator assembly area. The beam application research area HVAC system is divided into the beam experimental hall, accelerator control area, beam application research area, and Ion beam application building. In this paper, we describe the system design requirements and explain the system configuration for each system. We present the operation scenario of the HVAC system in the conventional facility of the Proton Accelerator Research Center.

  15. Gas injection system in the Tara center cell

    International Nuclear Information System (INIS)

    Brau, K.; Post, R.S.; Sevillano, E.

    1985-11-01

    Precise control of the gas fueling is essential to the successful operation of tandem mirror plasmas. Improper choice of fueling location, magnetic geometry, and gas injection rates can prevent potential and thermal barrier formation, as well as reduce the energy confinement time. In designing the new gas injection configuration for the Tara center cell, the following issues were addressed: RF potential barriers, gas leakage, and charge exchange recombination. 2 refs., 6 figs

  16. 78 FR 74163 - Harrison Medical Center, a Subsidiary of Franciscan Health System Bremerton, Washington; Notice...

    Science.gov (United States)

    2013-12-10

    ... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-83,070] Harrison Medical Center, a Subsidiary of Franciscan Health System Bremerton, Washington; Notice of Negative Determination... workers of Harrison Medical Center, a subsidiary of Franciscan Health System, Bremerton, Washington...

  17. Application of NASA Kennedy Space Center system assurance analysis methodology to nuclear power plant systems designs

    International Nuclear Information System (INIS)

    Page, D.W.

    1985-01-01

    The Kennedy Space Center (KSC) entered into an agreement with the Nuclear Regulatory Commission (NRC) to conduct a study to demonstrate the feasibility and practicality of applying the KSC System Assurance Analysis (SAA) methodology to nuclear power plant systems designs. In joint meetings of KSC and Duke Power personnel, an agreement was made to select two CATAWBA systems, the Containment Spray System and the Residual Heat Removal System, for the analyses. Duke Power provided KSC with a full set of Final Safety Analysis Reports as well as schematics for the two systems. During Phase I of the study the reliability analyses of the SAA were performed. During Phase II the hazard analyses were performed. The final product of Phase II is a handbook for implementing the SAA methodology into nuclear power plant systems designs. The purpose of this paper is to describe the SAA methodology as it applies to nuclear power plant systems designs and to discuss the feasibility of its application. The conclusion is drawn that nuclear power plant systems and aerospace ground support systems are similar in complexity and design and share common safety and reliability goals. The SAA methodology is readily adaptable to nuclear power plant designs because of its practical application of existing and well known safety and reliability analytical techniques tied to an effective management information system

  18. Baselining the New GSFC Information Systems Center: The Foundation for Verifiable Software Process Improvement

    Science.gov (United States)

    Parra, A.; Schultz, D.; Boger, J.; Condon, S.; Webby, R.; Morisio, M.; Yakimovich, D.; Carver, J.; Stark, M.; Basili, V.; hide

    1999-01-01

    This paper describes a study performed at the Information System Center (ISC) in NASA Goddard Space Flight Center. The ISC was set up in 1998 as a core competence center in information technology. The study aims at characterizing people, processes and products of the new center, to provide a basis for proposing improvement actions and comparing the center before and after these actions have been performed. The paper presents the ISC, goals and methods of the study, results and suggestions for improvement, through the branch-level portion of this baselining effort.

  19. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  20. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
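
    A toy Python analogue of the hybrid decomposition (threads sharing an array inside each rank, MPI between ranks) is sketched below using mpi4py and a thread pool. It is not the NPB code, and because of Python's GIL the threads illustrate the structure rather than the speedup; the thread count and array sizes are arbitrary.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n_threads = 4                      # "OpenMP-like" workers inside one rank
    local = np.arange(rank * 1_000_000, (rank + 1) * 1_000_000, dtype=np.float64)

    def partial_sum(tid):
        # threads share the same 'local' array (shared-memory analogue within a node)
        return local[tid::n_threads].sum()

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        node_sum = sum(pool.map(partial_sum, range(n_threads)))

    # inter-node step: MPI reduction across ranks (distributed-memory analogue)
    total = comm.allreduce(node_sum, op=MPI.SUM)
    if rank == 0:
        expected = (size * 1_000_000) * (size * 1_000_000 - 1) / 2
        print("hybrid sum:", total, " expected:", expected)
    ```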

  1. From product centered design to value centered design: understanding the value-system

    DEFF Research Database (Denmark)

    Randmaa, Merili; Howard, Thomas J.; Otto, T.

    Product design has focused on different parameters through history: design for usability, design for manufacturing, design for assembly, etc. Today, as products are increasingly bundled with services, it is important to interconnect product, service and business model design to create a synergy effect and offer more value to the customer for less effort. Value, and understanding the value-system, needs to be the focus of business strategy. Value can be created, exchanged and perceived. It can be tangible (physical products, money) or intangible (information, experience, relationships, service). Creating value is usually a co-creation process, where customers, suppliers and manufacturers all have their part. This paper describes a paradigm shift towards value-based thinking and proposes a new methodology for understanding and analysing the value system.

  2. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice has been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio that cannot be overlapped on BG/L in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and fulfill a 30-year-long dream for lattice QCD.
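
    The abstract notes that the Conjugate Gradient inverter contains two global sums per iteration, which on a large machine become machine-wide reductions. The sketch below shows a generic distributed CG iteration with those two MPI_Allreduce calls; the simple local SPD stencil stands in for the Wilson-Dirac operator and is purely illustrative.

```c
/* Sketch of a distributed Conjugate Gradient iteration with the two
 * global sums per iteration mentioned above (plain MPI_Allreduce
 * calls here). The operator is a simple local SPD stencil standing in
 * for the Wilson-Dirac operator; purely illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NLOC 4096   /* local vector length per rank (assumed) */

static void apply_A(const double *x, double *y)  /* local tridiagonal SPD */
{
    for (int i = 0; i < NLOC; i++) {
        double left  = (i > 0)        ? x[i - 1] : 0.0;
        double right = (i < NLOC - 1) ? x[i + 1] : 0.0;
        y[i] = 2.0 * x[i] - 0.5 * (left + right);
    }
}

static double global_dot(const double *a, const double *b)
{
    double loc = 0.0, glob = 0.0;
    for (int i = 0; i < NLOC; i++) loc += a[i] * b[i];
    MPI_Allreduce(&loc, &glob, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return glob;                      /* one machine-wide global sum */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *x = calloc(NLOC, sizeof(double));
    double *r = malloc(NLOC * sizeof(double));
    double *p = malloc(NLOC * sizeof(double));
    double *q = malloc(NLOC * sizeof(double));

    for (int i = 0; i < NLOC; i++) { r[i] = 1.0; p[i] = r[i]; } /* b=1, x0=0 */
    double rr = global_dot(r, r);

    for (int it = 0; it < 200 && rr > 1e-20; it++) {
        apply_A(p, q);
        double alpha = rr / global_dot(p, q);      /* global sum #1 */
        for (int i = 0; i < NLOC; i++) { x[i] += alpha * p[i]; r[i] -= alpha * q[i]; }
        double rr_new = global_dot(r, r);          /* global sum #2 */
        double beta = rr_new / rr;
        for (int i = 0; i < NLOC; i++) p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
    if (rank == 0) printf("final |r|^2 = %.3e\n", rr);

    free(x); free(r); free(p); free(q);
    MPI_Finalize();
    return 0;
}
```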

  3. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed by incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods and algorithms developed will, however, be of wider interest.

  4. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for their initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s per Watt at a price of approximately 3.69 MFlop/s per dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
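
    The benchmark quoted above is a standard Lennard-Jones pair potential. The following is a minimal serial sketch of that force and energy kernel in reduced units; the cutoff, lattice setup, and O(N^2) loop are illustrative simplifications, not the SPaSM Cell implementation.

```c
/* Minimal Lennard-Jones pair force/energy kernel (serial, O(N^2)),
 * illustrating the interaction evaluated in the benchmark. Reduced
 * units (epsilon = sigma = 1); cutoff chosen arbitrarily. */
#include <stdio.h>

typedef struct { double x, y, z, fx, fy, fz; } Atom;

static double lj_forces(Atom *a, int n, double rcut)
{
    double rc2 = rcut * rcut, epot = 0.0;
    for (int i = 0; i < n; i++) a[i].fx = a[i].fy = a[i].fz = 0.0;
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            double dx = a[i].x - a[j].x;
            double dy = a[i].y - a[j].y;
            double dz = a[i].z - a[j].z;
            double r2 = dx*dx + dy*dy + dz*dz;
            if (r2 >= rc2 || r2 == 0.0) continue;
            double inv2 = 1.0 / r2;
            double inv6 = inv2 * inv2 * inv2;
            /* U(r) = 4 (r^-12 - r^-6); force is -dU/dr along r */
            double fscale = 24.0 * inv6 * (2.0 * inv6 - 1.0) * inv2;
            epot += 4.0 * inv6 * (inv6 - 1.0);
            a[i].fx += fscale * dx; a[j].fx -= fscale * dx;
            a[i].fy += fscale * dy; a[j].fy -= fscale * dy;
            a[i].fz += fscale * dz; a[j].fz -= fscale * dz;
        }
    }
    return epot;
}

int main(void)
{
    enum { N = 256 };
    Atom a[N];
    for (int i = 0; i < N; i++) {            /* simple cubic lattice */
        a[i].x = 1.2 * (i % 8);
        a[i].y = 1.2 * ((i / 8) % 8);
        a[i].z = 1.2 * (i / 64);
    }
    printf("potential energy = %f\n", lj_forces(a, N, 2.5));
    return 0;
}
```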

  5. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment, with a variety of novel architectures and frameworks and no clarity as to which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The proposed strategy consists of representing the whole time-integration algorithm using only three basic algebraic operations: sparse matrix-vector product (SpMV), linear combination of vectors, and dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
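
    The portability strategy described above reduces the whole time-integration algorithm to three algebraic kernels. Below is a minimal serial reference for those three operations (CSR sparse matrix-vector product, linear combination of vectors, and dot product); a portable code would hide CPU or GPU back ends behind this same interface. The 3x3 matrix is an arbitrary example.

```c
/* The three building blocks of the algebraic approach described above:
 * sparse matrix-vector product (CSR), linear combination of vectors,
 * and dot product. Serial reference versions only. */
#include <stdio.h>

/* y = A*x for a CSR matrix */
static void spmv_csr(int n, const int *rowptr, const int *col,
                     const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; k++)
            sum += val[k] * x[col[k]];
        y[i] = sum;
    }
}

/* z = alpha*x + beta*y */
static void lincomb(int n, double alpha, const double *x,
                    double beta, const double *y, double *z)
{
    for (int i = 0; i < n; i++) z[i] = alpha * x[i] + beta * y[i];
}

/* <x, y> */
static double dot(int n, const double *x, const double *y)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x[i] * y[i];
    return s;
}

int main(void)
{
    /* 3x3 example: A = [2 -1 0; -1 2 -1; 0 -1 2] */
    int rowptr[] = {0, 2, 5, 7};
    int col[]    = {0, 1, 0, 1, 2, 1, 2};
    double val[] = {2, -1, -1, 2, -1, -1, 2};
    double x[] = {1, 1, 1}, y[3], z[3];

    spmv_csr(3, rowptr, col, val, x, y);     /* y = A*x = {1, 0, 1} */
    lincomb(3, 1.0, x, -0.5, y, z);          /* z = x - 0.5*y */
    printf("dot(x, z) = %g\n", dot(3, x, z));
    return 0;
}
```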

  6. Solar heating and hot water system installed at the Senior Citizen Center, Huntsville, Alabama

    Science.gov (United States)

    1980-01-01

    The solar energy system installed at the Huntsville Senior Citizen Center is described. Detailed drawings of the complete system and discussions of the planning, the hardware, recommendations, and other pertinent information are presented.

  7. Heat pump centered integrated community energy systems: system development. Georgia Institute of Technology final report

    Energy Technology Data Exchange (ETDEWEB)

    Wade, D.W.; Trammell, B.C.; Dixit, B.S.; McCurry, D.C.; Rindt, B.A.

    1979-12-01

    Heat Pump Centered-Integrated Community Energy Systems (HP-ICES) show the promise of utilizing low-grade thermal energy for low-quality energy requirements such as space heating and cooling. The Heat Pump - Wastewater Heat Recovery (HP-WHR) scheme is one approach to an HP-ICES that proposes to reclaim low-grade thermal energy from a community's wastewater effluent. This report develops the concept of an HP-WHR system, evaluates the potential performance and economics of such a system, and examines the potential for application. A thermodynamic performance analysis of a hypothetical system projects an overall system Coefficient of Performance (C.O.P.) of 2.181 to 2.264 for wastewater temperatures varying from 50°F to 80°F. Primary energy source savings from the nationwide implementation of this system are projected to be 6.0 quads of fuel oil, 8.5 quads of natural gas, or 29.7 quads of coal for the period 1980 to 2000, depending upon the type and mix of conventional space conditioning systems which could be displaced with the HP-WHR system. Site-specific HP-WHR system designs are presented for two application communities in Georgia. Performance analyses for these systems project annual cycle system C.O.P.'s of 2.049 and 2.519. Economic analysis on the basis of a life cycle cost comparison shows one site-specific system design to be cost competitive in the immediate market with conventional residential and light commercial HVAC systems. The second site-specific system design is shown through a similar economic analysis to be more costly than conventional systems due mainly to the current low energy costs for natural gas. It is anticipated that, as energy costs escalate, this HP-WHR system will also approach the threshold of economic viability.

  8. Implementing the patient-centered medical home in complex adaptive systems: Becoming a relationship-centered patient-centered medical home.

    Science.gov (United States)

    Flieger, Signe Peterson

    This study explores the implementation experience of nine primary care practices becoming patient-centered medical homes (PCMH) as part of the New Hampshire Citizens Health Initiative Multi-Stakeholder Medical Home Pilot. The purpose of this study is to apply complex adaptive systems theory and relationship-centered organizations theory to explore how nine diverse primary care practices in New Hampshire implemented the PCMH model and to offer insights for how primary care practices can move from a structural PCMH to a relationship-centered PCMH. Eighty-three interviews were conducted with administrative and clinical staff at the nine pilot practices, payers, and conveners of the pilot between November and December 2011. The interviews were transcribed, coded, and analyzed using both a priori and emergent themes. Although there is value in the structural components of the PCMH (e.g., disease registries), these structures are not enough. Becoming a relationship-centered PCMH requires attention to reflection, sensemaking, learning, and collaboration. This can be facilitated by setting aside time for communication and relationship building through structured meetings about PCMH components as well as the implementation process itself. Moreover, team-based care offers a robust opportunity to move beyond the structures to focus on relationships and collaboration. (a) Recognize that PCMH implementation is not a linear process. (b) Implementing the PCMH from a structural perspective is not enough. Although the National Committee for Quality Assurance or other guidelines can offer guidance on the structural components of PCMH implementation, this should serve only as a starting point. (c) During implementation, set aside structured time for reflection and sensemaking. (d) Use team-based care as a cornerstone of transformation. Reflect on team structures and also interactions of the team members. Taking the time to reflect will facilitate greater sensemaking and learning and

  9. Information systems performance evaluation, introducing a two-level technique: Case study call centers

    Directory of Open Access Journals (Sweden)

    Hesham A. Baraka

    2015-03-01

    The objective of this paper is to introduce a new technique that can support decision makers in the call center industry in evaluating and analyzing the performance of call centers. The technique presented is derived from research on measuring the success or failure of information systems. Two models are mainly adopted, namely the DeLone and McLean model, first introduced in 1992, and the Design-Reality Gap model introduced by Heeks in 2002. Two indices are defined to calculate the performance of the call center: the success index and the gap index. An evaluation tool has been developed to allow call center managers to evaluate the performance of their call centers with a systematic analytical approach; the tool was applied to 4 call centers from different areas, ranging from simple applications such as food ordering, marketing, and sales, and technical support systems, to real-time services such as emergency control systems. Results showed the importance of using information systems models to evaluate complex systems such as call centers. The models used allow identifying the dimensions in which the call centers are facing challenges, together with the individual indicators in these dimensions that are causing the poor performance of the call center.
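
    The abstract defines a success index and a gap index but does not give their formulas. The sketch below shows one plausible way such indices can be computed, as a weighted average of DeLone-and-McLean-style dimension ratings and a mean design-versus-reality difference; the dimension names, weights, the 0-10 scale, and the sample ratings are assumptions for illustration, not the authors' definitions.

```c
/* Hedged sketch of a two-index evaluation in the spirit of the paper
 * above: a success index as a weighted average of DeLone-McLean-style
 * dimension ratings, and a gap index as the mean absolute
 * design-vs-reality difference. All names, scales, and numbers below
 * are illustrative assumptions only. */
#include <stdio.h>
#include <math.h>

#define NDIM 6

int main(void)
{
    const char *dim[NDIM] = { "system quality", "information quality",
        "service quality", "use", "user satisfaction", "net benefits" };
    double weight[NDIM]  = { 0.20, 0.20, 0.15, 0.15, 0.15, 0.15 };
    double rating[NDIM]  = { 7.0, 6.5, 5.0, 8.0, 6.0, 7.5 };  /* 0..10 */
    double design[NDIM]  = { 9.0, 8.0, 8.0, 8.0, 9.0, 9.0 };  /* intended */
    double reality[NDIM] = { 7.0, 6.0, 5.5, 7.5, 6.5, 7.0 };  /* observed */

    double success = 0.0, gap = 0.0;
    for (int i = 0; i < NDIM; i++) {
        success += weight[i] * rating[i];          /* weighted average */
        gap     += fabs(design[i] - reality[i]);   /* design-reality gap */
    }
    gap /= NDIM;

    printf("success index = %.2f / 10\n", success);
    printf("gap index     = %.2f (0 = no gap)\n", gap);
    for (int i = 0; i < NDIM; i++)
        printf("  %-20s rating %.1f  gap %.1f\n",
               dim[i], rating[i], design[i] - reality[i]);
    return 0;
}
```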

  10. Unifying Human Centered Design and Systems Engineering for Human Systems Integration

    Science.gov (United States)

    Boy, Guy A.; McGovernNarkevicius, Jennifer

    2013-01-01

    Despite the holistic approach of systems engineering (SE), systems still fail, and sometimes spectacularly. Requirements, solutions and the world constantly evolve and are very difficult to keep current. SE requires more flexibility, and new approaches to SE have to be developed that include creativity as an integral part and that appropriately allocate the functions of people and technology within our highly interconnected, complex organizations. Instead of disregarding complexity because it is too difficult to handle, we should take advantage of it, discovering behavioral attractors and the emerging properties that it generates. Human-centered design (HCD) provides the creativity factor that SE lacks. It promotes modeling and simulation from the early stages of design and throughout the life cycle of a product. Unifying HCD and SE will shape appropriate human-systems integration (HSI) and produce successful systems.

  11. Constraint based scheduling for the Goddard Space Flight Center distributed Active Archive Center's data archive and distribution system

    Science.gov (United States)

    Short, Nick, Jr.; Bedet, Jean-Jacques; Bodden, Lee; Boddy, Mark; White, Jim; Beane, John

    1994-01-01

    The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been operational since October 1, 1993. Its mission is to support the Earth Observing System (EOS) by providing rapid access to EOS data and analysis products, and to test Earth Observing System Data and Information System (EOSDIS) design concepts. One of the challenges is to ensure quick and easy retrieval of any data archived within the DAAC's Data Archive and Distribution System (DADS). Over the 15-year life of the EOS project, an estimated several petabytes (10^15 bytes) of data will be permanently stored. Accessing that amount of information is a formidable task that will require innovative approaches. As a precursor of the full EOS system, the GSFC DAAC, with a few terabits of storage, has implemented a prototype of a constraint-based task and resource scheduler to improve the performance of the DADS. This Honeywell Task and Resource Scheduler (HTRS), developed by the Honeywell Technology Center in cooperation with the Information Science and Technology Branch/935, the Code X Operations Technology Program, and the GSFC DAAC, makes better use of limited resources, prevents backlog of data, and provides information about resource bottlenecks and performance characteristics. The prototype, which was developed concurrently with the GSFC Version 0 (V0) DADS, models DADS activities such as ingestion and distribution with priority, precedence, resource requirements (disk and network bandwidth) and temporal constraints. HTRS supports schedule updates, insertions, and retrieval of task information via an Application Program Interface (API). The prototype has demonstrated, with a few examples, the substantial advantages of using HTRS over scheduling algorithms such as a First In First Out (FIFO) queue. The kernel scheduling engine for HTRS, called Kronos, has been successfully applied to several other domains such as space shuttle mission scheduling, demand flow manufacturing, and avionics communications.

  12. A systems engineering perspective on the human-centered design of health information systems.

    Science.gov (United States)

    Samaras, George M; Horst, Richard L

    2005-02-01

    The discipline of systems engineering, over the past five decades, has used a structured systematic approach to managing the "cradle to grave" development of products and processes. While elements of this approach are typically used to guide the development of information systems that instantiate a significant user interface, it appears to be rare for the entire process to be implemented. In fact, a number of authors have put forth development lifecycle models that are subsets of the classical systems engineering method, but fail to include steps such as incremental hazard analysis and post-deployment corrective and preventative actions. Given that most health information systems have safety implications, we argue that the design and development of such systems would benefit from implementing this systems engineering approach in full. Particularly with regard to bringing a human-centered perspective to the formulation of system requirements and the configuration of effective user interfaces, this classical systems engineering method provides an excellent framework for incorporating human factors (ergonomics) knowledge and integrating ergonomists in the interdisciplinary development of health information systems.

  13. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  14. Application of NASA Kennedy Space Center System Assurance Analysis methodology to nuclear power plant systems designs

    International Nuclear Information System (INIS)

    Page, D.W.

    1985-01-01

    In May of 1982, the Kennedy Space Center (KSC) entered into an agreement with the NRC to conduct a study to demonstrate the feasibility and practicality of applying the KSC System Assurance Analysis (SAA) methodology to nuclear power plant systems designs. North Carolina's Duke Power Company expressed an interest in the study and proposed the nuclear power facility at CATAWBA for the basis of the study. In joint meetings of KSC and Duke Power personnel, an agreement was made to select two CATAWBA systems, the Containment Spray System and the Residual Heat Removal System, for the analyses. Duke Power provided KSC with a full set of Final Safety Analysis Reports (FSAR) as well as schematics for the two systems. During Phase I of the study the reliability analyses of the SAA were performed. During Phase II the hazard analyses were performed. The final product of Phase II is a handbook for implementing the SAA methodology into nuclear power plant systems designs. The purpose of this paper is to describe the SAA methodology as it applies to nuclear power plant systems designs and to discuss the feasibility of its application. (orig./HP)

  15. NASA Johnson Space Center Life Sciences Data System

    Science.gov (United States)

    Rahman, Hasan; Cardenas, Jeffery

    1994-01-01

    The Life Sciences Project Division (LSPD) at JSC, which manages human life sciences flight experiments for the NASA Life Sciences Division, augmented its Life Sciences Data System (LSDS) in support of the Spacelab Life Sciences-2 (SLS-2) mission, October 1993. The LSDS is a portable ground system supporting Shuttle, Spacelab, and Mir based life sciences experiments. The LSDS supports acquisition, processing, display, and storage of real-time experiment telemetry in a workstation environment. The system may acquire digital or analog data, storing the data in experiment packet format. Data packets from any acquisition source are archived and meta-parameters are derived through the application of mathematical and logical operators. Parameters may be displayed in text and/or graphical form, or output to analog devices. Experiment data packets may be retransmitted through the network interface and database applications may be developed to support virtually any data packet format. The user interface provides menu- and icon-driven program control and the LSDS system can be integrated with other workstations to perform a variety of functions. The generic capabilities, adaptability, and ease of use make the LSDS a cost-effective solution to many experiment data processing requirements. The same system is used for experiment systems functional and integration tests, flight crew training sessions and mission simulations. In addition, the system has provided the infrastructure for the development of the JSC Life Sciences Data Archive System scheduled for completion in December 1994.

  16. An Instructional Systems Approach or FAA Student Centered Training.

    Science.gov (United States)

    Federal Aviation Administration (DOT), Washington, DC.

    The Federal Aviation Administration (FAA) Academy has been using a systems approach as part of its training program since 1969. This booklet describes the general characteristics of an instructional system and explains the steps the FAA goes through in implementing the approach. These steps are: 1) recognize a need for training, 2) specify the…

  17. NOSC (Naval Ocean Systems Center) Program Manager’s Handbook.

    Science.gov (United States)

    1986-07-01

    Topics covered include optimum system performance, the benefits of human factors engineering, HFE methodological tools and design aids, and allocation of function (who does what). Human factors engineering, or HFE, is the practice of designing systems in which the human user is treated as part of the system design process.

  18. Control system at the Synchrotron Radiation Research Center

    International Nuclear Information System (INIS)

    Jan, G.J.

    1991-01-01

    A modern control system was designed for SRRC to control and monitor the facilities of the storage ring, beam transport line and injection system. The SRRC control system is a distributed system which is divided into two logical levels. Several process computers and workstations at the upper level provide the computing power for physics simulation, data storage and graphical user interfaces. VME-based Intelligent Local Controllers (ILC) are the backbone of the lower level system and handle real-time device access and closed-loop control. An Ethernet network provides the interconnection between these two layers using the IEEE 802.3 and TCP/IP protocols. The software in the upper level computers includes a database server, a network server, simulation programs, various application codes and X Windows based graphical user interfaces. Device drivers, application programs for device control, and communication programs are the major software components at the ILC level.

  19. Heat-pump-centered integrated community energy systems: system development summary

    Energy Technology Data Exchange (ETDEWEB)

    Calm, J.M.

    1980-02-01

    An introduction to district heating systems employing heat pumps to enable use of low-temperature energy sources is presented. These systems operate as thermal utilities to provide space heating and may also supply space cooling, service-water heating, and other thermal services. Otherwise-wasted heat from industrial and commercial processes, natural sources including solar and geothermal heat, and heat stored on an annual cycle from summer cooling may be effectively utilized by the systems described. These sources are abundant, and their use would conserve scarce resources and reduce adverse environmental impacts. More than one-quarter of the energy consumed in the United States is used to heat and cool buildings and to heat service water. Natural gas and oil provide approximately 83% of this energy. The systems described show potential to reduce net energy consumption for these services by 20 to 50% and to allow fuel substitution with less-scarce resources not practical in smaller, individual-building systems. Seven studies performed for the system development phase of the Department of Energy's Heat-Pump-Centered Integrated Community Energy Systems Project, together with related studies, are summarized. A concluding chapter tabulates data from these separately published studies.

  20. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to the high performance computing (HPC) accounts on which the models are run can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  1. Virtual Reality Training System for a Submarine Command Center

    National Research Council Canada - National Science Library

    Maxwell, Douglas B

    2008-01-01

    The invention as disclosed is a system that uses a combined real and virtual display interaction methodology to generate the visual appearance of submarine combat control rooms and allow interaction...

  2. The meterological information system of the Karlsruhe Nuclear Research Center

    International Nuclear Information System (INIS)

    Holleuffer-Kypke, R. von; Huebschmann, W.G.; Thomas, P.; Suess, F.

    1984-01-01

    The Meteorological Information System (MIS), comprising the meteorological instruments, the computers, and the software for data processing and recording, is part of the KfK safety and control system. In 1982 it was equipped with an independent data processing system. The report explains the arrangement and operation of the sensors and the two process computers. For selected meteorological situations the capabilities of the system are demonstrated, i.e., the presentation of the vertical profiles of wind, temperature and turbulence in the lower atmospheric boundary layer as well as the calculation and graphical representation of the transport and dispersion into the KfK environment of radioactive pollutants released into the atmosphere by the nuclear installations of the KfK. (orig.) [de

  3. The meteorological measurement system of the Karlsruhe Nuclear Research Center

    International Nuclear Information System (INIS)

    Dilger, H.

    1976-08-01

    The system mainly serves to record the parameters which are important for the diffusion of the offgas plume. The system includes 47 instruments in total, which are used to measure wind velocity, wind direction, the wind vector, temperature, dew point, solar and heat radiation, precipitation and atmospheric pressure, most of them mounted on the 200 m high meteorological tower. (orig./HP) [de

  4. A New Method for Research on the Center-Focus Problem of Differential Systems

    OpenAIRE

    Zhou, Zhengxin

    2014-01-01

    We will introduce Mironenko’s method to discuss the Poincaré center-focus problem, and compare the methods of Lyapunov and Mironenko. We apply the Mironenko method to discuss the qualitative behavior of solutions of some planar polynomial differential systems and derive the sufficient conditions for a critical point to be a center.

  5. Intention and Usage of Computer Based Information Systems in Primary Health Centers

    Science.gov (United States)

    Hosizah; Kuntoro; Basuki N., Hari

    2016-01-01

    Computer-based information systems (CBIS) are adopted in almost all health care settings, including the primary health centers in East Java Province, Indonesia. Some of the software packages available were SIMPUS, SIMPUSTRONIK, SIKDA Generik, and e-puskesmas. Unfortunately, most of the primary health centers did not implement them successfully. This…

  6. Intelligent adaptive systems an interaction-centered design perspective

    CERN Document Server

    Hou, Ming; Burns, Catherine

    2014-01-01

    A synthesis of recent research and developments on intelligent adaptive systems from the HF (human factors) and HCI (human-computer interaction) domains, this book provides integrated design guidance and recommendations for researchers and system developers. It addresses a recognized lack of integration between the HF and HCI research communities, which has led to inconsistencies between the research approaches adopted, and a lack of exploitation of research from one field by the other. The book establishes design guidance through the review of conceptual frameworks, analytical methodologies,

  7. Development of a user-centered radiology teaching file system

    Science.gov (United States)

    dos Santos, Marcelo; Fujino, Asa

    2011-03-01

    Learning radiology requires systematic and comprehensive study of a large knowledge base of medical images. This work presents the development of a digital radiology teaching file system. The proposed system has been created to offer a set of services customized to users' contexts and their informational needs. This has been done by means of an electronic infrastructure that provides easy and integrated access to all relevant patient data at the time of image interpretation, so that radiologists and researchers can examine all available data to reach well-informed conclusions, while protecting patient data privacy and security. The system is presented as an environment that implements a distributed clinical database, including medical images, authoring tools, a repository for multimedia documents, and a peer-review model that assures dataset quality. The current implementation has shown that creating clinical data repositories in networked computer environments is a good solution for reviewing information management practices in electronic environments and for creating customized, context-based tools for users connected to the system through electronic interfaces.

  8. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real time or faster on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)
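
    Running a plant transient "in real time" means pacing the numerical integration against the wall clock. The sketch below shows one minimal form of that synchronization loop using POSIX absolute-deadline sleeps; the trivial first-order "plant model" and the 0.1 s step are stand-ins, not the simulation program described in the record.

```c
/* Minimal real-time pacing loop: advance the simulation one time step,
 * then sleep until that step's wall-clock deadline. The "plant model"
 * is a trivial first-order lag standing in for a full thermal-hydraulic
 * code. Uses POSIX clock_nanosleep with an absolute deadline. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

#define DT_SIM  0.1          /* simulated seconds per step */
#define STEPS   50

static void add_seconds(struct timespec *t, double s)
{
    long ns = t->tv_nsec + (long)(s * 1e9);
    t->tv_sec += ns / 1000000000L;
    t->tv_nsec = ns % 1000000000L;
}

int main(void)
{
    double level = 0.0;                    /* toy state variable */
    struct timespec deadline;
    clock_gettime(CLOCK_MONOTONIC, &deadline);

    for (int step = 0; step < STEPS; step++) {
        /* advance the model by DT_SIM of simulated time */
        level += DT_SIM * (1.0 - level);   /* first-order lag toward 1.0 */

        /* real-time synchronization: wait for this step's deadline */
        add_seconds(&deadline, DT_SIM);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);

        if (step % 10 == 0)
            printf("t = %4.1f s   level = %.3f\n", (step + 1) * DT_SIM, level);
    }
    return 0;
}
```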

  9. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  10. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi-Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. A sample source code is given.
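
    For reference, the serial core of the algorithm being parallelized is sketched below: one-hidden-layer back propagation with sigmoid units and squared error, trained on XOR purely as a sanity check. The network size, learning rate, and initialization are arbitrary choices, and the Quadrics library itself is SIMD-parallel and far more general.

```c
/* Minimal serial back propagation for a 2-3-1 multilayer perceptron
 * (sigmoid units, squared error), trained on XOR as a sanity check.
 * This is only the textbook core of the algorithm that the library
 * above parallelizes on the Quadrics SIMD machine. */
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

#define NI 2
#define NH 3

static double sigm(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    double X[4][NI] = {{0,0},{0,1},{1,0},{1,1}};
    double T[4]     = {0, 1, 1, 0};
    double wih[NH][NI + 1], who[NH + 1];   /* +1 for bias weights */
    double eta = 0.5;

    srand(1);
    for (int h = 0; h < NH; h++)
        for (int i = 0; i <= NI; i++)
            wih[h][i] = rand() / (double)RAND_MAX - 0.5;
    for (int h = 0; h <= NH; h++)
        who[h] = rand() / (double)RAND_MAX - 0.5;

    for (int epoch = 0; epoch < 20000; epoch++) {
        double err = 0.0;
        for (int p = 0; p < 4; p++) {
            /* forward pass */
            double hout[NH], net, out;
            for (int h = 0; h < NH; h++) {
                net = wih[h][NI];                       /* bias */
                for (int i = 0; i < NI; i++) net += wih[h][i] * X[p][i];
                hout[h] = sigm(net);
            }
            net = who[NH];                              /* bias */
            for (int h = 0; h < NH; h++) net += who[h] * hout[h];
            out = sigm(net);
            err += 0.5 * (T[p] - out) * (T[p] - out);

            /* backward pass: deltas, then gradient-descent updates */
            double dout = (T[p] - out) * out * (1.0 - out);
            for (int h = 0; h < NH; h++) {
                double dh = dout * who[h] * hout[h] * (1.0 - hout[h]);
                for (int i = 0; i < NI; i++) wih[h][i] += eta * dh * X[p][i];
                wih[h][NI] += eta * dh;
                who[h] += eta * dout * hout[h];
            }
            who[NH] += eta * dout;
        }
        if (epoch % 5000 == 0) printf("epoch %5d  error %.4f\n", epoch, err);
    }
    return 0;
}
```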

  11. Reliability centered maintenance pilot system implementation 241-AP-tank farm primary ventilation system final report

    International Nuclear Information System (INIS)

    MOORE TL

    2001-01-01

    When the Hanford Site Tank Farms' mission was safe storage of radioactive waste in underground storage tanks, maintenance activities focused on time-based preventive maintenance. Tank Farms' new mission to deliver waste to a vitrification plant where the waste will be processed into a form suitable for permanent storage requires a more efficient and proactive approach to maintenance. Systems must be maintained to ensure that they are operational and available to support waste feed delivery on schedule with a minimum of unplanned outages. This report describes the Reliability Centered Maintenance (RCM) pilot system that was implemented in the 241-AP Tank Farm Primary Ventilation System under PI-ORP-009 of the contract between the U.S. Department of Energy, Office of River Protection and CH2M HILL Hanford Group Inc. (CHG). The RCM analytical techniques focus on monitoring the condition of operating systems to predict equipment failures so that maintenance activities can be completed in time to prevent or mitigate unplanned equipment outages. This approach allows maintenance activities to be managed with minimal impact on plant operations. The pilot demonstration provided an opportunity for CHG staff training in RCM principles and tailoring of the RCM approach to the Hanford Tank Farms' unique needs. This report details the implementation of RCM on a pilot system in Tank Farms.

  12. Top scientific research center deploys Zambeel Aztera (TM) network storage system in high performance environment

    CERN Multimedia

    2002-01-01

    " The National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory has implemented a Zambeel Aztera storage system and software to accelerate the productivity of scientists running high performance scientific simulations and computations" (1 page).

  13. Factors Affecting Innovation Within Aeronautical Systems Center (ASC) Organizations - An inductive Study

    National Research Council Canada - National Science Library

    Feil, Eric

    2003-01-01

    .... This thesis analyzed data collected during the 2002 Chief of Staff of the Air Force Organizational Climate Survey to identify factors that affect innovation within Aeronautical Systems Center (ASC) organizations...

  14. Climate Prediction Center (CPC) NCEP-Global Forecast System (GFS) Precipitation Forecast Product

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Forecast System (GFS) forecast precipitation data at 37.5km resolution is created at the NOAA Climate Prediction Center for the purpose of near real-time...

  15. Louisville region demonstration of travel management coordination center : system pre-deployment preparation.

    Science.gov (United States)

    2013-03-01

    The purpose of the Greater Louisville Region Demonstration of Travel Management Coordination Center (TMCC): System Pre-Deployment Preparation grant was to further the phased implementation of the region's TMCC design by focusing on two major component...

  16. Nuclear Energy Center Site Survey, 1975. Part II. The U.S. electric power system and the potential role of nuclear energy centers

    International Nuclear Information System (INIS)

    1976-01-01

    Information related to Nuclear Energy Centers (NEC) in the U.S. is presented concerning the U.S. electric power system today; electricity demand history and forecasts; history and forecasts of the electric utility industry; regional notes; the status, history, and forecasts of the nuclear role; power plant siting problems and practices; nuclear facilities siting problems and practices; origin and evolution of the nuclear energy center concept; conceptualized description of nuclear energy centers; potential role of nuclear energy centers; assumptions, criteria, and bases; typical evolution of a nuclear energy center; and the nuclear fuel cycle

  17. Airbreathing Hypersonic Systems Focus at NASA Langley Research Center

    Science.gov (United States)

    Hunt, James L.; Rausch, Vincent L.

    1998-01-01

    This paper presents the status of the airbreathing hypersonic airplane and space-access vehicle design matrix, reflects on the synergies and issues, and indicates the thrust of the effort to resolve the design matrix and to focus/advance systems technology maturation. Priority is given to the design of the vision operational vehicles followed by flow-down requirements to flight demonstrator vehicles and their design for eventual consideration in the Future-X Program.

  18. Natick Soldier Systems Center Science and Technology Board (9th)

    Science.gov (United States)

    2012-05-29

    NSRDEC overarching CRADAs with all five UMass campuses (in routing); Patent License Agreement with Niche, Inc., New Bedford, MA. ... Timeline of quantified-self consumer devices (Jawbone UP, Fitbit Ultra, Nike Fuel). ... Rudolph gained system development experience in multiple IT companies. In March 1994, he co-founded Paradigm Technologies, Inc., an industry partner

  19. LANSCE (Los Alamos Neutron Scattering Center) target system performance

    International Nuclear Information System (INIS)

    Russell, G.J.; Gilmore, J.S.; Robinson, H.; Legate, G.L.; Bridge, A.; Sanchez, R.J.; Brewton, R.J.; Woods, R.; Hughes, H.G. III

    1989-01-01

    The authors measured neutron beam fluxes at LANSCE using gold foil activation techniques. They did an extensive computer simulation of the as-built LANSCE Target/Moderator/Reflector/Shield geometry. They used this mockup in a Monte Carlo calculation to predict LANSCE neutronic performance for comparison with measured results. For neutron beam fluxes at 1 eV, the ratio of measured data to calculated varies from ∼0.6 to 0.9. The computed 1 eV neutron leakage at the moderator surface is 3.9 x 10^10 n/eV-sr-s-μA for the LANSCE high-intensity water moderators. The corresponding values for the LANSCE high-resolution water moderator and the liquid hydrogen moderator are 3.3 x 10^10 and 2.9 x 10^10, respectively. LANSCE predicted moderator intensities (per proton) for a tungsten target are essentially the same as ISIS predicted moderator intensities for a depleted uranium target. The calculated LANSCE steady-state unperturbed thermal flux is of the order of 10^13 n/cm^2-s. The unique LANSCE split-target/flux-trap-moderator system is performing exceedingly well. The system has operated without a target or moderator change for over three years at nominal proton currents of 25 μA of 800-MeV protons. 17 refs., 8 figs., 3 tabs

  20. Single Center Experience with the AngioVac Aspiration System

    Energy Technology Data Exchange (ETDEWEB)

    Salsamendi, Jason, E-mail: jsalsamendi@med.miami.edu; Doshi, Mehul, E-mail: mdoshi@med.miami.edu; Bhatia, Shivank, E-mail: sbhatia1@med.miami.edu [University of Miami Miller School of Medicine/Jackson Memorial Hospital, Department of Vascular and Interventional Radiology (United States); Bordegaray, Matthew, E-mail: matthewbordegaray@gmail.com [University of Miami Miller School of Medicine/Jackson Memorial Hospital, Department Radiology (United States); Arya, Rahul, E-mail: rahul.arya@jhsmiami.org [University of Miami Miller School of Medicine/Jackson Memorial Hospital, Department of Vascular and Interventional Radiology (United States); Morton, Connor, E-mail: cmorton@med.miami.edu [University of Miami Miller School of Medicine (United States); Narayanan, Govindarajan, E-mail: gnarayanan@med.miami.edu [University of Miami Miller School of Medicine/Jackson Memorial Hospital, Department of Vascular and Interventional Radiology (United States)

    2015-08-15

    Purpose: The AngioVac catheter system is a mechanical suction device designed for removal of intravascular material using an extracorporeal veno-venous bypass circuit. The purpose of this study is to present the outcomes in patients treated with the AngioVac aspiration system and to discuss its efficacy in different vascular beds. Materials and Methods: A retrospective review was performed of seven patients treated with AngioVac between October 2013 and December 2014. In 6/7 cases, the AngioVac cannula was inserted percutaneously and the patient was placed on veno-venous bypass. In one of the cases, the cannula was inserted directly into the Fontan circuit after sternotomy and the patient was maintained on cardiopulmonary bypass. Thrombus locations included iliocaval (2), SVC (1), pulmonary arteries (1), Fontan circuit and Glenn shunt with pulmonary artery extension (1), right atrium (1), and IVC with renal vein extension (1). Results: The majority of the thrombus (50–95 %) was removed in 5/7 cases, and partial thrombus removal (<50 %) was confirmed in 2/7 cases. Mean follow-up was 205 days (range 64–403 days). All patients were alive at latest follow-up. Minor complications included three neck hematomas in two patients. No major complications occurred. Conclusion: AngioVac is a useful tool for acute thrombus removal in the large vessels. The setup and substantial cost may limit its application in straightforward cases. More studies are needed to establish the utility of AngioVac in treatment of intravascular and intracardiac material.

  1. Integrated Micro-Power System (IMPS) Development at NASA Glenn Research Center

    Science.gov (United States)

    Wilt, David; Hepp, Aloysius; Moran, Matt; Jenkins, Phillip; Scheiman, David; Raffaelle, Ryne

    2003-01-01

    Glenn Research Center (GRC) has a long history of energy-related technology developments for large space-related power systems, including photovoltaics, thermo-mechanical energy conversion, electrochemical energy storage, mechanical energy storage, power management and distribution, and power system design. Recently, many of these technologies have begun to be adapted for small, distributed power system applications, or Integrated Micro-Power Systems (IMPS). This paper will describe the IMPS component and system demonstration efforts to date.

  2. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    International Nuclear Information System (INIS)

    Bogdanov, A.V.; Yuzhanin, N.V.; Zolotarev, V.I.; Ezhakova, T.R.

    2017-01-01

    In this article, the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in all of its aspects. The Configuration Management system plays a connecting role in the processes related to the provision and support of a computer center's services. In view of the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which raises the requirements on the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  3. Impact of configuration management system of computer center on support of scientific projects throughout their lifecycle

    Science.gov (United States)

    Bogdanov, A. V.; Iuzhanin, N. V.; Zolotarev, V. I.; Ezhakova, T. R.

    2017-12-01

    In this article, the problem of supporting scientific projects throughout their lifecycle in a computer center is considered in all of its aspects. The Configuration Management system plays a connecting role in the processes related to the provision and support of a computer center's services. In view of the strong integration of IT infrastructure components through virtualization, control of the infrastructure becomes even more critical to the support of research projects, which raises the requirements on the Configuration Management system. For every aspect of research project support, the influence of the Configuration Management system is reviewed and the development of the corresponding elements of the system is described in the present paper.

  4. Implementing health management information systems: measuring success in Korea's health centers.

    Science.gov (United States)

    Chae, Y M; Kim, S I; Lee, B H; Choi, S H; Kim, I S

    1994-01-01

    This article analyses the effects that the introduction and adoption of a health management information system (HMIS) can have on both the productivity of health center staff and user satisfaction. The focus is upon the service provided by the Kwonsun Health Center located in Suwon City, Korea. Two surveys were conducted to measure the changes in productivity and adoption (knowledge, persuasion, decision, implementation and confirmation) of health center staff over time. In addition, a third survey was conducted to measure the effects of HMIS on the level of satisfaction perceived by the visitors, by comparing the satisfaction level between the study health center and a similar health center identified as a control. The results suggest that HMIS increased the productivity and satisfaction of the staff but did not increase their persuasion and decision levels, and that it also succeeded in increasing the levels of visitors' satisfaction with the services provided.

  5. Evolution of the Systems Engineering Education Development (SEED) Program at NASA Goddard Space Flight Center

    Science.gov (United States)

    Bagg, Thomas C., III; Brumfield, Mark D.; Jamison, Donald E.; Granata, Raymond L.; Casey, Carolyn A.; Heller, Stuart

    2003-01-01

    The Systems Engineering Education Development (SEED) Program at NASA Goddard Space Flight Center develops systems engineers from existing discipline engineers. The program has evolved significantly since the report to INCOSE in 2003. This paper describes the SEED Program as it is now, outlines the changes over the last year, discusses current status and results, and shows the value of human systems and leadership skills for practicing systems engineers.

  6. Description of the surface water filtration and ozone treatment system at the Northeast Fishery Center

    Science.gov (United States)

    A water filtration and ozone disinfection system was installed at the U.S. Fish and Wildlife Service's Northeast Fishery Center in Lamar, Pennsylvania to treat a surface water supply that is used to culture sensitive and endangered fish. The treatment system first passes the surface water through dr...

  7. Measurements and predictions of the air distribution systems in high compute density (Internet) data centers

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jinkyun [HIMEC (Hanil Mechanical Electrical Consultants) Ltd., Seoul 150-103 (Korea); Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea); Lim, Taesub; Kim, Byungseon Sean [Department of Architectural Engineering, Yonsei University, Seoul 120-749 (Korea)

    2009-10-15

    When equipment power density increases, a critical goal of a data center cooling system is to separate the equipment exhaust air from the equipment intake air in order to prevent the IT servers from overheating. Cooling systems for data centers are primarily differentiated according to the way they distribute air. The six combinations of flooded and locally ducted air distribution make up the vast majority of all installations, except fully ducted air distribution methods. Once the air distribution system (ADS) is selected, there are other elements that must be integrated into the system design. In this research, the design parameters and IT environmental aspects of the cooling system were studied for a high-heat-density data center. CFD simulation analysis was carried out in order to compare the heat removal efficiencies of various air distribution systems. The IT environment of an actual operating data center was measured to validate a model for predicting the effect of different air distribution systems. A method for planning and designing an appropriate air distribution system is described. IT professionals versed in precision air distribution mechanisms, components, and configurations can work more effectively with mechanical engineers to ensure the specification and design of optimized cooling solutions. (author)
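
    Whichever air distribution scheme is chosen, sizing starts from a sensible-heat balance between the IT load and the supply-to-return temperature rise. The small calculation below illustrates that balance; the rack load, air properties, and temperature rise are example values, not figures from the study.

```c
/* Sensible-heat airflow sizing for a data center: the supply airflow
 * that an air distribution system must deliver to remove a given IT
 * load at a given supply-to-return temperature rise.
 *   Q = m_dot * cp * dT   =>   V_dot = Q / (rho * cp * dT)
 * The rack load and temperature rise below are illustrative only. */
#include <stdio.h>

int main(void)
{
    double q_it = 20000.0;   /* heat load of one high-density rack [W] */
    double rho  = 1.2;       /* air density [kg/m^3] */
    double cp   = 1005.0;    /* specific heat of air [J/(kg K)] */
    double dT   = 11.0;      /* return minus supply temperature [K] */

    double vdot = q_it / (rho * cp * dT);        /* [m^3/s] */
    printf("required airflow: %.3f m^3/s  (%.0f m^3/h, %.0f CFM)\n",
           vdot, vdot * 3600.0, vdot * 2118.88);
    return 0;
}
```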

  8. Test bed control center design concept for Tank Waste Retrieval Manipulator Systems

    International Nuclear Information System (INIS)

    Sundstrom, E.; Draper, J.V.; Fausz, A.

    1995-01-01

    This paper describes the design concept for the control center for the Single Shell Tank Waste Retrieval Manipulator System test bed and the design process behind the concept. The design concept supports all phases of the test bed mission, including technology demonstration, comprehensive system testing, and comparative evaluation for further development and refinement of the TWRMS for field operations

  9. A User-Centered Cooperative Information System for Medical Imaging Diagnosis.

    Science.gov (United States)

    Gomez, Enrique J.; Quiles, Jose A.; Sanz, Marcos F.; del Pozo, Francisco

    1998-01-01

    Presents a cooperative information system for remote medical imaging diagnosis. General computer-supported cooperative work (CSCW) problems addressed are definition of a procedure for the design of user-centered cooperative systems (conceptual level); and improvement of user feedback and optimization of the communication bandwidth in highly…

  10. Evaluating trauma center structural performance: The experience of a Canadian provincial trauma system

    Directory of Open Access Journals (Sweden)

    Lynne Moore

    2013-01-01

    Background: Indicators of structure, process, and outcome are required to evaluate the performance of trauma centers to improve the quality and efficiency of care. While periodic external accreditation visits are part of most trauma systems, a quantitative indicator of structural performance has yet to be proposed. The objective of this study was to develop and validate a trauma center structural performance indicator using accreditation report data. Materials and Methods: Analyses were based on accreditation reports completed during on-site visits in the Quebec trauma system (1994-2005). Qualitative report data were retrospectively transposed onto an evaluation grid and the weighted average of grid items was used to quantify performance. The indicator of structural performance was evaluated in terms of test-retest reliability (kappa statistic), discrimination between centers (coefficient of variation), content validity (correlation with accreditation decision, designation level, and patient volume) and forecasting (correlation between visits performed in 1994-1999 and 1998-2005). Results: Kappa statistics were >0.8 for 66 of the 73 (90%) grid items. The mean structural performance score over 59 trauma centers was 47.4 (95% CI: 43.6-51.1). Two centers were flagged as outliers and the coefficient of variation was 31.2% (95% CI: 25.5% to 37.6%), showing good discrimination. Correlation coefficients of associations with accreditation decision, designation level, and volume were all statistically significant (r = 0.61, -0.40, and 0.24, respectively). No correlation was observed over time (r = 0.03). Conclusion: This study demonstrates the feasibility of quantifying trauma center structural performance using accreditation reports. The proposed performance indicator shows good test-retest reliability, between-center discrimination, and construct validity. The observed variability in structural performance across centers and over time underlines the importance of
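
    The structural performance indicator described above is the weighted average of accreditation-grid items, and discrimination is reported as a coefficient of variation across centers. The sketch below shows both computations; the item scores, weights, and center scores are invented examples, not the study's data.

```c
/* Sketch of the two computations described above: a center's structural
 * performance score as the weighted average of accreditation-grid items,
 * and between-center discrimination as the coefficient of variation of
 * those scores. Weights and scores below are invented examples. */
#include <stdio.h>
#include <math.h>

static double weighted_score(const double *item, const double *w, int n)
{
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) { num += w[i] * item[i]; den += w[i]; }
    return 100.0 * num / den;            /* scale to 0..100 */
}

int main(void)
{
    /* one center: grid items scored 0..1, each with a weight */
    double item[] = { 1.0, 0.5, 0.0, 1.0, 0.5 };
    double w[]    = { 3.0, 2.0, 1.0, 2.0, 2.0 };
    printf("center score = %.1f\n", weighted_score(item, w, 5));

    /* discrimination across centers: coefficient of variation */
    double score[] = { 47.4, 39.0, 55.2, 61.8, 33.5, 48.0 };
    int n = 6;
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += score[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (score[i] - mean) * (score[i] - mean);
    var /= (n - 1);                      /* sample variance */
    printf("coefficient of variation = %.1f%%\n", 100.0 * sqrt(var) / mean);
    return 0;
}
```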

  11. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  12. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
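
    The abstract credits a low-cost load balancing strategy for much of the parallel efficiency. One simple strategy of that general kind is longest-processing-time-first assignment of documents to workers, sketched below with document length as the cost estimate; this is a generic heuristic offered for illustration and is not claimed to be paraBTM's exact scheme.

```c
/* Sketch of a simple static load balancing strategy of the kind used
 * to spread text-mining tasks over workers: sort tasks by estimated
 * cost (here, document length) and always give the next task to the
 * least-loaded worker. Generic LPT heuristic, for illustration only. */
#include <stdio.h>
#include <stdlib.h>

#define NTASK   12
#define NWORKER 3

static int cmp_desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;
}

int main(void)
{
    int cost[NTASK] = { 90, 12, 45, 7, 63, 30, 81, 22, 55, 9, 40, 17 };
    double load[NWORKER] = { 0 };

    qsort(cost, NTASK, sizeof(int), cmp_desc);   /* biggest tasks first */

    for (int t = 0; t < NTASK; t++) {
        int best = 0;                            /* least-loaded worker */
        for (int w = 1; w < NWORKER; w++)
            if (load[w] < load[best]) best = w;
        load[best] += cost[t];
        printf("task cost %3d -> worker %d\n", cost[t], best);
    }
    for (int w = 0; w < NWORKER; w++)
        printf("worker %d total load %.0f\n", w, load[w]);
    return 0;
}
```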

  13. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented. They reveal the details of the losses for a single operation. We then analyze the global performance of a whole supercomputer by identifying reduction factors that bring the theoretical peak performance down to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. Then the price-performance ratio for different architectures, in a snapshot of January 1991, is briefly mentioned. Finally some remarks on a user-friendly architecture for a supercomputer are made. (orig.)
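
    The analysis described above explains sustained performance as the theoretical peak multiplied by a chain of reduction factors, each charging one source of loss. The tiny illustration below shows that bookkeeping; the peak value and the factors are invented numbers, not the paper's measurements.

```c
/* Illustration of the "reduction factor" bookkeeping described above:
 * sustained performance = theoretical peak * product of factors in
 * (0, 1], each charging one source of loss. All numbers are invented. */
#include <stdio.h>

int main(void)
{
    double peak_gflops = 2667.0;        /* hypothetical peak */
    struct { const char *cause; double factor; } loss[] = {
        { "memory bandwidth / vector triad", 0.35 },
        { "short vector lengths / startup",  0.80 },
        { "non-vectorizable code sections",  0.70 },
        { "load imbalance across CPUs",      0.85 },
    };
    int n = sizeof loss / sizeof loss[0];

    double sustained = peak_gflops;
    for (int i = 0; i < n; i++) {
        sustained *= loss[i].factor;
        printf("%-35s x %.2f -> %8.1f GFLOP/s\n",
               loss[i].cause, loss[i].factor, sustained);
    }
    printf("overall efficiency: %.1f%% of peak\n",
           100.0 * sustained / peak_gflops);
    return 0;
}
```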

  14. VIRTUAL COGNITIVE CENTERS AS INTELLIGENT SYSTEMS FOR MANAGEMENT INFORMATION SUPPORT OF REGIONAL SECURITY

    Directory of Open Access Journals (Sweden)

    A. V. Masloboev

    2014-03-01

    Full Text Available The paper deals with engineering problems and application perspectives of virtual cognitive centers as intelligent systems for information support of interagency activities in the field of complex security management of regional development. A research prototype of virtual cognitive center for regional security management in crisis situations, implemented as hybrid cloud service based on IaaS architectural framework with the usage of multi-agent and web-service technologies has been developed. Virtual cognitive center is a training simulator software system and is intended for solving on the basis of distributed simulation such problems as: strategic planning and forecasting of risk-sustainable development of regional socioeconomic systems, agents of management interaction specification synthesis for regional components security in different crisis situations within the planning stage of joint anti-crisis actions.

  15. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Curves of the response were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters
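
    A minimal explicit finite-difference sketch of heat diffusion in soil is given below; the actual study modeled a toroidal HVDC electrode, so the 2-D geometry, source term, and parameters here are purely illustrative.

```python
import numpy as np

# Toy 2-D explicit finite-difference heat diffusion in soil with a crude
# heat source standing in for ohmic heating near an electrode. All numbers
# are invented placeholders.
nx = ny = 101
dx = 0.5                     # m
alpha = 5e-7                 # thermal diffusivity of soil, m^2/s
dt = 0.2 * dx**2 / alpha     # satisfies the explicit stability limit
T = np.full((nx, ny), 10.0)  # ambient soil temperature, deg C

source = np.zeros_like(T)
source[45:56, 45:56] = 1e-6  # heating rate, deg C per second (placeholder)

for step in range(5000):
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T += dt * (alpha * lap + source)
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 10.0   # fixed far-field boundary

print(f"peak soil temperature: {T.max():.1f} deg C after {5000 * dt:.0f} s")
```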

  16. Transportable Payload Operations Control Center reusable software: Building blocks for quality ground data systems

    Science.gov (United States)

    Mahmot, Ron; Koslosky, John T.; Beach, Edward; Schwarz, Barbara

    1994-01-01

    The Mission Operations Division (MOD) at Goddard Space Flight Center builds Mission Operations Centers, which are used by Flight Operations Teams to monitor and control satellites. Reducing system life cycle costs through software reuse has always been a priority of the MOD. The MOD's Transportable Payload Operations Control Center development team established an extensive library of 14 subsystems with over 100,000 delivered source instructions of reusable, generic software components. To date, nine TPOCC-based control centers support 11 satellites and have achieved an average software reuse level of more than 75 percent. This paper shares experiences of how the TPOCC building blocks were developed and how building block developers, mission development teams, and users are all part of the process.

  17. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
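
    The sketch below shows, in toy serial form, the sorted k-mer lists mentioned above and how two such lists can be merged to find seed matches between sequences; it is not the Blue Gene/P implementation and yields one match per shared k-mer value.

```python
# Minimal sketch of one data structure mentioned above: a sorted k-mer list
# per sequence, so that seed matches between genomes can be found by merging
# two sorted lists. Toy serial version only.
def sorted_kmer_list(seq, k):
    """Return a lexicographically sorted list of (k-mer, position) pairs."""
    return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

def shared_kmers(list_a, list_b):
    """Merge two sorted k-mer lists; yield one seed match per shared k-mer value."""
    i = j = 0
    while i < len(list_a) and j < len(list_b):
        ka, pa = list_a[i]
        kb, pb = list_b[j]
        if ka == kb:
            yield ka, pa, pb
            i += 1
            j += 1
        elif ka < kb:
            i += 1
        else:
            j += 1

a = sorted_kmer_list("ACGTACGGACGT", k=4)
b = sorted_kmer_list("TTACGTACGAAC", k=4)
print(list(shared_kmers(a, b)))
```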

  18. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  19. Task analysis and structure scheme for center manager station in large container inspection system

    International Nuclear Information System (INIS)

    Li Zheng; Gao Wenhuan; Wang Jingjin; Kang Kejun; Chen Zhiqiang

    1997-01-01

    LCIS works as follows: the accelerator generates beam pulses which are formed into a fan shape; the scanning system drags a lorry with a container through the beam at constant speed; the detector array detects the beam penetrating the lorry; the projection data acquisition system reads the projections and completes an inspection image of the lorry. All of these operations are controlled and synchronized by the center manager station. The author describes the process of projection data acquisition in scanning mode and the methods of real-time projection data processing. The task analysis and the structure scheme of the center manager station are presented

  20. Approach to training the trainer at the Bell System Training Center

    International Nuclear Information System (INIS)

    Housley, E.A.; Stevenson, J.L.

    1981-01-01

    The major activity of the Bell System Training Center is to develop and deliver technical training. Experts in various technical areas are selected as course developers or instructors, usually on rotational assignments. Through a series of workshops, described in this paper, combined with coaching, use of job aids and working with more experienced peers, they become competent developers or instructors. There may be similarities between the mission of the Bell System Training Center and other contexts where criticality of job performance and technical subject matter are training characteristics

  1. Interactive and Large Scale Supercomputing Simulations in Nonlinear Optics

    National Research Council Canada - National Science Library

    Moloney, J

    2001-01-01

    .... The upgrade consisted of purchasing 8 of the newest generation of 400 MHz CPUs, converting one of the ONYX2 racks into a fully loaded 16-processor Origin 2000/2400 system, and moving both high performance...

  2. The TESS Science Processing Operations Center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; hide

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with R(sub p) less than 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
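
    As a conceptual stand-in for the periodic transit search (the SPOC uses its own Kepler-derived pipeline, not this), the following numpy-only sketch phase-folds a synthetic light curve over trial periods and scores a box-shaped dip; all numbers are synthetic.

```python
import numpy as np

# Toy periodic-transit search: inject a box-shaped transit into white noise,
# then recover its period by phase-folding over trial periods.
rng = np.random.default_rng(1)
t = np.arange(0, 27.0, 2.0 / 60 / 24)                  # 27 days of 2-min cadence
flux = 1.0 + 5e-4 * rng.standard_normal(t.size)        # white noise baseline
true_period, depth, duration = 3.7, 2e-3, 0.1          # days, fractional, days
flux[(t % true_period) < duration] -= depth            # injected transit signal

def box_search(t, flux, periods, duration):
    scores = []
    for p in periods:
        mask = (t % p) < duration                      # in-transit points at phase 0
        scores.append(flux[~mask].mean() - flux[mask].mean())   # dip-depth estimate
    return np.asarray(scores)

periods = np.linspace(0.5, 10.0, 2000)
scores = box_search(t, flux, periods, duration)
print(f"best period ~ {periods[np.argmax(scores)]:.3f} d (injected {true_period} d)")
```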

  3. Influence of socioeconomic status on trauma center performance evaluations in a Canadian trauma system.

    Science.gov (United States)

    Moore, Lynne; Turgeon, Alexis F; Sirois, Marie-Josée; Murat, Valérie; Lavoie, André

    2011-09-01

    Trauma center performance evaluations generally include adjustment for injury severity, age, and comorbidity. However, disparities across trauma centers may be due to other differences in source populations that are not accounted for, such as socioeconomic status (SES). We aimed to evaluate whether SES influences trauma center performance evaluations in an inclusive trauma system with universal access to health care. The study was based on data collected between 1999 and 2006 in a Canadian trauma system. Patient SES was quantified using an ecologic index of social and material deprivation. Performance evaluations were based on mortality adjusted using the Trauma Risk Adjustment Model. Agreement between performance results with and without additional adjustment for SES was evaluated with correlation coefficients. The study sample comprised a total of 71,784 patients from 48 trauma centers, including 3,828 deaths within 30 days (4.5%) and 5,549 deaths within 6 months (7.7%). The proportion of patients in the highest quintile of social and material deprivation varied from 3% to 43% and from 11% to 90% across hospitals, respectively. The correlation between performance results with or without adjustment for SES was almost perfect (r = 0.997; 95% CI 0.995-0.998) and the same hospital outliers were identified. We observed an important variation in SES across trauma centers but no change in risk-adjusted mortality estimates when SES was added to adjustment models. Results suggest that after adjustment for injury severity, age, comorbidity, and transfer status, disparities in SES across trauma center source populations do not influence trauma center performance evaluations in a system offering universal health coverage. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
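
    The comparison described above can be illustrated with simulated data: adjust mortality with and without an SES covariate, then correlate the hospital-level observed-to-expected ratios. The sketch assumes numpy and scikit-learn are available; the real study used the Trauma Risk Adjustment Model, not this toy logistic model, and all data here are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated example of comparing risk-adjusted hospital performance with and
# without an SES (deprivation) covariate.
rng = np.random.default_rng(42)
n, n_hosp = 20_000, 48
hosp = rng.integers(0, n_hosp, n)
severity = rng.normal(size=n)
age = rng.normal(size=n)
ses = rng.normal(size=n) + 0.3 * rng.normal(size=n_hosp)[hosp]   # SES varies by hospital
logit = -3 + 1.2 * severity + 0.5 * age + 0.1 * ses
death = rng.random(n) < 1 / (1 + np.exp(-logit))

def adjusted_oe(covariates):
    """Observed/expected mortality ratio per hospital under a logistic adjustment model."""
    model = LogisticRegression(max_iter=1000).fit(covariates, death)
    expected = model.predict_proba(covariates)[:, 1]
    return np.array([death[hosp == h].sum() / expected[hosp == h].sum()
                     for h in range(n_hosp)])

oe_base = adjusted_oe(np.column_stack([severity, age]))
oe_ses = adjusted_oe(np.column_stack([severity, age, ses]))
print("correlation of O/E ratios with vs. without SES:", np.corrcoef(oe_base, oe_ses)[0, 1])
```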

  4. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2006-02-01

    Full Text Available Abstract Background Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users
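
    For context, the "original stochastic simulation algorithm" whose cost motivates hybrid methods is Gillespie's direct method; a minimal version for a toy birth-death gene expression model is sketched below (this is not Hy3S itself, and the rate constants are invented).

```python
import numpy as np

# Minimal direct-method stochastic simulation (Gillespie SSA) for a toy
# birth-death model: production at rate k_make, first-order decay at k_decay.
def ssa(k_make=10.0, k_decay=0.1, x0=0, t_end=100.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a1, a2 = k_make, k_decay * x          # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)        # time to next reaction
        x += 1 if rng.random() < a1 / a0 else -1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

times, counts = ssa()
print(f"final copy number: {counts[-1]} (deterministic steady state = 100)")
```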

  5. Design of a Mission Data Storage and Retrieval System for NASA Dryden Flight Research Center

    Science.gov (United States)

    Lux, Jessica; Downing, Bob; Sheldon, Jack

    2007-01-01

    The Western Aeronautical Test Range (WATR) at the NASA Dryden Flight Research Center (DFRC) employs the WATR Integrated Next Generation System (WINGS) for the processing and display of aeronautical flight data. This report discusses the post-mission segment of the WINGS architecture. A team designed and implemented a system for the near- and long-term storage and distribution of mission data for flight projects at DFRC, providing the user with intelligent access to data. Discussed are the legacy system, an industry survey, system operational concept, high-level system features, and initial design efforts.

  6. The LANSCE (Los Alamos Neutron Scattering Center) target data collection system

    International Nuclear Information System (INIS)

    Kernodle, A.K.

    1989-01-01

    The Los Alamos Neutron Scattering Center (LANSCE) Target Data Collection System is the result of an effort to provide a base of information from which to draw conclusions on the performance and operational condition of the overall LANSCE target system. During the conceptualization of the system, several goals were defined. A survey was made of both custom-made and off-the-shelf hardware and software that were capable of meeting these goals. The first stage of the system was successfully implemented for the LANSCE run cycle 52. From the operational experience gained thus far, it appears that the LANSCE Target Data Collection System will meet all of the previously defined requirements

  7. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  8. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    Energy Technology Data Exchange (ETDEWEB)

    CHRIST,N.; DAVENPORT,J.; DENG,Y.; GARA,A.; GLIMM,J.; MAWHINNEY,R.; MCFADDEN,E.; PESKIN,A.; PULLEYBLANK,W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  9. ATLAS FTK a - very complex - custom parallel supercomputer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up LHC environment advanced techniques of analysing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track finding implementation at hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). To achieve this performance a highly parallel system was designed and is now under installation in ATLAS. In the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative memory - AM06). In a first stage, coarse-resolution hits are matched against the patterns and the accepted h...

  10. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputers potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge, their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full usage of the available hardware. Developing such a framework providing understandable code for domain scientists and being runtime efficient at the same time poses several challenges for developers of such a framework. For example, optimisations can be performed on individual operations or the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and present 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using
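
    The building-block idea can be illustrated conceptually as follows; this is not the PCRaster API, just a numpy sketch of composing pre-programmed raster operations into a spatio-temporal model loop, with all operations and parameters invented.

```python
import numpy as np

# Conceptual illustration (not PCRaster): a model composed from small raster
# "building blocks", so a domain scientist writes the model while a framework
# wrapping each block would be free to parallelise it internally.
def local_op(func):
    """Wrap an element-wise function as a raster building block (the wrapper is
    where a real framework could insert scheduling or parallel execution)."""
    return lambda *rasters: func(*rasters)

# Two toy building blocks: potential evapotranspiration and a water-balance step
pet = local_op(lambda temp: np.maximum(0.0, 0.3 * (temp - 5.0)))
balance = local_op(lambda storage, rain, et: np.clip(storage + rain - et, 0.0, None))

shape = (200, 200)
storage = np.zeros(shape)
rng = np.random.default_rng(0)
for t in range(365):                                   # one simulated year
    temp = 10.0 + 10.0 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, shape)
    rain = rng.gamma(0.5, 2.0, shape)
    storage = balance(storage, rain, pet(temp))

print("mean end-of-year storage:", storage.mean())
```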

  11. Innovation in user-centered skills and performance improvement for sustainable complex service systems.

    Science.gov (United States)

    Karwowski, Waldemar; Ahram, Tareq Z

    2012-01-01

    In order to leverage individual and organizational learning and to remain competitive in current turbulent markets, it is important for employees, managers, planners and leaders to perform at high levels over time. Employee competence and skills are extremely important matters in view of the general shortage of talent and the mobility of employees with talent. Two factors emerged as having the greatest impact on the competitiveness of complex service systems: improving managerial and employee knowledge and skill attainment, and improving the training and development of the workforce. This paper introduces the knowledge-based user-centered service design approach for sustainable skill and performance improvement in education, design and modeling of the next generation of complex service systems. The rest of the paper covers topics in human factors and sustainable business process modeling for the service industry, and illustrates the user-centered service system development cycle with the integration of systems engineering concepts in service systems. A roadmap for designing service systems of the future is discussed. The framework introduced in this paper is based on key user-centered design principles and systems engineering applications to support service competitiveness.

  12. VR-Smart Home, prototyping of a user centered design system

    NARCIS (Netherlands)

    Heidari Jozam, M.; Allameh, E.; Vries, de B.; Timmermans, H.J.P.; Masoud, M.; Andreev, S.; Balandin, S.; Yevgeni, Koucheryavy

    2012-01-01

    In this paper, we propose a prototype of a user-centered design system for Smart Homes that lets users: (1) configure different interactive tasks, and (2) express activity specifications and preferences during the design process. The main objective of this paper is to show how to create and implement VR

  13. CENTER CONDITIONS AND CYCLICITY FOR A FAMILY OF CUBIC SYSTEMS: COMPUTER ALGEBRA APPROACH.

    Science.gov (United States)

    Ferčec, Brigita; Mahdi, Adam

    2013-01-01

    Using methods of computational algebra we obtain an upper bound for the cyclicity of a family of cubic systems. We overcame the problem of nonradicality of the associated Bautin ideal by moving from the ring of polynomials to a coordinate ring. Finally, we determine the number of limit cycles bifurcating from each component of the center variety.

  14. Lightcurve Analysis of Hilda Asteroids at the Center for Solar System Studies: 2017 October-December

    Science.gov (United States)

    Warner, Brian D.; Stephens, Robert D.; Coley, Daniel R.

    2018-04-01

    Lightcurves for 12 Hilda asteroids were obtained at the Center for Solar System Studies (CS3) from 2017 October-December. Preliminary shape and spin axis models are given for seven of the Hildas: 958 Asplinda, 1439 Vogita, 1539 Oterma, 2483 Guinevere, 3561 Devine, 4317 Garibaldi, and 17428 Charleroi. These will serve as good starting points for future modeling.

  15. Measuring Malaysia School Resource Centers' Standards through iQ-PSS: An Online Management Information System

    Science.gov (United States)

    Zainudin, Fadzliaton; Ismail, Kamarulzaman

    2010-01-01

    The Ministry of Education has come up with an innovative way to monitor the progress of 9,843 School Resource Centers (SRCs) using an online management information system called iQ-PSS (Quality Index of SRC). This paper aims to describe the data collection method and analyze the current state of SRCs in Malaysia and explain how the results can be…

  16. Transportation Systems Center Bibliography of Technical Reports, July 1970 - December 1976,

    Science.gov (United States)

    1977-04-01

    Systems Center. AD-733-763, AD-733-764: Judith Gertler, Herbert Glynn, Vivian Hobbs, Frederick Woolfall. Interim Report, June 1971, 16 p. Air Traffic Control ... of Deployment Cost Analysis (FAA-76-20). Airspace Control Environment Simulator - Final Report (TSC-131.3). All-Weather ...

  17. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  18. Benchmarking and tuning the MILC code on clusters and supercomputers

    International Nuclear Information System (INIS)

    Gottlieb, Steven

    2002-01-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha
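
    For reference, a plain conjugate gradient iteration of the kind tuned in such work is sketched below; the staggered (KS) Dirac operator is replaced by a generic symmetric positive-definite matrix, so this is not the MILC kernel, only an illustration of a memory-bandwidth-bound solver loop.

```python
import numpy as np

# Generic conjugate gradient solver for A x = b with A symmetric positive
# definite; a stand-in for the KS conjugate gradient discussed above.
def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

n = 200
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)      # symmetric positive-definite test matrix
b = np.random.rand(n)
x = conjugate_gradient(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```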

  19. Benchmarking and tuning the MILC code on clusters and supercomputers

    International Nuclear Information System (INIS)

    Steven A. Gottlieb

    2001-01-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha

  20. Benchmarking and tuning the MILC code on clusters and supercomputers

    Science.gov (United States)

    Gottlieb, Steven

    2002-03-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.

  1. ONAV - An Expert System for the Space Shuttle Mission Control Center

    Science.gov (United States)

    Mills, Malise; Wang, Lui

    1992-01-01

    The ONAV (Onboard Navigation) Expert System is being developed as a real-time console assistant to the ONAV flight controller for use in the Mission Control Center at the Johnson Space Center. As of October 1991, the entry and ascent systems have been certified for use on console as support tools, and were used for STS-48. The rendezvous system is in verification with the goal of having the system certified for STS-49, Intelsat retrieval. To arrive at this stage, from a prototype to real-world application, the ONAV project has had to deal with not only AI issues but also operating environment issues. The AI issues included the maturity of AI languages and the debugging tools, verification, and availability, stability and size of the expert pool. The environmental issues included real-time data acquisition, hardware suitability, and how to achieve acceptance by users and management.

  2. Toward human-centered man-machine system in nuclear power plants

    International Nuclear Information System (INIS)

    Tanabe, Fumiya

    1993-01-01

    The Japanese LWR power plants are classified into 4 categories, from the viewpoints of the control panel in the central control room and the extent of automation. Their characteristics are outlined. The potential weaknesses inherent in the conventional approaches are discussed, namely the loss of applicability to unanticipated events and the loss of operator morale. The need for the construction of a human-centered man-machine system is emphasized in order to overcome these potential weaknesses. The most important features required for the system are, in the short term, to support operators in difficulties and, in the long term, to assure the acquisition and conservation of the personnel's morale and potential to cope with the problems. The concepts of the 'ecological interface' and 'adaptive aiding' system are introduced as the design concepts for the human-centered man-machine system. (J.P.N.)

  3. Generation of a strong core-centering force in a submillimeter compound droplet system

    International Nuclear Information System (INIS)

    Lee, M.C.; Feng, I.; Elleman, D.D.; Wang, T.G.; Young, A.T.

    1981-01-01

    By amplitude-modulating the driving voltage of an acoustic levitating apparatus, a strong core-centering force can be generated in a submillimeter compound droplet system suspended by the radiation pressure in a gaseous medium. Depending on the acoustic characteristics of the droplet system, it has been found that the technique can be utilized advantageously in the multiple-layer coating of an inertial-confinement-fusion pellet

  4. Comparative analysis on operation strategies of CCHP system with cool thermal storage for a data center

    International Nuclear Information System (INIS)

    Song, Xu; Liu, Liuchen; Zhu, Tong; Zhang, Tao; Wu, Zhu

    2016-01-01

    Highlights: • Load characteristics of the data center make a good match with CCHP systems. • TRNSYS models were used to simulate the discussed CCHP system in a data center. • Comprehensive system performance under two operation strategies was evaluated. • Cool thermal storage was introduced to reuse the energy surplus of the FEL system. • A suitable principle of equipment selection for a FEL system was proposed. - Abstract: Combined Cooling, Heating, and Power (CCHP) systems with cool thermal storage can provide an appropriate energy supply for data centers. In this work, we evaluate the CCHP system performance under two different operation strategies, i.e., following thermal load (FTL) and following electric load (FEL). The evaluation is performed through a case study by using TRNSYS software. In the FEL system, the amount of cool thermal energy generated by the absorption chillers is larger than the cooling load and it can therefore be stored and reused at off-peak times. Results indicate that systems under both operation strategies have advantages in the fields of energy saving and environmental protection. The largest percentage reductions in primary energy consumption, CO_2 emissions, and operation cost for the FEL system are 18.5%, 37.4%, and 46.5%, respectively. Besides, the system performance is closely dependent on the equipment selection. The relation between the amount of energy recovered through cool thermal storage and the primary energy consumption has also been taken into account. Moreover, the introduction of cool thermal storage can adjust the heat-to-power ratio on the energy supply side close to that on the consumer side and consequently promote system flexibility and energy efficiency.
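
    The primary energy comparison behind such evaluations can be sketched as follows; all efficiencies and loads below are placeholder values, not figures from the study.

```python
# Back-of-the-envelope sketch of a primary energy comparison between a CCHP
# configuration and a conventional reference system. Inputs are placeholders.
GRID_EFFICIENCY = 0.40        # primary energy -> delivered electricity
BOILER_EFFICIENCY = 0.85
CHILLER_COP = 5.0             # electric chiller in the reference system

def reference_primary_energy(elec_kwh, cool_kwh, heat_kwh):
    """Separate production: grid electricity, electric chiller, gas boiler."""
    return (elec_kwh + cool_kwh / CHILLER_COP) / GRID_EFFICIENCY + heat_kwh / BOILER_EFFICIENCY

def cchp_primary_energy(fuel_kwh, grid_import_kwh):
    """CCHP: fuel burned on site plus any residual grid import."""
    return fuel_kwh + grid_import_kwh / GRID_EFFICIENCY

# Hypothetical annual loads for a small data center (kWh)
elec, cool, heat = 4_000_000, 3_500_000, 200_000
ref = reference_primary_energy(elec, cool, heat)
cchp = cchp_primary_energy(fuel_kwh=9_500_000, grid_import_kwh=300_000)
print(f"primary energy saving: {(ref - cchp) / ref:.1%}")
```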

  5. IN-SPACE CHEMICAL PROPULSION SYSTEMS AT NASA's MARSHALL SPACE FLIGHT CENTER: HERITAGE AND CAPABILITIES

    Science.gov (United States)

    McRight, P. S.; Sheehy, J. A.; Blevins, J. A.

    2005-01-01

    NASA's Marshall Space Flight Center (MSFC) is well known for its contributions to large ascent propulsion systems such as the Saturn V rocket and the Space Shuttle external tank, solid rocket boosters, and main engines. This paper highlights a lesser-known but very rich side of MSFC: its heritage in the development of in-space chemical propulsion systems and its current capabilities for spacecraft propulsion system development and chemical propulsion research. The historical narrative describes the flight development activities associated with upper stage main propulsion systems such as the Saturn S-IVB as well as orbital maneuvering and reaction control systems such as the S-IVB auxiliary propulsion system, the Skylab thruster attitude control system, and many more recent activities such as Chandra, the Demonstration of Automated Rendezvous Technology (DART), X-37, the X-38 de-orbit propulsion system, the Interim Control Module, the US Propulsion Module, and multiple technology development activities. This paper also highlights MSFC's advanced chemical propulsion research capabilities, including an overview of the center's Propulsion Systems Department and ongoing activities. The authors highlight near-term and long-term technology challenges to which MSFC research and system development competencies are relevant. This paper concludes by assessing the value of the full range of aforementioned activities, strengths, and capabilities in light of NASA's exploration missions.

  6. Efficient multitasking of Choleski matrix factorization on CRAY supercomputers

    Science.gov (United States)

    Overman, Andrea L.; Poole, Eugene L.

    1991-01-01

    A Choleski method is described and used to solve linear systems of equations that arise in large scale structural analysis. The method uses a novel variable-band storage scheme and is structured to exploit fast local memory caches while minimizing data access delays between main memory and vector registers. Several parallel implementations of this method are described for the CRAY-2 and CRAY Y-MP computers demonstrating the use of microtasking and autotasking directives. A portable parallel language, FORCE, is used for comparison with the microtasked and autotasked implementations. Results are presented comparing the matrix factorization times for three representative structural analysis problems from runs made in both dedicated and multi-user modes on both computers. CPU and wall clock timings are given for the parallel implementations and are compared to single processor timings of the same algorithm.
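
    A plain column-oriented Cholesky factorization is sketched below as a reference point for the kernel discussed above; the paper's variable-band storage scheme and CRAY multitasking directives are not reproduced, and this dense NumPy version is only illustrative.

```python
import numpy as np

# Column-oriented Cholesky factorization A = L L^T (dense, serial reference).
def cholesky(A):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        # Diagonal entry: remove contributions of previously computed columns
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        # Column update below the diagonal (vectorised over rows)
        L[j + 1:, j] = (A[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
    return L

# Quick check against NumPy on a random symmetric positive-definite matrix
M = np.random.rand(6, 6)
A = M @ M.T + 6 * np.eye(6)
assert np.allclose(cholesky(A), np.linalg.cholesky(A))
print("factorization matches numpy.linalg.cholesky")
```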

  7. A knowledge continuity management program for the energy, infrastructure and knowledge systems center, Sandia National Laboratories.

    Energy Technology Data Exchange (ETDEWEB)

    Menicucci, David F.

    2006-07-01

    A growing recognition exists in companies worldwide that, when employees leave, they take with them valuable knowledge that is difficult and expensive to recreate. The concern is now particularly acute as the large "baby boomer" generation is reaching retirement age. A new field of science, Knowledge Continuity Management (KCM), is designed to capture and catalog the acquired knowledge and wisdom from experience of these employees before they leave. The KCM concept is in the final stages of being adopted by the Energy, Infrastructure, and Knowledge Systems Center and a program is being applied that should produce significant annual cost savings. This report discusses how the Center can use KCM to mitigate knowledge loss from employee departures, including a concise description of a proposed plan tailored to the Center's specific needs and resources.

  8. The design of neonatal incubators: a systems-oriented, human-centered approach.

    Science.gov (United States)

    Ferris, T K; Shepley, M M

    2013-04-01

    This report describes a multidisciplinary design project conducted in an academic setting reflecting a systems-oriented, human-centered philosophy in the design of neonatal incubator technologies. Graduate students in Architectural Design and Human Factors Engineering courses collaborated in a design effort that focused on supporting the needs of three user groups of incubator technologies: infant patients, family members and medical personnel. Design teams followed established human-centered design methods that included interacting with representatives from the user groups, analyzing sets of critical tasks and conducting usability studies with existing technologies. An iterative design and evaluation process produced four conceptual designs of incubators and supporting equipment that better address specific needs of the user groups. This report introduces the human-centered design approach, highlights some of the analysis findings and design solutions, and offers a set of design recommendations for future incubation technologies.

  9. Effluent Monitoring System Design for the Proton Accelerator Research Center of PEFP

    International Nuclear Information System (INIS)

    Kim, Jun Yeon; Mun, Kyeong Jun; Cho, Jang Hyung; Jo, Jeong Hee

    2010-01-01

    Since Gyeong-ju city was selected as the host site in January 2006, a design revision of the Proton Accelerator Research Center is needed to reflect the host site characteristics and several other conditions. The IAC also recommended maximizing space utilization and reducing construction cost. After the GA (General Arrangement) was decided, it was necessary to evaluate the radiation analysis of every controlled area in the proton accelerator research center, such as the accelerator tunnel, klystron gallery, beam experimental hall, target rooms, and ion beam application building, to keep dose rates below the ALARA (As Low As Reasonably Achievable) objective. Our staff has reviewed these areas and produced a shielding design for them. In this paper, according to the accelerator operation modes and access conditions based on the radiation analysis and shielding design, we present the exhaust system configuration of the controlled areas in the proton accelerator research center. We also installed radiation monitors and set their alarm values for each radiation area

  10. Energy Center Structure Optimization by using Smart Technologies in Process Control System

    Science.gov (United States)

    Shilkina, Svetlana V.

    2018-03-01

    The article deals with the practical application of fuzzy logic methods in process control systems. The control object - an agro-industrial greenhouse complex that includes its own energy center - is considered. The paper analyzes the object's power supply options, taking into account connection to external power grids and/or installation of its own power generating equipment in various layouts. The main problem of the greenhouse facility's basic process is extremely uneven power consumption, which forces the purchase of redundant generating equipment that idles most of the time and negatively affects project profitability. Energy center structure optimization is largely based on solving the design of the object's process control system. To cut the investor's costs, it was proposed to optimize power consumption by building an energy-saving production control system based on a fuzzy logic controller. The developed algorithm for automated process control ensured more even electric and thermal energy consumption and allowed the object's energy center to be built with a smaller number of units due to their more even utilization. As a result, it is shown how the practical use of a fuzzy control system for microclimate parameters during operation leads to optimization of the agro-industrial complex's energy facility structure, which contributes to a significant reduction in construction and operation costs.
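
    A minimal fuzzy controller with singleton consequents (Sugeno-style) is sketched below to make the control idea concrete; the paper's actual rule base, variables, and setpoints are not given here, so the membership functions, rules, and power levels are all invented.

```python
import numpy as np

# Toy fuzzy controller: map greenhouse temperature error to heater power.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def heating_power(temp_error):
    """Temperature error (setpoint - measured, deg C) -> heater power (%)."""
    # Fuzzify the input
    cold = tri(temp_error, 0.0, 5.0, 10.0)
    ok = tri(temp_error, -2.0, 0.0, 2.0)
    warm = tri(temp_error, -10.0, -5.0, 0.0)
    # Singleton consequents combined by weighted average (defuzzification)
    weights = np.array([cold, ok, warm])
    outputs = np.array([90.0, 40.0, 5.0])   # heater power level for each rule
    return float(weights @ outputs / weights.sum()) if weights.sum() > 0 else 40.0

for err in (-6.0, -1.0, 0.0, 3.0, 8.0):
    print(f"temperature error {err:+.1f} C -> heater power {heating_power(err):.0f}%")
```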

  11. A Statewide Collaboration: Ohio Level III Trauma Centers' Approach to the Development of a Benchmarking System.

    Science.gov (United States)

    Lang, Carrie L; Simon, Diane; Kilgore, Jane

    The American College of Surgeons Committee on Trauma revised the Resources for Optimal Care of the Injured Patient to include the criteria for trauma centers to participate in a risk-adjusted benchmarking system. The Trauma Quality Improvement Program is currently the risk-adjusted benchmarking program sponsored by the American College of Surgeons, in which all trauma centers will be required to participate in early 2017. Prior to this, there were no risk-adjusted programs for Level III verified trauma centers. The Ohio Society of Trauma Nurse Leaders is a collaborative group made up of trauma program managers, coordinators, and other trauma leaders who meet 6 times a year. Within this group, a Level III Subcommittee was formed initially to provide a place for the Level III centers to discuss issues specific to the Level III centers. When the new requirement regarding risk adjustment became official, the subcommittee agreed to begin reporting simple data points with the idea of risk adjusting in the future.

  12. What do we mean by Human-Centered Design of Life-Critical Systems?

    Science.gov (United States)

    Boy, Guy A

    2012-01-01

    Human-centered design is not a new approach to design. Aerospace is a good example of a life-critical systems domain where participatory design was fully integrated, involving experimental test pilots and design engineers as well as many other actors of the aerospace engineering community. This paper provides six topics that are currently part of the requirements of the Ph.D. Program in Human-Centered Design of the Florida Institute of Technology (FIT). This Human-Centered Design program offers principles, methods and tools that support human-centered sustainable products such as mission or process control environments, cockpits and hospital operating rooms. It supports education and training of design thinkers who are natural leaders, and understand complex relationships among technology, organizations and people. We all need to understand what we want to do with technology, how we should organize ourselves for a better life, and finally find out who we are and have become. Human-centered design is being developed for all these reasons and issues.

  13. [Toxicological consultation data management system based on experience of Pomeranian Center of Toxicology].

    Science.gov (United States)

    Kabata, Piotr Maciej; Waldman, Wojciech; Sein Anand, Jacek

    2015-01-01

    In this paper the structure of poisonings is described, based on the material collected from tele-toxicology consults by the Pomeranian Center of Toxicology in Gdańsk and harvested from its Electronic Poison Information Management System. In addition, we analyzed conclusions drawn from a 27-month operation of the system. Data were harvested from the Electronic Poison Information Management System developed in 2012 and used by the Pomeranian Center of Toxicology since then. The research was based on 2550 tele-toxicology consults between January 1 and December 31, 2014. Subsequently the data were electronically cleaned and presented using the R programming language. The Pomeranian voivodeship was the prevalent location of calls (N = 1879; 73.7%). Most of the calls came from emergency rooms (N = 1495; 58.63%). In the case of 1396 (54.7%) patients the time-lag between intoxication and the consult was less than 6 h. There were no differences in the age distribution between genders. Mean age was 26.3 years. Young people predominated among intoxicated individuals. The majority of intoxications were incidental (N = 888; 34.8%) or suicidal (N = 814; 31.9%) and most of them took place in the patient's home. Information about access to Poison Control Center consultations should be better disseminated among medical service providers. The extent of poison information collected by Polish Poison Control Centers should be limited and unified. This should contribute to an increased percentage of properly documented consultations. Additional duties stemming from the need for digital archiving of the consults provided require the involvement of additional staff, leading to increased operation costs incurred by Poison Control Centers. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  14. NASA-Langley Research Center's Aircraft Condition Analysis and Management System Implementation

    Science.gov (United States)

    Frye, Mark W.; Bailey, Roger M.; Jessup, Artie D.

    2004-01-01

    This document describes the hardware implementation design and architecture of Aeronautical Radio Incorporated (ARINC)'s Aircraft Condition Analysis and Management System (ACAMS), which was developed at NASA-Langley Research Center (LaRC) for use in its Airborne Research Integrated Experiments System (ARIES) Laboratory. This activity is part of NASA's Aviation Safety Program (AvSP), the Single Aircraft Accident Prevention (SAAP) project to develop safety-enabling technologies for aircraft and airborne systems. The fundamental intent of these technologies is to allow timely intervention or remediation to improve unsafe conditions before they become life threatening.

  15. Design and performance of the Georgia Tech Aquatic Center photovoltaic system. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Rohatgi, A.; Begovic, M.; Long, R.; Ropp, M.; Pregelj, A.

    1996-12-31

    A building-integrated DC PV array has been constructed on the Georgia Tech campus. The array is mounted on the roof of the Georgia Tech Aquatic Center (GTAC), site of the aquatic events during the 1996 Paralympic and Olympic Games in Atlanta. At the time of its construction, it was the world's largest roof-mounted photovoltaic array, comprising 2,856 modules and rated at 342 kW. This section describes the electrical and physical layout of the PV system, and the associated data acquisition system (DAS), which monitors the performance of the system and collects measurements of several important meteorological parameters.

  16. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs
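
    A minimal 1-D FDTD update loop is sketched below to make the "direct time integration on space grids" concrete; the grid size, source, boundary treatment, and normalized units are illustrative only and far simpler than the 3-D structures discussed in the paper.

```python
import numpy as np

# Minimal 1-D FDTD (Yee-style leapfrog) update for Maxwell's equations in
# free space with normalized fields. Illustrative parameters only.
nz, nsteps = 400, 800
ez = np.zeros(nz)          # electric field
hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
c = 0.5                    # Courant number (dt * c0 / dz), <= 1 for stability

for n in range(nsteps):
    hy += c * np.diff(ez)                            # update H from the curl of E
    ez[1:-1] += c * np.diff(hy)                      # update E from the curl of H
    ez[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
    ez[0] = ez[-1] = 0.0                             # simple PEC boundaries

print(f"field energy proxy after {nsteps} steps: {np.sum(ez**2) + np.sum(hy**2):.3f}")
```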

  17. Space and Missile Systems Center Standard: Test Requirements for Launch, Upper-Stage and Space Vehicles

    Science.gov (United States)

    2014-09-05

    Aviation Blvd., El Segundo, CA 90245. This standard has been approved for use on all Space and Missile Systems Center/Air Force programs. Referenced documents include Satellite Hardness and Survivability: Testing Rationale for Electronic Upset and Burnout Effects, and JANNAF-GL-2012-01-RO, Test and Evaluation. Testing is addressed at the vehicle, subsystem, and unit levels. Acceptance testing shall be conducted on all subsequent flight items. The protoqualification strategy shall require

  18. Solid Waste Processing Center Primary Opening Cells Systems, Equipment and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Sharon A.; Baker, Carl P.; Mullen, O Dennis; Valdez, Patrick LJ

    2006-04-17

    This document addresses the remote systems and design integration aspects of the development of the Solid Waste Processing Center (SWPC), a facility to remotely open, sort, size reduce, and repackage mixed low-level waste (MLLW) and transuranic (TRU)/TRU mixed waste that is either contact-handled (CH) waste in large containers or remote-handled (RH) waste in various-sized packages.

  19. High-level waste solidification system for the Western New York Nuclear Service Center

    International Nuclear Information System (INIS)

    Carrell, J.R.; Holton, L.K.; Siemens, D.H.

    1982-01-01

    A preconceptual design for a waste conditioning and solidification system for the immobilization of the high-level liquid wastes (HLLW) stored at the Western New York Nuclear Service Center (WNYNSC), West Valley, New York was completed in 1981. The preconceptual design was conducted as part of the Department of Energy's (DOE) West Valley Demonstration Project, which requires a waste management demonstration at the WNYNSC. This paper summarizes the bases, assumptions, results and conclusions of the preconceptual design study

  20. Technology Transfer Challenges: A Case Study of User-Centered Design in NASA's Systems Engineering Culture

    Science.gov (United States)

    Quick, Jason

    2009-01-01

    The Upper Stage (US) section of the National Aeronautics and Space Administration's (NASA) Ares I rocket will require internal access platforms for maintenance tasks performed by humans inside the vehicle. Tasks will occur during expensive critical path operations at Kennedy Space Center (KSC) including vehicle stacking and launch preparation activities. Platforms must be translated through a small human access hatch, installed in an enclosed worksite environment, support the weight of ground operators and be removed before flight - and their design must minimize additional vehicle mass at attachment points. This paper describes the application of a user-centered conceptual design process and the unique challenges encountered within NASA's systems engineering culture focused on requirements and "heritage hardware". The NASA design team at Marshall Space Flight Center (MSFC) initiated the user-centered design process by studying heritage internal access kits and proposing new design concepts during brainstorming sessions. Simultaneously, they partnered with the Technology Transfer/Innovative Partnerships Program to research inflatable structures and dynamic scaffolding solutions that could enable ground operator access. While this creative, technology-oriented exploration was encouraged by upper management, some design stakeholders consistently opposed ideas utilizing novel, untested equipment. Subsequent collaboration with an engineering consulting firm improved the technical credibility of several options, however, there was continued resistance from team members focused on meeting system requirements with pre-certified hardware. After a six-month idea-generating phase, an intensive six-week effort produced viable design concepts that justified additional vehicle mass while optimizing the human factors of platform installation and use. Although these selected final concepts closely resemble heritage internal access platforms, challenges from the application of the

  1. Improving energy efficiency of dedicated cooling system and its contribution towards meeting an energy-optimized data center

    International Nuclear Information System (INIS)

    Cho, Jinkyun; Kim, Yundeok

    2016-01-01

    Highlights: • Energy-optimized data center cooling solutions were derived for four different climate zones. • We studied practical green data center technologies that greatly improve energy efficiency. • We identified the relationship between mutually dependent factors in data center cooling systems. • We evaluated the effect of the dedicated cooling system applications. • Power Usage Effectiveness (PUE) was computed with energy simulation for data centers. - Abstract: Data centers are approximately 50 times more energy-intensive than general buildings. The rapidly increasing energy demand for data center operation has motivated efforts to better understand data center electricity use and to identify strategies that reduce the environmental impact. This research presents an analytical approach to the energy efficiency optimization of a high-density data center, together with a performance analysis of a corresponding case study. This paper builds on data center energy modeling efforts by characterizing climate and cooling system differences among data centers and then evaluating their consequences for building energy use. Representative climate conditions for four regions are applied to data center energy models for several different prototypical cooling types. This includes which cooling systems, supplemental cooling solutions, design conditions, and ICT-equipment environmental controls were generally used for each climate zone, how these affect energy efficiency, and how the prioritization of system selection is derived. Based on the climate classification and the required operating environmental conditions for data centers suggested by ASHRAE TC 9.9, a dedicated data center energy evaluation tool was used to examine the potential energy savings of the cooling technology. Incorporating economizer use into the cooling systems would increase the variation in energy efficiency among geographic regions, indicating that as data centers
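
    Power Usage Effectiveness (PUE) weighted by economizer hours can be sketched as follows; the annual hours and the per-mode PUE values are placeholders, not results from the study.

```python
# Simple annual PUE estimate for a data center with an air-side economizer.
# All figures are illustrative placeholders.
IT_POWER_KW = 1_000.0

def annual_pue(econ_hours, pue_econ=1.15, pue_mech=1.60, hours_per_year=8_760):
    """Weight economizer-mode and mechanical-cooling-mode PUE by hours of use."""
    mech_hours = hours_per_year - econ_hours
    total_facility_kwh = IT_POWER_KW * (econ_hours * pue_econ + mech_hours * pue_mech)
    it_kwh = IT_POWER_KW * hours_per_year
    return total_facility_kwh / it_kwh

# Compare climates with different free-cooling availability
for climate, econ_hours in {"cold/dry": 7000, "temperate": 5000, "hot/humid": 1500}.items():
    print(f"{climate:10s} economizer hours={econ_hours:5d} -> annual PUE ~ {annual_pue(econ_hours):.2f}")
```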

  2. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  3. A knowledge based expert system for propellant system monitoring at the Kennedy Space Center

    Science.gov (United States)

    Jamieson, J. R.; Delaune, C.; Scarl, E.

    1985-01-01

    The Lox Expert System (LES) is the first attempt to build a real-time expert system capable of simulating the thought processes of NASA system engineers, with regard to fluids systems analysis and troubleshooting. An overview of the hardware and software describes the techniques used, and possible applications to other process control systems. LES is now in the advanced development stage, with a full implementation planned for late 1985.

  4. Self-Centering Seismic Lateral Force Resisting Systems: High Performance Structures for the City of Tomorrow

    Directory of Open Access Journals (Sweden)

    Nathan Brent Chancellor

    2014-09-01

    Full Text Available Structures designed in accordance with even the most modern building codes are expected to sustain damage during a severe earthquake; however, these structures are expected to protect the lives of the occupants. Damage to the structure can require expensive repairs, significant business downtime, and in some cases building demolition. If damage occurs to many structures within a city or region, the regional and national economy may be severely disrupted. To address these shortcomings of current seismic lateral force resisting systems and to work towards more resilient, sustainable cities, a new class of seismic lateral force resisting systems that sustains little or no damage under severe earthquakes has been developed. These new seismic lateral force resisting systems reduce or prevent structural damage to non-replaceable structural elements by softening the structural response elastically through gap-opening mechanisms. To dissipate seismic energy, friction elements or replaceable yielding energy dissipation elements are also included. Post-tensioning is often used as a part of these systems to return the structure to a plumb, upright position (self-center) after the earthquake has passed. This paper summarizes the state of the art for self-centering seismic lateral force resisting systems and outlines current research challenges for these systems.

  5. Total integrated performance excellence system (TIPES): A true north direction for a clinical trial support center.

    Science.gov (United States)

    Sather, Mike R; Parsons, Sherry; Boardman, Kathy D; Warren, Stuart R; Davis-Karim, Anne; Griffin, Kevin; Betterton, Jane A; Jones, Mark S; Johnson, Stanley H; Vertrees, Julia E; Hickey, Jan H; Salazar, Thelma P; Huang, Grant D

    2018-03-01

    This paper presents the quality journey taken by a Federal organization over more than 20 years. These efforts have resulted in the implementation of a Total Integrated Performance Excellence System (TIPES) that combines key principles and practices of established quality systems. The Center has progressively integrated quality system frameworks including the Malcolm Baldrige National Quality Award (MBNQA) Framework and Criteria for Performance Excellence, ISO 9001, and the Organizational Project Management Maturity Model (OPM3), as well as the supplemental quality systems of ISO 15378 (packaging for medicinal products) and ISO 21500 (guide to project management), to systematically improve all areas of operations. These frameworks were selected for applicability to Center processes and systems, consistency and reinforcement of complementary approaches, and international acceptance. External validations include the MBNQA, the highest quality award in the US, continued registration and conformance to ISO standards and guidelines, and multiple VA and state awards. With a focus on a holistic approach to quality involving processes, systems and personnel, this paper presents activities and lessons that were critical to building TIPES and establishing the quality environment for conducting clinical research in support of Veterans and national health care.

  6. Total integrated performance excellence system (TIPES): A true north direction for a clinical trial support center

    Directory of Open Access Journals (Sweden)

    Mike R. Sather

    2018-03-01

    Full Text Available This paper presents the quality journey taken by a Federal organization over more than 20 years. These efforts have resulted in the implementation of a Total Integrated Performance Excellence System (TIPES) that combines key principles and practices of established quality systems. The Center has progressively integrated quality system frameworks including the Malcolm Baldrige National Quality Award (MBNQA) Framework and Criteria for Performance Excellence, ISO 9001, and the Organizational Project Management Maturity Model (OPM3), as well as the supplemental quality systems of ISO 15378 (packaging for medicinal products) and ISO 21500 (guide to project management), to systematically improve all areas of operations. These frameworks were selected for applicability to Center processes and systems, consistency and reinforcement of complementary approaches, and international acceptance. External validations include the MBNQA, the highest quality award in the US, continued registration and conformance to ISO standards and guidelines, and multiple VA and state awards. With a focus on a holistic approach to quality involving processes, systems and personnel, this paper presents activities and lessons that were critical to building TIPES and establishing the quality environment for conducting clinical research in support of Veterans and national health care.

  7. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. Telemedicine spirometry training and quality assurance program in primary care centers of a public health system.

    Science.gov (United States)

    Marina Malanda, Nuria; López de Santa María, Elena; Gutiérrez, Asunción; Bayón, Juan Carlos; Garcia, Larraitz; Gáldiz, Juan B

    2014-04-01

    Forced spirometry is essential for diagnosing respiratory diseases and is widely used across levels of care. However, several studies have shown that spirometry quality in primary care is not ideal, with risks of misdiagnosis. Our objective was to assess the feasibility and performance of a telemedicine-based training and quality assurance program for forced spirometry in primary care. The two phases included (1) a 9-month pilot study involving 15 centers, in which spirometry tests were assessed by the Basque Office for Health Technology Assessment, and (2) the introduction of the program to all centers in the Public Basque Health Service. Technicians first received 4 h of training, and, subsequently, they sent all tests to the reference laboratory using the program. Quality assessment was performed in accordance with clinical guidelines (A and B, good; C-F, poor). In the first phase, 1,894 spirometry tests were assessed, showing an improvement in quality: acceptable quality tests increased from 57% at the beginning to 78% after 6 months and 83% after 9 months. In the second phase, the positive trend was maintained after the inclusion of 36 additional centers (61%, 87%, and 84% at the same time points). (1) The quality of spirometry tests improved in all centers. (2) The program provides a tool for transferring data that allows monitoring of its quality and training of the technicians who perform the tests. (3) This approach is useful for improving spirometry quality in the routine practice of a public health system.

  10. Characterizing complexity in socio-technical systems: a case study of a SAMU Medical Regulation Center.

    Science.gov (United States)

    Righi, Angela Weber; Wachs, Priscila; Saurin, Tarcísio Abreu

    2012-01-01

    Complexity theory has been adopted by a number of studies as a benchmark to investigate the performance of socio-technical systems, especially those characterized by relevant cognitive work. However, there is little guidance on how to assess, systematically, the extent to which a system is complex. The main objective of this study is to carry out a systematic analysis of a SAMU (Mobile Emergency Medical Service) Medical Regulation Center in Brazil, based on the core characteristics of complex systems presented in previous studies. The assessment was based on direct observations and nine interviews: three with the medical doctors who regulate emergencies, three with radio operators and three with telephone attendants. The results indicated that, to a great extent, the core characteristics of complexity are magnified due to basic shortcomings in the design of the work system. Thus, some recommendations are put forward with a view to reducing the unnecessary complexity that hinders the performance of the socio-technical system.

  11. Assessment team report on flight-critical systems research at NASA Langley Research Center

    Science.gov (United States)

    Siewiorek, Daniel P. (Compiler); Dunham, Janet R. (Compiler)

    1989-01-01

    The quality, coverage, and distribution of effort of the flight-critical systems research program at NASA Langley Research Center were assessed. Within the scope of the Assessment Team's review, the research program was found to be very sound. All tasks under the current research program were at least partially addressing industry needs. The general recommendations were to expand the program resources to provide additional coverage of high-priority industry needs, including operations and maintenance, and to focus the program on an actual hardware and software system that is under development.

  12. Direct numerical control of machine tools in a nuclear research center by the CAMAC system

    International Nuclear Information System (INIS)

    Zwoll, K.; Mueller, K.D.; Becks, B.; Erven, W.; Sauer, M.

    1977-01-01

    The production of mechanical parts in research centers can be improved by connecting several numerically controlled machine tools to a central process computer via a data link. The CAMAC Serial Highway, with its expandable structure, yields an economical and flexible system for this purpose. The CAMAC system also facilitates the development of modular components controlling the machine tools themselves. A CAMAC installation controlling three different machine tools connected to a central computer (PDP11) via the CAMAC Serial Highway is described. Besides this application, part of the CAMAC hardware and software can also be used for a great variety of scientific experiments.

  13. Fiber optic transmission system delivered to Fusion Research Center of Japan Atomic Energy Research Institute

    International Nuclear Information System (INIS)

    Hayashida, Mutsuo; Hiramoto, Kiyoshi; Yamazaki, Kunihiro

    1983-01-01

    In general there are many sources of electromagnetically induced noise on the premises of factories, power plants and substations. In such electrically harsh environments, for computer data transmission that needs high-speed processing and high reliability, optical fiber cable is superior to coaxial or flat-type cable because of its immunity to induced noise and its wide bandwidth. Showa Electric Wire and Cable Co., Ltd. has delivered and installed a computer data transmission system consisting of optical modems and optical fiber cables connecting every experiment building on the premises of the Fusion Research Center of the Japan Atomic Energy Research Institute. This paper describes the outline of this system. (author)

  14. A reliability centered maintenance model applied to the auxiliary feedwater system of a nuclear power plant

    International Nuclear Information System (INIS)

    Araujo, Jefferson Borges

    1998-01-01

    The main objective of maintenance in a nuclear power plant is to assure that structures, systems and components will perform their design functions with reliability and availability in order to achieve safe and economical electric power generation. Reliability Centered Maintenance (RCM) is a method of systematic review used to develop or optimize Preventive Maintenance Programs. This study presents the objectives, concepts, organization and methods used in the development of an RCM application to nuclear power plants. Some examples of this application are included, considering the Auxiliary Feedwater System of a generic two-loop PWR nuclear power plant of Westinghouse design. (author)

  15. Storage Information Management System (SIMS) Spaceflight Hardware Warehousing at Goddard Space Flight Center

    Science.gov (United States)

    Kubicko, Richard M.; Bingham, Lindy

    1995-01-01

    Goddard Space Flight Center (GSFC) on-site and leased warehouses contain thousands of items of ground support equipment (GSE) and flight hardware including spacecraft, scaffolding, computer racks, stands, holding fixtures, test equipment, spares, etc. The control of these warehouses, and the management, accountability, and control of the items within them, are accomplished by the Logistics Management Division. To facilitate this management and tracking effort, the Logistics and Transportation Management Branch is developing a system to provide warehouse personnel, property owners, and managers with storage and inventory information. This paper will describe that PC-based system and address how it will improve GSFC warehouse and storage management.

  16. Selection of melter systems for the DOE/Industrial Center for Waste Vitrification Research

    International Nuclear Information System (INIS)

    Bickford, D.F.

    1993-01-01

    The EPA has designated vitrification as the best developed available technology for immobilization of High-Level Nuclear Waste. In a recent federal facilities compliance agreement between the EPA, the State of Washington, and the DOE, the DOE agreed to vitrify all of the Low Level Radioactive Waste resulting from processing of High Level Radioactive Waste stored at the Hanford Site. This is expected to result in the requirement of 100 ton per day Low Level Radioactive Waste melters. Thus, there is increased need for the rapid adaptation of commercial melter equipment to DOE's needs. DOE has needed a facility where commercial pilot scale equipment could be operated on surrogate (non-radioactive) simulations of typical DOE waste streams. The DOE/Industry Center for Vitrification Research (Center) was established in 1992 at the Clemson University Department of Environmental Systems Engineering, Clemson, SC, to address that need. This report discusses some of the characteristics of the melter types selected for installation of the Center. An overall objective of the Center has been to provide the broadest possible treatment capability with the minimum number of melter units. Thus, units have been sought which have broad potential application, and which had construction characteristics which would allow their adaptation to various waste compositions, and various operating conditions, including extreme variations in throughput, and widely differing radiological control requirements. The report discusses waste types suitable for vitrification; technical requirements for the application of vitrification to low level mixed wastes; available melters and systems; and selection of melter systems. An annotated bibliography is included

  17. Selection of melter systems for the DOE/Industrial Center for Waste Vitrification Research

    Energy Technology Data Exchange (ETDEWEB)

    Bickford, D.F.

    1993-12-31

    The EPA has designated vitrification as the best developed available technology for immobilization of High-Level Nuclear Waste. In a recent federal facilities compliance agreement between the EPA, the State of Washington, and the DOE, the DOE agreed to vitrify all of the Low Level Radioactive Waste resulting from processing of High Level Radioactive Waste stored at the Hanford Site. This is expected to result in the requirement of 100 ton per day Low Level Radioactive Waste melters. Thus, there is increased need for the rapid adaptation of commercial melter equipment to DOE's needs. DOE has needed a facility where commercial pilot scale equipment could be operated on surrogate (non-radioactive) simulations of typical DOE waste streams. The DOE/Industry Center for Vitrification Research (Center) was established in 1992 at the Clemson University Department of Environmental Systems Engineering, Clemson, SC, to address that need. This report discusses some of the characteristics of the melter types selected for installation of the Center. An overall objective of the Center has been to provide the broadest possible treatment capability with the minimum number of melter units. Thus, units have been sought which have broad potential application, and which had construction characteristics which would allow their adaptation to various waste compositions, and various operating conditions, including extreme variations in throughput, and widely differing radiological control requirements. The report discusses waste types suitable for vitrification; technical requirements for the application of vitrification to low level mixed wastes; available melters and systems; and selection of melter systems. An annotated bibliography is included.

  18. System design and as-built MCNP model comparison for the Lujan Center target moderator reflector system

    International Nuclear Information System (INIS)

    Muhrer, G.; Ferguson, P.D.; Russell, G.J.; Pitcher, E.J.

    2000-01-01

    During the design of the Manuel Lujan, Jr., Neutron Scattering Center target, a simplified Monte Carlo model was used to estimate target system performance and to aid engineers as decisions were made regarding the construction of the target system. Although the simplified model ideally would perfectly reflect the as-built system performance, assumptions were made in the model during the design process that may result in deviations between the model predictions and the as-built system performance. Now that the Lujan Center target system has been completed, a more detailed, as-built, model of the target system has been completed. The purpose of this work is to investigate differences between the predicted target system performance of the simplified model and the as-built model from the standpoint of time-averaged moderator brightness. Calculated discrepancies between the two models have been isolated to a few key issues. Figure 1 shows MCNP geometric plots of the simplified and as-built models. Major differences between these two models include details in the moderator designs (plena) and piping, full versus partial moderator canisters (only in the direction of the extracted neutron beam for the simplified model), and reflector details including cooling pipes and engineering tolerance gaps. In addition, Fig. 1 demonstrates that the detailed model includes shielding and additional material beyond that which was modeled by the original simplified model

  19. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility, with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  20. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
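
    The abstract does not show Orion's actual commands, so the Python sketch below only illustrates the "allocate-when-needed" paradigm it describes: compute nodes are held for the lifetime of a single big data task and released immediately afterwards. All names (allocated_nodes, run_spark_job, the node count) are hypothetical placeholders, not part of the real Tianhe-2 interface.

    ```python
    # Generic illustration of "allocate-when-needed"; names are hypothetical and
    # the allocation/release calls are stand-ins for a real resource manager.
    import subprocess
    from contextlib import contextmanager

    @contextmanager
    def allocated_nodes(n: int):
        """Hold compute nodes only for the lifetime of one big data task."""
        print(f"allocating {n} nodes")      # placeholder for a real allocation call
        try:
            yield list(range(n))            # pretend node IDs
        finally:
            print(f"releasing {n} nodes")   # nodes returned at once, no idle occupation

    def run_spark_job(nodes, script: str) -> int:
        """Launch a user script on the allocated nodes (sketch only)."""
        return subprocess.call(["echo", f"spark-submit on {len(nodes)} nodes: {script}"])

    with allocated_nodes(64) as nodes:
        run_spark_job(nodes, "genome_pipeline.py")
    ```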

  1. Concurrent Mission and Systems Design at NASA Glenn Research Center: The Origins of the COMPASS Team

    Science.gov (United States)

    McGuire, Melissa L.; Oleson, Steven R.; Sarver-Verhey, Timothy R.

    2012-01-01

    Established at the NASA Glenn Research Center (GRC) in 2006 to meet the need for rapid mission analysis and multi-disciplinary systems design for in-space and human missions, the Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team is a multidisciplinary, concurrent engineering group whose primary purpose is to perform integrated systems analysis, but it is also capable of designing any system that involves one or more of the disciplines present in the team. The authors were involved in the development of the COMPASS team and its design process, and are continuously making refinements and enhancements. The team was unofficially started in the early 2000s as part of the distributed team known as Team JIMO (Jupiter Icy Moons Orbiter) in support of the multi-center collaborative JIMO spacecraft design during Project Prometheus. This paper documents the origins of a concurrent mission and systems design team at GRC and how it evolved into the COMPASS team, including defining the process, gathering the team and tools, building the facility, and performing studies.

  2. Technological drivers in data centers and telecom systems: Multiscale thermal, electrical, and energy management

    International Nuclear Information System (INIS)

    Garimella, Suresh V.; Persoons, Tim; Weibel, Justin; Yeh, Lian-Tuu

    2013-01-01

    Highlights: ► Thermal management approaches reviewed against energy usage of IT industry. ► Challenges of energy efficiency in large-scale electronic systems highlighted. ► Underlying drivers for progress at the business and technology levels identified. ► Thermal, electrical and energy management challenges discussed as drivers. ► Views of IT system operators, manufacturers and integrators represented. - Abstract: We identify technological drivers for tomorrow’s data centers and telecommunications systems, including thermal, electrical and energy management challenges, based on discussions at the 2nd Workshop on Thermal Management in Telecommunication Systems and Data Centers in Santa Clara, California, on April 25–26, 2012. The relevance of thermal management in electronic systems is reviewed against the background of the energy usage of the information technology (IT) industry, encompassing perspectives of different sectors of the industry. The underlying drivers for progress at the business and technology levels are identified. The technological challenges are reviewed in two main categories – immediate needs and future needs. Enabling cooling techniques that are currently under development are also discussed

  3. Design of an advanced human-centered supervisory system for a nuclear fuel reprocessing system

    International Nuclear Information System (INIS)

    Riera, B.; Lambert, M.; Martel, G.

    1999-01-01

    In the field of highly automated processes, our research concerns supervisory system design adapted to supervision and fault diagnosis by human operators. The interpretation of decisional human behaviour models shows that the tasks of human operators require different information, which has repercussions on supervisory system design. We propose an advanced human-centred supervisory system (AHCSS) which is better adapted to human beings, because it integrates new representations of the production system (such as functional and behavioural aspects) with the use of advanced detection and localization algorithms. Based on an approach using these new concepts, an AHCSS was created for a nuclear fuel reprocessing system. (authors)

  4. Changing the batch system in a Tier 1 computing center: why and how

    Science.gov (United States)

    Chierici, Andrea; Dal Pra, Stefano

    2014-06-01

    At the Italian Tier 1 center at CNAF we are evaluating the possibility of changing the current production batch system. This activity is motivated mainly by the search for a more flexible licensing model and by the desire to avoid vendor lock-in. We performed a technology tracking exercise and, among many possible solutions, chose to evaluate Grid Engine as an alternative because its adoption is increasing in the HEPiX community and because it is supported by the EMI middleware that we currently use on our computing farm. Another INFN site evaluated Slurm, and we will compare our results in order to understand the pros and cons of the two solutions. We will present the results of our evaluation of Grid Engine, in order to understand whether it can fit the requirements of a Tier 1 center, compared to the solution we adopted long ago. We performed a survey and a critical re-evaluation of our farming infrastructure: many production software components (accounting and monitoring above all) rely on our current solution, and changing it required us to write new wrappers and adapt the infrastructure to the new system. We believe the results of this investigation can be very useful to other Tier-1 and Tier-2 centers in a similar situation, where the effort of switching may appear too hard to bear. We will provide guidelines in order to understand how difficult this operation can be and how long the change may take.
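
    A minimal sketch of the kind of wrapper the authors mention having to write so that existing accounting and monitoring tools keep working after a backend change: a single site-level submit() call translated into a Grid Engine qsub invocation. The queue name, log paths, and the extra accounting fields a real Tier-1 wrapper would carry are assumptions.

    ```python
    # Sketch of a site-level submission wrapper targeting Grid Engine's qsub;
    # only basic flags are shown, real wrappers would add resource requests,
    # accounting tags, error handling, etc.
    import subprocess

    def submit(job_name: str, queue: str, script: str, log_dir: str = "/var/log/batch") -> str:
        """Submit a job through qsub and return the scheduler's raw reply."""
        cmd = [
            "qsub",
            "-N", job_name,                      # job name, reused by accounting lookups
            "-q", queue,                         # target queue
            "-o", f"{log_dir}/{job_name}.out",   # stdout file
            "-e", f"{log_dir}/{job_name}.err",   # stderr file
            script,
        ]
        return subprocess.check_output(cmd, text=True)

    # Example call (queue name and paths are hypothetical):
    # print(submit("cms_reco_123", "prod", "/home/prod/run_reco.sh"))
    ```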

  5. LORIS: A web-based data management system for multi-center studies.

    Directory of Open Access Journals (Sweden)

    Samir Das

    2012-01-01

    Full Text Available LORIS (Longitudinal Online Research and Imaging System) is a modular and extensible web-based data management system that integrates all aspects of a multi-center study: from heterogeneous data acquisition (imaging, clinical, behavior, genetics) to storage, processing and ultimately dissemination. It provides a secure, user-friendly, and streamlined platform to automate the flow of clinical trials and complex multi-center studies. A subject-centric internal organization allows researchers to capture and subsequently extract all information, longitudinal or cross-sectional, from any subset of the study cohort. Extensive error-checking and quality control procedures, security, data management, data querying and administrative functions provide LORIS with a triple capability: (i) continuous project coordination and monitoring of data acquisition, (ii) data storage/cleaning/querying, and (iii) interfacing with arbitrary external data processing pipelines. LORIS is a complete solution that has been thoroughly tested through the full life cycle of a multi-center longitudinal project and is now supporting numerous neurodevelopment and neurodegeneration research projects internationally.

  6. NIF pointing and centering systems and target alignment using a 351 nm laser source

    International Nuclear Information System (INIS)

    Boege, S.J.; Bliss, E.S.; Chocol, C.J.; Holdener, F.R.; Miller, J.L.; Toeppen, J.S.; Vann, C.S.; Zacharias, R.A.

    1996-10-01

    The operational requirements of the National Ignition Facility (NIF) place tight constraints upon its alignment system. In general, the alignment system must establish and maintain the correct relationships between beam position, beam angle, laser component clear apertures, and the target. At the target, this includes adjustment of beam focus to obtain the correct spot size. This must be accomplished for all beamlines in a time consistent with planned shot rates and yet, in the front end and main laser, beam control functions cannot be initiated until the amplifiers have sufficiently cooled so as to minimize dynamic thermal distortions during and after alignment and wavefront optimization. The scope of the task dictates an automated system that implements parallel processes. We describe reticle choices and other alignment references, insertion of alignment beams, principles of operation of the Chamber Center Reference System 2048 and Target Alignment Sensor, and the anticipated alignment sequence that will occur between shots

  7. Mathematical model for adaptive control system of ASEA robot at Kennedy Space Center

    Science.gov (United States)

    Zia, Omar

    1989-01-01

    The dynamic properties and the mathematical model for the adaptive control of the robotic system presently under investigation at the Robotic Application and Development Laboratory at Kennedy Space Center are discussed. NASA is currently investigating the use of robotic manipulators for mating and demating of fuel lines to the Space Shuttle Vehicle prior to launch. The robotic system used as a testbed for this purpose is an ASEA IRB-90 industrial robot with adaptive control capabilities. The system was tested and its performance with respect to stability was improved by using an analogue force controller. The objective of this research project is to determine the mathematical model of the system operating under force feedback control with varying dynamic internal perturbation in order to provide continuous stable operation under variable load conditions. A series of lumped parameter models are developed. The models include some effects of robot structural dynamics, sensor compliance, and workpiece dynamics.
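
    A minimal lumped-parameter sketch in the spirit of the models described here, assuming an admittance-style outer force loop, an ideal inner position loop, and a linear sensor/workpiece stiffness; the parameter values are illustrative and are not taken from the ASEA IRB-90 testbed.

    ```python
    # Single-axis force-control sketch: the outer loop converts force error into a
    # velocity command, the (assumed ideal) inner loop integrates it, and contact
    # is modeled as a linear stiffness. All numbers are assumptions.
    k_env = 5.0e4        # combined sensor/workpiece stiffness (N/m)
    k_f   = 2.0e-4       # force-loop gain (m/s per N of force error)
    f_ref = 40.0         # desired contact force (N)

    x, dt = 0.0, 1e-3    # penetration (m) and time step (s)
    for _ in range(2000):
        f_meas = k_env * max(x, 0.0)        # measured contact force
        v_cmd = k_f * (f_ref - f_meas)      # admittance-style force controller
        x += v_cmd * dt                     # ideal inner loop tracks the command

    print(f"steady-state force ~ {k_env * x:.1f} N (target {f_ref} N)")
    ```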

  8. Diversity and MIMO Performance Evaluation of Common Phase Center Multi Element Antenna Systems

    Directory of Open Access Journals (Sweden)

    V. Papamichael

    2008-06-01

    Full Text Available The diversity and Multiple Input Multiple Output (MIMO) performance provided by common phase center multi element antenna (CPCMEA) systems is evaluated using two practical methods which make use of the realized active element antenna patterns. These patterns include both the impact of the mutual coupling and the mismatch power loss at antenna ports. As a case study, two and four printed Inverted F Antenna (IFA) systems are evaluated by means of Effective Diversity Gain (EDG) and Capacity (C). EDG is measured in terms of the signal-to-noise ratio (SNR) enhancement at a specific outage probability and in terms of the SNR reduction for achieving a desired average bit error rate (BER). The concept of receive antenna selection in MIMO systems is also investigated and the simulation results show a 43% improvement in the 1% outage C of a reconfigurable 2x2 MIMO system over a fixed 2x2 one.
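
    For orientation, the snippet below evaluates the two figures of merit named in the abstract using standard textbook definitions: equal-power MIMO capacity per channel realization and the 1% outage capacity over Monte Carlo realizations. The paper's own evaluation is measurement-based, using realized active element patterns, which are not reproduced here; an i.i.d. Rayleigh channel is assumed instead.

    ```python
    # Textbook MIMO capacity and outage capacity over an assumed i.i.d. Rayleigh
    # channel (the paper itself uses measured, pattern-based channels).
    import numpy as np

    def mimo_capacity(H: np.ndarray, snr_linear: float) -> float:
        """Capacity (bit/s/Hz) with equal power allocation across transmit antennas."""
        nr, nt = H.shape
        m = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
        return float(np.real(np.log2(np.linalg.det(m))))

    def outage_capacity(caps: np.ndarray, outage: float = 0.01) -> float:
        """Capacity exceeded in (1 - outage) of the realizations, e.g. the 1% outage C."""
        return float(np.quantile(caps, outage))

    rng = np.random.default_rng(0)
    caps = np.array([
        mimo_capacity((rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2),
                      snr_linear=10.0)     # 10 dB SNR
        for _ in range(5000)
    ])
    print(f"1% outage capacity ~ {outage_capacity(caps):.2f} bit/s/Hz")
    ```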

  9. The Johnson Space Center management information systems: User's guide to JSCMIS

    Science.gov (United States)

    Bishop, Peter C.; Erickson, Lloyd

    1990-01-01

    The Johnson Space Center Management Information System (JSCMIS) is an interface to computer data bases at the NASA Johnson Space Center which allows an authorized user to browse and retrieve information from a variety of sources with minimum effort. The User's Guide to JSCMIS is the supplement to the JSCMIS Research Report which details the objectives, the architecture, and implementation of the interface. It is a tutorial on how to use the interface and a reference for details about it. The guide is structured like an extended JSCMIS session, describing all of the interface features and how to use them. It also contains an appendix with each of the standard FORMATs currently included in the interface. Users may review them to decide which FORMAT most suits their needs.

  10. Profitability indicators of milk production cost center in intensive systems of production

    Directory of Open Access Journals (Sweden)

    Glauber dos Santos

    2012-01-01

    Full Text Available The objective was to estimate profitability indicators of the milk cost center on farms with a high volume of daily production in confinement. The intent was also to identify the components with the greatest influence on the operational cost. We used data from three milk production systems based on purebred Holstein herds. All expenses related to lactating and dry cows were treated as part of the milk production cost center. The methodology used total cost and operating cost in the profitability analysis. One production system, by presenting a positive gross margin and net result, was able to keep producing in the short, medium and long term. Another production system had a positive gross margin and net result, with conditions to survive in the short and medium term. Finally, the third production system showed a negative gross margin, decapitalizing and going into debt, since revenues were not enough to cover even the effective operating expenses. The components of the effective operational cost with the greatest impact on milk cost and income were, in decreasing order, feeding, labor, miscellaneous expenses, sanitation, energy, milking, reproduction, equipment rental, BST and taxes.
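
    A small numeric sketch of the indicators discussed, using the usual cost-accounting definitions (gross margin over the effective operating cost, net result after depreciation, and a total-cost result that also charges opportunity costs). The figures are hypothetical and are not the three farms' data.

    ```python
    # Illustrative profitability indicators for a milk cost center; definitions
    # follow common farm cost accounting and the numbers are assumed.

    def profitability(revenue, effective_operating_cost, depreciation, opportunity_cost):
        gross_margin = revenue - effective_operating_cost      # short-term viability
        net_result = gross_margin - depreciation               # medium-term viability
        total_result = net_result - opportunity_cost           # long-term viability
        return gross_margin, net_result, total_result

    gm, net, total = profitability(revenue=120_000,
                                   effective_operating_cost=85_000,
                                   depreciation=15_000,
                                   opportunity_cost=25_000)
    print(gm, net, total)   # 35000 20000 -5000 -> viable short/medium term only
    ```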

  11. Johnson Space Center's Solar and Wind-Based Renewable Energy System

    Science.gov (United States)

    Vasquez, A.; Ewert, M.; Rowlands, J.; Post, K.

    2009-01-01

    The NASA Johnson Space Center (JSC) in Houston, Texas has a Sustainability Partnership team that seeks ways for earth-based sustainability practices to also benefit space exploration research. A renewable energy gathering system was installed in 2007 at the JSC Child Care Center (CCC) which also offers a potential test bed for space exploration power generation and remote monitoring and control concepts. The system comprises: 1) several different types of photovoltaic panels (29 kW), 2) two wind-turbines (3.6 kW total), and 3) one roof-mounted solar thermal water heater and tank. A tie to the JSC local electrical grid was provided to accommodate excess power. The total first year electrical energy production was 53 megawatt-hours. A web-based real-time metering system collects and reports system performance and weather data. Improvements in areas of the CCC that were detected during subsequent energy analyses and some concepts for future efforts are also presented.

  12. Health at the center of health systems reform: how philosophy can inform policy.

    Science.gov (United States)

    Sturmberg, Joachim P; Martin, Carmel M; Moes, Mark M

    2010-01-01

    Contemporary views hold that health and disease can be defined as objective states and thus should determine the design and delivery of health services. Yet health concepts are elusive and contestable. Health is neither an individual construction, a reflection of societal expectations, nor only the absence of pathologies. Based on philosophical and sociological theory, empirical evidence, and clinical experience, we argue that health has simultaneously objective and subjective features that converge into a dynamic complex-adaptive health model. Health (or its dysfunction, illness) is a dynamic state representing complex patterns of adaptation to bodily, mental, social, and environmental challenges, resulting in bodily homeostasis and personal internal coherence. The "balance of health" model, which is emergent, self-organizing, dynamic, and adaptive, underpins the very essence of medicine. This model should be the foundation for health systems design and should also inform therapeutic approaches, policy decision-making, and the development of emerging health service models. A complex adaptive health system focused on achieving the best possible "personal" health outcomes must provide the broad policy frameworks and resources required to implement people-centered health care. People-centered health systems are emergent in nature, resulting in locally different but mutually compatible solutions across the whole health system.

  13. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    Energy Technology Data Exchange (ETDEWEB)

    Pagliosa, Andre; Raucci-Neto, Walter; Silva-Souza, Yara Teresinha Correa; Alfredo, Edson, E-mail: ysousa@unaerp.br [Universidade de Ribeirao Preto (UNAERP), SP (Brazil). Fac. de Odontologia; Sousa-Neto, Manoel Damiao; Versiani, Marco Aurelio [Universidade de Sao Paulo (USP), Ribeirao Preto, SP (Brazil). Fac. de Odontologia

    2015-03-01

    The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey’s tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape. (author)
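
    The transportation and centering values reported above are conventionally derived from pre- and post-instrumentation dentin measurements (as in Gambill and Glickman's formulation); the sketch below uses that common formulation with hypothetical measurements, since the abstract does not spell out the exact computation used.

    ```python
    # Common transportation / centering-ratio formulas (assumed, not quoted from
    # the paper). Inputs are the shortest distances (mm) from the mesial and
    # distal root surfaces to the canal wall, before (pre) and after (post)
    # instrumentation, at a given level.

    def canal_transportation(mesial_pre, mesial_post, distal_pre, distal_post):
        """Signed transportation; 0 means no deviation, sign gives the direction."""
        return (mesial_pre - mesial_post) - (distal_pre - distal_post)

    def centering_ratio(mesial_pre, mesial_post, distal_pre, distal_post):
        """Ratio of the smaller to the larger wall reduction; 1 = perfectly centered."""
        m = abs(mesial_pre - mesial_post)
        d = abs(distal_pre - distal_post)
        lo, hi = sorted((m, d))
        return lo / hi if hi else 1.0

    # Hypothetical measurements at the 3 mm level (mm):
    print(round(canal_transportation(1.20, 1.05, 0.90, 0.80), 3))  # 0.05
    print(round(centering_ratio(1.20, 1.05, 0.90, 0.80), 3))       # 0.667
    ```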

  14. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    Directory of Open Access Journals (Sweden)

    André PAGLIOSA

    2015-01-01

    Full Text Available Abstract: The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey’s tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape.

  15. Computed tomography evaluation of rotary systems on the root canal transportation and centering ability

    International Nuclear Information System (INIS)

    Pagliosa, Andre; Raucci-Neto, Walter; Silva-Souza, Yara Teresinha Correa; Alfredo, Edson; Sousa-Neto, Manoel Damiao; Versiani, Marco Aurelio

    2015-01-01

    The endodontic preparation of curved and narrow root canals is challenging, with a tendency for the prepared canal to deviate away from its natural axis. The aim of this study was to evaluate, by cone-beam computed tomography, the transportation and centering ability of curved mesiobuccal canals in maxillary molars after biomechanical preparation with different nickel-titanium (NiTi) rotary systems. Forty teeth with angles of curvature ranging from 20° to 40° and radii between 5.0 mm and 10.0 mm were selected and assigned into four groups (n = 10), according to the biomechanical preparative system used: Hero 642 (HR), Liberator (LB), ProTaper (PT), and Twisted File (TF). The specimens were inserted into an acrylic device and scanned with computed tomography prior to, and following, instrumentation at 3, 6 and 9 mm from the root apex. The canal degree of transportation and centering ability were calculated and analyzed using one-way ANOVA and Tukey’s tests (α = 0.05). The results demonstrated no significant difference (p > 0.05) in shaping ability among the rotary systems. The mean canal transportation was: -0.049 ± 0.083 mm (HR); -0.004 ± 0.044 mm (LB); -0.003 ± 0.064 mm (PT); -0.021 ± 0.064 mm (TF). The mean canal centering ability was: -0.093 ± 0.147 mm (HR); -0.001 ± 0.100 mm (LB); -0.002 ± 0.134 mm (PT); -0.033 ± 0.133 mm (TF). Also, there was no significant difference among the root segments (p > 0.05). It was concluded that the Hero 642, Liberator, ProTaper, and Twisted File rotary systems could be safely used in curved canal instrumentation, resulting in satisfactory preservation of the original canal shape. (author)

  16. The trauma ecosystem: The impact and economics of new trauma centers on a mature statewide trauma system.

    Science.gov (United States)

    Ciesla, David J; Pracht, Etienne E; Leitz, Pablo T; Spain, David A; Staudenmayer, Kristan L; Tepas, Joseph J

    2017-06-01

    Florida serves as a model for the study of trauma system performance. Between 2010 and 2014, 5 new trauma centers were opened alongside 20 existing centers. The purpose of this study was to explore the impact of trauma system expansion on system triage performance and trauma center patient profiles. A statewide data set was queried for all injury-related discharges from adult acute care hospitals using International Classification of Diseases, Ninth Revision (ICD-9) codes for 2010 and 2014. The data set, inclusion criteria, and definitions of high-risk injury were chosen to match those used by the Florida Department of Health in its trauma registry. Hospitals were classified as existing Level I (E1) or Level II (E2) trauma centers and new Level II (N2) centers. Five N2 centers were established 11.6 to 85.3 miles from existing centers. Field and overall trauma system triage of high-risk patients was less accurate, with increased overtriage and no change in undertriage. Annual volume at N2 centers increased but did not change at E1 and E2 centers. In 2014, patients at E1 and E2 centers were slightly older and less severely injured, while those at N2 centers were substantially younger and more severely injured than in 2010. The injured patient-payer mix changed, with a decrease in self-pay and commercial patients and an increase in government-sponsored patients at E1 and E2 centers, and an increase in self-pay and commercial patients with a decrease in government-sponsored patients at N2 centers. Designation of new trauma centers in a mature system was associated with a change in established trauma center demographics and economics without an improvement in trauma system triage performance. These findings suggest that the health of an entire trauma system network must be considered in the design and implementation of a regional trauma system. Therapeutic/care management study, level IV; epidemiological, level IV.
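
    For readers unfamiliar with the triage metrics, the sketch below computes undertriage and overtriage under one common formulation (undertriage as the fraction of high-risk patients treated outside trauma centers; overtriage as the fraction of trauma center patients who were low-risk). The study's registry definitions and denominators are not given in the abstract, so the definitions and counts here are assumptions.

    ```python
    # One common formulation of system triage rates; definitions and the example
    # counts are assumptions, not the Florida registry's exact method.

    def triage_rates(high_risk_at_tc: int, high_risk_elsewhere: int, low_risk_at_tc: int):
        """Return (undertriage, overtriage) as fractions."""
        undertriage = high_risk_elsewhere / (high_risk_at_tc + high_risk_elsewhere)
        overtriage = low_risk_at_tc / (high_risk_at_tc + low_risk_at_tc)
        return undertriage, overtriage

    # Hypothetical annual counts for a statewide system:
    ut, ot = triage_rates(high_risk_at_tc=9_000,
                          high_risk_elsewhere=3_000,
                          low_risk_at_tc=18_000)
    print(f"undertriage {ut:.0%}, overtriage {ot:.0%}")   # 25%, 67%
    ```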

  17. NASA Goddard Space Flight Center Robotic Processing System Program Automation Systems, volume 2

    Science.gov (United States)

    Dobbs, M. E.

    1991-01-01

    Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form. Some of the areas covered include: (1) mission requirements; (2) automation management system; (3) Space Transportation System (STS) Hitchhiker Payload; (4) Spacecraft Command Language (SCL) scripts; (5) SCL software components; (6) RoMPS EasyLab Command & Variable summary for rack stations and annealer module; (7) support electronics assembly; (8) SCL uplink packet definition; (9) SC-4 EasyLab System Memory Map; (10) Servo Axis Control Logic Suppliers; and (11) annealing oven control subsystem.

  18. US8,994,532 "Data Center Equipment Location and Monitoring System"

    DEFF Research Database (Denmark)

    2014-01-01

    A data center equipment location system includes both hardware and software to provide for location, monitoring, security and identification of servers and other equipment in equipment racks. The system provides a wired alternative to the wireless RFID tag system by using electronic ID tags...... connected to each piece of equipment, each electronic ID tag connected directly by wires to a equipment rack controller on the equipment rack. The equipment rack controllers then link over a local area network to a central control computer. The central control computer provides an operator interface......, and runs a software application program that communicates with the equipment rack controllers. The software application program of the central control computer stores IDs of the equipment rack controllers and each of its connected electronic ID tags in a database. The software application program...

  19. Space Station Environmental Control and Life Support System Test Facility at Marshall Space Flight Center

    Science.gov (United States)

    Springer, Darlene

    1989-01-01

    Different aspects of Space Station Environmental Control and Life Support System (ECLSS) testing are currently taking place at Marshall Space Flight Center (MSFC). Unique to this testing is the variety of test areas and the fact that all are located in one building. The north high bay of building 4755, the Core Module Integration Facility (CMIF), contains the following test areas: the Subsystem Test Area, the Comparative Test Area, the Process Material Management System (PMMS), the Core Module Simulator (CMS), the End-use Equipment Facility (EEF), and the Pre-development Operational System Test (POST) Area. This paper addresses the facility that supports these test areas and briefly describes the testing in each area. Future plans for the building and Space Station module configurations will also be discussed.

  20. Internet Protocol Display Sharing Solution for Mission Control Center Video System

    Science.gov (United States)

    Brown, Michael A.

    2009-01-01

    With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new, innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable the sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications can readily substitute for the current visual architecture, but quality and speed may need to be sacrificed to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage for the many operators and products. The DS process shall collectively automate the sharing of images while focusing on such characteristics as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal/loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, host/recipient controllability, and, above all, an enterprise solution that provides ownership to the whole

  1. Systemic lupus erythematosus and thyroid disease - Experience in a single medical center in Taiwan.

    Science.gov (United States)

    Liu, Yu-Chuan; Lin, Wen-Ya; Tsai, Ming-Chin; Fu, Lin-Shien

    2017-06-28

    To investigate the association of systemic lupus erythematosus (SLE) with thyroid diseases in a medical center in central Taiwan. This is a retrospective cohort of 2796 SLE patients in a tertiary referral medical center from 2000 to 2013. We screened for SLE using the catastrophic illness registration of the national insurance bureau, and for thyroid diseases using ICD-9 codes, then confirmed the diagnoses by thyroid function tests, autoantibodies, and medical and/or surgical intervention. We compared the rates of hyperthyroidism, hypothyroidism and autoimmune thyroid disease (AITD) in SLE patients and in 11,184 matched controls. We calculated the rates of these thyroid diseases and of positive antibodies to thyroglobulin (ATGAb) and thyroid peroxidase (TPOAb) in SLE patients grouped by the presence of overlap syndrome and anti-dsDNA antibody. We also compared the association of thyroid diseases with severe SLE conditions, including renal involvement, central nervous system (CNS) involvement, and thrombocytopenia. Compared to the matched controls, the cumulative incidences of thyroid disease, including hyperthyroidism, hypothyroidism and AITD, were all higher in SLE patients. SLE patients with thyroid diseases also carried a higher risk for severe complications such as renal involvement (p = 0.024) and central nervous system involvement. In summary, SLE patients had higher rates of hyperthyroidism, hypothyroidism, and AITD than the matched controls. Among lupus patients, the risks of thyroid diseases are even higher in the presence of overlap syndrome. SLE patients with thyroid diseases had a higher risk of renal and CNS involvement. Copyright © 2017. Published by Elsevier B.V.

  2. BNL ALARA Center experience with an information exchange system on dose control at nuclear power plants

    International Nuclear Information System (INIS)

    Baum, J.W.; Khan, T.A.

    1992-01-01

    The essential elements of an international information exchange system on dose control at nuclear power plants are summarized. Information was collected from literature abstracting services, by attending technical meetings, by circulating data collection forms, and through personal contacts. Data are assembled in various databases and periodically disseminated to several hundred interested participants through a variety of publications and at technical meetings. Immediate on-line access to the data is available to participants with modems, commercially available communications software, and a password that is provided by the Brookhaven National Laboratory (BNL) ALARA Center to authorized users of the system. Since January 1992, rapid access also has been provided to persons with fax machines. Some information is available for "polling" the BNL system at any time, and other data can be installed for polling on request. Most information disseminated to date has been through publications; however, new protocols, simplified by the ALARA Center staff, and the convenience of fax machines are likely to make the earlier availability of information through these mechanisms increasingly important.

  3. Simulation and off-line programming at Sandia's Intelligent Systems and Robotics Center

    Energy Technology Data Exchange (ETDEWEB)

    Xavier, P.G.; Fahrenholtz, J.C.; McDonald, M. [Sandia National Labs., Albuquerque, NM (United States). Intelligent Systems and Robotics Center] [and others]

    1997-11-01

    One role of the Intelligent Systems and Robotics Center (ISRC) at Sandia National Laboratories is to address certain aspects of Sandia's mission to design, manufacture, maintain, and dismantle nuclear weapon components. Hazardous materials, devices, and environments are often involved. Because of shrinking resources, these tasks must be accomplished with a minimum of prototyping, while maintaining high reliability. In this paper, the authors describe simulation, off-line programming/planning, and related tools which are in use, under development, and being researched to solve these problems at the ISRC.

  4. Intermediate Photovoltaic System Application Experiment. Oklahoma Center for Science and Arts. Phase II. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1984-01-01

    This report presents the key results of the Phase II efforts for the Intermediate PV System Applications Experiment at the Oklahoma Center for Science and Arts (OCSA). This phase of the project involved fabrication, installation and integration of a nominal 140 kW flat panel PV system made up of large, square polycrystalline-silicon solar cell modules, each nominally 61 cm x 122 cm in size. The output of the PV modules, supplied by Solarex Corporation, was augmented, 1.35 to 1 at peak, by a row of glass reflectors, appropriately tilted northward. The PV system interfaces with the Oklahoma Gas and Electric Utility at the OCSA main switchgear. Any excess power generated by the system is fed into the utility under a one to one buyback arrangement. Except for a shortfall in the system output, presently suspected to be due to the poor performance of the modules, no serious problems were encountered. Certain value engineering changes implemented during construction and early operational failure events associated with the power conditioning system are also described. The system is currently undergoing extended testing and evaluation.

  5. Optimization of reliability centered predictive maintenance scheme for inertial navigation system

    International Nuclear Information System (INIS)

    Jiang, Xiuhong; Duan, Fuhai; Tian, Heng; Wei, Xuedong

    2015-01-01

    The goal of this study is to propose a reliability centered predictive maintenance scheme for a complex structure Inertial Navigation System (INS) with several redundant components. GO Methodology is applied to build the INS reliability analysis model—GO chart. Components' Remaining Useful Life (RUL) and system reliability are updated dynamically based on the combination of the components' lifetime distribution functions, stress samples, and the system GO chart. Considering the redundant design in the INS, maintenance time is based not only on components' RUL, but also (and mainly) on the timing of when system reliability fails to meet the set threshold. The definition of component maintenance priority balances three factors: a component's importance to the system, its risk degree, and its detection difficulty. A Maintenance Priority Number (MPN) is introduced, which may provide quantitative maintenance priority results for all components. A maintenance unit time cost model is built based on the components' MPN, the components' RUL predictive model, and maintenance intervals for the optimization of the maintenance scope. The proposed scheme can serve as a reference for INS maintenance. Finally, three numerical examples prove the proposed predictive maintenance scheme is feasible and effective. - Highlights: • A dynamic PdM with a rolling horizon is proposed for INS with redundant components. • GO Methodology is applied to build the system reliability analysis model. • A concept of MPN is proposed to quantify the maintenance sequence of components. • An optimization model is built to select the optimal group of maintenance components. • The optimization goal is minimizing the cost of maintaining system reliability
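
    A minimal sketch of the MPN idea follows. The abstract names the three factors but not how they are combined; a multiplicative rule analogous to FMEA's Risk Priority Number is assumed here purely for illustration, and all component names and scores are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            importance: int            # 1 (minor) .. 10 (critical to system function)
            risk_degree: int           # 1 (benign failure) .. 10 (severe consequence)
            detection_difficulty: int  # 1 (easily detected) .. 10 (hard to detect)

            @property
            def mpn(self) -> int:
                # Assumed combination rule: simple product of the three scores.
                return self.importance * self.risk_degree * self.detection_difficulty

        components = [
            Component("gyro_x", 9, 7, 6),
            Component("accel_y", 8, 5, 4),
            Component("power_unit", 7, 8, 3),
        ]

        # Rank components for inclusion in the next maintenance window.
        for c in sorted(components, key=lambda c: c.mpn, reverse=True):
            print(f"{c.name}: MPN = {c.mpn}")

    Under such a scheme, the components with the largest MPN would be scheduled first whenever the predicted system reliability drops below the set threshold.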

  6. Space Environment Testing of Photovoltaic Array Systems at NASA's Marshall Space Flight Center

    Science.gov (United States)

    Phillips, Brandon S.; Schneider, Todd A.; Vaughn, Jason A.; Wright, Kenneth H., Jr.

    2015-01-01

    To successfully operate a photovoltaic (PV) array system in space requires planning and testing to account for the effects of the space environment. It is critical to understand space environment interactions not only on the PV components, but also the array substrate materials, wiring harnesses, connectors, and protection circuitry (e.g. blocking diodes). Key elements of the space environment which must be accounted for in a PV system design include: Solar Photon Radiation, Charged Particle Radiation, Plasma, and Thermal Cycling. While solar photon radiation is central to generating power in PV systems, the complete spectrum includes short wavelength ultraviolet components, which photo-ionize materials, as well as long wavelength infrared which heat materials. High energy electron radiation has been demonstrated to significantly reduce the output power of III-V type PV cells; and proton radiation damages material surfaces - often impacting coverglasses and antireflective coatings. Plasma environments influence electrostatic charging of PV array materials, and must be understood to ensure that long duration arcs do not form and potentially destroy PV cells. Thermal cycling impacts all components on a PV array by inducing stresses due to thermal expansion and contraction. Given such demanding environments, and the complexity of structures and materials that form a PV array system, mission success can only be ensured through realistic testing in the laboratory. NASA's Marshall Space Flight Center has developed a broad space environment test capability to allow PV array designers and manufacturers to verify their system's integrity and avoid costly on-orbit failures. The Marshall Space Flight Center test capabilities are available to government, commercial, and university customers. Test solutions are tailored to meet the customer's needs, and can include performance assessments, such as flash testing in the case of PV cells.

  7. Real-Time Data Processing Systems and Products at the Alaska Earthquake Information Center

    Science.gov (United States)

    Ruppert, N. A.; Hansen, R. A.

    2007-05-01

    The Alaska Earthquake Information Center (AEIC) receives data from over 400 seismic sites located within the state boundaries and the surrounding regions and serves as a regional data center. In 2007, the AEIC reported ~20,000 seismic events, with the largest event of M6.6 in Andreanof Islands. The real-time earthquake detection and data processing systems at AEIC are based on the Antelope system from BRTT, Inc. This modular and extensible processing platform allows an integrated system complete from data acquisition to catalog production. Multiple additional modules constructed with the Antelope toolbox have been developed to fit particular needs of the AEIC. The real-time earthquake locations and magnitudes are determined within 2-5 minutes of the event occurrence. AEIC maintains a 24/7 seismologist-on-duty schedule. Earthquake alarms are based on the real-time earthquake detections. Significant events are reviewed by the seismologist on duty within 30 minutes of the occurrence with information releases issued for significant events. This information is disseminated immediately via the AEIC website, ANSS website via QDDS submissions, through e-mail, cell phone and pager notifications, via fax broadcasts and recorded voice-mail messages. In addition, automatic regional moment tensors are determined for events with M>=4.0. This information is posted on the public website. ShakeMaps are being calculated in real-time with the information currently accessible via a password-protected website. AEIC is designing an alarm system targeted for the critical lifeline operations in Alaska. AEIC maintains an extensive computer network to provide adequate support for data processing and archival. For real-time processing, AEIC operates two identical, interoperable computer systems in parallel.

  8. John M. Eisenberg Patient Safety Awards. System innovation: Veterans Health Administration National Center for Patient Safety.

    Science.gov (United States)

    Heget, Jeffrey R; Bagian, James P; Lee, Caryl Z; Gosbee, John W

    2002-12-01

    In 1998 the Veterans Health Administration (VHA) created the National Center for Patient Safety (NCPS) to lead the effort to reduce adverse events and close calls systemwide. NCPS's aim is to foster a culture of safety in the Department of Veterans Affairs (VA) by developing and providing patient safety programs and delivering standardized tools, methods, and initiatives to the 163 VA facilities. To create a system-oriented approach to patient safety, NCPS looked for models in fields such as aviation, nuclear power, human factors, and safety engineering. Core concepts included a non-punitive approach to patient safety activities that emphasizes systems-based learning, the active seeking out of close calls, which are viewed as opportunities for learning and investigation, and the use of interdisciplinary teams to investigate close calls and adverse events through a root cause analysis (RCA) process. Participation by VA facilities and networks was voluntary. NCPS has always aimed to develop a program that would be applicable both within the VA and beyond. NCPS's full patient safety program was tested and implemented throughout the VA system from November 1999 to August 2000. Program components included an RCA system for use by caregivers at the front line, a system for the aggregate review of RCA results, information systems software, alerts and advisories, and cognitive aids. Following program implementation, NCPS saw a 900-fold increase in reporting of close calls of high-priority events, reflecting the level of commitment to the program by VHA leaders and staff.

  9. JANE, A new information retrieval system for the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    Trubey, D.K.

    1991-05-01

    A new information storage and retrieval system has been developed for the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory to replace mainframe systems that have become obsolete. The database contains citations and abstracts of literature which were selected by RSIC analysts and indexed with terms from a controlled vocabulary. The database, begun in 1963, has been maintained continuously since that time. The new system, called JANE, incorporates automatic indexing techniques and on-line retrieval using the RSIC Data General Eclipse MV/4000 minicomputer. Automatic indexing and retrieval techniques based on fuzzy-set theory allow the presentation of results in order of Retrieval Status Value. The fuzzy-set membership function depends on term frequency in the titles and abstracts and on Term Discrimination Values, which indicate the resolving power of the individual terms. These values are determined by the Cover Coefficient method. The use of a commercial database to store and retrieve the indexing information permits rapid retrieval of the stored documents. Comparisons of the new and presently used systems for actual searches of the literature indicate that it is practical to replace the mainframe systems with a minicomputer system similar to the present version of JANE. 18 refs., 10 figs
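
    The following is a minimal sketch of this style of fuzzy retrieval scoring, not the published JANE algorithm: each document receives a Retrieval Status Value whose per-term contribution combines normalized term frequency with an assumed per-term discrimination weight. All terms, weights, and documents are illustrative.

        from collections import Counter

        def membership(tf: int, max_tf: int, discrimination: float) -> float:
            """Assumed fuzzy membership: normalized term frequency scaled by the
            term's discrimination value (its resolving power)."""
            if max_tf == 0:
                return 0.0
            return (tf / max_tf) * discrimination

        def retrieval_status_value(query_terms, doc_text, discrimination_values):
            counts = Counter(doc_text.lower().split())
            max_tf = max(counts.values()) if counts else 0
            return sum(
                membership(counts.get(t, 0), max_tf, discrimination_values.get(t, 0.5))
                for t in query_terms
            )

        docs = {
            "doc1": "gamma ray shielding benchmark for concrete shielding walls",
            "doc2": "neutron transport code user manual",
        }
        disc = {"gamma": 0.7, "shielding": 0.9, "neutron": 0.8}  # illustrative weights
        query = ["gamma", "shielding"]

        # Present documents in decreasing order of Retrieval Status Value.
        for d in sorted(docs, key=lambda d: retrieval_status_value(query, docs[d], disc), reverse=True):
            print(d, round(retrieval_status_value(query, docs[d], disc), 3))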

  10. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985; a 64-node, 1 GF machine completed in August 1987; and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)
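
    As an illustration of the nearest-neighbor layout described above (not code from the Columbia project), the sketch below computes the four communication partners of a processor on an N1 x N2 torus using modular arithmetic; the 8 x 8 arrangement in the example is only an assumed factorization of a 64-node machine.

        def torus_neighbors(i: int, j: int, n1: int, n2: int):
            """Coordinates of the four nearest neighbors of processor (i, j)
            on an n1 x n2 two-dimensional torus (wrap-around boundaries)."""
            return {
                "up":    ((i - 1) % n1, j),
                "down":  ((i + 1) % n1, j),
                "left":  (i, (j - 1) % n2),
                "right": (i, (j + 1) % n2),
            }

        # Example: a 64-node machine assumed to be arranged as an 8 x 8 torus.
        print(torus_neighbors(0, 0, 8, 8))
        # {'up': (7, 0), 'down': (1, 0), 'left': (0, 7), 'right': (0, 1)}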

  11. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with the Navier-Stokes computations that became widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations; CFD, aeroelastic, and controls coupling for flutter suppression and active control; and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment that encompasses many technologies. 40 refs

  12. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  13. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  14. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak-performance. For sparse matrices storage and operations can be saved by operating on and storing only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is preferable to sparse matrix technique when the matrices are not large, because the high computational speed compensates fully the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each ...
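
    A small sketch of the tradeoff described above follows; it solves one least squares problem with a sparse iterative routine and then with a dense routine. The problem size and density are illustrative only and are not taken from the paper.

        import numpy as np
        from scipy.sparse import random as sparse_random
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        A_sparse = sparse_random(2000, 500, density=0.01, random_state=0, format="csr")
        b = rng.standard_normal(2000)

        # Sparse path: stores only the nonzeros, but relies on indirect addressing.
        x_sparse = lsqr(A_sparse, b)[0]

        # Dense path: more storage and arithmetic, but contiguous data let dense
        # kernels run near peak speed on vector/parallel machines.
        A_dense = A_sparse.toarray()
        x_dense, *_ = np.linalg.lstsq(A_dense, b, rcond=None)

        # Compare residual norms of the two solutions.
        print(np.linalg.norm(A_dense @ x_dense - b), np.linalg.norm(A_dense @ x_sparse - b))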

  15. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, to make the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
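
    For readers unfamiliar with MPI virtual topologies, the sketch below (written with mpi4py, and not the authors' code) shows the basic pattern: the process count is factored into a Cartesian grid so that each rank exchanges FDTD halo data only with its nearest neighbors. The three-dimensional, non-periodic layout is an assumption made for illustration.

        # Run with, e.g.: mpiexec -n 8 python fdtd_topology.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        # Let MPI factor the process count into a balanced 3-D grid, then build
        # the Cartesian communicator (non-periodic boundaries, ranks may be reordered).
        dims = MPI.Compute_dims(comm.Get_size(), [0, 0, 0])
        cart = comm.Create_cart(dims, periods=[False, False, False], reorder=True)

        # Source/destination ranks along each axis; these are the partners used
        # when exchanging the one-cell-thick halo of E and H field components.
        neighbors = {axis: cart.Shift(axis, 1) for axis in range(3)}

        if cart.Get_rank() == 0:
            print("process grid:", dims)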

  16. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. For them to remain so, particularly in a context of deregulation, three conditions must be met: competitiveness, safety, and public acceptance. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating material deterioration mechanisms, and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF's long-term R and D in the field of numerical simulation is given, in particular five challenges taken up by EDF together with its industrial and scientific partners. (author)

  17. An integrated methodology for process improvement and delivery system visualization at a multidisciplinary cancer center.

    Science.gov (United States)

    Singprasong, Rachanee; Eldabi, Tillal

    2013-01-01

    Multidisciplinary cancer centers require an integrated, collaborative, and streamlined workflow in order to provide high quality of patient care. Due to the complex nature of cancer care and continuing changes to treatment techniques and technologies, it is a constant struggle for centers to obtain a systemic and holistic view of treatment workflow for improving the delivery systems. Project management techniques, a responsibility matrix, and a swim-lane activity diagram representing the sequence of activities can be combined for data collection, presentation, and evaluation of the patient care. This paper presents this integrated methodology using multidisciplinary meetings and a walking-the-route approach for data collection, an integrated responsibility matrix and swim-lane activity diagram with activity time for data representation, and a 5-why and gap analysis approach for data analysis. This enables collection of the right level of detail in a shorter time frame by identifying process flaws and deficiencies while being independent of the nature of the patient's disease or treatment techniques. A case study of a multidisciplinary regional cancer centre is used to illustrate the effectiveness of the proposed methodology and demonstrates that the methodology is simple to understand, allowing for minimal training of staff and rapid implementation. © 2011 National Association for Healthcare Quality.

  18. Using CLIPS in a distributed system: The Network Control Center (NCC) expert system

    Science.gov (United States)

    Wannemacher, Tom

    1990-01-01

    This paper describes an intelligent troubleshooting system for the Help Desk domain. It was developed on an IBM-compatible 80286 PC using Microsoft C and CLIPS, and on an AT&T 3B2 minicomputer using the UNIFY database and a combination of shell scripts, C programs, and SQL queries. The two computers are linked by a LAN. The functions of this system are to help non-technical NCC personnel handle trouble calls, to keep a log of problem calls with complete, concise information, and to keep a historical database of problems. The database helps identify hardware and software problem areas and provides a source of new rules for the troubleshooting knowledge base.

  19. Systems integration for the Kennedy Space Center (KSC) Robotics Applications Development Laboratory (RADL)

    Science.gov (United States)

    Davis, V. Leon; Nordeen, Ross

    1988-01-01

    A laboratory for developing robotics technology for hazardous and repetitive Shuttle and payload processing activities is discussed. An overview of the computer hardware and software responsible for integrating the laboratory systems is given. The center's anthropomorphic robot is placed on a track allowing it to be moved to different stations. Various aspects of the laboratory equipment are described, including industrial robot arm control, smart systems integration, the supervisory computer, programmable process controller, real-time tracking controller, image processing hardware, and control display graphics. Topics of research include: automated loading and unloading of hypergolics for space vehicles and payloads; the use of mobile robotics for security, fire fighting, and hazardous spill operations; nondestructive testing for SRB joint and seal verification; Shuttle Orbiter radiator damage inspection; and Orbiter contour measurements. The possibility of expanding the laboratory in the future is examined.

  20. Benefits of the implementation and use of a warehouse management system in a distribution center

    Directory of Open Access Journals (Sweden)

    Alexsander Machado

    2011-12-01

    Full Text Available The aim of this article was to describe how the deployment and use of a Warehouse Management System (WMS) can help increase productivity, reduce errors and speed up the flow of information in a distribution center. The research method was the case study. We chose a distributor of goods located in Vale do Rio dos Sinos, RS, which sells and distributes products for business use to companies throughout Brazil. The main research technique was participant observation. In order to highlight the observed results, we collected two indicators: productivity and errors in the separation of items for orders. After four months of observation, both showed significant improvement, strengthening the hypothesis that the selection and implementation of the management system was beneficial for the company.

  1. Activity-based costing via an information system: an application created for a breast imaging center.

    Science.gov (United States)

    Hawkins, H; Langer, J; Padua, E; Reaves, J

    2001-06-01

    Activity-based costing (ABC) is a process that enables the estimation of the cost of producing a product or service. More accurate than traditional charge-based approaches, it emphasizes analysis of processes, and more specific identification of both direct and indirect costs. This accuracy is essential in today's healthcare environment, in which managed care organizations necessitate responsible and accountable costing. However, to be successfully utilized, it requires time, effort, expertise, and support. Data collection can be tedious and expensive. By integrating ABC with information management (IM) and systems (IS), organizations can take advantage of the process orientation of both, extend and improve ABC, and decrease resource utilization for ABC projects. In our case study, we have examined the process of a multidisciplinary breast center. We have mapped the constituent activities and established cost drivers. This information has been structured and included in our information system database for subsequent analysis.
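
    A minimal sketch of the costing idea described above: the cost of a service is built up from its constituent activities, each priced as a cost-driver rate times the quantity of the driver consumed. All activity names, rates, and quantities are illustrative and are not figures from the breast center study.

        activities = {
            # activity: (cost-driver rate in $ per driver unit, driver units per exam)
            "scheduling":        (4.00, 1),    # $ per booking
            "image_acquisition": (1.50, 20),   # $ per technologist-minute
            "radiologist_read":  (5.00, 10),   # $ per physician-minute
            "reporting":         (0.75, 8),    # $ per transcription-minute
        }

        cost_per_exam = sum(rate * qty for rate, qty in activities.values())
        print(f"Estimated cost per exam: ${cost_per_exam:.2f}")  # 90.00 with these numbers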

  2. PASTE: patient-centered SMS text tagging in a medication management system.

    Science.gov (United States)

    Stenner, Shane P; Johnson, Kevin B; Denny, Joshua C

    2012-01-01

    To evaluate the performance of a system that extracts medication information and administration-related actions from patient short message service (SMS) messages. Mobile technologies provide a platform for electronic patient-centered medication management. MyMediHealth (MMH) is a medication management system that includes a medication scheduler, a medication administration record, and a reminder engine that sends text messages to cell phones. The object of this work was to extend MMH to allow two-way interaction using mobile phone-based SMS technology. Unprompted text-message communication with patients using natural language could engage patients in their healthcare, but presents unique natural language processing challenges. The authors developed a new functional component of MMH, the Patient-centered Automated SMS Tagging Engine (PASTE). The PASTE web service uses natural language processing methods, custom lexicons, and existing knowledge sources to extract and tag medication information from patient text messages. A pilot evaluation of PASTE was completed using 130 medication messages anonymously submitted by 16 volunteers via a website. System output was compared with manually tagged messages. Verified medication names, medication terms, and action terms reached high F-measures of 91.3%, 94.7%, and 90.4%, respectively. The overall medication name F-measure was 79.8%, and the medication action term F-measure was 90%. Other studies have demonstrated systems that successfully extract medication information from clinical documents using semantic tagging, regular expression-based approaches, or a combination of both approaches. This evaluation demonstrates the feasibility of extracting medication information from patient-generated medication messages.
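
    To make the extraction task concrete, the toy sketch below tags medication names and action terms in a message using small lexicons. It illustrates lexicon-based tagging in general, not the PASTE implementation, and every lexicon entry is hypothetical.

        import re

        MED_LEXICON = {"tylenol", "lisinopril", "metformin"}            # hypothetical entries
        ACTION_LEXICON = {"take", "took", "taken", "skipped", "missed"}

        def tag_message(text: str):
            """Return (token, tag) pairs for recognized medication and action terms."""
            tags = []
            for token in re.findall(r"[a-z']+", text.lower()):
                if token in MED_LEXICON:
                    tags.append((token, "MEDICATION"))
                elif token in ACTION_LEXICON:
                    tags.append((token, "ACTION"))
            return tags

        print(tag_message("Took my metformin at 8am but skipped the lisinopril"))
        # [('took', 'ACTION'), ('metformin', 'MEDICATION'), ('skipped', 'ACTION'), ('lisinopril', 'MEDICATION')]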

  3. Outpatient and Ambulatory Surgery Consumer Assessment of Healthcare Providers and Systems (OAS CAHPS) survey for ambulatory surgical centers - Facility

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of ambulatory surgical center ratings for the Outpatient and Ambulatory Surgery Consumer Assessment of Healthcare Providers and Systems (OAS CAHPS) survey....

  4. Design of a Glenn Research Center Solar Field Grid-Tied Photovoltaic Power System

    Science.gov (United States)

    Eichenberg, Dennis J.

    2009-01-01

    The NASA Glenn Research Center (GRC) designed, developed, and installed a 37.5 kW DC photovoltaic (PV) Solar Field in the GRC West Area in the 1970s for the purpose of testing PV panels for various space and terrestrial applications. The PV panels are arranged to provide a nominal 120 VDC. The GRC Solar Field has been extremely successful in meeting its mission. The PV panels and the supporting electrical systems are all near their end of life. GRC has designed a 72 kW DC grid-tied PV power system to replace the existing GRC West Area Solar Field. The 72 kW DC grid-tied PV power system will provide DC solar power for GRC PV testing applications, and provide AC facility power for all times that research power is not required. A grid-tied system is connected directly to the utility distribution grid. Facility power can be obtained from the utility system as normal. The PV system is synchronized with the utility system to provide power for the facility, and excess power is provided to the utility for use by all. The project transfers space technology to terrestrial use via nontraditional partners. GRC personnel glean valuable experience with PV power systems that are directly applicable to various space power systems, and provide valuable space program test data. PV power systems help to reduce harmful emissions and reduce the Nation's dependence on fossil fuels. Power generated by the PV system reduces the GRC utility demand, and the surplus power aids the community. Present global energy concerns reinforce the need for the development of alternative energy systems. Modern PV panels are readily available, reliable, efficient, and economical with a life expectancy of at least 25 years. Modern electronics has been the enabling technology behind grid-tied power systems, making them safe, reliable, efficient, and economical with a life expectancy of at least 25 years. The report concludes that the GRC West Area grid-tied PV power system design is viable for a reliable

  5. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part III

    Science.gov (United States)

    Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.

  6. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part II

    Science.gov (United States)

    Crasner, Aaron I.; Scola, Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.

    2014-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Thermal modeling software was used to run steady state thermal analyses, which were used to both validate the designs and recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.

  7. Introduction of the non-technical skills for surgeons (NOTSS) system in a Japanese cancer center.

    Science.gov (United States)

    Tsuburaya, Akira; Soma, Takahiro; Yoshikawa, Takaki; Cho, Haruhiko; Miki, Tamotsu; Uramatsu, Masashi; Fujisawa, Yoshikazu; Youngson, George; Yule, Steven

    2016-12-01

    Non-technical skills rating systems, which are designed to support surgical performance, have been introduced worldwide, but not officially in Japan. We performed a pilot study to evaluate the "non-technical skills for surgeons" (NOTSS) rating system in a major Japanese cancer center. Upper gastrointestinal surgeons were selected as trainers or trainees. The trainers attended a master-class on NOTSS, which included simulated demo-videos, to promote consistency across the assessments. The trainers thereafter commenced observing the trainees and whole teams, utilizing the NOTSS and "observational teamwork assessment for surgery" (OTAS) rating systems, before and after their education. Four trainers and six trainees were involved in this study. Test scores for understanding human factors and the NOTSS system were 5.89 ± 1.69 and 8.00 ± 1.32 before and after the e-learning, respectively (mean ± SD, p = 0.010). The OTAS scores for the whole team improved significantly after the trainees' education in five out of nine stages (p < 0.05). There were no differences in the NOTSS scores before and after education, with a small improvement in the total scores for the "teamwork and communication" and "leadership" categories. These findings demonstrate that implementing the NOTSS system is feasible in Japan. Education of both surgical trainers and trainees would contribute to better team performance.

  8. Data base management system and display software for the National Geophysical Data Center geomagnetic CD-ROM's

    Science.gov (United States)

    Papitashvili, N. E.; Papitashvili, V. O.; Allen, J. H.; Morris, L. D.

    1995-01-01

    The National Geophysical Data Center has the largest collection of geomagnetic data from the worldwide network of magnetic observatories. The data base management system and retrieval/display software have been developed for the archived geomagnetic data (annual means, monthly, daily, hourly, and 1-minute values) and placed on the center's CD-ROM's to provide users with 'user-oriented' and 'user-friendly' support. This system is described in this paper with a brief outline of provided options.

  9. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  10. Interoperable Access to Near Real Time Ocean Observations with the Observing System Monitoring Center

    Science.gov (United States)

    O'Brien, K.; Hankin, S.; Mendelssohn, R.; Simons, R.; Smith, B.; Kern, K. J.

    2013-12-01

    The Observing System Monitoring Center (OSMC), a project funded by the National Oceanic and Atmospheric Administration's Climate Observations Division (COD), exists to join the discrete 'networks' of In Situ ocean observing platforms -- ships, surface floats, profiling floats, tide gauges, etc. -- into a single, integrated system. The OSMC is addressing this goal through capabilities in three areas focusing on the needs of specific user groups: 1) it provides real time monitoring of the integrated observing system assets to assist management in optimizing the cost-effectiveness of the system for the assessment of climate variables; 2) it makes the stream of real time data coming from the observing system available to scientific end users in an easy-to-use form; and 3) in the future, it will unify the delayed-mode data from platform-focused data assembly centers into a standards-based distributed system that is readily accessible to interested users from the science and education communities. In this presentation, we will be focusing on the efforts of the OSMC to provide interoperable access to the near real time data stream that is available via the Global Telecommunications System (GTS). This is a very rich data source, and includes data from nearly all of the oceanographic platforms that are actively observing. We will discuss how the data is being served out using a number of widely used 'web services' (including OPeNDAP and SOS) and downloadable file formats (KML, csv, xls, netCDF), so that it can be accessed in web browsers and popular desktop analysis tools. We will also be discussing our use of the Environmental Research Division's Data Access Program (ERDDAP), available from NOAA/NMFS, which has allowed us to achieve our goals of serving the near real time data. From an interoperability perspective, it's important to note that access to this stream of data is not just for humans, but also for machine-to-machine requests. We'll also delve into how we
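
    As an example of the machine-to-machine access mentioned above, the sketch below pulls one day of observations from an ERDDAP tabledap endpoint as CSV. Only the general ERDDAP URL and query pattern is assumed; the server address, dataset ID, and variable names are hypothetical placeholders, not the OSMC's actual service.

        import pandas as pd

        BASE = "https://osmc.example.gov/erddap/tabledap"   # hypothetical server
        DATASET = "surface_obs"                             # hypothetical dataset ID

        url = (
            f"{BASE}/{DATASET}.csv"
            "?platform_code,time,latitude,longitude,sea_surface_temperature"
            "&time>=2013-07-01T00:00:00Z&time<2013-07-02T00:00:00Z"
        )

        # ERDDAP's .csv response places a row of units after the header; skip it.
        df = pd.read_csv(url, skiprows=[1])
        print(df.head())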

  11. Systematic Review of Data Mining Applications in Patient-Centered Mobile-Based Information Systems.

    Science.gov (United States)

    Fallah, Mina; Niakan Kalhori, Sharareh R

    2017-10-01

    Smartphones represent a promising technology for patient-centered healthcare. It is claimed that data mining techniques have improved mobile apps to address patients' needs at subgroup and individual levels. This study reviewed the current literature regarding data mining applications in patient-centered mobile-based information systems. We systematically searched PubMed, Scopus, and Web of Science for original studies reported from 2014 to 2016. After screening 226 records at the title/abstract level, the full texts of 92 relevant papers were retrieved and checked against inclusion criteria. Finally, 30 papers were included in this study and reviewed. Data mining techniques have been reported in the development of mobile health apps for three main purposes: data analysis for follow-up and monitoring, early diagnosis and detection for screening purposes, classification/prediction of outcomes, and risk calculation (n = 27); data collection (n = 3); and provision of recommendations (n = 2). The most accurate and frequently applied data mining method was the support vector machine; however, decision trees have shown superior performance in enhancing mobile apps for patients' self-management. Embedded data-mining-based features in mobile apps, such as case detection, prediction/classification, risk estimation, or collection of patient data, particularly during self-management, would save, apply, and analyze patient data during and after care. More intelligent methods, such as artificial neural networks, fuzzy logic, and genetic algorithms, and even hybrid methods, may result in more patient-centered recommendations, providing education, guidance, alerts, and awareness of personalized output.

  12. Alaska Center for Unmanned Aircraft Systems Integration (ACUASI): Operational Support and Geoscience Research

    Science.gov (United States)

    Webley, P. W.; Cahill, C. F.; Rogers, M.; Hatfield, M. C.

    2016-12-01

    Unmanned Aircraft Systems (UAS) have enormous potential for use in geoscience research and supporting operational needs from natural hazard assessment to the mitigation of critical infrastructure failure. They provide a new tool for universities, local, state, federal, and military organizations to collect new measurements not readily available from other sensors. We will present on the UAS capabilities and research of the Alaska Center for Unmanned Aircraft Systems Integration (ACUASI, http://acuasi.alaska.edu/). Our UAS range from the Responder with its dual visible/infrared payload that can provide simultaneous data to our new SeaHunter UAS with 90 lb. payload and multiple hour flight time. ACUASI, as a designated US Federal Aviation Administration (FAA) test center, works closely with the FAA on integrating UAS into the national airspace. ACUASI covers all aspects of working with UAS, from pilot training, airspace navigation, flight operations, and remote sensing analysis to payload design and integration engineering and policy expertise. ACUASI's recent missions range from supporting the mapping of sea ice cover for safe passage of Alaskans across the hazardous winter ice to demonstrating how UAS can be used to provide support during oil spill response. Additionally, we will present on how ACUASI has worked with local authorities in Alaska to integrate UAS into search and rescue operations and with NASA and the FAA on their UAS Traffic Management (UTM) project to fly UAS within the manned airspace. ACUASI is also working on developing new capabilities to sample volcanic plumes and clouds, map forest fire impacts and burn areas, and develop a new citizen network for monitoring snow extent and depth during Northern Hemisphere winters. We will demonstrate how UAS can be integrated in operational support systems and at the same time be used in geoscience research projects to provide high precision, accurate, and reliable observations.

  13. Governing Academic Medical Center Systems: Evaluating and Choosing Among Alternative Governance Approaches.

    Science.gov (United States)

    Chari, Ramya; O'Hanlon, Claire; Chen, Peggy; Leuschner, Kristin; Nelson, Christopher

    2018-02-01

    The ability of academic medical centers (AMCs) to fulfill their triple mission of patient care, medical education, and research is increasingly being threatened by rising financial pressures and resource constraints. Many AMCs are, therefore, looking to expand into academic medical systems, increasing their scale through consolidation or affiliation with other health care systems. As clinical operations grow, though, the need for effective governance becomes even more critical to ensure that the business of patient care does not compromise the rest of the triple mission. Multi-AMC systems, a model in which multiple AMCs are governed by a single body, pose a particular challenge in balancing unity with the needs of component AMCs, and therefore offer lessons for designing AMC governance approaches. This article describes the development and application of a set of criteria to evaluate governance options for one multi-AMC system-the University of California (UC) and its five AMCs. Based on a literature review and key informant interviews, the authors identified criteria for evaluating governance approaches (structures and processes), assessed current governance approaches using the criteria, identified alternative governance options, and assessed each option using the identified criteria. The assessment aided UC in streamlining governance operations to enhance their ability to respond efficiently to change and to act collectively. Although designed for UC and a multi-AMC model, the criteria may provide a systematic way for any AMC to assess the strengths and weaknesses of its governance approaches.

  14. Integration of footprints information systems in palliative care: the case of Medical Center of Central Georgia.

    Science.gov (United States)

    Tsavatewa, Christopher; Musa, Philip F; Ramsingh, Isaac

    2012-06-01

    Healthcare in America continues to be of paramount importance, and one of the most highly debated public policy issues of our time. With annual expenditures already exceeding $2.4 trillion, and yielding less than optimal results, it stands to reason that we must turn to promising tools and solutions, such as information technology (IT), to improve service efficiency and quality of care. Presidential addresses in 2004 and 2008 laid out an agenda, framework, and timeline for national health information technology investment and development. A national initiative was long overdue. In this report we show that advancements in both medical technologies and information systems can be capitalized upon, hence extending information systems usage beyond data collection to include administrative and decision support, care plan development, quality improvement, etc. In this paper we focus on healthcare services for palliative patients. We present the development and preliminary accounts of a successful initiative in the Medical Center of Central Georgia where footprints information technology was modified and integrated into the hospital's palliative care service and existing EMR systems. The project provides evidence that there are a plethora of areas in healthcare in which innovative application of information systems could significantly enhance the care delivered to loved ones, and improve operations at the same time.

  15. The national carbon capture center at the power systems development facility

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-09-01

    The Power Systems Development Facility (PSDF) is a state-of-the-art test center sponsored by the U.S. Department of Energy and dedicated to the advancement of clean coal technology. In addition to the development of advanced coal gasification processes, the PSDF features the National Carbon Capture Center (NCCC) to study CO2 capture from coal-derived syngas and flue gas. The NCCC includes multiple, adaptable test skids that allow technology development of CO2 capture concepts using coal-derived syngas and flue gas in industrial settings. Because of the ability to operate under a wide range of flow rates and process conditions, research at the NCCC can effectively evaluate technologies at various levels of maturity. During the Budget Period Three reporting period, efforts at the NCCC/PSDF focused on testing of pre-combustion CO2 capture and related processes; commissioning and initial testing at the post-combustion CO2 capture facilities; and operating the gasification process to develop gasification related technologies and for syngas generation to test syngas conditioning technologies.

  16. THE NATIONAL CARBON CAPTURE CENTER AT THE POWER SYSTEMS DEVELOPMENT FACILITY

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2011-05-11

    The Power Systems Development Facility (PSDF) is a state-of-the-art test center sponsored by the U.S. Department of Energy and dedicated to the advancement of clean coal technology. In addition to the development of advanced coal gasification processes, the PSDF features the National Carbon Capture Center (NCCC) to study CO2 capture from coal-derived syngas and flue gas. The NCCC includes multiple, adaptable test skids that allow technology development of CO2 capture concepts using coal-derived syngas and flue gas in industrial settings. Because of the ability to operate under a wide range of flow rates and process conditions, research at the NCCC can effectively evaluate technologies at various levels of maturity. During the Budget Period Two reporting period, efforts at the PSDF/NCCC focused on new technology assessment and test planning; designing and constructing post-combustion CO2 capture facilities; testing of pre-combustion CO2 capture and related processes; and operating the gasification process to develop gasification related technologies and for syngas generation to test syngas conditioning technologies.

  17. Crop Production for Advanced Life Support Systems - Observations From the Kennedy Space Center Breadboard Project

    Science.gov (United States)

    Wheeler, R. M.; Sager, J. C.; Prince, R. P.; Knott, W. M.; Mackowiak, C. L.; Stutte, G. W.; Yorio, N. C.; Ruffe, L. M.; Peterson, B. V.; Goins, G. D.

    2003-01-01

    The use of plants for bioregenerative life support for space missions was first studied by the US Air Force in the 1950s and 1960s. Extensive testing was also conducted from the 1960s through the 1980s by Russian researchers located at the Institute of Biophysics in Krasnoyarsk, Siberia, and the Institute for Biomedical Problems in Moscow. NASA initiated bioregenerative research in the 1960s (e.g., Hydrogenomonas) but this research did not include testing with plants until about 1980, with the start of the Controlled Ecological Life Support System (CELSS) Program. The NASA CELSS research was carried out at universities, private corporations, and NASA field centers, including Kennedy Space Center (KSC). The project at KSC began in 1985 and was called the CELSS Breadboard Project to indicate the capability for plugging in and testing various life support technologies; this name has since been dropped but bioregenerative testing at KSC has continued to the present under NASA's Advanced Life Support (ALS) Program. A primary objective of the KSC testing was to conduct pre-integration tests with plants (crops) in a large, atmospherically closed test chamber called the Biomass Production Chamber (BPC). Test protocols for the BPC were based on observations and growing procedures developed by university investigators, as well as procedures developed in plant growth chamber studies at KSC. Growth chamber studies to support BPC testing focused on plant responses to different carbon dioxide (CO2) concentrations, different spectral qualities from various electric lamps, and nutrient film hydroponic culture techniques.

  18. The National Carbon Capture Center at the Power Systems Development Facility: Topical Report

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2011-03-01

    The Power Systems Development Facility (PSDF) is a state-of-the-art test center sponsored by the U.S. Department of Energy and dedicated to the advancement of clean coal technology. In addition to the development of advanced coal gasification processes, the PSDF features the National Carbon Capture Center (NCCC) to study CO2 capture from coal-derived syngas and flue gas. The newly established NCCC will include multiple, adaptable test skids that will allow technology development of CO2 capture concepts using coal-derived syngas and flue gas in industrial settings. Because of the ability to operate under a wide range of flow rates and process conditions, research at the NCCC can effectively evaluate technologies at various levels of maturity. During the Budget Period One reporting period, efforts at the PSDF/NCCC focused on developing a screening process for testing consideration of new technologies; designing and constructing pre- and post-combustion CO2 capture facilities; developing sampling and analytical methods; expanding fuel flexibility of the Transport Gasification process; and operating the gasification process for technology research and for syngas generation to test syngas conditioning technologies.

  19. Heat-pump-centered integrated community energy systems. System development, Consolidated Natural Gas Service Company, interim report

    Energy Technology Data Exchange (ETDEWEB)

    Tison, R.R.; Baker, N.R.; Yudow, B.D.; Sala, D.L.; Donakowski, T.D.; Swenson, P.F.

    1979-08-01

    Heat-pump-centered integrated community energy systems are energy systems for communities that provide heating, cooling, and/or other thermal energy services through the use of heat pumps. Since heat pumps primarily transfer energy from existing and otherwise probably unused sources, rather than convert it from electrical or chemical to thermal form, HP-ICES offer a significant potential for energy savings. Results of the System Development Phase of the HP-ICES Project are given. The heat-actuated (gas) heat-pump incorporated into this HP-ICES concept is under current development and demonstration. The concurrent program was redirected in September 1977 toward large-tonnage applications; it is currently focusing on 60- to 400-ton built-up systems for multi-zone applications. This study evaluates the performance of a HAHP-ICES as applied to a community of residential and commercial buildings. To permit a general assessment of the concept in non-site-specific terms, the sensitivity of the system's performance and economics to climate, community size, utility rate structures, and economic assumptions is explored. (MCW)

  20. European type NPP electric power and vent systems. For safety improvement and proposal of international center

    International Nuclear Information System (INIS)

    Sugiyama, Kenichiro

    2011-01-01

    For the prevention of reactor accidents at nuclear power plants, multiplicity and redundancy of emergency power are most important. In a station blackout accident, European-style manually operated vent operation could minimize the release of radioactive materials and protect the safety of neighboring residents. After the Fukushima Daiichi accident, nuclear power plants could not restart operation even after completing periodic inspections. This article introduces European-type emergency power and vent systems in Switzerland, Sweden, and Germany, together with the state of nuclear power phaseout in those countries, as a reference for upgrading safety and accident mitigation measures and for better public understanding. In addition, to recover trust in nuclear technology it would be important to continue disseminating the latest information on new knowledge from the accident site and on decontamination technologies to domestic and overseas audiences. As an implementation of this, the establishment of a Fukushima international center was proposed. (T. Tanaka)

  1. Modeling and Analysis of Multidiscipline Research Teams at NASA Langley Research Center: A Systems Thinking Approach

    Science.gov (United States)

    Waszak, Martin R.; Barthelemy, Jean-Francois; Jones, Kenneth M.; Silcox, Richard J.; Silva, Walter A.; Nowaczyk, Ronald H.

    1998-01-01

    Multidisciplinary analysis and design is inherently a team activity due to the variety of required expertise and knowledge. As a team activity, multidisciplinary research cannot escape the issues that affect all teams. The level of technical diversity required to perform multidisciplinary analysis and design makes the teaming aspects even more important. A study was conducted at the NASA Langley Research Center to develop a model of multidiscipline teams that can be used to help understand their dynamics and identify key factors that influence their effectiveness. The study sought to apply the elements of systems thinking to better understand the factors, both generic and Langley-specific, that influence the effectiveness of multidiscipline teams. The model of multidiscipline research teams developed during this study has been valuable in identifying means to enhance team effectiveness, recognize and avoid problem behaviors, and provide guidance for forming and coordinating multidiscipline teams.

  2. Doing Systems Engineering Without Thinking About It at NASA Dryden Flight Research Center

    Science.gov (United States)

    Bohn-Meyer, Marta; Kilp, Stephen; Chun, Peggy; Mizukami, Masashi

    2004-01-01

    When asked about his processes in designing a new airplane, Burt Rutan responded: ...there is always a performance requirement. So I start with the basic physics of an airplane that can get those requirements, and that pretty much sizes an airplane... Then I look at the functionality... And then I try a lot of different configurations to meet that, and then justify one at a time, throwing them out... Typically I'll have several different configurations... But I like to experiment, certainly. I like to see if there are other ways to provide the utility. This kind of thinking, engineering as a total systems engineering approach, is what is being instilled in all engineers at the NASA Dryden Flight Research Center.

  3. NASA Marshall Space Flight Center Controls Systems Design and Analysis Branch

    Science.gov (United States)

    Gilligan, Eric

    2014-01-01

    Marshall Space Flight Center maintains a critical national capability in the analysis of launch vehicle flight dynamics and flight certification of GN&C algorithms. MSFC analysts are domain experts in the areas of flexible-body dynamics and control-structure interaction, thrust vector control, sloshing propellant dynamics, and advanced statistical methods. Marshall's modeling and simulation expertise has supported manned spaceflight for over 50 years. Marshall's unparalleled capability in launch vehicle guidance, navigation, and control technology stems from its rich heritage in developing, integrating, and testing launch vehicle GN&C systems dating to the early Mercury-Redstone and Saturn vehicles. The Marshall team is continuously developing novel methods for design, including advanced techniques for large-scale optimization and analysis.

  4. A Human-Centered Smart Home System with Wearable-Sensor Behavior Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Jianting; Liu, Ting; Shen, Chao; Wu, Hongyu; Liu, Wenyi; Su, Man; Chen, Siyun; Jia, Zhanpei

    2016-11-17

    The smart home has recently attracted much research interest owing to its potential to improve the quality of human life. Obtaining the user's demand is the most important and challenging task for optimal appliance scheduling in a smart home, since demand is highly related to the user's unpredictable behavior. In this paper, a human-centered smart home system is proposed to identify user behavior, predict the user's demand, and schedule the household appliances. First, the sensor data from the user's wearable devices are monitored to profile the user's full-day behavior. Then, an appliance-demand matrix, extracted from the history of appliance load data and user behavior, is constructed to predict the user's demand on the home environment. Two simulations are designed to demonstrate user behavior identification, appliance-demand matrix construction, and generation of the optimal appliance scheduling strategy.
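
    The abstract does not spell out how the appliance-demand matrix is built, so the sketch below shows one plausible construction under stated assumptions: each entry is the empirical probability that an appliance is on, conditioned on the behavior recognized from the wearable sensors. All behaviors, appliances, and history values are hypothetical.

        import pandas as pd

        # Hypothetical history: one row per time slot, with the recognized behavior
        # and the observed on/off state of each appliance.
        history = pd.DataFrame({
            "behavior": ["sleeping", "sleeping", "cooking", "cooking", "watching_tv"],
            "hvac":     [1, 1, 0, 0, 1],
            "oven":     [0, 0, 1, 1, 0],
            "tv":       [0, 0, 0, 0, 1],
        })

        # Rows: behaviors; columns: appliances; entries: empirical P(appliance on | behavior).
        demand_matrix = history.groupby("behavior").mean()
        print(demand_matrix)

        # Scheduling hook: given a predicted behavior, appliances whose conditional
        # demand exceeds a threshold stay available; the rest can be deferred.
        predicted = "cooking"
        print(demand_matrix.loc[predicted][demand_matrix.loc[predicted] > 0.5])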

  5. Evaluation of the Accelerate Pheno System: Results from Two Academic Medical Centers.

    Science.gov (United States)

    Lutgring, Joseph D; Bittencourt, Cassiana; McElvania TeKippe, Erin; Cavuoti, Dominick; Hollaway, Rita; Burd, Eileen M

    2018-04-01

    Rapid diagnostic tests are needed to improve patient care and to combat the problem of antimicrobial resistance. The Accelerate Pheno system (Accelerate Diagnostics, Tucson, AZ) is a new diagnostic device that can provide rapid bacterial identification and antimicrobial susceptibility test (AST) results directly from a positive blood culture. The device was compared to the standard of care at two academic medical centers. There were 298 blood cultures included in the study, and the Accelerate Pheno system provided a definitive identification result in 218 instances (73.2%). The Accelerate Pheno system provided a definitive and correct result for 173 runs (58.1%). The Accelerate Pheno system demonstrated an overall sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 94.7%, 98.9%, 83.7%, and 99.7%, respectively. An AST result was available for analysis in 146 instances. The overall category agreement was 94.1% with 12 very major errors, 5 major errors, and 55 minor errors. After a discrepancy analysis, there were 5 very major errors and 4 major errors. The Accelerate Pheno system provided an identification result in 1.4 h and an AST result in 6.6 h; the identification and AST results were 41.5 h and 48.4 h faster than those with the standard of care, respectively. This study demonstrated that the Accelerate Pheno system is able to provide fast and accurate organism identification and AST data. A limitation is the frequency with which cultures required the use of alternative identification and AST methods. Copyright © 2018 American Society for Microbiology.
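
    The four reported performance figures follow from the standard 2x2 confusion-matrix definitions. The sketch below shows those definitions with made-up counts, chosen only so the output lands near the reported percentages; they are not the study's data.

      def diagnostic_metrics(tp, fp, tn, fn):
          """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
          sensitivity = tp / (tp + fn)   # true positives among all truly positive
          specificity = tn / (tn + fp)   # true negatives among all truly negative
          ppv = tp / (tp + fp)           # positive predictive value
          npv = tn / (tn + fn)           # negative predictive value
          return sensitivity, specificity, ppv, npv

      # Hypothetical counts for illustration only.
      sens, spec, ppv, npv = diagnostic_metrics(tp=180, fp=35, tn=3100, fn=10)
      print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")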

  6. The Digital Library for Earth System Education: A Progress Report from the DLESE Program Center

    Science.gov (United States)

    Marlino, M. R.; Sumner, T. R.; Kelly, K. K.; Wright, M.

    2002-12-01

    DLESE is a community-owned and governed digital library offering easy access to high quality electronic resources about the Earth system at all educational levels. Currently in its third year of development and operation, DLESE resources are designed to support systemic educational reform, and include web-based teaching resources, tools, and services for the inclusion of data in classroom activities, as well as a "virtual community center" that supports community goals and growth. "Community-owned" and "community-governed" embody the singularity of DLESE through its unique participatory approach to both library building and governance. DLESE is guided by policy development vested in the DLESE Steering Committee, and informed by Standing Committees centered on Collections, Services, Technology, and Users, and community working groups covering a wide variety of interest areas. This presentation highlights both current and projected status of the library and opportunities for community engagement. It is specifically structured to engage community members in the design of the next version of the library release. The current Version 1.0 of the library consists of a web-accessible graphical user interface connected to a database of catalogued educational resources (approximately 3000); a metadata framework enabling resource characterization; a cataloging tool allowing community cataloging and indexing of materials; a search and discovery system allowing browsing based on topic, grade level, and resource type, and permitting keyword and controlled vocabulary-based searches; and a portal website supporting library use, community action, and DLESE partnerships. Future stages of library development will focus on enhanced community collaborative support; development of controlled vocabularies; collections building and community review systems; resource discovery integrating the National Science Education Standards and geography standards; Earth system science vocabulary
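
    As a minimal sketch of the kind of search and discovery described above, a discovery interface can be thought of as a filter over catalogued resource records. The field names and records below are invented for illustration and are not the actual DLESE metadata framework.

      # Illustrative catalogue records; the schema is an assumption, not the DLESE one.
      records = [
          {"title": "Plate Tectonics Lab", "topic": "geology", "grade": "9-12", "type": "activity"},
          {"title": "Water Cycle Animation", "topic": "hydrology", "grade": "6-8", "type": "visualization"},
          {"title": "El Nino Data Explorer", "topic": "oceanography", "grade": "9-12", "type": "dataset"},
      ]

      def discover(records, grade=None, keyword=None, resource_type=None):
          """Browse by grade level and resource type, with a simple keyword search."""
          hits = []
          for rec in records:
              if grade and rec["grade"] != grade:
                  continue
              if resource_type and rec["type"] != resource_type:
                  continue
              if keyword and keyword.lower() not in (rec["title"] + " " + rec["topic"]).lower():
                  continue
              hits.append(rec)
          return hits

      print(discover(records, grade="9-12", keyword="tectonics"))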

  7. Set up and operation for medical radiation exposure quality control system of health promotion center

    International Nuclear Information System (INIS)

    Kim, Jung Su; Kim, Jung Min; Jung, Hae Kyoung

    2016-01-01

    In this study, a standard model for a medical radiation dosage quality control system is suggested and its usefulness in the clinical field is reviewed. Radiation dosage information from the modalities is gathered from Digital Imaging and Communications in Medicine (DICOM) standard data (such as the DICOM dose SR and DICOM header) and stored in a database. One CT scanner, two digital radiography modalities, and two mammography modalities in one health promotion center in Seoul were used to derive clinical data for one month. From the 703 CT scans collected over that month, the study derives health promotion center reference levels for each exam of 357.9 mGy·cm for abdomen and pelvic CT, 572.4 mGy·cm for non-contrast brain CT, 55.9 mGy·cm for calcium score/heart CT, 54 mGy·cm for chest screening CT (low-dose screening scan), 284.99 mGy·cm for C-spine CT, and 341.85 mGy·cm for L-spine CT. From 1,955 digital radiography cases it reports 274.0 mGy·cm², and for mammography 6.09 mGy based on 536 cases. The use of medical radiation must comply with the principles of justification and optimization; this kind of quality management of medical radiation exposure must be performed in order to follow those principles, and through it procedures that reduce the radiation exposure of patients and staff can be put in place. The results of this study can be applied as a useful tool for performing quality control of medical radiation exposure.
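
    As a hedged sketch of the final aggregation step only (assuming the dose values have already been extracted from the DICOM dose SR or header into tabular form, and assuming the median is used as the local reference level, which the abstract does not state), per-exam reference levels could be derived as follows:

      from statistics import median

      # Hypothetical rows extracted from DICOM dose reports: (exam type, DLP in mGy*cm).
      dose_records = [
          ("abdomen_pelvis_ct", 362.1), ("abdomen_pelvis_ct", 349.8),
          ("brain_ct", 570.0), ("brain_ct", 575.2),
          ("chest_screening_ct", 52.7), ("chest_screening_ct", 55.4),
      ]

      def local_reference_levels(records):
          """Group DLP values by exam type and take the median as the local reference level."""
          by_exam = {}
          for exam, dlp in records:
              by_exam.setdefault(exam, []).append(dlp)
          return {exam: round(median(values), 1) for exam, values in by_exam.items()}

      print(local_reference_levels(dose_records))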

  8. Set up and operation for medical radiation exposure quality control system of health promotion center

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Su; Kim, Jung Min [Korea University, Seoul (Korea, Republic of)]; Jung, Hae Kyoung [Dept. of Diagnostic Radiology, CHA Bundang Medical Center, CHA University, Sungnam (Korea, Republic of)]

    2016-03-15

    In this study, a standard model for a medical radiation dosage quality control system is suggested and its usefulness in the clinical field is reviewed. Radiation dosage information from the modalities is gathered from Digital Imaging and Communications in Medicine (DICOM) standard data (such as the DICOM dose SR and DICOM header) and stored in a database. One CT scanner, two digital radiography modalities, and two mammography modalities in one health promotion center in Seoul were used to derive clinical data for one month. From the 703 CT scans collected over that month, the study derives health promotion center reference levels for each exam of 357.9 mGy·cm for abdomen and pelvic CT, 572.4 mGy·cm for non-contrast brain CT, 55.9 mGy·cm for calcium score/heart CT, 54 mGy·cm for chest screening CT (low-dose screening scan), 284.99 mGy·cm for C-spine CT, and 341.85 mGy·cm for L-spine CT. From 1,955 digital radiography cases it reports 274.0 mGy·cm², and for mammography 6.09 mGy based on 536 cases. The use of medical radiation must comply with the principles of justification and optimization; this kind of quality management of medical radiation exposure must be performed in order to follow those principles, and through it procedures that reduce the radiation exposure of patients and staff can be put in place. The results of this study can be applied as a useful tool for performing quality control of medical radiation exposure.

  9. The National Carbon Capture Center at the Power Systems Development Facility

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2014-12-30

    The National Carbon Capture Center (NCCC) at the Power Systems Development Facility supports the Department of Energy (DOE) goal of promoting the United States’ energy security through reliable, clean, and affordable energy produced from coal. Work at the NCCC supports the development of new power technologies and the continued operation of conventional power plants under CO2 emission constraints. The NCCC includes adaptable slipstreams that allow technology development of CO2 capture concepts using coal-derived syngas and flue gas in industrial settings. Because of the ability to operate under a wide range of flow rates and process conditions, research at the NCCC can effectively evaluate technologies at various levels of maturity and accelerate their development path to commercialization. During its first contract period, from October 1, 2008, through December 30, 2014, the NCCC designed, constructed, and began operation of the Post-Combustion Carbon Capture Center (PC4). Testing of CO2 capture technologies commenced in 2011, and through the end of the contract period, more than 25,000 hours of testing had been achieved, supporting a variety of technology developers. Technologies tested included advanced solvents, enzymes, membranes, sorbents, and associated systems. The NCCC continued operation of the existing gasification facilities, which have been in operation since 1996, to support the advancement of technologies for next-generation gasification processes and pre-combustion CO2 capture. The gasification process operated for 13 test runs, supporting over 30,000 combined hours of gasification and pre-combustion technology developer testing. Throughout the contract period, the NCCC incorporated numerous modifications to the facilities to accommodate technology developers and increase test capabilities. Preparations for further testing were ongoing to continue advancement of the most promising technologies for

  10. The National Carbon Capture Center at the Power Systems Development Facility

    Energy Technology Data Exchange (ETDEWEB)

    Mosser, Morgan [Southern Company Services, Inc., Wilsonville, AL (United States)

    2012-12-31

    The Power Systems Development Facility (PSDF) is a state-of-the-art test center sponsored by the U.S. Department of Energy and dedicated to the advancement of clean coal technology. In addition to the development of high efficiency coal gasification processes, the PSDF features the National Carbon Capture Center (NCCC) to promote new technologies for CO2 capture from coal-derived syngas and flue gas. The NCCC includes multiple, adaptable test skids that allow technology development of CO2 capture concepts using coal-derived syngas and flue gas in industrial settings. Because of the ability to operate under a wide range of flow rates and process conditions, research at the NCCC can effectively evaluate technologies at various levels of maturity and accelerate their development path to commercialization. During the calendar year 2012 portion of the Budget Period Four reporting period, efforts at the NCCC focused on testing of pre- and post-combustion CO2 capture processes and gasification support technologies. Preparations for future testing were on-going as well, and involved facility upgrades and collaboration with numerous technology developers. In the area of pre-combustion, testing was conducted on a new water-gas shift catalyst, a CO2 solvent, and gas separation membranes from four different technology developers, including two membrane systems incorporating major scale-ups. Post-combustion tests involved advanced solvents from three major developers, a gas separation membrane, and two different enzyme technologies. An advanced sensor for gasification operation was evaluated, operation with biomass co-feeding with coal under oxygen-blown conditions was achieved, and progress continued on refining several gasification support technologies.

  11. Location Allocation of Health Care Centers Using Geographical Information System: region 11 of Tehran

    OpenAIRE

    Mohsen Ahadnejad; Hosein Ghaderi; Mohammad Hadian; Payam Haghighatfard; Banafsheh Darvishi; Elham Haghighatfard; Bitasadat Zegordi; Arash Bordbar

    2015-01-01

    Background & Objective: Location allocation of healthcare centers facilitates access to health services, while poor distribution of these centers increases the problems citizens face in reaching them. The main objective of this study was to evaluate the distribution of healthcare centers in the study region and to identify areas deprived of these services. Materials & Methods: This research is a case study that has b...
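
    The record is truncated before the methods, so the following is only a generic illustration of location allocation rather than the study's GIS workflow: a brute-force p-median selection that picks the candidate center sites minimizing total distance to demand points, with made-up coordinates.

      from itertools import combinations
      from math import dist

      # Hypothetical demand points (e.g., population blocks) and candidate center sites (x, y).
      demand_points = [(0, 0), (1, 2), (3, 1), (4, 4), (6, 2)]
      candidate_sites = [(1, 1), (3, 3), (5, 2), (0, 2)]

      def p_median(demand, candidates, p):
          """Pick p sites minimizing the total distance from each demand point to its nearest site."""
          best_sites, best_cost = None, float("inf")
          for sites in combinations(candidates, p):
              cost = sum(min(dist(d, s) for s in sites) for d in demand)
              if cost < best_cost:
                  best_sites, best_cost = sites, cost
          return best_sites, best_cost

      print(p_median(demand_points, candidate_sites, p=2))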

  12. The Geodetic Seamless Archive Centers Service Layer: A System Architecture for Federating Geodesy Data Repositories

    Science.gov (United States)

    McWhirter, J.; Boler, F. M.; Bock, Y.; Jamason, P.; Squibb, M. B.; Noll, C. E.; Blewitt, G.; Kreemer, C. W.

    2010-12-01

    Three geodesy Archive Centers, Scripps Orbit and Permanent Array Center (SOPAC), NASA's Crustal Dynamics Data Information System (CDDIS), and UNAVCO are engaged in a joint effort to define and develop a common Web Service Application Programming Interface (API) for accessing geodetic data holdings. This effort is funded by the NASA ROSES ACCESS Program to modernize the original GPS Seamless Archive Centers (GSAC) technology, which was developed in the 1990s. A new web service interface, the GSAC-WS, is being developed to provide uniform and expanded mechanisms through which users can access our data repositories. In total, our respective archives hold tens of millions of files and contain a rich collection of site/station metadata. Though we serve similar user communities, we currently provide a range of different access methods, query services, and metadata formats. This leads to a lack of consistency in the user's experience and a duplication of engineering effort. The GSAC-WS API and its reference implementation in an underlying Java-based GSAC Service Layer (GSL) support metadata and data queries into site/station-oriented data archives. The general nature of this API makes it applicable to a broad range of data systems. The overall goals of this project are to provide consistent and rich query interfaces for end users and client programs, to develop enabling technology that helps third-party repositories add these web service capabilities, and to enable data queries across a collection of federated GSAC-WS enabled repositories. A fundamental challenge in this project is to provide a common suite of query services across a heterogeneous collection of data while still enabling each repository to expose its specific metadata holdings. To address this challenge we are developing a "capabilities"-based service in which a repository can describe its specific query and metadata capabilities. Furthermore, the architecture of
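
    The abstract does not publish the API itself, so the sketch below only suggests the general shape of a capabilities request followed by a site search against a hypothetical GSAC-style repository; the base URL, paths, and parameter names are all assumptions, not the real GSAC-WS interface.

      import json
      from urllib.parse import urlencode
      from urllib.request import urlopen

      # Hypothetical GSAC-WS style endpoints; real repository URLs and parameters will differ.
      BASE_URL = "https://repository.example.org/gsacws"

      def get_capabilities():
          """Ask the repository which query parameters and metadata it supports."""
          with urlopen(f"{BASE_URL}/capabilities?output=json") as resp:
              return json.load(resp)

      def find_sites(bbox, output="json"):
          """Query site/station metadata within a bounding box (west, south, east, north)."""
          params = urlencode({
              "minlon": bbox[0], "minlat": bbox[1],
              "maxlon": bbox[2], "maxlat": bbox[3],
              "output": output,
          })
          with urlopen(f"{BASE_URL}/site/search?{params}") as resp:
              return json.load(resp)

      # Example usage against a live repository (commented out because the URL is a placeholder):
      # capabilities = get_capabilities()
      # sites = find_sites((-120.0, 32.0, -115.0, 35.0))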

  13. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part I

    Science.gov (United States)

    Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.

    2013-01-01

    Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via the Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components that had excess performance margin with smaller components custom-designed for the power system. Collaboration on mechanical placement reduced potential electromagnetic interference (EMI). Through application of the newly selected electrical components and thermal analysis data, a total redesign of the electronic chassis was accomplished. An innovative forced-convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern to make efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided for accelerated design maturation with substantiated function.

  14. User-centered requirements engineering in health information systems: a study in the hemophilia field.

    Science.gov (United States)

    Teixeira, Leonor; Ferreira, Carlos; Santos, Beatriz Sousa

    2012-06-01

    The use of sophisticated information and communication technologies (ICTs) in the health care domain is a way to improve the quality of services. However, there are also hazards associated with the introduction of ICTs in this domain and a great number of projects have failed due to the lack of systematic consideration of human and other non-technology issues throughout the design or implementation process, particularly in the requirements engineering process. This paper presents the methodological approach followed in the design process of a web-based information system (WbIS) for managing the clinical information in hemophilia care, which integrates the values and practices of user-centered design (UCD) activities into the principles of software engineering, particularly in the phase of requirements engineering (RE). This process followed a paradigm that combines a grounded theory for data collection with an evolutionary design based on constant development and refinement of the generic domain model using three well-known methodological approaches: (a) object-oriented system analysis; (b) task analysis; and (c) prototyping, used in triangulation. This approach seems to be a good solution for the requirements engineering process in this particular case of the health care domain, since the inherent weaknesses of individual methods are reduced, and emergent requirements are easier to elicit. Moreover, the requirements triangulation matrix gives the opportunity to look across the results of all used methods and decide what requirements are critical for system success. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  15. Analysis of a Student-Centered, Self-Paced Pedagogy Style for Teaching Information Systems Courses

    Directory of Open Access Journals (Sweden)

    Sharon Paranto

    2006-12-01

    The entry-level skills for students enrolling in a college-level information systems course can vary widely. This paper analyzes the impact of a "student-centered" pedagogy model, in which students use a self-paced approach for learning the material in an introductory information systems course, with pre-assigned dates for lectures and for assignment/exam deadlines. This new paradigm was implemented in several sections of an introductory information systems course over a two-semester time span. Under the new model, tutorial-style textbooks were used to help students master the material, all other materials were available online, and all exams were given using a hands-on, task-oriented online testing package, which included a multiple-choice/true-false component to test student understanding of the conceptual portion of the course. An anonymous student survey was used to gain student perceptions of the level of learning that took place under the new paradigm, as well as to measure student satisfaction with the course design, and a pre-/post-test was used to provide a measure of student learning.

  16. Measuring the Usability of Augmented Reality e-Learning Systems: A User-Centered Evaluation Approach

    Science.gov (United States)

    Pribeanu, Costin; Balog, Alexandru; Iordache, Dragoş Daniel

    The development of Augmented Reality (AR) systems is creating new challenges and opportunities for the designers of e-learning systems. The mix of real and virtual requires appropriate interaction techniques that have to be evaluated with users in order to avoid usability problems. Formative usability evaluation aims at finding usability problems as early as possible in the development life cycle and is suitable to support the development of such novel interactive systems. This work presents an approach to the user-centered usability evaluation of an e-learning scenario for Biology developed on an Augmented Reality educational platform. The evaluation was carried out during and after a summer school held within the ARiSE research project. The basic idea was to perform usability evaluation twice. In this respect, we conducted user testing with a small number of students during the summer school in order to get fast feedback from users having good knowledge of Biology. Then, we repeated the user testing in different conditions and with a relatively larger number of representative users. In this paper we describe both experiments and compare the usability evaluation results.

  17. Evolution of the Building Management System in the INFN CNAF Tier-1 data center facility.

    Science.gov (United States)

    Ricci, Pier Paolo; Donatelli, Massimo; Falabella, Antonio; Mazza, Andrea; Onofri, Michele

    2017-10-01

    The INFN CNAF Tier-1 data center is composed of two main rooms containing IT resources and four additional locations that host the technology infrastructure providing electrical power and cooling to the facility. Power supply and continuity are ensured by a dedicated room with three 15,000 V to 400 V transformers in a separate part of the principal building and two redundant 1.4 MW diesel rotary uninterruptible power supplies. Cooling is provided by six free-cooling chillers of 320 kW each in an N+2 redundancy configuration. Clearly, considering the complex physical distribution of the technical plants, a detailed Building Management System (BMS) was designed and implemented as part of the original project in order to monitor and collect all the necessary information and to provide alarms in case of malfunctions or major failures. After almost 10 years of service, a revision of the BMS was necessary. In addition, the increasing cost of electrical power is nowadays a strong motivation for improving the energy efficiency of the infrastructure, so the exact calculation of the power usage effectiveness (PUE) metric has become one of the most important factors when optimizing a modern data center. For these reasons, an evolution of the BMS was designed using the Schneider StruxureWare infrastructure hardware and software products. This solution proves to be a natural and flexible development of the previous TAC Vista software, with advantages in ease of use and the possibility to customize data collection and the graphical interface displays. Moreover, the addition of protocols such as open-standard Web services makes it possible to communicate with the BMS from custom user applications and permits the exchange of data and information over the Web between different third-party systems. Specific Web services SOAP requests have been implemented in our Tier-1 monitoring system in
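
    PUE is the ratio of total facility power to IT equipment power, with 1.0 as the ideal. A minimal sketch of the calculation, using hypothetical readings rather than CNAF measurements, is:

      def pue(total_facility_kw, it_equipment_kw):
          """Power usage effectiveness: total facility power divided by IT power (ideal = 1.0)."""
          return total_facility_kw / it_equipment_kw

      # Hypothetical readings a BMS might collect; these are not CNAF figures.
      it_load_kw = 900.0        # servers, storage, network
      cooling_kw = 280.0        # chillers, pumps, air handling
      power_chain_kw = 70.0     # UPS and transformer losses, lighting, etc.

      print(round(pue(it_load_kw + cooling_kw + power_chain_kw, it_load_kw), 2))  # -> 1.39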

  18. New Center Links Earth, Space, and Information Sciences

    Science.gov (United States)

    Aswathanarayana, U.

    2004-05-01

    Broad-based geoscience instruction melding the Earth, space, and information technology sciences has been identified as an effective way to take advantage of the new jobs created by technological innovations in natural resources management. Based on this paradigm, the University of Hyderabad in India is developing a Centre of Earth and Space Sciences that will be linked to the university's super-computing facility. The proposed center will provide the basic science underpinnings for the Earth, space, and information technology sciences; develop new methodologies for the utilization of natural resources such as water, soils, sediments, minerals, and biota; mitigate the adverse consequences of natural hazards; and design innovative ways of incorporating scientific information into the legislative and administrative processes. For these reasons, the ethos and the innovatively designed management structure of the center would be of particular relevance to the developing countries. India holds 17% of the world's human population, and 30% of its farm animals, but only about 2% of the planet's water resources. Water will hence constitute the core concern of the center, because ecologically sustainable, socially equitable, and economically viable management of water resources of the country holds the key to the quality of life (drinking water, sanitation, and health), food security, and industrial development of the country. The center will be focused on interdisciplinary basic and pure applied research that is relevant to the practical needs of India as a developing country. These include, for example, climate prediction, since India is heavily dependent on the monsoon system, and satellite remote sensing of soil moisture, since agriculture is still a principal source of livelihood in India. The center will perform research and development in areas such as data assimilation and validation, and identification of new sensors to be mounted on the Indian meteorological

  19. The National Institutes of Health Clinical Center Digital Imaging Network, Picture Archival and Communication System, and Radiology Information System.

    Science.gov (United States)

    Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V

    2001-06-01

    In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing Fibre Channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admissions, discharges, and transfers (ADT)/demographics, orders, appointment notifications, doctor updates, and results.
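
    The NIH systems' interfaces are not given in code, so the following is only a generic illustration of a client exchanging DICOM traffic with an archive: a C-ECHO (verification) request using the pynetdicom library, with placeholder host, port, and AE titles rather than any real PACS node.

      from pynetdicom import AE

      # Placeholder connection details for a hypothetical PACS node.
      PACS_HOST, PACS_PORT, PACS_AE_TITLE = "pacs.example.org", 11112, "ARCHIVE"

      ae = AE(ae_title="TEST_SCU")
      # Request the Verification service (C-ECHO) by its standard SOP Class UID.
      ae.add_requested_context("1.2.840.10008.1.1")

      assoc = ae.associate(PACS_HOST, PACS_PORT, ae_title=PACS_AE_TITLE)
      if assoc.is_established:
          status = assoc.send_c_echo()
          print("C-ECHO status:", status.Status if status else "no response")
          assoc.release()
      else:
          print("Association with the archive was rejected or timed out")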

  20. Function-centered modeling of engineering systems using the goal tree-success tree technique and functional primitives

    International Nuclear Information System (INIS)

    Modarres, Mohammad; Cheon, Se Woo

    1999-01-01

    Most complex systems are formed through some form of hierarchical evolution, and such systems can therefore best be described through hierarchical frameworks. This paper describes some fundamental attributes of complex physical systems and several hierarchies, such as functional, behavioral, goal/condition, and event hierarchies, and then presents a function-centered approach to system modeling. Based on the function-centered concept, the paper describes the joint goal tree-success tree (GTST) and the master logic diagram (MLD) as a framework for developing models of complex physical systems. A function-based lexicon for classifying the most common elements of engineering systems for use in the GTST-MLD framework is proposed. The classification is based on the physical conservation laws that govern engineering systems. Functional descriptions based on conservation laws provide a simple and rich vocabulary for modeling complex engineering systems.
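
    As a minimal sketch of the hierarchical idea only (the node structure, AND-only decomposition, and example goals below are assumptions for illustration, not the paper's GTST-MLD formalism), a goal tree can be represented as nested nodes whose leaves are supporting functions:

      from dataclasses import dataclass, field

      @dataclass
      class Node:
          """A goal (or function) that is satisfied if all of its children are satisfied."""
          name: str
          children: list = field(default_factory=list)

          def satisfied(self, failed_functions):
              if not self.children:                     # leaf: a supporting function
                  return self.name not in failed_functions
              return all(c.satisfied(failed_functions) for c in self.children)

      # Toy hierarchy: top goal -> sub-goals -> supporting functions (e.g., a cooling system).
      top = Node("maintain core cooling", [
          Node("transport heat", [Node("circulate coolant"), Node("maintain flow path")]),
          Node("reject heat", [Node("run heat exchanger")]),
      ])

      print(top.satisfied(failed_functions={"run heat exchanger"}))   # False
      print(top.satisfied(failed_functions=set()))                    # True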