WorldWideScience

Sample records for integrated hpc reliability

  1. Modeling Energy & Reliability of a CNT based WSN on an HPC Setup

    Directory of Open Access Journals (Sweden)

    Rohit Pathak

    2010-07-01

    Full Text Available We have analyzed the effect of innovations in Nanotechnology on Wireless Sensor Networks (WSN) and have modeled Carbon Nanotube (CNT) based sensor nodes from a device perspective. A WSN model has been programmed in Simulink-MATLAB and a library has been developed. Integration of CNT in WSN for various modules such as sensors, microprocessors, batteries, etc. has been shown. The average energy consumption of the system has also been formulated and its reliability has been treated holistically. A proposition has been put forward on the changes needed in the existing sensor node structure to improve its efficiency and to facilitate and enhance the assimilation of CNT based devices in a WSN. Finally we have commented on the challenges that exist in this technology and described the important factors that need to be considered when calculating reliability. This research will help in the practical implementation of CNT based devices and in the analysis of their key effects on the WSN environment. The work has been executed with the Simulink and Distributed Computing toolboxes of MATLAB. The proposal has been compared to recent developments and past experimental results reported in this field. This attempt to derive the energy consumption and reliability implications will help in the development of real devices using CNT, which is a major hurdle in bringing the success from lab to commercial market. Recent CNT research has been used to build an energy-efficient model which will also aid the development of CAD tools. The library for reliability and energy consumption includes analysis of the various parts of a WSN system constructed from CNT. Nano routing in a CNT system is also implemented with its dependencies. Finally, the computations were executed on an HPC setup and the model showed remarkable speedup.
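
    The abstract above formulates per-node energy consumption for a CNT-based WSN but does not reproduce the model itself. As a rough illustration of the kind of per-round energy bookkeeping such a library performs, here is a minimal Python sketch based on the generic first-order radio model; all constants and the lifetime estimate are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of per-node energy bookkeeping for a WSN node, loosely
# following the standard first-order radio model. All constants are
# illustrative placeholders, not values from the paper.

E_ELEC = 50e-9      # J/bit consumed by transmit/receive electronics (assumed)
EPS_AMP = 100e-12   # J/bit/m^2 transmit amplifier coefficient (assumed)
E_SENSE = 25e-9     # J/bit spent acquiring a sample (assumed)
E_CPU = 5e-9        # J/bit spent processing a sample (assumed)

def node_energy_per_round(bits: int, distance_m: float) -> float:
    """Energy (J) one node spends sensing, processing and transmitting
    `bits` of data to a sink `distance_m` metres away in one duty cycle."""
    e_sense = E_SENSE * bits
    e_cpu = E_CPU * bits
    e_tx = E_ELEC * bits + EPS_AMP * bits * distance_m ** 2
    return e_sense + e_cpu + e_tx

def network_lifetime_rounds(battery_j: float, bits: int, distance_m: float) -> int:
    """Rounds until the battery of the worst-placed node is exhausted."""
    return int(battery_j // node_energy_per_round(bits, distance_m))

if __name__ == "__main__":
    # e.g. a 2 J battery, 4000-bit packets, 80 m to the sink
    print(network_lifetime_rounds(2.0, 4000, 80.0))
```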

  2. Redundancy and Reliability for an HPC Data Centre

    OpenAIRE

    Erhan Yılmaz

    2012-01-01

    Defining a level of redundancy is a strategic question when planning a new data centre, as it will directly impact the entire design of the building as well as the construction and operational costs. It will also affect how to integrate future extension plans into the design. Redundancy is also a key strategic issue when upgrading or retrofitting an existing facility. Redundancy is a central strategic question to any business that relies on data centres for its operation. In th...

  3. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of the most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
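
    The abstract describes the ARC-CE bridge translating authorization and job-submission protocols into SCEAPI's REST calls, but the actual SCEAPI endpoints are not given. The Python sketch below only illustrates the general shape of such a REST job-submission bridge; the base URL, paths, JSON fields and token handling are all hypothetical placeholders, not the real SCEAPI interface.

```python
# Hedged sketch of a REST job-submission bridge of the kind the abstract
# describes: a batch-style job description is translated into a JSON payload
# and submitted over HTTP, then polled. Endpoints and fields are hypothetical.
import time
import requests

BASE_URL = "https://sceapi.example.cn/api"   # placeholder, not the real endpoint
TOKEN = "REPLACE_WITH_TOKEN"                 # obtained out of band

def submit_job(executable: str, args: list[str], cores: int) -> str:
    """Translate a batch-system style job into a REST submission call."""
    payload = {"executable": executable, "arguments": args, "cores": cores}
    resp = requests.post(f"{BASE_URL}/jobs", json=payload,
                         headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    return resp.json()["jobId"]

def wait_for_job(job_id: str, poll_s: int = 60) -> str:
    """Poll the job status until a terminal state is reached."""
    while True:
        resp = requests.get(f"{BASE_URL}/jobs/{job_id}",
                            headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
        resp.raise_for_status()
        state = resp.json()["state"]
        if state in ("FINISHED", "FAILED"):
            return state
        time.sleep(poll_s)
```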

  4. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of the most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  5. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of the most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  6. Simplifying the Access to HPC Resources by Integrating them in the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-06-22

    The computing landscape at KAUST is increasing in complexity. Researchers have access to the 9th fastest supercomputer in the world (Shaheen II) and several other HPC clusters. They work on local Windows, Mac, or Linux workstations. In order to facilitate access to the HPC systems, we have developed interfaces to several research applications that automate input data transfer, job submission and retrieval of results. Users now submit their jobs to the cluster from within the application GUI on their workstation and no longer need to log in to the cluster directly.
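
    As a rough illustration of the automation pattern described above (push input, submit, retrieve results, all from the workstation), here is a minimal Python sketch using plain scp/ssh and a SLURM-style sbatch submission; the host name, paths and scheduler choice are assumptions, not the actual KAUST application interfaces.

```python
# Minimal sketch of the workflow the abstract describes: copy input to the
# cluster, submit a batch job, and fetch the results, all from the user's
# workstation. Host name, paths and the SLURM-style scheduler are assumptions.
import subprocess

HOST = "user@hpc.example.edu"     # placeholder login node
REMOTE_DIR = "/scratch/user/run"  # placeholder working directory

def run(cmd: list[str]) -> str:
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def submit_remote_job(local_input: str, batch_script: str) -> str:
    """Push input files, submit the job, and return the scheduler job id."""
    run(["scp", local_input, batch_script, f"{HOST}:{REMOTE_DIR}/"])
    out = run(["ssh", HOST, f"cd {REMOTE_DIR} && sbatch {batch_script}"])
    return out.strip().split()[-1]          # assumes 'Submitted batch job <id>'

def fetch_results(remote_file: str, local_path: str) -> None:
    """Copy a result file back once the job has finished."""
    run(["scp", f"{HOST}:{REMOTE_DIR}/{remote_file}", local_path])
```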

  7. Integrated system reliability analysis

    DEFF Research Database (Denmark)

    Gintautas, Tomas; Sørensen, John Dalsgaard

    Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...

  8. ATLAS computing on CSCS HPC

    Science.gov (United States)

    Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.

    2015-12-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.

  9. ATLAS computing on CSCS HPC

    CERN Document Server

    Hostettler, Michael Artur; The ATLAS collaboration; Haug, Sigve; Walker, Rodney; Weber, Michele

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations has been implemented to justify the allocation request for this machine.

  10. ATLAS computing on CSCS HPC

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Weber, Michele; Walker, Rodney; Hostettler, Michael Artur

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Further, some GPU acceleration of the Geant4 detector simulations was implemented to justify the allocation request for this machine.

  11. Interactive reliability assessment using an integrated reliability data bank

    International Nuclear Information System (INIS)

    Allan, R.N.; Whitehead, A.M.

    1986-01-01

    The logical structure, techniques and practical application of a computer-aided technique based on a microcomputer using floppy disc Random Access Files is described. This interactive computational technique is efficient if the reliability prediction program is coupled directly to a relevant source of data to create an integrated reliability assessment/reliability data bank system. (DG)

  12. HPC Annual Report 2017

    Energy Technology Data Exchange (ETDEWEB)

    Dennig, Yasmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2017-10-01

    Sandia National Laboratories has a long history of significant contributions to the high performance community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier—propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.

  13. Programming Models in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Shipman, Galen M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-13

    These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.

  14. Integrating reliability analysis and design

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1980-10-01

    This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems

  15. HPC: Rent or Buy

    Science.gov (United States)

    Fredette, Michelle

    2012-01-01

    "Rent or buy?" is a question people ask about everything from housing to textbooks. It is also a question universities must consider when it comes to high-performance computing (HPC). With the advent of Amazon's Elastic Compute Cloud (EC2), Microsoft Windows HPC Server, Rackspace's OpenStack, and other cloud-based services, researchers now have…

  16. Reliability criteria selection for integrated resource planning

    International Nuclear Information System (INIS)

    Ruiu, D.; Ye, C.; Billinton, R.; Lakhanpal, D.

    1993-01-01

    A study was conducted on the selection of a generating system reliability criterion that ensures a reasonable continuity of supply while minimizing the total costs to utility customers. The study was conducted using the Institute of Electrical and Electronics Engineers (IEEE) reliability test system as the study system. The study inputs and results for conditions and load forecast data, new supply resources data, demand-side management resource data, resource planning criterion, criterion value selection, supply side development, integrated resource development, and best criterion values, are tabulated and discussed. Preliminary conclusions are drawn as follows. In the case of integrated resource planning, the selection of the best value for a given type of reliability criterion can be done using methods similar to those used for supply side planning. The reliability criteria values previously used for supply side planning may not be economically justified when integrated resource planning is used. Utilities may have to revise and adopt new, and perhaps lower, supply reliability criteria for integrated resource planning. More complex reliability criteria, such as energy related indices, which take into account the magnitude, frequency and duration of the expected interruptions, are better adapted than the simpler capacity-based reliability criteria such as loss of load expectation. 7 refs., 5 figs., 10 tabs
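
    For readers unfamiliar with the capacity-based reliability criteria mentioned above, the following minimal Python sketch shows how loss-of-load expectation (LOLE) and EENS can be evaluated from a capacity outage probability table built by convolving unit forced-outage rates; the three units and the load levels are illustrative assumptions, not the IEEE RTS data used in the study.

```python
# Minimal sketch of capacity-based reliability indices: a capacity outage
# probability table is built by convolving unit forced-outage rates (FOR),
# then LOLE and EENS are evaluated against a stepped load profile.
# The units and load levels below are illustrative, not RTS data.
from collections import defaultdict

units = [(200.0, 0.05), (150.0, 0.04), (100.0, 0.02)]  # (capacity MW, FOR)

def outage_table(units):
    """Probability distribution of total available capacity."""
    table = {0.0: 1.0}
    for cap, q in units:
        nxt = defaultdict(float)
        for avail, p in table.items():
            nxt[avail + cap] += p * (1.0 - q)   # unit in service
            nxt[avail] += p * q                 # unit on forced outage
        table = dict(nxt)
    return table

def lole_eens(table, loads_mw, hours_per_level):
    """LOLE (h/period) and EENS (MWh/period) over a stepped load profile."""
    lole = eens = 0.0
    for load in loads_mw:
        for avail, p in table.items():
            if avail < load:
                lole += p * hours_per_level
                eens += p * hours_per_level * (load - avail)
    return lole, eens

print(lole_eens(outage_table(units), loads_mw=[300.0, 350.0, 400.0],
                hours_per_level=2920.0))
```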

  17. HPC's Pivot to Data

    Energy Technology Data Exchange (ETDEWEB)

    Parete-Koon, Suzanne [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Caldwell, Blake A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Canon, Richard Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Sciences Network (ESnet); Hick, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Hill, Jason J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Layton, Chris [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Pelfrey, Daniel S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Shipman, Galen M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Nam, Hai Ah [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Zurawski, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Sciences Network (ESnet)

    2014-05-03

    Computer centers such as NERSC and OLCF have traditionally focused on delivering computational capability that enables breakthrough innovation in a wide range of science domains. Accessing that computational power has required services and tools to move the data from input and output to computation and storage. A "pivot to data" is occurring in HPC. Data transfer tools and services that were previously peripheral are becoming integral to scientific workflows. Emerging requirements from high-bandwidth detectors, high-throughput screening techniques, highly concurrent simulations, increased focus on uncertainty quantification, and an emerging open-data policy posture toward published research are among the data-drivers shaping the networks, file systems, databases, and overall compute and data environment. In this paper we explain the pivot to data in HPC through user requirements and the changing resources provided by HPC, with particular focus on data movement. For WAN data transfers we present the results of a study of network performance between centers.

  18. Integrated analysis of hematopoietic differentiation outcomes and molecular characterization reveals unbiased differentiation capacity and minor transcriptional memory in HPC/HSC-iPSCs.

    Science.gov (United States)

    Gao, Shuai; Hou, Xinfeng; Jiang, Yonghua; Xu, Zijian; Cai, Tao; Chen, Jiajie; Chang, Gang

    2017-01-23

    Transcription factor-mediated reprogramming can reset the epigenetics of somatic cells into a pluripotency compatible state. Recent studies show that induced pluripotent stem cells (iPSCs) always inherit starting cell-specific characteristics, called epigenetic memory, which may be advantageous, as directed differentiation into specific cell types is still challenging; however, it also may be unpredictable when uncontrollable differentiation occurs. In consideration of biosafety in disease modeling and personalized medicine, the availability of high-quality iPSCs which lack a biased differentiation capacity and somatic memory could be indispensable. Herein, we evaluate the hematopoietic differentiation capacity and somatic memory state of hematopoietic progenitor and stem cell (HPC/HSC)-derived-iPSCs (HPC/HSC-iPSCs) using a previously established sequential reprogramming system. We found that HPC/HSCs are amenable to being reprogrammed into iPSCs with unbiased differentiation capacity to hematopoietic progenitors and mature hematopoietic cells. Genome-wide analyses revealed that no global epigenetic memory was detectable in HPC/HSC-iPSCs, but only a minor transcriptional memory of HPC/HSCs existed in a specific tetraploid complementation (4 N)-incompetent HPC/HSC-iPSC line. However, the observed minor transcriptional memory had no influence on the hematopoietic differentiation capacity, indicating the reprogramming of the HPC/HSCs was nearly complete. Further analysis revealed the correlation of minor transcriptional memory with the aberrant distribution of H3K27me3. This work provides a comprehensive framework for obtaining high-quality iPSCs from HPC/HSCs with unbiased hematopoietic differentiation capacity and minor transcriptional memory.

  19. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  20. Integrated Reliability and Risk Analysis System (IRRAS)

    International Nuclear Information System (INIS)

    Russell, K.D.; McKay, M.K.; Sattison, M.B.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.

    1992-01-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance

  1. Performance Tuning of Fock Matrix and Two-Electron Integral Calculations for NWChem on Leading HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhan; Austin, Brian M.; De Jong, Wibe A.; Oliker, Leonid; Wright, Nicholas J.; Apra, Edoardo

    2014-10-01

    Attaining performance in the evaluation of two-electron repulsion integrals and the construction of the Fock matrix is of considerable importance to the computational chemistry community. Due to the numerical complexity of these kernels, improving their performance is an increasing challenge given the significant diversity in high-performance computing architectures. In this paper, we present our successful tuning methodology for these important numerical methods on the Cray XE6, the Cray XC30, the IBM BG/Q, as well as the Intel Xeon Phi. Our optimization schemes leverage key architectural features including vectorization and simultaneous multithreading, and result in speedups of up to 2.5x compared with the original implementation.
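
    For context, the quantity assembled from the two-electron repulsion integrals (μν|λσ) is the standard closed-shell Fock matrix; a textbook form is sketched below in LaTeX (the paper's specific integral screening and tiling strategy is not reproduced here).

```latex
% Standard closed-shell Fock-matrix build from the two-electron integrals;
% H^core is the one-electron Hamiltonian and P the density matrix.
F_{\mu\nu} = H^{\mathrm{core}}_{\mu\nu}
  + \sum_{\lambda\sigma} P_{\lambda\sigma}
    \left[ (\mu\nu|\lambda\sigma) - \tfrac{1}{2}(\mu\lambda|\nu\sigma) \right]
```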

  2. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical connections, and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy is provided by background applications residing on alternative routes. Details are provided for the operation of RRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research.

  3. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    2003-01-01

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical connections, and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy is provided by background applications residing on alternative routes. Details are provided for the operation of RRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research.

  4. Limits of reliability for the measurement of integral count

    International Nuclear Information System (INIS)

    Erbeszkorn, L.

    1979-01-01

    A method is presented for the exact and approximate calculation of reliability limits of a measured nuclear integral count. The formulae are applicable under measuring conditions which assure a Poisson distribution of the counts. The coefficients of the approximate formulae for the 90, 95, 98 and 99 per cent reliability levels are given. The exact reliability limits for the 90 per cent reliability level are calculated up to 80 integral counts. (R.J.)
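
    The exact and approximate limits for Poisson-distributed counts described above can be reproduced in outline with the standard chi-square (Garwood) construction and the normal approximation N ± z√N; the short Python sketch below shows both for a 90 per cent level (the paper's own tabulated coefficients are not reproduced).

```python
# Sketch of exact and approximate two-sided limits for a Poisson-distributed
# integral count N. The exact limits use the standard chi-square (Garwood)
# construction; the approximation is N ± z*sqrt(N).
from scipy.stats import chi2, norm

def poisson_limits_exact(n_counts: int, level: float = 0.90):
    alpha = 1.0 - level
    lower = 0.5 * chi2.ppf(alpha / 2.0, 2 * n_counts) if n_counts > 0 else 0.0
    upper = 0.5 * chi2.ppf(1.0 - alpha / 2.0, 2 * (n_counts + 1))
    return lower, upper

def poisson_limits_approx(n_counts: int, level: float = 0.90):
    z = norm.ppf(1.0 - (1.0 - level) / 2.0)
    half = z * n_counts ** 0.5
    return max(n_counts - half, 0.0), n_counts + half

print(poisson_limits_exact(80, 0.90))   # e.g. the 90% limits for 80 counts
print(poisson_limits_approx(80, 0.90))
```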

  5. 2014 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, Barbara [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2014-10-01

    Our commitment is to support you through delivery of an IT environment that provides mission value by transforming the way you use, protect, and access information. We approach this through technical innovation, risk management, and relationships with our workforce, Laboratories leadership, and policy makers nationwide. This second edition of our HPC Annual Report continues our commitment to communicate the details and impact of Sandia’s large-scale computing resources that support the programs associated with our diverse mission areas. A key tenet to our approach is to work with our mission partners to understand and anticipate their requirements and formulate an investment strategy that is aligned with those Laboratories priorities. In doing this, our investments include not only expanding the resources available for scientific computing and modeling and simulation, but also acquiring large-scale systems for data analytics, cloud computing, and Emulytics. We are also investigating new computer architectures in our advanced systems test bed to guide future platform designs and prepare for changes in our code development models. Our initial investments in large-scale institutional platforms that are optimized for Informatics and Emulytics work are serving a diverse customer base. We anticipate continued growth and expansion of these resources in the coming years as the use of these analytic techniques expands across our mission space. If your program could benefit from an investment in innovative systems, please work through your Program Management Unit's Mission Computing Council representatives to engage our teams.

  6. Lightweight HPC beam OMEGA

    Science.gov (United States)

    Sýkora, Michal; Jedlinský, Petr; Komanec, Jan

    2017-09-01

    In the design and construction of precast bridge structures, a general goal is to achieve the maximum possible span length. Often, the weight of individual beams makes them difficult to handle, which may be a limiting factor in achieving the desired span. The design of the OMEGA beam aims to solve part of these problems. It is a thin-walled shell made of prestressed high-performance concrete (HPC) in the shape of an inverted Ω character. The concrete shell with prestressed strands is fitted with a non-stressed tendon already in the casting yard and is more easily transported and installed on site. The shells are subsequently completed with mild steel reinforcement, and the cores are cast in situ together with the deck. The OMEGA beams can also be used as an alternative to steel-concrete composite bridges. Due to the higher production complexity, the OMEGA beam can hardly replace conventional prestressed beams like T or PETRA completely, but it can be a useful alternative for specific construction needs.

  7. An integrated reliability management system for nuclear power plants

    International Nuclear Information System (INIS)

    Kimura, T.; Shimokawa, H.; Matsushima, H.

    1998-01-01

    The responsibility of the Government, utilities and manufacturers in the nuclear field has increased in recent years due to the need for stable operation and high reliability of nuclear power plants. The need to improve reliability applies not only to new plants but also to those now running. Consequently, several measures have been taken to improve reliability. In particular, the plant manufacturers have developed a reliability management system for each phase (planning, construction, maintenance and operation), and these have been integrated as a unified system. This integrated reliability management system for nuclear power plants contains information about plant performance and about failures and incidents which have occurred in the plants. (author)

  8. Integrated reliability condition monitoring and maintenance of equipment

    CERN Document Server

    Osarenren, John

    2015-01-01

    Consider a Viable and Cost-Effective Platform for the Industries of the Future (IOF). Benefit from improved safety, performance, and product deliveries to your customers. Achieve a higher rate of equipment availability, performance, product quality, and reliability. Integrated Reliability: Condition Monitoring and Maintenance of Equipment incorporates reliability engineering and mathematical modeling to help you move toward sustainable development in reliability condition monitoring and maintenance. This text introduces a cost-effective integrated reliability growth monitor, an integrated reliability degradation monitor, technological inheritance coefficient sensors, and a maintenance tool that supplies real-time information for predicting and preventing potential failures of manufacturing processes and equipment. The author highlights five key elements that are essential to any improvement program: improving overall equipment and part effectiveness, quality, and reliability; improving process performance with maint...

  9. Program integration of predictive maintenance with reliability centered maintenance

    International Nuclear Information System (INIS)

    Strong, D.K. Jr; Wray, D.M.

    1990-01-01

    This paper addresses improving the safety and reliability of power plants in a cost-effective manner by integrating the recently developed reliability centered maintenance techniques with the traditional predictive maintenance techniques of nuclear power plants. The topics of the paper include a description of reliability centered maintenance (RCM), enhancing RCM with predictive maintenance, predictive maintenance programs, condition monitoring techniques, performance test techniques, the mid-Atlantic Reliability Centered Maintenance Users Group, test guides and the benefits of shared guide development

  10. Integration of NDE Reliability and Fracture Mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Becker, F. L.; Doctor, S. R.; Heasler, P. G.; Morris, C. J.; Pitman, S. G.; Selby, G. P.; Simonen, F. A.

    1981-03-01

    The Pacific Northwest Laboratory is conducting a four-phase program for measuring and evaluating the effectiveness and reliability of in-service inspection (ISI) performed on the primary system piping welds of commercial light water reactors (LWRs). Phase I of the program is complete. A survey was made of the state of practice for ultrasonic ISI of LWR primary system piping welds. Fracture mechanics calculations were made to establish required nondestructive testing sensitivities. In general, it was found that fatigue flaws less than 25% of wall thickness would not grow to failure within an inspection interval of 10 years. However, in some cases failure could occur considerably faster. Statistical methods for predicting and measuring the effectiveness and reliability of ISI were developed and will be applied in the "Round Robin Inspections" of Phase II. Methods were also developed for the production of flaws typical of those found in service. Samples fabricated by these methods will be used in Phase II to test inspection effectiveness and reliability. Measurements were made of the influence of flaw characteristics (i.e., roughness, tightness, and orientation) on inspection reliability. These measurements, as well as the predictions of a statistical model for inspection reliability, indicate that current reporting and recording sensitivities are inadequate.

  11. HPC - Platforms Penta Chart

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Angelina Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-08

    Strategy, Planning, Acquiring - very large scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by 3 years. Procurement can take another year or more. Integration - after acquisition, machines must be integrated into the computing environments at LANL, including connection to scalable storage via large scale storage networking and assurance of correct and secure operations. Management and Utilization - ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.

  12. An integrated reliability-based design optimization of offshore towers

    International Nuclear Information System (INIS)

    Karadeniz, Halil; Togan, Vedat; Vrouwenvelder, Ton

    2009-01-01

    After recognizing the uncertainty in the parameters such as material, loading, geometry and so on in contrast with the conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful to perform an economical design implementation, which includes a reliability analysis and an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis both for optimization and for reliability. The efficiency of the RBDO system depends on the mentioned numerical algorithms. In this work, an integrated algorithms system is proposed to implement the RBDO of the offshore towers, which are subjected to the extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of towers are as follows: (a) a structural analysis program, SAPOS, (b) an optimization program, SQP and (c) a reliability analysis program based on FORM. A demonstration of an example tripod tower under the reliability constraints based on limit states of the critical stress, buckling and the natural frequency is presented.
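
    As a rough illustration of the FORM step used inside such an RBDO loop, the Python sketch below implements the standard Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space for a generic limit state; the two-variable linear limit state is purely illustrative and is not the tower model or the SAPOS/SQP coupling described above.

```python
# Hedged sketch of a FORM analysis: the Hasofer-Lind / Rackwitz-Fiessler
# iteration in standard normal space for a limit state g(u) <= 0 (failure).
# The example limit state below is illustrative only.
import numpy as np
from scipy.stats import norm

def form_beta(g, u0, tol=1e-6, max_iter=100, h=1e-6):
    """Return the reliability index beta and the design point u*."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        grad = np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                         for e in np.eye(len(u))])     # central-difference gradient
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# Example: linear limit state in two standard normal variables (illustrative)
g = lambda u: 3.0 - u[0] - 0.5 * u[1]
beta, u_star = form_beta(g, u0=[0.0, 0.0])
print(beta, norm.cdf(-beta))   # reliability index and failure probability
```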

  13. An integrated reliability-based design optimization of offshore towers

    Energy Technology Data Exchange (ETDEWEB)

    Karadeniz, Halil [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)], E-mail: h.karadeniz@tudelft.nl; Togan, Vedat [Department of Civil Engineering, Karadeniz Technical University, Trabzon (Turkey); Vrouwenvelder, Ton [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)

    2009-10-15

    After recognizing the uncertainty in the parameters such as material, loading, geometry and so on in contrast with the conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful to perform an economical design implementation, which includes a reliability analysis and an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis both for optimization and for reliability. The efficiency of the RBDO system depends on the mentioned numerical algorithms. In this work, an integrated algorithms system is proposed to implement the RBDO of the offshore towers, which are subjected to the extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of towers are as follows: (a) a structural analysis program, SAPOS, (b) an optimization program, SQP and (c) a reliability analysis program based on FORM. A demonstration of an example tripod tower under the reliability constraints based on limit states of the critical stress, buckling and the natural frequency is presented.

  14. Integrated approach to economical, reliable, safe nuclear power production

    International Nuclear Information System (INIS)

    1982-06-01

    An Integrated Approach to Economical, Reliable, Safe Nuclear Power Production is the latest evolution of a concept which originated with the Defense-in-Depth philosophy of the nuclear industry. As Defense-in-Depth provided a framework for viewing physical barriers and equipment redundancy, the Integrated Approach gives a framework for viewing nuclear power production in terms of functions and institutions. In the Integrated Approach, four plant Goals are defined (Normal Operation, Core and Plant Protection, Containment Integrity and Emergency Preparedness) with the attendant Functional and Institutional Classifications that support them. The Integrated Approach provides a systematic perspective that combines the economic objective of reliable power production with the safety objective of consistent, controlled plant operation

  15. DOD HPC Insights. Spring 2012

    Science.gov (United States)

    2012-04-01

    petascale and exascale HPC concepts has led to new research thrusts including power efficiency. Now, power efficiency is an important area of expertise... exascale supercomputers. MHPCC is also working on the generation side of the energy equation. We have deployed a 100 KW research solar array... exascale supercomputers. Within the HPCMP, energy costs take an increasing amount of the limited budget that could be better used for service

  16. ADVANCED COMPRESSOR ENGINE CONTROLS TO ENHANCE OPERATION, RELIABILITY AND INTEGRITY

    Energy Technology Data Exchange (ETDEWEB)

    Gary D. Bourn; Jess W. Gingrich; Jack A. Smith

    2004-03-01

    This document is the final report for the "Advanced Compressor Engine Controls to Enhance Operation, Reliability, and Integrity" project. SwRI conducted this project for DOE in conjunction with Cooper Compression, under DOE contract number DE-FC26-03NT41859. This report addresses an investigation of engine controls for integral compressor engines and the development of control strategies that implement closed-loop NOx emissions feedback.

  17. Engineering systems reliability, safety, and maintenance an integrated approach

    CERN Document Server

    Dhillon, B S

    2017-01-01

    Today, engineering systems are an important element of the world economy and each year billions of dollars are spent to develop, manufacture, operate, and maintain various types of engineering systems around the globe. Many of these systems are highly sophisticated and contain millions of parts. For example, a Boeing jumbo 747 is made up of approximately 4.5 million parts including fasteners. Needless to say, reliability, safety, and maintenance of systems such as this have become more important than ever before.  Global competition and other factors are forcing manufacturers to produce highly reliable, safe, and maintainable engineering products. Therefore, there is a definite need for the reliability, safety, and maintenance professionals to work closely during design and other phases. Engineering Systems Reliability, Safety, and Maintenance: An Integrated Approach eliminates the need to consult many different and diverse sources in the hunt for the information required to design better engineering syste...

  18. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    MERCIER, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier

    2017-01-01

    Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of their core concepts' differences. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  19. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    OpenAIRE

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-01-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was f...

  20. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    Science.gov (United States)

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less than more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. Integration of nondestructive examination reliability and fracture mechanics

    International Nuclear Information System (INIS)

    Doctor, S.R.; Bates, D.J.; Charlot, L.A.

    1985-01-01

    The primary pressure boundaries (pressure vessels and piping) of nuclear power plants are in-service inspected (ISI) according to the rules of ASME Boiler and Pressure Vessel Code, Section XI. Ultrasonic techniques are normally used for these inspections, which are periodically performed on a sampling of welds. The Integration of Nondestructive Examination (NDE) Reliability and Fracture Mechanics (FM) Program at Pacific Northwest Laboratory was established to determine the reliability of current ISI techniques and to develop recommendations that will ensure a suitably high inspection reliability. The objectives of this NRC program are to: 1) determine the reliability of ultrasonic ISI performed on commercial light-water reactor primary systems; 2) using probabilistic FM analysis, determine the impact of NDE unreliability on system safety and determine the level of inspection reliability required to ensure a suitably low failure probability; 3) evaluate the degree of reliability improvement that could be achieved using improved and advanced NDE techniques; and 4) based on material properties, service conditions, and NDE uncertainties, formulate recommended revisions to ASME Code, Section XI, and Regulatory Requirements needed to ensure suitably low failure probabilities

  2. HPC Test Results Analysis with Splunk

    Energy Technology Data Exchange (ETDEWEB)

    Green, Jennifer Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2015-04-21

    This PowerPoint presentation details Los Alamos National Laboratory’s (LANL) outstanding computing division. LANL’s high performance computing (HPC) aims at having the first platform large and fast enough to accommodate resolved 3D calculations for full scale end-to-end calculations. Strategies for managing LANL’s HPC division are also discussed.

  3. Achieving High Reliability Operations Through Multi-Program Integration

    Energy Technology Data Exchange (ETDEWEB)

    Holly M. Ashley; Ronald K. Farris; Robert E. Richards

    2009-04-01

    Over the last 20 years the Idaho National Laboratory (INL) has adopted a number of operations and safety-related programs, each of which has periodically taken its turn in the limelight. As new programs have come along there has been natural competition for resources, focus and commitment. In the last few years, the INL has made real progress in integrating all these programs and is starting to realize important synergies. Contributing to this integration are both collaborative individuals and an emerging shared vision and goal of the INL fully maturing in its high reliability operations. This goal is so powerful because the concept of high reliability operations (and the resulting organizations) is a masterful amalgam and orchestrator of the best of all the participating programs (i.e. conduct of operations, behavior based safety, human performance, voluntary protection, quality assurance, and integrated safety management). This paper is a brief recounting of the lessons learned, thus far, at the INL in bringing previously competing programs into harmony under the goal (umbrella) of seeking to perform regularly as a high reliability organization. In addition to a brief diagram-illustrated historical review, the authors will share the INL's primary successes (things already effectively stopped or started) and the gaps yet to be bridged.

  4. Integrating generation and transmission networks reliability for unit commitment solution

    International Nuclear Information System (INIS)

    Jalilzadeh, S.; Shayeghi, H.; Hadadian, H.

    2009-01-01

    This paper presents a new method which integrates generation and transmission network reliability for the solution of the unit commitment (UC) problem. In order to obtain a more accurate assessment of the system reserve requirement, the unavailability of transmission lines is taken into account in addition to the unavailability of generation units. In this way, evaluation of the required spinning reserve (SR) capacity is performed by applying reliability constraints based on loss of load probability (LOLP) and expected energy not supplied (EENS) indices. Calculation of these parameters is accomplished by employing a novel procedure based on linear programming, which also minimizes them to achieve the optimum level of SR capacity and consequently a cost-benefit reliability-constrained UC schedule. In addition, a powerful solution technique called the 'integer-coded genetic algorithm (ICGA)' is used for the solution of the proposed method. Numerical results on the IEEE reliability test system show that the consideration of transmission network unavailability has an important influence on the reliability indices of the UC schedules.
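
    As a rough illustration of the reliability indices used above, the Python sketch below evaluates LOLP and EENS for a single hour of a committed schedule by enumerating two-state outage combinations of both generating units and transmission lines; the capacities, limits and probabilities are illustrative assumptions, and the paper's LP-based minimization is not reproduced.

```python
# Hedged sketch of a reliability check for one hour of a UC schedule:
# committed units and transmission lines are treated as independent two-state
# components, all states are enumerated, and LOLP / EENS are evaluated.
# Capacities, line limits and probabilities are illustrative.
from itertools import product

committed_units = [(200.0, 0.03), (150.0, 0.05)]   # (MW, outage prob. over lead time)
lines = [(250.0, 0.01), (250.0, 0.01)]             # (transfer limit MW, unavailability)
load_mw = 300.0

def lolp_eens(units, lines, load):
    lolp = eens = 0.0
    for u_state, l_state in product(product([0, 1], repeat=len(units)),
                                    product([0, 1], repeat=len(lines))):
        p = 1.0
        gen = flow_cap = 0.0
        for (cap, q), up in zip(units, u_state):
            p *= (1.0 - q) if up else q
            gen += cap if up else 0.0
        for (lim, q), up in zip(lines, l_state):
            p *= (1.0 - q) if up else q
            flow_cap += lim if up else 0.0
        served = min(gen, flow_cap)         # delivery limited by surviving lines
        if served < load:
            lolp += p
            eens += p * (load - served)
    return lolp, eens

print(lolp_eens(committed_units, lines, load_mw))
```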

  5. Leveraging HPC resources for High Energy Physics

    International Nuclear Information System (INIS)

    O'Brien, B; Washbrook, A; Walker, R

    2014-01-01

    High Performance Computing (HPC) supercomputers provide unprecedented computing power for a diverse range of scientific applications. The most powerful supercomputers now deliver petaflop peak performance with the expectation of 'exascale' technologies available in the next five years. More recent HPC facilities use x86-based architectures managed by Linux-based operating systems which could potentially allow unmodified HEP software to be run on supercomputers. There is now a renewed interest from both the LHC experiments and the HPC community to accommodate data analysis and event simulation production on HPC facilities. This study provides an outline of the challenges faced when incorporating HPC resources for HEP software by using the HECToR supercomputer as a demonstrator.

  6. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to the context. In many contemporary human reliability analysis (HRA) methods, this is not sufficiently taken into account. The aim of this article is to argue that integration between probabilistic and psychological approaches to human reliability should be attempted. This is achieved, first, by adopting methods that adequately reflect the essential features of the process control activity, and secondly, by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated in the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints of the activity, by giving information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis.

  7. Plant Reliability - an Integrated System for Management (PR-ISM)

    International Nuclear Information System (INIS)

    Aukeman, M.C.; Leininger, E.G.; Carr, P.

    1984-01-01

    The Toledo Edison Company, located in Toledo, Ohio, United States of America, recently implemented a comprehensive maintenance management information system for the Davis-Besse Nuclear Power Station. The system is called PR-ISM, meaning Plant Reliability - An Integrated System for Management. PR-ISM provides the tools needed by station management to effectively plan and control maintenance and other plant activities. The PR-ISM system as it exists today consists of four integrated computer applications: equipment data base maintenance, maintenance work order control, administrative activity tracking, and technical specification compliance. PR-ISM is designed as an integrated on-line system and incorporates strong human factors features. PR-ISM provides each responsible person information to do his job on a daily basis and to look ahead towards future events. It goes beyond 'after the fact' reporting. In this respect, PR-ISM is an 'interactive' control system which: captures work requirements and commitments as they are identified, provides accurate and up-to-date status immediately to those who need it, simplifies paperwork and reduces the associated time delays, provides the information base for work management and reliability analysis, and improves productivity by replacing clerical tasks and consolidating maintenance activities. The functional and technical features of PR-ISM, the experience of Toledo Edison during the first year of operation, and the factors which led to the success of the development project are highlighted. (author)

  8. Management systems for high reliability organizations. Integration and effectiveness; Managementsysteme fuer Hochzuverlaessigkeitsorganisationen. Integration und Wirksamkeit

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, Michael

    2015-03-09

    The scope of the thesis is the development of a method for the improvement of efficient integrated management systems for high reliability organizations (HRO). A comprehensive analysis of severe accident prevention is performed. Severe accident management, mitigation measures and business continuity management are not included. High reliability organizations are complex and potentially dynamic organizational forms that can be inherently dangerous, like nuclear power plants, offshore platforms, chemical facilities, large ships or large aircraft. A recursive generic management system model (RGM) was developed based on the following factors: systemic and cybernetic aspects; integration of different management fields; high decision quality; integration of efficient methods of safety and risk analysis; integration of human reliability aspects; and effectiveness evaluation and improvement.

  9. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

    The purpose of this paper is to present the innovative reliability modeling of Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by PETROBRAS Gas and Power Department and Det Norske Veritas. It was carried out with the objective of evaluating security of supply of 2010 gas network design that was conceived to connect Brazilian Northeast and Southeast regions. To provide best in class analysis, state of the art software was used to quantify the availability and the efficiency of the overall network and its individual components.

  10. A simple reliability block diagram method for safety integrity verification

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2007-01-01

    IEC 61508 requires safety integrity verification for safety-related systems to be a necessary procedure in the safety life cycle. PFDavg (the average probability of failure on demand) must be calculated to verify the safety integrity level (SIL). Since IEC 61508-6 does not give detailed explanations of the definitions and PFDavg calculations for its examples, it is difficult for common reliability or safety engineers to understand when they use the standard as guidance in practice. A method using reliability block diagrams is investigated in this study in order to provide a clear and feasible way of calculating PFDavg and to help those who take IEC 61508-6 as their guidance. The method finds the mean down times (MDTs) of both the channel and the voted group first, and then PFDavg. The calculated results for various voted groups are compared with those in IEC 61508 Part 6 and Ref. [Zhang T, Long W, Sato Y. Availability of systems with self-diagnostic components-applying Markov model to IEC 61508-6. Reliab Eng System Saf 2003;80(2):133-41]. An interesting outcome can be realized from the comparison. Furthermore, although differences in the MDT of voted groups exist between IEC 61508-6 and this paper, the PFDavg values of the voted groups are comparatively close. With the detailed description given, the RBD method presented can be applied to quantitative SIL verification, showing its similarity to the method in IEC 61508-6.
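
    For orientation, the kind of PFDavg figures being compared can be produced with the well-known simplified equations for a few voted groups; the Python sketch below neglects common-cause failures, diagnostic coverage and repair time, so it illustrates the calculation rather than reproducing the IEC 61508-6 tables or the RBD method of the paper.

```python
# Hedged sketch of PFDavg for a few voted groups using the simplified
# low-demand equations (common-cause failures, diagnostics and repair times
# are neglected, so the numbers are illustrative only).

def pfd_avg_1oo1(lambda_du: float, t_proof_h: float) -> float:
    return lambda_du * t_proof_h / 2.0

def pfd_avg_1oo2(lambda_du: float, t_proof_h: float) -> float:
    return (lambda_du * t_proof_h) ** 2 / 3.0

def pfd_avg_2oo3(lambda_du: float, t_proof_h: float) -> float:
    return (lambda_du * t_proof_h) ** 2

def sil_from_pfd(pfd: float) -> int:
    """Map PFDavg to a low-demand SIL band (values below 1E-5 are
    treated as SIL 4 here for simplicity)."""
    for sil, upper in ((4, 1e-4), (3, 1e-3), (2, 1e-2), (1, 1e-1)):
        if pfd < upper:
            return sil
    return 0

lam, t1 = 2e-6, 8760.0   # dangerous undetected failure rate (1/h), 1-year proof test
for name, f in (("1oo1", pfd_avg_1oo1), ("1oo2", pfd_avg_1oo2), ("2oo3", pfd_avg_2oo3)):
    pfd = f(lam, t1)
    print(name, f"{pfd:.2e}", "SIL", sil_from_pfd(pfd))
```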

  11. Technology success: Integration of power plant reliability and effective maintenance

    International Nuclear Information System (INIS)

    Ferguson, K.

    2008-01-01

    The nuclear power generation sector has a tradition of utilizing technology as a key attribute for advancement. Companies that own, manage, and operate nuclear power plants can be expected to continue to rely on technology as a vital element of success. Inherent in the operations of the nuclear power industry in many parts of the world is the close connection between the efficiency of power plant operations and successful business survival. The relationship among power plant availability, reliability of systems and components, and viability of the enterprise is more evident than ever. Technology decisions need to be made in ways that reflect business strategies and work processes as well as the needs of stakeholders and authorities. Such rigor is needed to address overarching concerns such as power plant life extension and license renewal, new plant orders, outage management, plant safety, inventory management, etc. With regard to power plant reliability in particular, the prudent leveraging of technology as a key to future success is vital. A dominant concern is effective asset management as physical plant assets age. Many plants are in, or are entering, a situation in which system and component design lives and margins are converging such that failure threats can come into play with increasing frequency. Wisely selected technologies can be vital to identifying emerging threats to the reliable performance of key plant features and to initiating effective maintenance actions and investments that can sustain or enhance current reliability in a cost-effective manner. This attention to detail is vital to investment in new plants as well. This paper and presentation will address (1) specific technology successes in place at power plants, including nuclear, that integrate attention to attaining high plant reliability and effective maintenance actions, as well as (2) complementary actions that maximize technology success. In addition, the range of benefits that accrue as a result of

  12. ICAROUS - Integrated Configurable Algorithms for Reliable Operations Of Unmanned Systems

    Science.gov (United States)

    Consiglio, María; Muñoz, César; Hagen, George; Narkawicz, Anthony; Balachandran, Swee

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This paper describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and contingency control functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.
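
    ICAROUS itself relies on formally verified algorithms that are not reproduced in this record; purely as an illustration of what a constraint conformance monitor checks, the sketch below tests whether a hypothetical aircraft position stays inside a keep-in geofence (a ray-casting point-in-polygon test plus an altitude band). All names and values are assumptions, not ICAROUS code.

    # Illustrative sketch (not ICAROUS code): keep-in geofence conformance check.

    def point_in_polygon(x, y, polygon):
        """Ray-casting test; polygon is a list of (x, y) vertices."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def conforms(position, fence, alt_min, alt_max):
        x, y, alt = position
        return point_in_polygon(x, y, fence) and alt_min <= alt <= alt_max

    fence = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]  # hypothetical fence
    print(conforms((50.0, 40.0, 80.0), fence, 0.0, 120.0))   # True: inside fence and altitude band
    print(conforms((150.0, 40.0, 80.0), fence, 0.0, 120.0))  # False: outside the horizontal fence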

  13. Using HPC within an operational forecasting configuration

    Science.gov (United States)

    Jagers, H. R. A.; Genseberger, M.; van den Broek, M. A. F. H.

    2012-04-01

    Various natural disasters are caused by high-intensity events; for example, extreme rainfall can cause major damage in river catchments within a short time, and storms can cause havoc in coastal areas. To assist emergency response teams in operational decisions, it is important to have reliable information and predictions as soon as possible. This starts before the event by providing early warnings about imminent risks and estimated probabilities of possible scenarios. In the context of various applications worldwide, Deltares has developed an open and highly configurable forecasting and early warning system: Delft-FEWS. Finding the right balance between simulation time (and hence prediction lead time) and simulation accuracy and detail is challenging. Model resolution may be crucial to capture certain critical physical processes. Uncertainty in forcing conditions may require running large ensembles of models; data assimilation techniques may require additional ensembles and repeated simulations. The computational demand is steadily increasing and data streams are becoming bigger. Using HPC resources is a logical step; in different settings Delft-FEWS has been configured to take advantage of available distributed computational resources to improve and accelerate the forecasting process (e.g. Montanari et al., 2006). We will illustrate the system by means of a couple of practical applications including the real-time dynamic forecasting of wind-driven waves, flow of water, and wave overtopping at dikes of Lake IJssel and neighboring lakes in the center of The Netherlands. Montanari et al., 2006. Development of an ensemble flood forecasting system for the Po river basin, First MAP D-PHASE Scientific Meeting, 6-8 November 2006, Vienna, Austria.
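
    Delft-FEWS itself is not reproduced here; as a generic sketch of how such a system can exploit distributed resources, the snippet below dispatches an ensemble of forecast members onto the available cores and summarizes the spread. The run_member function is a placeholder standing in for a real hydrodynamic model run, and all numbers are assumptions.

    # Generic sketch (not Delft-FEWS code): run an ensemble of forecast members in parallel.
    from concurrent.futures import ProcessPoolExecutor
    import random

    def run_member(member_id, forcing_perturbation):
        """Placeholder for a real model run; returns a synthetic peak water level [m]."""
        random.seed(member_id)
        return 2.0 + forcing_perturbation + random.gauss(0.0, 0.1)

    def run_ensemble(n_members=50):
        perturbations = [0.01 * m for m in range(n_members)]
        with ProcessPoolExecutor() as pool:
            peaks = sorted(pool.map(run_member, range(n_members), perturbations))
        return {"p10": peaks[int(0.1 * n_members)],
                "median": peaks[n_members // 2],
                "p90": peaks[int(0.9 * n_members)]}

    if __name__ == "__main__":
        print(run_ensemble())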

  14. Big Data and HPC: A Happy Marriage

    KAUST Repository

    Mehmood, Rashid

    2016-01-25

    International Data Corporation (IDC) defines Big Data technologies as “a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data produced every day, by enabling high velocity capture, discovery, and/or analysis”. High Performance Computing (HPC) most generally refers to “the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business”. Big data platforms are built primarily considering the economics and capacity of the system for dealing with the 4V characteristics of data. HPC traditionally has been more focussed on the speed of digesting (computing) the data. For these reasons, the two domains (HPC and Big Data) have developed their own paradigms and technologies. However, recently, these two have grown fond of each other. HPC technologies are needed by Big Data to deal with the ever increasing Vs of data in order to forecast and extract insights from existing and new domains, faster, and with greater accuracy. Increasingly more data is being produced by scientific experiments from areas such as bioscience, physics, and climate, and therefore, HPC needs to adopt data-driven paradigms. Moreover, there are synergies between them with unimaginable potential for developing new computing paradigms, solving long-standing grand challenges, and making new explorations and discoveries. Therefore, they must get married to each other. In this talk, we will trace the HPC and big data landscapes through time including their respective technologies, paradigms and major applications areas. Subsequently, we will present the factors that are driving the convergence of the two technologies, the synergies between them, as well as the benefits of their convergence to the biosciences field. The opportunities and challenges of the

  15. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.
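
    Since the abstract points to open-source tooling for assessing reliability, here is a minimal sketch of one common internal-consistency estimate, Cronbach's alpha, computed with NumPy from a respondents-by-items score matrix; the data are synthetic and the function name is an assumption.

    # Minimal sketch: Cronbach's alpha from a respondents x items score matrix.
    import numpy as np

    def cronbach_alpha(scores):
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                          # number of items
        item_vars = scores.var(axis=0, ddof=1)       # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    data = [[3, 4, 3, 4], [2, 2, 3, 2], [4, 5, 4, 5], [1, 2, 1, 2], [3, 3, 4, 3]]
    print(round(cronbach_alpha(data), 3))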

  16. Project Final Report: HPC-Colony II

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Terry R [ORNL; Kale, Laxmikant V [University of Illinois, Urbana-Champaign; Moreira, Jose [IBM T. J. Watson Research Center

    2013-11-01

    This report recounts the HPC Colony II Project which was a computer science effort funded by DOE's Advanced Scientific Computing Research office. The project included researchers from ORNL, IBM, and the University of Illinois at Urbana-Champaign. The topic of the effort was adaptive system software for extreme scale parallel machines. A description of findings is included.

  17. Approach for an integral power transformer reliability model

    NARCIS (Netherlands)

    Schijndel, van A.; Wouters, P.A.A.F.; Steennis, E.F.; Wetzer, J.M.

    2012-01-01

    In electrical power transmission and distribution networks power transformers represent a crucial group of assets both in terms of reliability and investments. In order to safeguard the required quality at acceptable costs, decisions must be based on a reliable forecast of future behaviour. The aim

  18. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

    Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize preventive maintenance of passive structures such as pipes and support, based on risk. In particular, reliability performances of components need to be determined; it is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, involving two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  19. Bringing ATLAS production to HPC resources. A case study with SuperMuc and Hydra

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter; Walker, Rodney [LMU Muenchen (Germany); Kennedy, John; Mazzaferro, Luca [RZG Garching (Germany); Kluth, Stefan [Max-Planck-Institut fuer Physik, Muenchen (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    The possible usage of Supercomputer systems or HPC resources by ATLAS is now becoming viable due to the changing nature of these systems and it is also very attractive due to the need for increasing amounts of simulated data. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. The corresponding need for simulated data might potentially exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This contribution presents the results of two projects undertaken by LMU/LRZ and MPP/RZG to use the supercomputer facilities SuperMuc (LRZ) and Hydra (RZG). Both are Linux based supercomputers in the 100 k CPU-core category. The integration of such HPC resources into the ATLAS production system poses many challenges. Firstly, established techniques and features of standard WLCG operation are prohibited or much restricted on HPC systems, e.g. Grid middleware, software installation, outside connectivity, etc. Secondly, efficient use of available resources requires massive multi-core jobs, back-fill submission and check-pointing. We discuss the customization of these components and the strategies for HPC usage as well as possibilities for future directions.

  20. Fault tolerance and reliability in integrated ship control

    DEFF Research Database (Denmark)

    Nielsen, Jens Frederik Dalsgaard; Izadi-Zamanabadi, Roozbeh; Schiøler, Henrik

    2002-01-01

    Various strategies for achieving fault tolerance in large scale control systems are discussed. The positive and negative impacts of distribution through network communication are presented. The ATOMOS framework for standardized reliable marine automation is presented along with the corresponding...

  1. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Gamel, Anton Josef; The ATLAS collaboration

    2017-01-01

    The shared HPC cluster NEMO at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. This concept allows both data analysis and production to run on the HPC host system, which is connected to the existing Tier2/Tier3 infrastructure. Schedulers of the two clusters were integrated in a dynamic, on-demand way. An automatically generated, fully functional virtual machine image provides access to the local user environment. The performance in the virtualized environment is evaluated for typical High-Energy Physics applications.

  2. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Schnoor, Ulrike; The ATLAS collaboration

    2017-01-01

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. The talk describes the concept and implementation of virtualizing the ATLAS software environment to run both data analysis and production on the HPC host system which is connected to the existing Tier-3 infrastructure. Main challenges include the integration into the NEMO and Tier-3 schedulers in a dynamic, on-demand way, the scalability of the OpenStack infrastructure, as well as the automatic generation of a fully functional virtual machine image providing access to the local user environment, the dCache storage element and the parallel file sys...

  3. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  4. Review of methods for the integration of reliability and design engineering

    International Nuclear Information System (INIS)

    Reilly, J.T.

    1978-03-01

    A review of methods for the integration of reliability and design engineering was carried out to establish a reliability program philosophy, an initial set of methods, and procedures to be used by both the designer and reliability analyst. The report outlines a set of procedures which implements a philosophy that requires increased involvement by the designer in reliability analysis. Discussions of each method reviewed include examples of its application

  5. Integrated Reliability-Based Optimal Design of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1987-01-01

    In conventional optimal design of structural systems the weight or the initial cost of the structure is usually used as objective function. Further, the constraints require that the stresses and/or strains at some critical points have to be less than some given values. Finally, all variables......-based optimal design is discussed. Next, an optimal inspection and repair strategy for existing structural systems is presented. An optimization problem is formulated , where the objective is to minimize the expected total future cost of inspection and repair subject to the constraint that the reliability...... value. The reliability can be measured from an element and/or a systems point of view. A number of methods to solve reliability-based optimization problems has been suggested, see e.g. Frangopol [I]. Murotsu et al. (2], Thoft-Christensen & Sørensen (3] and Sørensen (4). For structures where...

  6. HPC Access Using KVM over IP

    Science.gov (United States)

    2007-06-08

    Lightwave VDE /200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required...development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as...visualization capabilities of a DoD High-Performance Computing facility, many advanced features are necessary. TARDEC-HPC's SBIR with IP Video Systems

  7. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since the states in which failures occur are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to the drawbacks of the Markovian model for steady-state reliability computations and of the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implication purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
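
    The record does not give the model details, so the following is only a hedged sketch of the Markovian half of such an approach: a two-state (operating/failed) continuous-time chain for a single repairable unit, solved for steady-state availability from assumed failure and repair rates.

    # Hedged sketch: steady-state availability of a repairable unit modelled as a
    # two-state continuous-time Markov chain (assumed rates).
    import numpy as np

    def steady_state(q):
        """Solve pi @ Q = 0 with sum(pi) = 1 for generator matrix Q."""
        n = q.shape[0]
        a = np.vstack([q.T, np.ones(n)])   # append the normalisation constraint
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(a, b, rcond=None)
        return pi

    lam, mu = 0.002, 0.1                   # assumed failure and repair rates [1/h]
    Q = np.array([[-lam,  lam],            # state 0 = operating
                  [  mu,  -mu]])           # state 1 = failed
    print("steady-state availability:", steady_state(Q)[0])   # equals mu / (lam + mu)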

  8. COMPOSE-HPC: A Transformational Approach to Exascale

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E [ORNL; Allan, Benjamin A. [Sandia National Laboratories (SNL); Armstrong, Robert C. [Sandia National Laboratories (SNL); Chavarria-Miranda, Daniel [Pacific Northwest National Laboratory (PNNL); Dahlgren, Tamara L. [Lawrence Livermore National Laboratory (LLNL); Elwasif, Wael R [ORNL; Epperly, Tom [Lawrence Livermore National Laboratory (LLNL); Foley, Samantha S [ORNL; Hulette, Geoffrey C. [Sandia National Laboratories (SNL); Krishnamoorthy, Sriram [Pacific Northwest National Laboratory (PNNL); Prantl, Adrian [Lawrence Livermore National Laboratory (LLNL); Panyala, Ajay [Louisiana State University; Sottile, Matthew [Galois, Inc.

    2012-04-01

    The goal of the COMPOSE-HPC project is to 'democratize' tools for automatic transformation of program source code so that it becomes tractable for the developers of scientific applications to create and use their own transformations reliably and safely. This paper describes our approach to this challenge, the creation of the KNOT tool chain, which includes tools for the creation of annotation languages to control the transformations (PAUL), to perform the transformations (ROTE), and optimization and code generation (BRAID), which can be used individually and in combination. We also provide examples of current and future uses of the KNOT tools, which include transforming code to use different programming models and environments, providing tests that can be used to detect errors in software or its execution, as well as composition of software written in different programming languages, or with different threading patterns.

  9. Addressing Uniqueness and Unison of Reliability and Safety for a Better Integration

    Science.gov (United States)

    Huang, Zhaofeng; Safie, Fayssal

    2016-01-01

    Over time, it has been observed that Safety and Reliability have not been clearly differentiated, which leads to confusion, inefficiency, and, sometimes, counter-productive practices in executing each of these two disciplines. It is imperative to address this situation to help Reliability and Safety disciplines improve their effectiveness and efficiency. The paper poses an important question to address, "Safety and Reliability - Are they unique or unisonous?" To answer the question, the paper reviewed several most commonly used analyses from each of the disciplines, namely, FMEA, reliability allocation and prediction, reliability design involvement, system safety hazard analysis, Fault Tree Analysis, and Probabilistic Risk Assessment. The paper pointed out uniqueness and unison of Safety and Reliability in their respective roles, requirements, approaches, and tools, and presented some suggestions for enhancing and improving the individual disciplines, as well as promoting the integration of the two. The paper concludes that Safety and Reliability are unique, but compensating each other in many aspects, and need to be integrated. Particularly, the individual roles of Safety and Reliability need to be differentiated, that is, Safety is to ensure and assure the product meets safety requirements, goals, or desires, and Reliability is to ensure and assure maximum achievability of intended design functions. With the integration of Safety and Reliability, personnel can be shared, tools and analyses have to be integrated, and skill sets can be possessed by the same person with the purpose of providing the best value to a product development.

  10. Novel technique for reliability testing of silicon integrated circuits

    NARCIS (Netherlands)

    Le Minh, P.; Wallinga, Hans; Woerlee, P.H.; van den Berg, Albert; Holleman, J.

    2001-01-01

    We propose a simple, inexpensive technique with high resolution to identify the weak spots in integrated circuits by means of a non-destructive photochemical process in which photoresist is used as the photon detection tool. The experiment was done to localize the breakdown link of thin silicon

  11. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov (United States)

    The system comprises a head node, a login node (WinHPC02) and worker/compute nodes. The head node acts as the file, DNS, and license server. The login node, WinHPC02, is where users connect to access the cluster. Node 03 has dual Intel Xeon E5530 processors, and the nodes run Windows Server 2008 R2 HPC Edition.

  12. An Integrated Approach to Establish Validity and Reliability of Reading Tests

    Science.gov (United States)

    Razi, Salim

    2012-01-01

    This study presents the process of developing a reading test and establishing its reliability and validity through an integrative approach, since conventional reliability and validity measures reveal the difficulty of a reading test only superficially. In this respect, analysing the vocabulary frequency of the test is regarded as a more eligible way…

  13. Development of the integrated system reliability analysis code MODULE

    International Nuclear Information System (INIS)

    Han, S.H.; Yoo, K.J.; Kim, T.W.

    1987-01-01

    The major components in a system reliability analysis are the determination of cut sets, importance measures, and uncertainty analysis. Various computer codes have been used for these purposes. For example, SETS and FTAP are used to determine cut sets; Importance for importance calculations; and Sample, CONINT, and MOCUP for uncertainty analysis. Problems arise when these codes are run one after another and their inputs and outputs are not linked, which can result in errors when preparing input for each code. The code MODULE was developed to carry out the above calculations simultaneously without linking inputs and outputs to other codes. MODULE can also prepare input for SETS for the case of a large fault tree that cannot be handled by MODULE. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples are selected and the results and computation times are compared with those of SETS, FTAP, CONINT, and MOCUP on both Cyber 170-875 and IBM PC/AT. The two examples are fault trees of the auxiliary feedwater system (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, and 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate the cut sets, importances, and uncertainties in a single run with little increase in computing time over other codes and that it can be used on personal computers
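
    MODULE's internals are not described in the record; the sketch below only illustrates the kind of quantities such codes produce, using the min-cut upper bound for the top-event probability and Fussell-Vesely importance measures computed from a given list of minimal cut sets with hypothetical basic-event probabilities.

    # Illustrative sketch: quantify a top event from minimal cut sets and compute
    # Fussell-Vesely importances (all probabilities hypothetical).
    from math import prod

    cut_sets = [{"A", "B"}, {"A", "C"}, {"D"}]          # minimal cut sets
    p = {"A": 0.01, "B": 0.02, "C": 0.03, "D": 0.001}   # basic-event probabilities

    def mcub(cuts):
        """Min-cut upper bound: 1 - prod(1 - P(Ci))."""
        result = 1.0
        for cs in cuts:
            result *= 1.0 - prod(p[e] for e in cs)
        return 1.0 - result

    top = mcub(cut_sets)
    for event in sorted(p):
        contribution = mcub([cs for cs in cut_sets if event in cs])
        print(event, "Fussell-Vesely importance =", round(contribution / top, 3))
    print("top event probability ~", top)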

  14. Delivering LHC software to HPC compute elements

    CERN Document Server

    Blomer, Jakob; Hardi, Nikola; Popescu, Radu

    2017-01-01

    In recent years, there was a growing interest in improving the utilization of supercomputers by running applications of experiments at the Large Hadron Collider (LHC) at CERN when idle cores cannot be assigned to traditional HPC jobs. At the same time, the upcoming LHC machine and detector upgrades will produce some 60 times higher data rates and challenge LHC experiments to use so far untapped compute resources. LHC experiment applications are tailored to run on high-throughput computing resources and they have a different anatomy than HPC applications. LHC applications comprise a core framework that allows hundreds of researchers to plug in their specific algorithms. The software stacks easily accumulate to many gigabytes for a single release. New releases are often produced on a daily basis. To facilitate the distribution of these software stacks to world-wide distributed computing resources, LHC experiments use a purpose-built, global, POSIX file system, the CernVM File System. CernVM-FS pre-processes dat...

  15. WinHPC System Policies | High-Performance Computing | NREL

    Science.gov (United States)

    The WinHPC login node (WinHPC02) is intended to allow users with approved access to connect to the cluster; applications can also be run from the login node. There is a single login node for this system, so any applications

  16. IRRAS, Integrated Reliability and Risk Analysis System for PC

    International Nuclear Information System (INIS)

    Russell, K.D.

    1995-01-01

    1 - Description of program or function: IRRAS 4.16 is a program developed for the purpose of performing those functions necessary to create and analyze a complete Probabilistic Risk Assessment (PRA). This program includes functions to allow the user to create event trees and fault trees, to define accident sequences and basic event failure data, to solve system and accident sequence fault trees, to quantify cut sets, and to perform uncertainty analysis on the results. Also included in this program are features to allow the analyst to generate reports and displays that can be used to document the results of an analysis. Since this software is a very detailed technical tool, the user of this program should be familiar with PRA concepts and the methods used to perform these analyses. 2 - Method of solution: IRRAS 4.16 is written entirely in MODULA-2 and uses an integrated commercial graphics package to interactively construct and edit fault trees. The fault tree solving methods used are industry-recognized top-down algorithms. For quantification, the program uses standard methods to propagate the failure information through the generated cut sets. 3 - Restrictions on the complexity of the problem: Due to the complexity and variety of ways a fault tree can be defined, it is difficult to define limits on the complexity of the problem solved by this software. It is, however, capable of solving substantial fault trees due to its efficient methods. At this time, the software can efficiently solve problems as large as other software currently used on mainframe computers. Does not include source code
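
    IRRAS's own algorithms are not reproduced in this record; the sketch below only shows the general idea of a top-down (MOCUS-style) expansion of a small fault tree into minimal cut sets, with a hypothetical gate structure and event names.

    # Illustrative top-down (MOCUS-style) expansion of a fault tree into minimal cut sets.
    gates = {
        "TOP": ("AND", ["G1", "G2"]),   # hypothetical tree: TOP = G1 AND G2
        "G1":  ("OR",  ["A", "B"]),
        "G2":  ("OR",  ["B", "C"]),
    }

    def mocus(gates, top):
        paths = [[top]]
        while any(node in gates for path in paths for node in path):
            new_paths = []
            for path in paths:
                gate = next((n for n in path if n in gates), None)
                if gate is None:
                    new_paths.append(path)
                    continue
                kind, inputs = gates[gate]
                rest = [n for n in path if n != gate]
                if kind == "AND":                      # AND: all inputs stay in the same row
                    new_paths.append(rest + list(inputs))
                else:                                  # OR: one new row per input
                    new_paths.extend(rest + [inp] for inp in inputs)
            paths = new_paths
        sets = {frozenset(p) for p in paths}
        return sorted((s for s in sets if not any(o < s for o in sets)), key=len)

    print([sorted(cs) for cs in mocus(gates, "TOP")])   # [['B'], ['A', 'C']]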

  17. End-to-end experiment management in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M [Los Alamos National Laboratory; Kroiss, Ryan R [Los Alamos National Laboratory; Torrez, Alfred [Los Alamos National Laboratory; Wingate, Meghan [Los Alamos National Laboratory

    2010-01-01

    Experiment management in any domain is challenging. There is a perpetual feedback loop cycling through planning, execution, measurement, and analysis. The lifetime of a particular experiment can be limited to a single cycle although many require myriad more cycles before definite results can be obtained. Within each cycle, a large number of subexperiments may be executed in order to measure the effects of one or more independent variables. Experiment management in high performance computing (HPC) follows this general pattern but also has three unique characteristics. One, computational science applications running on large supercomputers must deal with frequent platform failures which can interrupt, perturb, or terminate running experiments. Two, these applications typically run in parallel using MPI as their communication medium. Three, there is typically a scheduling system (e.g. Condor, Moab, SGE, etc.) acting as a gate-keeper for the HPC resources. In this paper, we introduce LANL Experiment Management (LEM), an experiment management framework simplifying all four phases of experiment management. LEM simplifies experiment planning by allowing the user to describe their experimental goals without having to fully construct the individual parameters for each task. To simplify execution, LEM dispatches the subexperiments itself, thereby freeing the user from remembering the often arcane methods for interacting with the various scheduling systems. LEM provides transducers for experiments that automatically measure and record important information about each subexperiment; these transducers can easily be extended to collect additional measurements specific to each experiment. Finally, experiment analysis is simplified by providing a general database visualization framework that allows users to quickly and easily interact with their measured data.
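
    LEM's actual interface is not shown in the record; the snippet below merely illustrates the planning step it describes, expanding a declaration of independent variables into the individual subexperiment parameter sets. The variable names are hypothetical.

    # Generic sketch of the planning phase: expand declared independent variables
    # into concrete subexperiment parameter sets.
    from itertools import product

    independent_vars = {                      # hypothetical experiment description
        "nodes": [16, 32, 64],
        "stripe_size_kb": [64, 1024],
        "io_pattern": ["n-to-1", "n-to-n"],
    }

    def plan(variables):
        names = list(variables)
        for values in product(*(variables[n] for n in names)):
            yield dict(zip(names, values))

    subexperiments = list(plan(independent_vars))
    print(len(subexperiments), "subexperiments, e.g.", subexperiments[0])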

  18. Reliability evaluation methodologies for ensuring container integrity of stored transuranic (TRU) waste

    International Nuclear Information System (INIS)

    Smith, K.L.

    1995-06-01

    This report provides methodologies for providing defensible estimates of expected transuranic waste storage container lifetimes at the Radioactive Waste Management Complex. These methodologies can be used to estimate transuranic waste container reliability (for integrity and degradation) and as an analytical tool to optimize waste container integrity. Container packaging and storage configurations, which directly affect waste container integrity, are also addressed. The methodologies presented provide a means for demonstrating Resource Conservation and Recovery Act waste storage requirements

  19. Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability.

    Science.gov (United States)

    De Martino, Benedetto; Bobadilla-Suarez, Sebastian; Nouguchi, Takao; Sharot, Tali; Love, Bradley C

    2017-06-21

    How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence. SIGNIFICANCE STATEMENT The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence. Copyright © 2017 De Martino et al.
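
    The exact model fitted in the study is not reproduced here; the sketch below shows the generic Bayesian computation the abstract appeals to, combining a Gaussian prior belief about value with a social signal weighted by its reliability (inverse variance). All numbers are illustrative.

    # Generic sketch: reliability-weighted (precision-weighted) Bayesian update.

    def bayes_update(prior_mean, prior_var, social_mean, social_var):
        w_prior = 1.0 / prior_var     # precision of the initial belief
        w_social = 1.0 / social_var   # reliability of the social signal
        post_var = 1.0 / (w_prior + w_social)
        post_mean = post_var * (w_prior * prior_mean + w_social * social_mean)
        return post_mean, post_var

    # A reliable review (low variance) moves the estimate more than a noisy one.
    print(bayes_update(prior_mean=5.0, prior_var=4.0, social_mean=8.0, social_var=1.0))
    print(bayes_update(prior_mean=5.0, prior_var=4.0, social_mean=8.0, social_var=16.0))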

  20. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-03-18

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  1. HPC Cloud Applied To Lattice Optimization

    International Nuclear Information System (INIS)

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-01-01

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  2. HPC4Energy Final Report : GE Energy

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Steven G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Van Zandt, Devin T. [GE Energy Consulting, Schenectady, NY (United States); Thomas, Brian [GE Energy Consulting, Schenectady, NY (United States); Mahmood, Sajjad [GE Energy Consulting, Schenectady, NY (United States); Woodward, Carol S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-02-25

    Power system planning tools are being used today to simulate systems that are far larger and more complex than just a few years ago. Advances in renewable technologies and more pervasive control technology are driving planning engineers to analyze an increasing number of scenarios and system models with much more detailed network representations. Although the speed of individual CPUs has increased roughly according to Moore's Law, the requirements for advanced models, increased system sizes, and larger sensitivities have outstripped CPU performance. This computational dilemma has reached a critical point and the industry needs to develop the technology to accurately model the power system of the future. The hpc4energy incubator program provided a unique opportunity to leverage the HPC resources available to LLNL and the power systems domain expertise of GE Energy to enhance the GE Concorda PSLF software. Well over 500 users worldwide, including all of the major California electric utilities, rely on Concorda PSLF software for their power flow and dynamics studies. This pilot project demonstrated that the GE Concorda PSLF software can perform contingency analysis in a massively parallel environment to significantly reduce the time to results. An analysis with 4,127 contingencies that would take 24 days on a single core was reduced to 24 minutes when run on 4,217 cores. A secondary goal of this project was to develop and test modeling techniques that will expand the computational capability of PSLF to efficiently deal with system sizes greater than 150,000 buses. Toward this goal, the matrix reordering implementation was sped up 9.5 times by optimizing the code and introducing threading.

  3. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The used prototype is based on a system-on-chip Samsung Exynos 5 with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  4. Human reliability analysis of performing tasks in plants based on fuzzy integral

    International Nuclear Information System (INIS)

    Washio, Takashi; Kitamura, Yutaka; Takahashi, Hideaki

    1991-01-01

    The effective improvement of the human working conditions in nuclear power plants might be a solution for the enhancement of the operation safety. The human reliability analysis (HRA) gives a methodological basis of the improvement based on the evaluation of human reliability under various working conditions. This study investigates some difficulties of the human reliability analysis using conventional linear models and recent fuzzy integral models, and provides some solutions to the difficulties. The following practical features of the provided methods are confirmed in comparison with the conventional methods: (1) Applicability to various types of tasks (2) Capability of evaluating complicated dependencies among working condition factors (3) A priori human reliability evaluation based on a systematic task analysis of human action processes (4) A conversion scheme to probability from indices representing human reliability. (author)
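
    The paper's own fuzzy integral model is not given in the record; as a hedged illustration, the sketch below evaluates a Choquet integral over hypothetical working-condition factor scores under a fuzzy measure that encodes interactions between factors. Factor names, scores and the measure are all assumptions.

    # Illustrative sketch: Choquet fuzzy integral over working-condition factor scores in [0, 1].
    scores = {"workload": 0.6, "interface_quality": 0.8, "time_pressure": 0.4}

    # Hypothetical monotone fuzzy measure g (g(empty set) = 0, g(all factors) = 1).
    g = {
        frozenset(): 0.0,
        frozenset({"workload"}): 0.3,
        frozenset({"interface_quality"}): 0.4,
        frozenset({"time_pressure"}): 0.2,
        frozenset({"workload", "interface_quality"}): 0.6,
        frozenset({"workload", "time_pressure"}): 0.7,
        frozenset({"interface_quality", "time_pressure"}): 0.5,
        frozenset(scores): 1.0,
    }

    def choquet(scores, g):
        items = sorted(scores, key=scores.get)   # factors in ascending order of score
        total, previous = 0.0, 0.0
        for i, item in enumerate(items):
            coalition = frozenset(items[i:])     # factors scoring at least this much
            total += (scores[item] - previous) * g[coalition]
            previous = scores[item]
        return total

    print(round(choquet(scores, g), 3))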

  5. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, and boundary conditions. Reliability methods measure the structural safety condition and determine the optimal design parameter combination based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach for minimizing structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information into the uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.

  6. Addressing Unison and Uniqueness of Reliability and Safety for Better Integration

    Science.gov (United States)

    Huang, Zhaofeng; Safie, Fayssal

    2015-01-01

    For a long time, both in theory and in practice, safety and reliability have not been clearly differentiated, which leads to confusion, inefficiency, and sometimes counter-productive practices in executing each of these two disciplines. It is imperative to address the uniqueness and the unison of these two disciplines to help both disciplines become more effective and to promote a better integration of the two for enhancing safety and reliability in our products as an overall objective. There are two purposes of this paper. First, it will investigate the uniqueness and unison of each discipline and discuss the interrelationship between the two for awareness and clarification. Second, after clearly understanding the unique roles and interrelationship between the two in a product design and development life cycle, we offer suggestions to enhance the disciplines with distinguished and focused roles, to better integrate the two, and to improve unique sets of skills and tools of reliability and safety processes. From the uniqueness aspect, the paper identifies and discusses the respective uniqueness of reliability and safety from their roles, accountability, nature of requirements, technical scopes, detailed technical approaches, and analysis boundaries. It is misleading to equate unreliable to unsafe, since a safety hazard may or may not be related to the component, sub-system, or system functions, which are primarily what reliability addresses. Similarly, failing-to-function may or may not lead to hazard events. Examples will be given in the paper from aerospace, defense, and consumer products to illustrate the uniqueness and differences between reliability and safety. From the unison aspect, the paper discusses what the commonalities between reliability and safety are, and how these two disciplines are linked, integrated, and supplemented with each other to accomplish the customer requirements and product goals. In addition to understanding the uniqueness in

  7. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual load-bearing situation. The direct integration method, which starts from the definition in reliability theory, is easy to understand, but there are still mathematical difficulties in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks: neural network A is used to learn the integrand function, and neural network B is used to simulate the original function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
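
    The dual-network construction itself is not reproduced here; the sketch below only illustrates the underlying direct-integration target, Pf = P[g(X) <= 0], estimated by crude Monte Carlo for a linear limit state g = R - S with independent normal variables, where the exact answer Phi(-beta) is available for comparison. All distribution parameters are illustrative.

    # Sketch: Monte Carlo estimate of the failure probability for g = R - S,
    # compared with the exact result Phi(-beta) for independent normal R and S.
    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)
    mu_r, sd_r = 200.0, 20.0      # resistance (illustrative)
    mu_s, sd_s = 150.0, 15.0      # load effect (illustrative)

    n = 1_000_000
    r = rng.normal(mu_r, sd_r, n)
    s = rng.normal(mu_s, sd_s, n)
    pf_mc = np.mean(r - s <= 0.0)

    beta = (mu_r - mu_s) / sqrt(sd_r ** 2 + sd_s ** 2)
    pf_exact = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))   # standard normal CDF at -beta

    print("beta = %.2f, exact Pf = %.2e, Monte Carlo Pf = %.2e" % (beta, pf_exact, pf_mc))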

  8. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage, unlike operational reliability; there may exist initial failures that are normally neglected in storage reliability models. In this paper, a new integrated technique is proposed to estimate and predict the storage reliability of products with possible initial failures: a non-parametric measure based on E-Bayesian estimates of the current failure probabilities is combined with a parametric measure based on the exponential reliability function. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the storage failure rate. The proposed method takes into consideration that reliability test data of storage products, including units not examined before and during the storage process, are available to provide more accurate estimates of both the initial failure probability and the storage failure probability. When storage reliability predictions, which are the main concern in this field, are to be made, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method. Furthermore, a detailed comparison between the proposed and traditional methods, examining the rationality of the storage reliability assessment and prediction, is investigated. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying production quality.
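
    The E-Bayesian estimator itself is not reproduced here; the sketch below shows only the parametric step the abstract describes, fitting R(t) = R0 * exp(-lambda * t), i.e. an initial reliability R0 below one plus a constant storage failure rate, to non-parametric reliability estimates at the test times via a log-linear least-squares fit. The data are synthetic.

    # Sketch of the parametric step: fit R(t) = R0 * exp(-lam * t) to
    # non-parametric reliability estimates (synthetic data).
    import numpy as np

    t = np.array([0.5, 1.0, 2.0, 3.0, 5.0])            # storage times [years]
    r_hat = np.array([0.97, 0.96, 0.93, 0.91, 0.86])   # non-parametric reliability estimates

    # log R(t) = log R0 - lam * t  ->  ordinary least squares on (t, log r_hat)
    slope, intercept = np.polyfit(t, np.log(r_hat), 1)
    r0, lam = np.exp(intercept), -slope

    print("initial reliability R0 ~ %.3f, storage failure rate ~ %.4f per year" % (r0, lam))
    print("predicted R(10 years) ~", round(r0 * np.exp(-lam * 10.0), 3))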

  9. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Integrated Reliability and Risk Analysis System (IRRAS) reference manual. Volume 2

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification to report generation. Version 1.0 of the IRRAS program was released in February of 1987. Since then, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 5.0 and is the subject of this Reference Manual. Version 5.0 of IRRAS provides the same capabilities as earlier versions, adds the ability to perform location transformations and seismic analysis, and provides enhancements to the user interface as well as improved algorithm performance. Additionally, version 5.0 contains new alphanumeric fault tree and event tree features used for event tree rules, recovery rules, and end state partitioning.

  10. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested, within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

  11. Standard high-reliability integrated circuit logic packaging. [for deep space tracking stations

    Science.gov (United States)

    Slaughter, D. W.

    1977-01-01

    A family of standard, high-reliability hardware used for packaging digital integrated circuits is described. The design transition from early prototypes to production hardware is covered and future plans are discussed. Interconnection techniques are described, as well as connectors and related hardware available at both the microcircuit packaging and mainframe levels. General applications information is also provided.

  12. Integration of external estimated breeding values and associated reliabilities using correlations among traits and effects

    NARCIS (Netherlands)

    Vandenplas, J.; Colinet, F.G.; Glorieux, G.; Bertozzi, C.; Gengler, N.

    2015-01-01

    Based on a Bayesian view of linear mixed models, several studies showed the possibilities to integrate estimated breeding values (EBV) and associated reliabilities (REL) provided by genetic evaluations performed outside a given evaluation system into this genetic evaluation. Hereafter, the term

  13. The clinical phenotype of hereditary versus sporadic prostate cancer: HPC definition revisited

    NARCIS (Netherlands)

    Cremers, R.G.H.M.; Aben, K.K.H.; Oort, I.M. van; Sedelaar, J.P.M.; Vasen, H.F.A.; Vermeulen, S.H.; Kiemeney, L.A.L.M.

    2016-01-01

    BACKGROUND: The definition of hereditary prostate cancer (HPC) is based on family history and age at onset. Intuitively, HPC is a serious subtype of prostate cancer but there are only limited data on the clinical phenotype of HPC. Here, we aimed to compare the prognosis of HPC to the sporadic form

  14. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the faultless function of large scale integration (LSI) and very large scale integration (VLSI) circuits. The article provides a comparative analysis of the factors which determine the faultless operation of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and program for analysing the fault rate in LSI and VLSI circuits.

  15. Reliability and integrity management program for PBMR helium pressure boundary components - HTR2008-58036

    International Nuclear Information System (INIS)

    Fleming, K. N.; Gamble, R.; Gosselin, S.; Fletcher, J.; Broom, N.

    2008-01-01

    The purpose of this paper is to present the results of a study to establish strategies for the reliability and integrity management (RIM) of passive metallic components for the PBMR. The RIM strategies investigated include design elements, leak detection and testing approaches, and non-destructive examinations. Specific combinations of strategies are determined to be necessary and sufficient to achieve target reliability goals for passive components. This study recommends a basis for the RIM program for the PBMR Demonstration Power Plant (DPP) and provides guidance for the development by the American Society of Mechanical Engineers (ASME) of RIM requirements for Modular High Temperature Gas-Cooled Reactors (MHRs). (authors)

  16. Reliability assessment of distribution system with the integration of renewable distributed generation

    International Nuclear Information System (INIS)

    Adefarati, T.; Bansal, R.C.

    2017-01-01

    Highlights: • Addresses impacts of renewable DG on the reliability of the distribution system. • Multi-objective formulation for maximizing the cost saving with integration of DG. • Uses Markov model to study the stochastic characteristics of the major components. • The investigation is done using modified RBTS bus test distribution system. • Proposed approach is useful for electric utilities to enhance the reliability. - Abstract: Recent studies have shown that renewable energy resources will contribute substantially to future energy generation owing to the rapid depletion of fossil fuels. Wind and solar energy resources are major sources of renewable energy that have the ability to reduce the energy crisis and the greenhouse gases emitted by the conventional power plants. Reliability assessment is one of the key indicators to measure the impact of the renewable distributed generation (DG) units in the distribution networks and to minimize the cost that is associated with power outage. This paper presents a comprehensive reliability assessment of the distribution system that satisfies the consumer load requirements with the penetration of wind turbine generator (WTG), electric storage system (ESS) and photovoltaic (PV). A Markov model is proposed to access the stochastic characteristics of the major components of the renewable DG resources as well as their influence on the reliability of a conventional distribution system. The results obtained from the case studies have demonstrated the effectiveness of using WTG, ESS and PV to enhance the reliability of the conventional distribution system.
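
    The paper's own reliability model is not reproduced in the record; as a minimal illustration of customer-oriented distribution reliability indices, the sketch below computes SAIFI, SAIDI and expected energy not supplied (EENS) from hypothetical load-point failure rates, outage durations, customer counts and average loads.

    # Minimal illustration: reliability indices for a small distribution feeder
    # (all numbers hypothetical).
    load_points = [
        # failure rate [1/yr], outage duration [h], customers, average load [kW]
        {"lam": 0.20, "r": 4.0, "customers": 500, "load_kw": 800.0},
        {"lam": 0.15, "r": 3.0, "customers": 300, "load_kw": 450.0},
        {"lam": 0.30, "r": 6.0, "customers": 200, "load_kw": 300.0},
    ]

    n_total = sum(lp["customers"] for lp in load_points)
    saifi = sum(lp["lam"] * lp["customers"] for lp in load_points) / n_total
    saidi = sum(lp["lam"] * lp["r"] * lp["customers"] for lp in load_points) / n_total
    eens = sum(lp["lam"] * lp["r"] * lp["load_kw"] for lp in load_points)   # kWh/yr

    print("SAIFI = %.3f interruptions/customer/yr" % saifi)
    print("SAIDI = %.3f hours/customer/yr" % saidi)
    print("EENS  = %.0f kWh/yr" % eens)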

  17. Reliable electricity. The effects of system integration and cooperative measures to make it work

    International Nuclear Information System (INIS)

    Hagspiel, Simeon; Koeln Univ.

    2017-01-01

    We investigate the effects of system integration for reliability of supply in regional electricity systems along with cooperative measures to support it. Specifically, we set up a model to contrast the benefits from integration through statistical balancing (i.e., a positive externality) with the risk of cascading outages (a negative externality). The model is calibrated with a comprehensive dataset comprising 28 European countries on a high spatial and temporal resolution. We find that positive externalities from system integration prevail, and that cooperation is key to meet reliability targets efficiently. To enable efficient solutions in a non-marketed environment, we formulate the problem as a cooperative game and study different rules to allocate the positive and negative effects to individual countries. Strikingly, we find that without a mechanism, the integrated solution is unstable. In contrast, proper transfer payments can be found to make all countries better off in full integration, and the Nucleolus is identified as a particularly promising candidate. The rule could be used as a basis for compensation payments to support the successful integration and cooperation of electricity systems.

  18. Reliable electricity. The effects of system integration and cooperative measures to make it work

    Energy Technology Data Exchange (ETDEWEB)

    Hagspiel, Simeon [Koeln Univ. (Germany). Energiewirtschaftliches Inst.; Koeln Univ. (Germany). Dept. of Economics

    2017-12-15

    We investigate the effects of system integration for reliability of supply in regional electricity systems along with cooperative measures to support it. Specifically, we set up a model to contrast the benefits from integration through statistical balancing (i.e., a positive externality) with the risk of cascading outages (a negative externality). The model is calibrated with a comprehensive dataset comprising 28 European countries on a high spatial and temporal resolution. We find that positive externalities from system integration prevail, and that cooperation is key to meet reliability targets efficiently. To enable efficient solutions in a non-marketed environment, we formulate the problem as a cooperative game and study different rules to allocate the positive and negative effects to individual countries. Strikingly, we find that without a mechanism, the integrated solution is unstable. In contrast, proper transfer payments can be found to make all countries better off in full integration, and the Nucleolus is identified as a particularly promising candidate. The rule could be used as a basis for compensation payments to support the successful integration and cooperation of electricity systems.

  19. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    Science.gov (United States)

    Noppeney, Uta

    2018-01-01

    Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
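
    The maximum likelihood estimation rule referred to in the abstract has a standard closed form: each cue is weighted by its reliability (inverse variance). The sketch below implements that textbook formula; the locations and noise levels are illustrative and are not taken from the study.

```python
# Standard maximum-likelihood (reliability-weighted) cue fusion; the signal
# locations and noise standard deviations below are illustrative only.
import numpy as np

def mle_fusion(x_aud, sigma_aud, x_vis, sigma_vis):
    """Fuse auditory/visual location estimates weighted by reliability (1/variance)."""
    w_aud = (1 / sigma_aud**2) / (1 / sigma_aud**2 + 1 / sigma_vis**2)
    w_vis = 1 - w_aud
    fused = w_aud * x_aud + w_vis * x_vis
    fused_sigma = np.sqrt(1 / (1 / sigma_aud**2 + 1 / sigma_vis**2))
    return fused, fused_sigma, w_aud, w_vis

# Example: the more reliable visual cue dominates the fused spatial estimate.
fused, sigma, w_a, w_v = mle_fusion(x_aud=10.0, sigma_aud=8.0, x_vis=2.0, sigma_vis=2.0)
print(f"fused location = {fused:.2f}, fused SD = {sigma:.2f}, weights = ({w_a:.2f}, {w_v:.2f})")
```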

  20. A Distributed Python HPC Framework: ODIN, PyTrilinos, & Seamless

    Energy Technology Data Exchange (ETDEWEB)

    Grant, Robert [Enthought, Inc., Austin, TX (United States)]

    2015-11-23

    Under this grant, three significant software packages were developed or improved, all with the goal of improving the ease-of-use of HPC libraries. The first component is a Python package, named DistArray (originally named Odin), that provides a high-level interface to distributed array computing. This interface is based on the popular and widely used NumPy package and is integrated with the IPython project for enhanced interactive parallel distributed computing. The second Python package is the Distributed Array Protocol (DAP) that enables separate distributed array libraries to share arrays efficiently without copying or sending messages. If a distributed array library supports the DAP, it is then automatically able to communicate with any other library that also supports the protocol. This protocol allows DistArray to communicate with the Trilinos library via PyTrilinos, which was also enhanced during this project. A third package, PyTrilinos, was extended to support distributed structured arrays (in addition to the unstructured arrays of its original design), allow more flexible distributed arrays (i.e., the restriction to double precision data was lifted), and implement the DAP. DAP support includes both exporting the protocol so that external packages can use distributed Trilinos data structures, and importing the protocol so that PyTrilinos can work with distributed data from external packages.

  1. BEAM: A computational workflow system for managing and modeling material characterization data in HPC environments

    Energy Technology Data Exchange (ETDEWEB)

    Lingerfelt, Eric J [ORNL]; Endeve, Eirik [ORNL]; Ovchinnikov, Oleg S [ORNL]; Borreguero Calvo, Jose M [ORNL]; Park, Byung H [ORNL]; Archibald, Richard K [ORNL]; Symons, Christopher T [ORNL]; Kalinin, Sergei V [ORNL]; Messer, Bronson [ORNL]; Shankar, Mallikarjun [ORNL]; Jesse, Stephen [ORNL]

    2016-01-01

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales and in many spectroscopic modes; with the rise of multimodal acquisition systems and the associated processing capability, the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges, best summarized as the need for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides materials scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation via an intuitive, cross-platform client user interface. This framework delivers authenticated, push-button execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing the converged compute-and-data infrastructure at Oak Ridge National Laboratory's (ORNL) Compute and Data Environment for Science (CADES) and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF). In this work we address the underlying HPC needs for characterization in the materials science community, elaborate on how BEAM's design and infrastructure tackle those needs, and present a small subset of user cases where scientists utilized BEAM across a broad range of analytical techniques and analysis modes.

  2. A Bayesian reliability evaluation method with integrated accelerated degradation testing and field information

    International Nuclear Information System (INIS)

    Wang, Lizhi; Pan, Rong; Li, Xiaoyang; Jiang, Tongmin

    2013-01-01

    Accelerated degradation testing (ADT) is a common approach in reliability prediction, especially for products with high reliability. However, the laboratory conditions of ADT often differ from the field conditions; thus, to predict field failures, one needs to calibrate the predictions made using ADT data. In this paper a Bayesian evaluation method is proposed to integrate ADT data from the laboratory with failure data from the field. Calibration factors are introduced to capture the difference between the lab and field conditions so as to predict a product's actual field reliability more accurately. The information fusion and statistical inference procedure is carried out through a Bayesian approach and Markov chain Monte Carlo methods. The proposed method is demonstrated with two examples and a sensitivity analysis of the prior distribution assumptions
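
    A heavily simplified sketch of the calibration idea (not the authors' model): a single calibration factor links the lab-predicted mean life to field life and is inferred from a few field failure times with a random-walk Metropolis sampler. The lifetime distribution, prior and all numbers are assumptions made for illustration.

```python
# Much-simplified sketch of Bayesian lab-to-field calibration (illustrative
# assumptions throughout): field lifetimes ~ Exponential(mean = k * mu_lab),
# lognormal prior on the calibration factor k, random-walk Metropolis sampler.
import numpy as np

rng = np.random.default_rng(0)
mu_lab = 5000.0                                            # mean life (h) predicted from ADT
field_times = np.array([3100.0, 4200.0, 2800.0, 5100.0])   # hypothetical field failures (h)

def log_post(k):
    if k <= 0:
        return -np.inf
    mean = k * mu_lab
    loglik = np.sum(-np.log(mean) - field_times / mean)    # exponential lifetimes
    logprior = -0.5 * (np.log(k) / 0.5) ** 2               # lognormal(0, 0.5) prior on k
    return loglik + logprior

samples, k = [], 1.0
for _ in range(20000):
    prop = k + 0.2 * rng.standard_normal()                 # symmetric random-walk proposal
    if rng.random() < np.exp(min(0.0, log_post(prop) - log_post(k))):
        k = prop
    samples.append(k)

k_post = np.array(samples[5000:])                          # drop burn-in
print(f"posterior mean calibration factor: {k_post.mean():.2f}")
print(f"calibrated field mean life: {k_post.mean() * mu_lab:.0f} h")
```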

  3. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Science.gov (United States)

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of the applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows the power consumption of WSN applications and the network stack to be estimated accurately and in an automated way. PMID:29113078
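
    A toy illustration of the configuration-selection trade-off that such a methodology automates (it is not EDEN, and all numbers are invented): each candidate WSN configuration carries an energy estimate and an end-to-end reliability, and the most reliable configuration within an energy budget is selected.

```python
# Toy configuration selection (all values invented): pick the WSN configuration
# with the highest end-to-end reliability whose per-message energy fits the budget.
configs = {
    # name: (energy per message in mJ, per-hop link reliability, number of hops)
    "no-ack, 1 hop":     (1.2, 0.90, 1),
    "ack+retry, 1 hop":  (2.0, 0.99, 1),
    "ack+retry, 3 hops": (5.4, 0.99, 3),
    "no-ack, 3 hops":    (3.1, 0.90, 3),
}

ENERGY_BUDGET_MJ = 5.0

def end_to_end_reliability(link_rel, hops):
    # Series system: the message must succeed on every hop.
    return link_rel ** hops

feasible = {name: (energy, end_to_end_reliability(rel, hops))
            for name, (energy, rel, hops) in configs.items()
            if energy <= ENERGY_BUDGET_MJ}
best = max(feasible, key=lambda name: feasible[name][1])
print("feasible configurations:", feasible)
print("selected configuration:", best)
```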

  4. The reliability of integrated gasification combined cycle (IGCC) power generation units

    Energy Technology Data Exchange (ETDEWEB)

    Higman, C.; DellaVilla, S.; Steele, B. [Syngas Consultants Ltd. (United Kingdom)]

    2006-07-01

    This paper presents two interlinked projects aimed at supporting the improvement of integrated gasification combined cycle (IGCC) reliability. One project comprises the extension of SPS's existing ORAP (Operational Reliability Analysis Program) reliability, availability and maintainability (RAM) tracking technology from its existing base in natural gas open- and combined-cycle operations into IGCC. The other project uses the extended ORAP database to evaluate performance data from existing plants. The initial work has concentrated on evaluating public-domain data on the performance of gasification-based power and chemical plants. This is being followed up by interviews at some 20 plants to verify and expand the database on current performance. 23 refs., 8 figs., 2 tabs.

  5. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Antônio Dâmaso

    2017-11-01

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of the applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows the power consumption of WSN applications and the network stack to be estimated accurately and in an automated way.

  6. Improved structural integrity through advances in reliable residual stress measurement: the impact of ENGIN-X

    Science.gov (United States)

    Edwards, L.; Santisteban, J. R.

    The determination of accurate reliable residual stresses is critical to many fields of structural integrity. Neutron stress measurement is a non-destructive technique that uniquely provides insights into stress fields deep within engineering components and structures. As such, it has become an increasingly important tool within engineering, leading to improved manufacturing processes to reduce stress and distortion as well as to the definition of more precise lifing procedures. This paper describes the likely impact of the next generation of dedicated engineering stress diffractometers currently being constructed and the utility of the technique using examples of residual stresses both beneficial and detrimental to structural integrity.

  7. Neural substrates of reliability-weighted visual-tactile multisensory integration

    Directory of Open Access Journals (Sweden)

    Michael S Beauchamp

    2010-06-01

    As sensory systems deteriorate in aging or disease, the brain must relearn the appropriate weights to assign each modality during multisensory integration. Using blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) of human subjects, we tested a model for the neural mechanisms of sensory weighting, termed “weighted connections”. This model holds that the connection weights between early and late areas vary depending on the reliability of the modality, independent of the level of early sensory cortex activity. When subjects detected viewed and felt touches to the hand, a network of brain areas was active, including visual areas in lateral occipital cortex, somatosensory areas in inferior parietal lobe, and multisensory areas in the intraparietal sulcus (IPS). In agreement with the weighted connection model, the connection weight measured with structural equation modeling between somatosensory cortex and IPS increased for somatosensory-reliable stimuli, and the connection weight between visual cortex and IPS increased for visual-reliable stimuli. This double dissociation of connection strengths was similar to the pattern of behavioral responses during incongruent multisensory stimulation, suggesting that weighted connections may be a neural mechanism for behavioral reliability weighting.

  8. Site-specific landslide assessment in Alpine area using a reliable integrated monitoring system

    Science.gov (United States)

    Romeo, Saverio; Di Matteo, Lucio; Kieffer, Daniel Scott

    2016-04-01

    Rockfalls are one of the major causes of landslide fatalities around the world. The present work discusses the reliability of integrated monitoring of displacements in a rockfall area within the Alpine region (Salzburg Land, Austria), also taking into account the effect of ongoing climate change. Because the frequency and magnitude of events that threaten human lives and infrastructure are unpredictable, an efficient monitoring system is frequently necessary. For this reason, integrated monitoring systems for unstable slopes (e.g., extensometers, cameras, remote sensing) have been widely developed and used over the last decades. In this framework, remote sensing techniques such as GBInSAR (Ground-Based Interferometric Synthetic Aperture Radar) have emerged as efficient and powerful tools for deformation monitoring. GBInSAR measurements can support an early warning system using surface deformation parameters such as ground displacement or inverse velocity (for semi-empirical forecasting methods). In order to check the reliability of GBInSAR and to monitor the evolution of the landslide, it is very important to integrate different techniques. Indeed, a multi-instrumental approach is essential to investigate movements both at the surface and at depth, and the use of different monitoring techniques allows a cross-analysis of the data that minimizes errors, checks data quality and improves the monitoring system. During 2013, an intense and complete monitoring campaign was conducted on the Ingelsberg landslide. Analysis of both the historical temperature series (HISTALP) recorded during the last century and data from local weather stations shows that temperatures (autumn-winter, winter and spring) have clearly increased in the Bad Hofgastein area as well as in the Alpine region. As a consequence, in recent decades rockfall events have shifted from spring to summer due to warmer winters. It is interesting to point out that

  9. ATLAS utilisation of the Czech national HPC center

    CERN Document Server

    Svatos, Michal; The ATLAS collaboration

    2018-01-01

    The Czech national HPC center IT4Innovations, located in Ostrava, provides two HPC systems, Anselm and Salomon. The Salomon HPC has been amongst the hundred most powerful supercomputers on Earth since its commissioning in 2015. Both clusters were tested for use by the ATLAS experiment for running simulation jobs. Several thousand core hours were allocated to the project for tests, but the main aim is to use free resources that would otherwise sit idle while waiting for large parallel jobs of other users. Multiple strategies for ATLAS job execution were tested on the Salomon and Anselm HPCs. The solution described herein is based on the ATLAS experience with other HPC sites. An ARC Compute Element (ARC-CE) installed at the grid site in Prague is used for job submission to Salomon. The ATLAS production system submits jobs to the ARC-CE via the ARC Control Tower (aCT). The ARC-CE processes job requirements from aCT and creates a script for the batch system, which is then executed via ssh. Sshfs is used to share scripts and input files between the site and the HPC...
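
    The submission pattern described above (a generated batch script executed on the HPC login node over ssh) can be sketched as follows. This is not the ARC-CE implementation; the host name, directories, batch directives and the submit command are placeholders that would differ per site.

```python
# Illustration of the ssh submission pattern only (NOT the ARC-CE code).
# Host, paths, batch directives and the submit command are placeholders.
import subprocess
import textwrap

def submit_over_ssh(host, remote_dir, job_name, payload_cmd, submit_cmd="qsub"):
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #PBS -N {job_name}
        #PBS -l walltime=02:00:00
        cd {remote_dir}
        {payload_cmd}
        """)
    remote_script = f"{remote_dir}/{job_name}.sh"
    subprocess.run(["ssh", host, f"mkdir -p {remote_dir}"], check=True)
    # Stream the generated script to the login node, then submit it.
    subprocess.run(["ssh", host, f"cat > {remote_script}"],
                   input=script.encode(), check=True)
    out = subprocess.run(["ssh", host, f"{submit_cmd} {remote_script}"],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip()  # the batch system's job identifier

# Hypothetical usage:
# job_id = submit_over_ssh("login.hpc.example.org", "/scratch/atlas/run42",
#                          "atlas_sim", "./run_simulation.sh")
```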

  10. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of highly reliable fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
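
    The fault tree models that HARP solves can be illustrated, in a much reduced form, by direct gate quantification under the assumption of independent basic events. The sketch below is not HARP; the structure and probabilities are hypothetical.

```python
# Minimal sketch of the kind of model HiRel/HARP solves (not HARP itself):
# top-event probability for a small fault tree with independent basic events.
def OR(*probs):
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def AND(*probs):
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical basic-event probabilities for a duplex processor with a shared
# power supply: the system fails if both processors fail OR the supply fails.
p_cpu_a, p_cpu_b, p_power = 1e-3, 1e-3, 5e-5
p_top = OR(AND(p_cpu_a, p_cpu_b), p_power)
print(f"top-event probability: {p_top:.3e}   reliability: {1 - p_top:.6f}")
```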

  11. Ensuring Structural Integrity through Reliable Residual Stress Measurement: From Crystals to Crankshafts

    International Nuclear Information System (INIS)

    Edwards, Lyndon

    2005-01-01

    Full text: The determination of accurate, reliable stresses is critical to many fields of engineering and, in particular, to the structural integrity, and hence safety, of many systems. Neutron stress measurement is a non-destructive technique that uniquely provides insights into stress fields deep within components and structures. As such, it has become an increasingly important tool within the engineering community, leading to improved manufacturing processes that reduce stress and distortion as well as to the definition of more precise structural integrity lifing procedures. This talk describes the current state of the art and identifies the key opportunities for improved structural integrity provided by the 2nd generation of dedicated engineering stress diffractometers currently being designed and commissioned world-wide. Examples are provided covering a range of industrially relevant problems from the field. (author)

  12. Design for High Performance, Low Power, and Reliable 3D Integrated Circuits

    CERN Document Server

    Lim, Sung Kyu

    2013-01-01

    This book describes the design of through-silicon-via (TSV) based three-dimensional integrated circuits.  It includes details of numerous “manufacturing-ready” GDSII-level layouts of TSV-based 3D ICs, developed with tools covered in the book. Readers will benefit from the sign-off level analysis of timing, power, signal integrity, and thermo-mechanical reliability for 3D IC designs.  Coverage also includes various design-for-manufacturability (DFM), design-for-reliability (DFR), and design-for-testability (DFT) techniques that are considered critical to the 3D IC design process. Describes design issues and solutions for high performance and low power 3D ICs, such as the pros/cons of regular and irregular placement of TSVs, Steiner routing, buffer insertion, low power 3D clock routing, power delivery network design and clock design for pre-bond testability. Discusses topics in design-for-electrical-reliability for 3D ICs, such as TSV-to-TSV coupling, current crowding at the wire-to-TSV junction and the e...

  13. Study of ageing side effects in the DELPHI HPC calorimeter

    CERN Document Server

    Bonivento, W

    1997-01-01

    The readout proportional chambers of the HPC electromagnetic calorimeter in the DELPHI experiment are affected by significant ageing. In order to study the long-term behaviour of the calorimeter, one HPC module was extracted from DELPHI in 1992 and brought to a test area, where it was artificially aged over a period of two years; an ageing level exceeding the one expected for the HPC at the end of the LEP era was reached. During this period the performance of the module was periodically tested by means of dedicated beam tests, whose results are discussed in this paper. These show that ageing has no significant effect on the response linearity or on the energy resolution for electromagnetic showers, once the analog response loss is compensated for by increasing the chamber gain through the anode voltage.

  14. Modular HPC I/O characterization with Darshan

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, Shane; Carns, Philip; Harms, Kevin; Ross, Robert; Lockwood, Glenn K.; Wright, Nicholas J.

    2016-11-13

    Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem, providing a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.

  15. Safety, reliability, risk management and human factors: an integrated engineering approach applied to nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Silva, Eliane Magalhaes Pereira da; Costa, Antonio Carlos Lopes da; Reis, Sergio Carneiro dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: vasconv@cdtn.br, e-mail: silvaem@cdtn.br, e-mail: aclc@cdtn.br, e-mail: reissc@cdtn.br

    2009-07-01

    Nuclear energy has an important engineering legacy to share with conventional industry. Much of the development of tools related to safety, reliability, risk management, and human factors is associated with nuclear plant processes, mainly because of public concern about nuclear power generation. Despite the close association between these subjects, there are some important differences in approach. The reliability engineering approach uses several techniques to minimize the component failures that cause the failure of complex systems. These techniques include, for instance, redundancy, diversity, standby sparing, safety factors, and reliability-centered maintenance. On the other hand, system safety is primarily concerned with hazard management, that is, the identification, evaluation and control of hazards. Rather than just looking at failure rates or engineering strengths, system safety examines the interactions among system components. The events that cause accidents may be complex combinations of component failures, faulty maintenance, design errors, human actions, or actuation of instrumentation and control. Thus, system safety deals with a broader spectrum of risk management, including ergonomics, legal requirements, quality control, public acceptance, political considerations, and many other non-technical influences. Treating these subjects individually can compromise the completeness of the analysis and of the measures associated with risk reduction and with increasing safety and reliability. By analyzing together the engineering systems and controls of a nuclear facility, their management systems and operational procedures, and the human factors engineering, many benefits can be realized. This paper proposes an integration of these issues based on the application of systems theory. (author)

  16. Safety, reliability, risk management and human factors: an integrated engineering approach applied to nuclear facilities

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Silva, Eliane Magalhaes Pereira da; Costa, Antonio Carlos Lopes da; Reis, Sergio Carneiro dos

    2009-01-01

    Nuclear energy has an important engineering legacy to share with conventional industry. Much of the development of tools related to safety, reliability, risk management, and human factors is associated with nuclear plant processes, mainly because of public concern about nuclear power generation. Despite the close association between these subjects, there are some important differences in approach. The reliability engineering approach uses several techniques to minimize the component failures that cause the failure of complex systems. These techniques include, for instance, redundancy, diversity, standby sparing, safety factors, and reliability-centered maintenance. On the other hand, system safety is primarily concerned with hazard management, that is, the identification, evaluation and control of hazards. Rather than just looking at failure rates or engineering strengths, system safety examines the interactions among system components. The events that cause accidents may be complex combinations of component failures, faulty maintenance, design errors, human actions, or actuation of instrumentation and control. Thus, system safety deals with a broader spectrum of risk management, including ergonomics, legal requirements, quality control, public acceptance, political considerations, and many other non-technical influences. Treating these subjects individually can compromise the completeness of the analysis and of the measures associated with risk reduction and with increasing safety and reliability. By analyzing together the engineering systems and controls of a nuclear facility, their management systems and operational procedures, and the human factors engineering, many benefits can be realized. This paper proposes an integration of these issues based on the application of systems theory. (author)

  17. Delivering on Industry Equipment Reliability Goals By Leveraging an Integration Platform and Decision Support Environment

    International Nuclear Information System (INIS)

    Coveney, Maureen K.; Bailey, W. Henry; Parkinson, William

    2004-01-01

    Utilities have invested in many costly enterprise systems - computerized maintenance management systems, document management systems, enterprise grade portals, to name but a few - and often very specialized systems, like data historians, high end diagnostic systems, and other focused and point solutions. From recent industry reports, we now know that the average nuclear power plant uses some 1900 systems to perform daily work, of which 250 might facilitate the equipment reliability decision-making process. The time has come to leverage the investment in these systems by providing a common platform for integration and decision-making that will further the collective industry aim of enhancing the reliability of our nuclear generation assets to maintain high plant availability and to deliver on plant life extension goals without requiring additional large-scale investment in IT infrastructure. (authors)

  18. The large-scale integration of wind generation: Impacts on price, reliability and dispatchable conventional suppliers

    International Nuclear Information System (INIS)

    MacCormack, John; Hollis, Aidan; Zareipour, Hamidreza; Rosehart, William

    2010-01-01

    This work examines the effects of the large-scale integration of wind-powered electricity generation in a deregulated energy-only market on loads (in terms of electricity prices and supply reliability) and on dispatchable conventional power suppliers. Hourly models of wind generation time series, load and the resulting residual demand are created. From these, a non-chronological residual demand duration curve is developed that is combined with a probabilistic model of dispatchable conventional generator availability, a model of an energy-only market with a price cap, and a model of generator costs and dispatch behavior. A number of simulations are performed to evaluate the effect on electricity prices, the overall reliability of supply, the ability of a dominant supplier acting strategically to profitably withhold supplies, and the fixed cost recovery of dispatchable conventional power suppliers at different levels of wind generation penetration. Medium- and long-term responses of the market and/or the regulator are discussed.
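
    The construction of a non-chronological residual demand duration curve from hourly series is straightforward to sketch; the synthetic load and wind profiles below are illustrative stand-ins for the models used in the paper.

```python
# Sketch of the residual-demand-duration-curve construction described above,
# using synthetic hourly load and wind series (all values are illustrative).
import numpy as np

rng = np.random.default_rng(1)
hours = 8760
load = 8000 + 1500 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 300, hours)
wind_capacity_mw = 3000
wind = wind_capacity_mw * rng.beta(2, 5, hours)     # crude hourly capacity-factor model

residual = load - wind                               # chronological residual demand
rddc = np.sort(residual)[::-1]                       # duration curve: sorted descending

print(f"peak residual demand: {rddc[0]:.0f} MW")
print(f"hours with residual demand above 9000 MW: {(rddc > 9000).sum()} h")
```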

  19. Special Issue on Automatic Application Tuning for HPC Architectures

    Directory of Open Access Journals (Sweden)

    Siegfried Benkner

    2014-01-01

    High Performance Computing architectures have become incredibly complex and exploiting their full potential is becoming more and more challenging. As a consequence, automatic performance tuning (autotuning of HPC applications is of growing interest and many research groups around the world are currently involved. Autotuning is still a rapidly evolving research field with many different approaches being taken. This special issue features selected papers presented at the Dagstuhl seminar on “Automatic Application Tuning for HPC Architectures” in October 2013, which brought together researchers from the areas of autotuning and performance analysis in order to exchange ideas and steer future collaborations.

  20. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    OpenAIRE

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergio; Cela, José M.; Castejón, Francisco

    2015-01-01

    In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a system-on-chip Samsung Exynos 5 with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages. The research leading to these results has received funding from the European Community's Seventh...

  1. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    Science.gov (United States)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of highly reliable fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  2. Mechanical Integrity Issues at MCM-Cs for High Reliability Applications

    International Nuclear Information System (INIS)

    Morgenstern, H.A.; Tarbutton, T.J.; Becka, G.A.; Uribe, F.; Monroe, S.; Burchett, S.

    1998-01-01

    During the qualification of a new high reliability low-temperature cofired ceramic (LTCC) multichip module (MCM), two issues relating to the electrical and mechanical integrity of the LTCC network were encountered while performing qualification testing. One was electrical opens after aging tests that were caused by cracks in the solder joints. The other was fracturing of the LTCC networks during mechanical testing. Through failure analysis, computer modeling, bend testing, and test samples, changes were identified. Upon implementation of all these changes, the modules passed testing, and the MCM was placed into production

  3. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Quality Assurance Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; R. Nims; K. J. Kvarfordt; C. Wharton

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Versions 6 and 7, what constitutes its parts, and the limitations of those processes.

  4. reliability reliability

    African Journals Online (AJOL)

    eobe


  5. Integrating software reliability concepts into risk and reliability modeling of digital instrumentation and control systems used in nuclear power plants

    International Nuclear Information System (INIS)

    Arndt, S. A.

    2006-01-01

    As software-based digital systems are becoming more and more common in all aspects of industrial process control, including the nuclear power industry, it is vital that the current state of the art in quality, reliability, and safety analysis be advanced to support the quantitative review of these systems. Several research groups throughout the world are working on the development and assessment of software-based digital system reliability methods and their applications in the nuclear power, aerospace, transportation, and defense industries. However, these groups are hampered by the fact that software experts and probabilistic safety assessment experts view reliability engineering very differently. This paper discusses the characteristics of a common vocabulary and modeling framework. (authors)

  6. A Novel Evaluation Method for Building Construction Project Based on Integrated Information Entropy with Reliability Theory

    Directory of Open Access Journals (Sweden)

    Xiao-ping Bai

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes need to be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Using a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.

  7. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    Science.gov (United States)

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes need to be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Using a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.
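
    The information-entropy step of such a method can be sketched as follows: criterion weights are derived from the entropy of a normalised decision matrix, and a weighted synthesis score ranks the schemes. The decision matrix below is hypothetical, and the aggregation is deliberately simplified relative to the paper's full procedure.

```python
# Sketch of entropy weighting on a hypothetical decision matrix
# (rows = candidate construction schemes, columns = cost, progress, quality,
# safety scores; all values are normalised so that higher is better).
import numpy as np

X = np.array([[0.70, 0.80, 0.90, 0.85],
              [0.90, 0.60, 0.75, 0.80],
              [0.80, 0.85, 0.70, 0.95]])

P = X / X.sum(axis=0)                        # normalise each criterion column
k = 1.0 / np.log(X.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)         # entropy of each criterion
w = (1 - E) / (1 - E).sum()                  # low-entropy criteria get more weight

scores = X @ w                               # simple weighted synthesis score
best = int(np.argmax(scores))
print("entropy weights:", np.round(w, 3))
print("synthesis scores:", np.round(scores, 3), "-> best scheme:", best + 1)
```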

  8. Strategy for establishing integrated I and C reliability of operating nuclear power plants in Korea

    International Nuclear Information System (INIS)

    Kang, H. T.; Chung, H. Y.; Lee, Y. H.

    2008-01-01

    Korea Hydro and Nuclear Power Co. (KHNP) is developing an integrated I and C reliability strategy for managing I and C obsolescence and phasing in new technology that both meets the needs of the fleet and captures the benefits of applying proven solutions to multiple plants with reduced incremental costs. In view of this, we are developing I and C component management that covers the major failure modes, symptoms of performance degradation, condition-based or time-based preventive management (PM), monitoring, and failure finding and correction based on equipment reliability (ER). Furthermore, for I and C system replacement management, we are carrying out a three-year fundamental design for the I and C systems upgrade and developing a long-term implementation plan for the major I and C systems to improve plant operations, eliminate operator challenges, reduce maintenance costs, and cope with the challenges of component obsolescence. To accomplish the I and C digital upgrade in the near future, we chose Younggwang (YGN) units 3 and 4, which are Korean Standard Nuclear Power Plants (KSNP), as the demonstration plant. In this paper, we establish the long-term reliability strategy of the I and C systems based on ER for component replacement and, furthermore, for digital upgrade at the system replacement level. (authors)

  9. Optical packet switching in HPC : an analysis of applications performance

    NARCIS (Netherlands)

    Meyer, Hugo; Sancho, Jose Carlos; Mrdakovic, Milica; Miao, Wang; Calabretta, Nicola

    2018-01-01

    Optical Packet Switches (OPS) could provide the needed low-latency transmissions in today's large data centers. OPS can deliver lower latency and higher bandwidth than traditional electrical switches. These features are needed for parallel High Performance Computing (HPC) applications. For this

  10. Fire performance of basalt FRP mesh reinforced HPC thin plates

    DEFF Research Database (Denmark)

    Hulin, Thomas; Hodicky, Kamil; Schmidt, Jacob Wittrup

    2013-01-01

    An experimental program was carried out to investigate the influence of basalt FRP (BFRP) reinforcing mesh on the fire behaviour of thin high performance concrete (HPC) plates applied to sandwich elements. Samples with BFRP mesh were compared to samples with no mesh, samples with steel mesh...

  11. Connecting to HPC VPN | High-Performance Computing | NREL

    Science.gov (United States)

    visualization, and file transfers. NREL users logging in to Peregrine use SSH to log in to the system; the login and password will match your NREL network account login/password. From OS X or Linux, open a terminal. The login for the Windows HPC cluster will match your NREL Active Directory login/password that you use to

  12. An integrated methodology for the dynamic performance and reliability evaluation of fault-tolerant systems

    International Nuclear Information System (INIS)

    Dominguez-Garcia, Alejandro D.; Kassakian, John G.; Schindall, Joel E.; Zinchuk, Jeffrey J.

    2008-01-01

    We propose an integrated methodology for the reliability and dynamic performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers to design the control system, but also incorporates artifacts to model the failure behavior of each component. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. This methodology not only enables an integrated framework for evaluating dynamic performance and reliability of fault-tolerant systems, but also enables a method for guiding the system design process, and further optimization. To illustrate the methodology, we present a case-study of a lateral-directional flight control system for a fighter aircraft
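
    A toy version of the integrated evaluation idea (it is not the paper's flight-control case study): enumerate the configurations a duplex sensor system can reach, weight them by probabilities derived from exponential failure models, and count only the probability mass of configurations whose dynamic performance metric (here an assumed settling time) meets the requirement.

```python
# Toy integrated reliability/performance evaluation (all numbers assumed):
# keep the probability mass of configurations meeting the performance metric.
import math
from itertools import product

LAMBDA = 1e-4          # failure rate per hour of each of the two sensors
T_MISSION = 100.0      # mission time in hours
SETTLING_REQ = 2.0     # required settling time in seconds

# Hypothetical settling time of the closed loop in each configuration
# (sensor_1_up, sensor_2_up); with both sensors failed, there is no control.
settling_time = {(True, True): 1.2, (True, False): 1.8,
                 (False, True): 1.8, (False, False): float("inf")}

p_up = math.exp(-LAMBDA * T_MISSION)
reliable_mass = 0.0
for config in product([True, False], repeat=2):
    p = math.prod(p_up if up else (1 - p_up) for up in config)
    if settling_time[config] <= SETTLING_REQ:
        reliable_mass += p

print(f"P(performance requirement met at t = {T_MISSION} h): {reliable_mass:.6f}")
```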

  13. MARIANE: MApReduce Implementation Adapted for HPC Environments

    Energy Technology Data Exchange (ETDEWEB)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan; Ramakrishnan, Lavanya

    2011-07-06

    MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
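
    The MapReduce programming model discussed above reduces to three small steps that can be sketched in-process; the snippet below is neither MARIANE nor Hadoop and ignores distribution, fault tolerance and file systems entirely.

```python
# Minimal in-process sketch of the MapReduce programming model: map emits
# key/value pairs, a shuffle groups them by key, and reduce aggregates each group.
from collections import defaultdict

def map_phase(chunk):
    for word in chunk.split():
        yield word.lower(), 1

def shuffle(mapped):
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

chunks = ["HPC file systems", "shared file systems such as NFS and GPFS"]
mapped = [pair for chunk in chunks for pair in map_phase(chunk)]
print(reduce_phase(shuffle(mapped)))
```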

  14. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations

  15. Pipeline integrity model-a formative approach towards reliability and life assessment

    International Nuclear Information System (INIS)

    Sayed, A.M.; Jaffery, M.A.

    2005-01-01

    Pipes form an integral part of the transmission medium in the oil and gas industry. This holds true for both the upstream and downstream segments of this global energy business. With the aging of this asset base, the operational aspects of pipelines have come under intense consideration from operators and regulators. Moreover, the information age and the growth of global trade have removed barriers and pushed the industry towards better utilization of resources. This has made optimized solutions a priority for business and technical managers the world over. There is a paradigm shift from the mere development of 'smart materials' to 'low life-cycle-cost materials'. The force driving this change is a rational one: the recovery of development costs is no longer a problem in a global community; rather, it is the pay-off time that matters most to the end users of materials. This means that decision makers are not evaluating just the price offered but are keen to judge the entire life-cycle cost of a product. The integrity of pipes is affected by factors such as corrosion, fatigue-crack growth, stress-corrosion cracking, and mechanical damage. Extensive research in the area of reliability and life assessment has been carried out. A number of models concerned with the reliability of pipes have been developed and are being used by a number of pipeline operators worldwide. Yet, it must be emphasised that there is no substitute for sound engineering judgment and an allowance for factors of safety. The ability of a laid pipe network to transport the intended fluid under pre-defined conditions for the entire envisaged project life is referred to as the reliability of the system. Reliability is built into the product through extensive benchmarking against industry-standard codes. The construction of pipes for oil and gas service is regulated through the American Petroleum Institute's Specification for Line Pipe. Subsequently, specific programs have been

  16. Solar Energy Grid Integration Systems (SEGIS): adding functionality while maintaining reliability and economics

    Science.gov (United States)

    Bower, Ward

    2011-09-01

    An overview is provided of the activities and progress made during the US DOE Solar Energy Grid Integration Systems (SEGIS) solicitation to add functionality while maintaining reliability and economics. The SEGIS R&D opened pathways for interconnecting PV systems to the intelligent utility grids and micro-grids of the future. The new capabilities are accompanied by "value added" features. The new hardware designs resulted in smaller, less material-intensive products that are being viewed by utilities as enabling dispatchable generation rather than acting as unpredictable negative loads. The technical solutions enable "advanced integrated system" concepts and "smart grid" processes to move forward in a faster and more focused manner. The advanced integrated inverters/controllers can now incorporate energy management functionality, intelligent electrical grid support features and a multiplicity of communication technologies. Portals for energy flow and two-way communications have been implemented. SEGIS hardware was developed for the utility grid of today, which was designed for one-way power flow, for intermediate grid scenarios, AND for the grid of tomorrow, which will seamlessly accommodate managed two-way power flows as required by large-scale deployment of solar and other distributed generation. The SEGIS hardware and controls developed for today meet existing standards and codes AND provide for future connection to a "smart grid" mode that enables utility control and optimized performance.

  17. Integrated Reliability and Risk Analysis System (IRRAS) Version 2.0 user's guide

    International Nuclear Information System (INIS)

    Russell, K.D.; Sattison, M.B.; Rasmuson, D.M.

    1990-06-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Also provided in the system is an integrated full-screen editor for use when interfacing with remote mainframe computer systems. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 2.0 and is the subject of this user's guide. Version 2.0 of IRRAS provides all of the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance. 9 refs., 292 figs., 4 tabs

  18. Measuring Integrated Socioemotional Guidance at School: Factor Structure and Reliability of the Socioemotional Guidance Questionnaire (SEG-Q)

    Science.gov (United States)

    Jacobs, Karen; Struyf, Elke

    2013-01-01

    Socioemotional guidance of students has recently become an integral part of education; however, no instrument exists to measure integrated socioemotional guidance. This study therefore examines the factor structure and reliability of the Socioemotional Guidance Questionnaire. Psychometric properties of the Socioemotional Guidance Questionnaire and…
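
    Reliability analyses of questionnaires of this kind commonly report an internal-consistency coefficient such as Cronbach's alpha. The sketch below computes alpha on simulated item scores; it does not reproduce the SEG-Q data or analysis.

```python
# Internal-consistency sketch (Cronbach's alpha) on simulated questionnaire
# item scores; the data are generated for illustration and are not the SEG-Q data.
import numpy as np

rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))                        # 200 simulated respondents
items = latent + rng.normal(scale=0.8, size=(200, 6))     # 6 Likert-like items

k = items.shape[1]
item_var_sum = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var_sum / total_var)
print(f"Cronbach's alpha for {k} items: {alpha:.2f}")
```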

  19. Application of high efficiency and reliable 3D-designed integral shrouded blades to nuclear turbines

    International Nuclear Information System (INIS)

    Watanabe, Eiichiro; Ohyama, Hiroharu; Tashiro, Hikaru; Sugitani, Toshiro; Kurosawa, Masaru

    1998-01-01

    Mitsubishi Heavy Industries, Ltd. has recently developed new blades for nuclear turbines in order to achieve higher efficiency and higher reliability. The 3D aerodynamic design for 41 inch and 46 inch blades, their one-piece structural design (integral-shrouded blades: ISB), and the verification test results using a model steam turbine are described in this paper. The predicted efficiency and lower vibratory stress have been verified. Based on these 60 Hz ISB, a 50 Hz ISB series is under development using 'the law of similarity', without changing the thermodynamic performance or mechanical stress levels. Our 3D-designed reaction blades, which are used for the high-pressure and low-pressure upstream stages, are also briefly described. (author)

  20. Systematic approach to integration of a human reliability analysis into an NPP probabilistic risk assessment

    International Nuclear Information System (INIS)

    Fragola, J.R.

    1984-01-01

    This chapter describes the human reliability analysis tasks which were employed in the evaluation of the overall probability of an internal flood sequence and its consequences in terms of disabling vulnerable risk significant equipment. Topics considered include the problem familiarization process, the identification and classification of key human interactions, a human interaction review of potential initiators, a maintenance and operations review, human interaction identification, quantification model selection, the definition of operator-induced sequences, the quantification of specific human interactions, skill- and rule-based interactions, knowledge-based interactions, and the incorporation of human interaction-related events into the event tree structure. It is concluded that an integrated approach to the analysis of human interaction within the context of a Probabilistic Risk Assessment (PRA) is feasible

  1. Towards Spherical Mesh Gravity and Magnetic Modelling in an HPC Environment

    Science.gov (United States)

    Lane, R. J.; Brodie, R. C.; de Hoog, M.; Navin, J.; Chen, C.; Du, J.; Liang, Q.; Wang, H.; Li, Y.

    2013-12-01

    Staff at Geoscience Australia (GA), Australia's Commonwealth Government geoscientific agency, have routinely performed 3D gravity and magnetic modelling as part of geoscience investigations. For this work, we have used software programs that have been based on a Cartesian mesh spatial framework. These programs have come as executable files that were compiled to operate in a Windows environment on single core personal computers (PCs). To cope with models with higher resolution and larger extents, we developed an approach whereby a large problem could be broken down into a number of overlapping smaller models ('tiles') that could be modelled separately, with the results combined back into a single output model. To speed up the processing, we established a Condor distributed network from existing desktop PCs. A number of factors have caused us to consider a new approach to this modelling work. The drivers for change include: 1) models with very large lateral extents where the effects of Earth curvature are a consideration, 2) a desire to ensure that the modelling of separate regions is carried out in a consistent and managed fashion, 3) migration of scientific computing to off-site High Performance Computing (HPC) facilities, and 4) development of virtual globe environments for integration and visualization of 3D spatial objects. Some of the more surprising realizations to emerge have been that: 1) there aren't any readily available commercial software packages for modelling gravity and magnetic data in a spherical mesh spatial framework, 2) there are many different types of HPC environments, 3) no two HPC environments are the same, and 4) the most common virtual globe environment (i.e., Google Earth) doesn't allow spatial objects to be displayed below the topographic/bathymetric surface. Our response has been to do the following: 1) form a collaborative partnership with researchers at the Colorado School of Mines (CSM) and the China University of Geosciences (CUG

  2. Integration of external estimated breeding values and associated reliabilities using correlations among traits and effects.

    Science.gov (United States)

    Vandenplas, J; Colinet, F G; Glorieux, G; Bertozzi, C; Gengler, N

    2015-12-01

    Based on a Bayesian view of linear mixed models, several studies showed the possibility of integrating estimated breeding values (EBV) and associated reliabilities (REL) provided by genetic evaluations performed outside a given evaluation system into this genetic evaluation. Hereafter, the term "internal" refers to this given genetic evaluation system, and the term "external" refers to all other genetic evaluations performed outside the internal evaluation system. Bayesian approaches integrate external information (i.e., external EBV and associated REL) by altering both the mean and (co)variance of the prior distributions of the additive genetic effects based on the knowledge of this external information. Extensions of the Bayesian approaches to multivariate settings are interesting because external information expressed on other scales, measurement units, or trait definitions, or associated with different heritabilities and genetic parameters than the internal traits, could be integrated into a multivariate genetic evaluation without the need to convert external information to the internal traits. Therefore, the aim of this study was to test the integration of external EBV and associated REL, expressed on a 305-d basis and genetically correlated with a trait of interest, into a multivariate genetic evaluation using a random regression test-day model for the trait of interest. The approach we used was a multivariate Bayesian approach. Results showed that the integration of external information led to a genetic evaluation for the trait of interest that was, at least for animals associated with external information, as accurate as a bivariate evaluation including all available phenotypic information. In conclusion, the multivariate Bayesian approaches have the potential to integrate external information correlated with the internal phenotypic traits, and potentially to the different random regressions, into a multivariate genetic evaluation. This allows the use of different
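
    The paper's approach is multivariate and Bayesian; as a univariate illustration of the underlying idea only, two EBV for the same animal can be blended in proportion to the precisions implied by their reliabilities (prediction error variance PEV = (1 - REL) * sigma2_a), ignoring any double counting of information. All values below are invented.

```python
# Univariate, heavily simplified illustration only (not the paper's multivariate
# Bayesian model): blend an internal and an external EBV by the precisions
# implied by their reliabilities, assuming the two sources are independent.
def combine_ebv(ebv_int, rel_int, ebv_ext, rel_ext, sigma2_a):
    pev_int = (1 - rel_int) * sigma2_a        # prediction error variances
    pev_ext = (1 - rel_ext) * sigma2_a
    w_int, w_ext = 1 / pev_int, 1 / pev_ext   # precision weights
    ebv = (w_int * ebv_int + w_ext * ebv_ext) / (w_int + w_ext)
    pev = 1 / (w_int + w_ext)
    return ebv, 1 - pev / sigma2_a            # combined EBV and approximate REL

# Hypothetical animal: internal EBV 120 (REL 0.45), external EBV 150 (REL 0.80).
print(combine_ebv(ebv_int=120.0, rel_int=0.45, ebv_ext=150.0, rel_ext=0.80, sigma2_a=400.0))
```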

  3. Easy Access to HPC Resources through the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-11-01

    The computing environment at the King Abdullah University of Science and Technology (KAUST) is growing in size and complexity. KAUST hosts the tenth fastest supercomputer in the world (Shaheen II) and several HPC clusters. Researchers can be inhibited by the complexity, as they need to learn new languages and execute many tasks in order to access the HPC clusters and the supercomputer. In order to simplify the access, we have developed an interface between the applications and the clusters and supercomputer that automates the transfer of input data and job submission and also the retrieval of results to the researcher’s local workstation. The innovation is that the user now submits his jobs from within the application GUI on his workstation, and does not have to directly log into the clusters or supercomputer anymore. This article details the solution and its benefits to the researchers.
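
    The article does not publish its implementation, but a wrapper of the kind such a GUI might call could look like the hedged sketch below: stage the input to the cluster, submit a batch job, and pull the results back, so the researcher never logs in directly. The host name, remote directory and the use of Slurm/ssh/scp are assumptions, not details from the paper.

```python
import subprocess
from pathlib import Path

CLUSTER = "user@hpc.example.edu"    # hypothetical login node
REMOTE_DIR = "/scratch/user/run01"  # hypothetical remote work directory

def run(cmd):
    """Run a local command, raising an error on failure."""
    subprocess.run(cmd, check=True)

def submit_job(local_input: Path, job_script: Path) -> str:
    """Stage input files, submit a Slurm batch job remotely, return the job ID."""
    run(["ssh", CLUSTER, f"mkdir -p {REMOTE_DIR}"])
    run(["scp", str(local_input), str(job_script), f"{CLUSTER}:{REMOTE_DIR}/"])
    out = subprocess.run(
        ["ssh", CLUSTER, f"cd {REMOTE_DIR} && sbatch --parsable {job_script.name}"],
        check=True, capture_output=True, text=True)
    return out.stdout.strip()

def fetch_results(local_dir: Path):
    """Retrieve result files back to the user's workstation."""
    local_dir.mkdir(exist_ok=True)
    run(["scp", "-r", f"{CLUSTER}:{REMOTE_DIR}/output", str(local_dir)])
```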

  4. Integration of human reliability analysis into the probabilistic risk assessment process: phase 1

    International Nuclear Information System (INIS)

    Bell, B.J.; Vickroy, S.C.

    1985-01-01

    The US Nuclear Regulatory Commission and Pacific Northwest Laboratory initiated a research program in 1984 to develop a testable set of analytical procedures for integrating human reliability analysis (HRA) into the probabilistic risk assessment (PRA) process to more adequately assess the overall impact of human performance on risk. In this three phase program, stand-alone HRA/PRA analytic procedures will be developed and field evaluated to provide improved methods, techniques, and models for applying quantitative and qualitative human error data which systematically integrate HRA principles, techniques, and analyses throughout the entire PRA process. Phase 1 of the program involved analysis of state-of-the-art PRAs to define the structures and processes currently in use in the industry. Phase 2 research will involve developing a new or revised PRA methodology which will enable more efficient regulation of the industry using quantitative or qualitative results of the PRA. Finally, Phase 3 will be to field test those procedures to assure that the results generated by the new methodologies will be usable and acceptable to the NRC. This paper briefly describes the first phase of the program and outlines the second

  5. Integration of human reliability analysis into the probabilistic risk assessment process: Phase 1

    International Nuclear Information System (INIS)

    Bell, B.J.; Vickroy, S.C.

    1984-10-01

    A research program was initiated to develop a testable set of analytical procedures for integrating human reliability analysis (HRA) into the probabilistic risk assessment (PRA) process to more adequately assess the overall impact of human performance on risk. In this three-phase program, stand-alone HRA/PRA analytic procedures will be developed and field evaluated to provide improved methods, techniques, and models for applying quantitative and qualitative human error data which systematically integrate HRA principles, techniques, and analyses throughout the entire PRA process. Phase 1 of the program involved analysis of state-of-the-art PRAs to define the structures and processes currently in use in the industry. Phase 2 research will involve developing a new or revised PRA methodology which will enable more efficient regulation of the industry using quantitative or qualitative results of the PRA. Finally, Phase 3 will be to field test those procedures to assure that the results generated by the new methodologies will be usable and acceptable to the NRC. This paper briefly describes the first phase of the program and outlines the second

  6. Living PRAs [probabilistic risk analysis] made easier with IRRAS [Integrated Reliability and Risk Analysis System]

    International Nuclear Information System (INIS)

    Russell, K.D.; Sattison, M.B.; Rasmuson, D.M.

    1989-01-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is an integrated PRA software tool that gives the user the ability to create and analyze fault trees and accident sequences using an IBM-compatible microcomputer. This program provides functions that range from graphical fault tree and event tree construction to cut set generation and quantification. IRRAS contains all the capabilities and functions required to create, modify, reduce, and analyze event tree and fault tree models used in the analysis of complex systems and processes. IRRAS uses advanced graphic and analytical techniques to achieve the greatest possible realization of the potential of the microcomputer. When the needs of the user exceed this potential, IRRAS can call upon the power of the mainframe computer. The role of the Idaho National Engineering Laboratory in the IRRAS program is that of software developer and interface to the user community. Version 1.0 of the IRRAS program was released in February 1987 to prove the concept of performing this kind of analysis on microcomputers. This version contained many of the basic features needed for fault tree analysis and was received very well by the PRA community. Since the release of Version 1.0, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version is designated "IRRAS 2.0". Version 3.0 will contain all of the features required for efficient event tree and fault tree construction and analysis. 5 refs., 26 figs
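
    To give a flavour of the kind of computation IRRAS automates, the following toy sketch (not IRRAS code; the tree structure and probabilities are invented) expands a small AND/OR fault tree into minimal cut sets and quantifies the top event with the rare-event approximation:

```python
from itertools import product
from math import prod

# Toy fault tree: the top event is an OR of gate G1 and basic event C;
# G1 is an AND of basic events A and B. (Invented example.)
tree = {
    "TOP": ("OR",  ["G1", "C"]),
    "G1":  ("AND", ["A", "B"]),
}
prob = {"A": 1e-2, "B": 5e-3, "C": 1e-4}  # invented basic event probabilities

def cut_sets(node):
    """Return the cut sets (each a frozenset of basic events) for a node."""
    if node not in tree:                       # basic event
        return [frozenset([node])]
    gate, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":
        return [cs for sets in child_sets for cs in sets]
    # AND gate: combine one cut set from each child
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal(sets):
    """Drop any cut set that is a proper superset of another one."""
    return [s for s in sets if not any(o < s for o in sets)]

mcs = minimal(cut_sets("TOP"))
# Rare-event approximation: top event probability ~ sum of cut set probabilities
top_prob = sum(prod(prob[e] for e in cs) for cs in mcs)
print(mcs, top_prob)
```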

  7. Integrated Reliability and Risk Analysis System (IRRAS), Version 2.5: Reference manual

    International Nuclear Information System (INIS)

    Russell, K.D.; McKay, M.K.; Sattison, M.B.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.

    1991-03-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 2.5 and is the subject of this Reference Manual. Version 2.5 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance. 7 refs., 348 figs

  8. Behavior of HPC with Fly Ash after Elevated Temperature

    OpenAIRE

    Shang, Huai-Shuai; Yi, Ting-Hua

    2013-01-01

    For use in fire resistance calculations, the relevant thermal properties of high-performance concrete (HPC) with fly ash were determined through an experimental study. These properties included compressive strength, cubic compressive strength, cleavage strength, flexural strength, and the ultrasonic velocity at various temperatures (20, 100, 200, 300, 400 and 500 °C) for high-performance concrete. The effect of temperature on compressive strength, cubic compressive strength, cleavage strength,...

  9. Users and Programmers Guide for HPC Platforms in CIEMAT

    International Nuclear Information System (INIS)

    Munoz Roldan, A.

    2003-01-01

    This Technical Report describes the High Performance Computing platforms available to researchers in CIEMAT, dedicated mainly to scientific computing. It targets users and programmers and aims to help with developing new code and porting code across platforms. A brief review of the historical evolution of HPC, i.e., of the programming paradigms and underlying architectures, is also presented. (Author) 32 refs

  10. Trends in Data Locality Abstractions for HPC Systems

    KAUST Repository

    Unat, Didem; Dubey, Anshu; Hoefler, Torsten; Shalf, John; Abraham, Mark; Bianco, Mauro; Chamberlain, Bradford L.; Cledat, Romain; Edwards, H. Carter; Finkel, Hal; Fuerlinger, Karl; Hannig, Frank; Jeannot, Emmanuel; Kamil, Amir; Keasler, Jeff; Kelly, Paul H J; Leung, Vitus; Ltaief, Hatem; Maruyama, Naoya; Newburn, Chris J.; Pericas, Miquel

    2017-01-01

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. This paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.

  11. Trends in Data Locality Abstractions for HPC Systems

    KAUST Repository

    Unat, Didem

    2017-05-12

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. This paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.

  12. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though a few models exist for estimating the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary between the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic
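
    A hedged toy sketch of the two-phase idea, not the authors' model: a subsystem reliability is first derived from a software control-flow view (execution paths with invented probabilities and module reliabilities), and the resulting numbers are then combined at the high level with simple fault-tree logic:

```python
from math import prod

# Low-level phase: a subsystem's software modelled as modules on a control flow;
# each path has an execution probability and a list of modules it executes.
module_rel = {"read": 0.9999, "filter": 0.9995, "vote": 0.9990}   # invented
paths = [
    (0.7, ["read", "vote"]),            # (probability path is taken, modules on path)
    (0.3, ["read", "filter", "vote"]),
]
software_rel = sum(p * prod(module_rel[m] for m in mods) for p, mods in paths)

# High-level phase: fault tree logic -- a subsystem fails if its software OR its
# hardware fails; the protection function fails only if both redundant trains fail.
hardware_rel = 0.9998                                  # invented
subsystem_fail = 1.0 - software_rel * hardware_rel     # OR of failure modes
system_fail = subsystem_fail ** 2                      # AND of two redundant trains
print(f"software reliability {software_rel:.6f}, system failure prob {system_fail:.2e}")
```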

  13. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though a few models exist for estimating the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary between the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the dynamic safety system (DSS) shows that the estimated reliability of the system is quite reasonable and realistic. (author)

  14. New Methods for Building-In and Improvement of Integrated Circuit Reliability

    NARCIS (Netherlands)

    van der Pol, J.A.; van der Pol, Jacob Antonius

    2000-01-01

    Over the past 30 years the reliability of semiconductor products has improved by a factor of 100 while at the same time the complexity of the circuits has increased by a factor of 10^5. This 7-decade reliability improvement has been realised by implementing a sophisticated reliability assurance system

  15. An Appropriate Wind Model for Wind Integrated Power Systems Reliability Evaluation Considering Wind Speed Correlations

    Directory of Open Access Journals (Sweden)

    Rajesh Karki

    2013-02-01

    Full Text Available Adverse environmental impacts of carbon emissions are causing increasing concerns to the general public throughout the world. Electric energy generation from conventional energy sources is considered to be a major contributor to these harmful emissions. High emphasis is therefore being given to green energy alternatives, such as wind and solar. Wind energy is being perceived as a promising alternative. This energy technology and its applications have undergone significant research and development over the past decade. As a result, many modern power systems include a significant portion of power generation from wind energy sources. The impact of wind generation on the overall system performance increases substantially as wind penetration in power systems continues to increase to relatively high levels. It becomes increasingly important to accurately model the wind behavior, the interaction with other wind sources and conventional sources, and incorporate the characteristics of the energy demand in order to carry out a realistic evaluation of system reliability. Power systems with high wind penetrations are often connected to multiple wind farms at different geographic locations. Wind speed correlations between the different wind farms largely affect the total wind power generation characteristics of such systems, and therefore should be an important parameter in the wind modeling process. This paper evaluates the effect of the correlation between multiple wind farms on the adequacy indices of wind-integrated systems. The paper also proposes a simple and appropriate probabilistic analytical model that incorporates wind correlations, and can be used for adequacy evaluation of multiple wind-integrated systems.
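
    To illustrate why the correlation parameter matters (a sketch under invented Weibull and power-curve parameters, not the authors' analytical model), the following samples correlated wind speeds for two farms through a Gaussian copula and compares the variability of total generation for weak and strong correlation:

```python
import numpy as np
from scipy.stats import norm, weibull_min

rng = np.random.default_rng(0)

def correlated_wind_speeds(n, rho, shape=2.0, scale=8.0):
    """Sample wind speeds (m/s) at two farms via a Gaussian copula
    over identical Weibull marginals; rho is the Gaussian correlation."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = norm.cdf(z)                                   # uniform marginals
    return weibull_min.ppf(u, c=shape, scale=scale)   # Weibull marginals

def power(v, rated=2.0, cut_in=3.0, rated_speed=12.0, cut_out=25.0):
    """Very simple turbine power curve (MW), invented parameters."""
    p = np.clip((v - cut_in) / (rated_speed - cut_in), 0.0, 1.0) * rated
    p[(v < cut_in) | (v > cut_out)] = 0.0
    return p

for rho in (0.1, 0.9):
    v = correlated_wind_speeds(100_000, rho)
    total = power(v[:, 0]) + power(v[:, 1])
    print(f"rho={rho}: mean {total.mean():.2f} MW, std {total.std():.2f} MW")
```

    Higher correlation leaves the mean generation unchanged but widens its spread, which is exactly why correlated wind farms worsen adequacy indices relative to independent ones.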

  16. Of Iron or Wax? The Effect of Economic Integration on the Reliability of Military Alliances

    Directory of Open Access Journals (Sweden)

    Vobolevičius Vincentas

    2015-12-01

    Full Text Available In this paper we analyze what determines if a military alliance represents a credible commitment. More precisely, we verify if economic integration of military allies increases the deterrent capability of an alliance, and its effectiveness in the case of third-party aggression. We propose that growing intra-alliance trade creates audience costs and sunk costs for political leaders who venture to violate conditions of an alliance treaty. Therefore, intensive trade can be regarded as a signal of allies’ determination to aid one another in the case of third party aggression, and a deterrent of such aggression. Regression analysis of bilateral fixed-term mutual defense agreements concluded between 1945 and 2003 reveals that large trade volumes among military allies indeed reduce the likelihood that their political leaders will breach alliance commitments. Intra-alliance trade also displays a number of interesting interaction effects with the other common predictors of military alliance reliability such as shared allies’ interests and values, symmetry of their military capabilities, their geographic location and domestic political institutions.

  17. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE), Version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Hoffman, C.L.

    1995-10-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Graphical Evaluation Module (GEM) is a special application tool designed for evaluation of operational occurrences using the Accident Sequence Precursor (ASP) program methods. GEM provides the capability for an analyst to quickly and easily perform conditional core damage probability (CCDP) calculations. The analyst can then use the CCDP calculations to determine if the occurrence of an initiating event or a condition adversely impacts safety. It uses models and data developed in SAPHIRE specifically for the ASP program. GEM requires more data than is normally provided in SAPHIRE and will not perform properly with other models or databases. This is the first release of GEM, and the developers of GEM welcome user comments and feedback that will generate ideas for improvements to future versions. GEM is designated as version 5.0 to track the GEM codes along with the other SAPHIRE codes, as GEM relies on the same shared database structure

  18. Reliability and validity of the Japanese version of the Community Integration Measure for community-dwelling people with schizophrenia

    OpenAIRE

    Shioda, Ai; Tadaka, Etsuko; Okochi, Ayako

    2017-01-01

    Background Community integration is an essential right for people with schizophrenia that affects their well-being and quality of life, but no valid instrument exists to measure it in Japan. The aim of the present study is to develop and evaluate the reliability and validity of the Japanese version of the Community Integration Measure (CIM) for people with schizophrenia. Methods The Japanese version of the CIM was developed as a self-administered questionnaire based on the original version of...

  19. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    International Nuclear Information System (INIS)

    Arezzini, S; Carboni, A; Caruso, G; Ciampa, A; Coscetti, S; Mazzoni, E; Piras, S

    2014-01-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and Storage access, but also for a more interactive use of the resources in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat) implemented in multicore systems. In particular, POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure, based on GPFS and Xrootd and used both as an SRM data repository and for interactive POSIX access, is therefore described. Such a common infrastructure allows transparent access to the Tier2 data for the users' interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with a login mechanism integrated with INFN-AAI (the national INFN infrastructure) to extend site access and use to a geographically distributed community. This infrastructure also serves a national computing facility used by the INFN theoretical community, enabling a synergic use of computing and storage resources. Our Center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility so that it will provide resources for all the intermediate-level HPC computing needs of the INFN national theoretical community.

  20. The Fifth Workshop on HPC Best Practices: File Systems and Archives

    Energy Technology Data Exchange (ETDEWEB)

    Hick, Jason; Hules, John; Uselton, Andrew

    2011-11-30

    The workshop on High Performance Computing (HPC) Best Practices on File Systems and Archives was the fifth in a series sponsored jointly by the Department Of Energy (DOE) Office of Science and DOE National Nuclear Security Administration. The workshop gathered technical and management experts for operations of HPC file systems and archives from around the world. Attendees identified and discussed best practices in use at their facilities, and documented findings for the DOE and HPC community in this report.

  1. The VERCE Science Gateway: enabling user friendly seismic waves simulations across European HPC infrastructures

    Science.gov (United States)

    Spinuso, Alessandro; Krause, Amy; Ramos Garcia, Clàudia; Casarotti, Emanuele; Magnoni, Federica; Klampanos, Iraklis A.; Frobert, Laurent; Krischer, Lion; Trani, Luca; David, Mario; Leong, Siew Hoon; Muraleedharan, Visakh

    2014-05-01

    The EU-funded project VERCE (Virtual Earthquake and seismology Research Community in Europe) aims to deploy technologies which satisfy the HPC and data-intensive requirements of modern seismology. As a result of VERCE's official collaboration with the EU project SCI-BUS, access to computational resources, like local clusters and international infrastructures (EGI and PRACE), is made homogeneous and integrated within a dedicated science gateway based on the gUSE framework. In this presentation we give a detailed overview of the progress achieved with the developments of the VERCE Science Gateway, according to a use-case driven implementation strategy. More specifically, we show how the computational technologies and data services have been integrated within a tool for Seismic Forward Modelling, whose objective is to offer the possibility to perform simulations of seismic waves as a service to the seismological community. We will introduce the interactive components of the OGC map based web interface and how it supports the user with setting up the simulation. We will go through the selection of input data, which are either fetched from federated seismological web services, adopting community standards, or provided by the users themselves by accessing their own document data store. The HPC scientific codes can be selected from a number of waveform simulators, currently available to the seismological community as batch tools or with limited configuration capabilities in their interactive online versions. The results will be staged out from the HPC via a secure GridFTP transfer to a VERCE data layer managed by iRODS. The provenance information of the simulation will be automatically cataloged by the data layer via NoSQL technologies. We will try to demonstrate how data access, validation and visualisation can be supported by a general purpose provenance framework which, besides common provenance concepts imported from the OPM and the W3C-PROV initiatives, also offers

  2. Study on seismic reliability for foundation grounds and surrounding slopes of nuclear power plants. Proposal of evaluation methodology and integration of seismic reliability evaluation system

    International Nuclear Information System (INIS)

    Ohtori, Yasuki; Kanatani, Mamoru

    2006-01-01

    This paper proposes an evaluation methodology for the annual probability of failure of soil structures subjected to earthquakes and integrates an analysis system for the seismic reliability of soil structures. The method is based on margin analysis, which evaluates the ground motion level at which the structure is damaged. First, a ground motion index that is strongly correlated with damage or response of the specific structure is selected. The ultimate strength in terms of the selected ground motion index is then evaluated. Next, the variation of soil properties is taken into account in the evaluation of the seismic stability of structures. The variation of the safety factor (SF) is evaluated and then converted into a variation of the specific ground motion index. Finally, the fragility curve is developed and the annual probability of failure is evaluated in combination with the seismic hazard curve. The system facilitates the assessment of seismic reliability. A random number generator, a dynamic analysis program and a stability analysis program are incorporated into one package. Once a structural model, the distribution of the soil properties, the input ground motions and so forth are defined, a list of safety factors for each sliding line is obtained. Monte Carlo Simulation (MCS), Latin Hypercube Sampling (LHS), the point estimation method (PEM) and the first order second moment (FOSM) method implemented in this system are also introduced. As numerical examples, a ground foundation and a surrounding slope are assessed using the proposed method and the integrated system. (author)
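
    The final step described above, combining a fragility curve with a seismic hazard curve to obtain an annual probability of failure, can be sketched numerically as follows; the hazard curve and the lognormal fragility parameters are invented, not taken from the paper:

```python
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import trapezoid

# Ground motion index grid (e.g., peak ground acceleration in g), illustrative range
a = np.linspace(0.01, 3.0, 600)

# Invented seismic hazard curve: annual frequency of exceeding level a
hazard = 1e-2 * (a / 0.1) ** -2.5

# Invented lognormal fragility curve: median capacity 0.8 g, log-standard deviation 0.4
fragility = lognorm.cdf(a, s=0.4, scale=0.8)

# Annual probability of failure: integrate the fragility curve against the
# (negative) slope of the hazard curve, P_f = -integral F(a) dH/da da
dHda = np.gradient(hazard, a)
annual_pf = -trapezoid(fragility * dHda, a)
print(f"annual probability of failure ~ {annual_pf:.2e}")
```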

  3. Continuous Security and Configuration Monitoring of HPC Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Lomeli, H. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bertsch, A. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fox, D. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-08

    Continuous security and configuration monitoring of information systems has been a time consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC Clusters. In conjunction with other configuration management systems, the reporting tool is to provide continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking

  4. ENHANCING PERFORMANCE OF AN HPC CLUSTER BY ADOPTING NONDEDICATED NODES

    OpenAIRE

    Pil Seong Park

    2015-01-01

    Personal-sized HPC clusters are widely used in many small labs, because they are cost-effective and easy to build. Instead of adding costly new nodes to old clusters, we may try to make use of the idle time of servers that normally work independently on the same LAN, especially during the night. However, such an extension across a firewall raises not only a security problem with NFS but also a load balancing problem caused by heterogeneity. In this paper, we propose a meth...

  5. Photovoltaic and Wind Turbine Integration Applying Cuckoo Search for Probabilistic Reliable Optimal Placement

    OpenAIRE

    R. A. Swief; T. S. Abdel-Salam; Noha H. El-Amary

    2018-01-01

    This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Reliability objective indices such as Energy Not Supplied, the System Average Interruption Frequency Index, and the System Average Interruption Duration Index are the main indices indicating reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place the protection devices, install the distributed generators, and determine the size of ...

  6. Photovoltaic and Wind Turbine Integration Applying Cuckoo Search for Probabilistic Reliable Optimal Placement

    Directory of Open Access Journals (Sweden)

    R. A. Swief

    2018-01-01

    Full Text Available This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Reliability objective indices such as Energy Not Supplied, the System Average Interruption Frequency Index, and the System Average Interruption Duration Index are the main indices indicating reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place the protection devices, install the distributed generators, and determine the size of distributed generators in radial feeders for reliability improvement. Distributed generation affects reliability, system power losses, and the voltage profile. The volatile behaviour of both photovoltaic cells and wind turbine farms affects the selection of protection devices and the allocation of distributed generators. To improve reliability, reconfiguration takes place before installing both protection devices and distributed generators. Assessment of consumer power system reliability is a vital part of distribution system behaviour and development. The distribution system reliability calculation relies on probabilistic reliability indices, which can predict the interruption profile of a distribution system based on the volatile behaviour of the added generators and of the load. The validity of the proposed algorithm has been tested using the standard IEEE 69-bus system.
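
    For reference, the indices named above have simple definitions; the sketch below (invented feeder data, not the IEEE 69-bus test system used in the paper) computes SAIFI, SAIDI and Energy Not Supplied from per-load-point failure rates and outage durations:

```python
# Each load point: number of customers, average load (kW),
# failure rate (interruptions/yr), average outage duration (h/interruption).
# All values are invented for illustration.
load_points = [
    {"customers": 200, "load_kw": 150.0, "lam": 0.4, "dur_h": 2.0},
    {"customers": 500, "load_kw": 400.0, "lam": 0.2, "dur_h": 1.5},
    {"customers": 100, "load_kw": 80.0,  "lam": 0.6, "dur_h": 3.0},
]

N = sum(lp["customers"] for lp in load_points)

# SAIFI: total customer interruptions per customer served (1/yr)
saifi = sum(lp["lam"] * lp["customers"] for lp in load_points) / N
# SAIDI: total customer interruption hours per customer served (h/yr)
saidi = sum(lp["lam"] * lp["dur_h"] * lp["customers"] for lp in load_points) / N
# ENS: energy not supplied (kWh/yr) = sum of load * expected annual outage time
ens = sum(lp["load_kw"] * lp["lam"] * lp["dur_h"] for lp in load_points)

print(f"SAIFI = {saifi:.2f} int./cust./yr, SAIDI = {saidi:.2f} h/cust./yr, ENS = {ens:.0f} kWh/yr")
```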

  7. The VERCE Science Gateway: Enabling User Friendly HPC Seismic Wave Simulations.

    Science.gov (United States)

    Casarotti, E.; Spinuso, A.; Matser, J.; Leong, S. H.; Magnoni, F.; Krause, A.; Garcia, C. R.; Muraleedharan, V.; Krischer, L.; Anthes, C.

    2014-12-01

    The EU-funded project VERCE (Virtual Earthquake and seismology Research Community in Europe) aims to deploy technologies which satisfy the HPC and data-intensive requirements of modern seismology. As a result of VERCE's official collaboration with the EU project SCI-BUS, access to computational resources, like local clusters and international infrastructures (EGI and PRACE), is made homogeneous and integrated within a dedicated science gateway based on the gUSE framework. In this presentation we give a detailed overview of the progress achieved with the developments of the VERCE Science Gateway, according to a use-case driven implementation strategy. More specifically, we show how the computational technologies and data services have been integrated within a tool for Seismic Forward Modelling, whose objective is to offer the possibility to perform simulations of seismic waves as a service to the seismological community. We will introduce the interactive components of the OGC map based web interface and how it supports the user with setting up the simulation. We will go through the selection of input data, which are either fetched from federated seismological web services, adopting community standards, or provided by the users themselves by accessing their own document data store. The HPC scientific codes can be selected from a number of waveform simulators, currently available to the seismological community as batch tools or with limited configuration capabilities in their interactive online versions. The results will be staged out via a secure GridFTP transfer to a VERCE data layer managed by iRODS. The provenance information of the simulation will be automatically cataloged by the data layer via NoSQL technologies. Finally, we will show an example of how the visualisation output of the gateway could be enhanced by the connection with immersive projection technology at the Virtual Reality and Visualisation Centre of Leibniz Supercomputing Centre (LRZ).

  8. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume is the reference manual for the Systems Analysis and Risk Assessment (SARA) System Version 5.0, a microcomputer-based system used to analyze the safety issues of a "family" [i.e., a power plant, a manufacturing facility, any facility on which a probabilistic risk assessment (PRA) might be performed]. The SARA database contains PRA data primarily for the dominant accident sequences of a family and descriptive information about the family including event trees, fault trees, and system model diagrams. The number of facility databases that can be accessed is limited only by the amount of disk storage available. To simulate changes to family systems, SARA users change the failure rates of initiating and basic events and/or modify the structure of the cut sets that make up the event trees, fault trees, and systems. The user then evaluates the effects of these changes through the recalculation of the resultant accident sequence probabilities and importance measures. The results are displayed in tables and graphs that may be printed for reports. A preliminary version of the SARA program was completed in August 1985 and has undergone several updates in response to user suggestions and to maintain compatibility with the other SAPHIRE programs. Version 5.0 of SARA provides the same capability as earlier versions and adds the ability to process unlimited cut sets; display fire, flood, and seismic data; and perform more powerful cut set editing

  9. Reliability of the Test of Integrated Language and Literacy Skills (TILLS)

    Science.gov (United States)

    Mailend, Marja-Liisa; Plante, Elena; Anderson, Michele A.; Applegate, E. Brooks; Nelson, Nickola W.

    2016-01-01

    Background: As new standardized tests become commercially available, it is critical that clinicians have access to the information about a test's psychometric properties, including aspects of reliability. Aims: The purpose of the three studies reported in this article was to investigate the reliability of a new test, the Test of Integrated…

  10. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    Science.gov (United States)

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are prime examples of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic, and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.

  11. Reliability Evaluation of a Single-phase H-bridge Inverter with Integrated Active Power Decoupling

    DEFF Research Database (Denmark)

    Tang, Junchaojie; Wang, Haoran; Ma, Siyuan

    2016-01-01

    Various power decoupling methods have been proposed recently to replace the DC-link Electrolytic Capacitors (E-caps) in single-phase conversion systems, in order to extend the lifetime and improve the reliability of the DC-link. However, it is still an open question whether the converter level reliability becomes better or not, since additional components are introduced and the loading of the existing components may be changed. This paper aims to study the converter level reliability of a single-phase full-bridge inverter with two kinds of active power decoupling module and to compare it with the traditional passive DC-link solution. The converter level reliability is obtained by component level electro-thermal stress modeling, lifetime model, Weibull distribution, and Reliability Block Diagram (RBD) method. The results are demonstrated by a 2 kW single-phase inverter application.
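
    A hedged sketch of the reliability-block-diagram step mentioned above, not the authors' electro-thermal models: component lifetimes are assumed Weibull-distributed and the converter is treated as a series system, so its reliability at time t is the product of the component reliabilities (all Weibull parameters are invented):

```python
import numpy as np

# Invented Weibull lifetime parameters per component: (beta, eta in hours)
components = {
    "dc_link_cap": (2.0, 120_000.0),
    "igbt_bridge": (1.8, 200_000.0),
    "decoupling_switch": (1.8, 180_000.0),
    "film_cap": (2.2, 300_000.0),
}

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta)."""
    return np.exp(-np.power(t / eta, beta))

def series_system_reliability(t, parts):
    """Series RBD: the converter works only if every component works."""
    r = 1.0
    for beta, eta in parts.values():
        r = r * weibull_reliability(t, beta, eta)
    return r

t = np.array([1.0, 5.0, 10.0]) * 8760.0  # 1, 5 and 10 years of operation
for years, r in zip((1, 5, 10), series_system_reliability(t, components)):
    print(f"R({years} yr) = {r:.4f}")
```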

  12. Climate simulations and services on HPC, Cloud and Grid infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But, although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R) ; the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund—ERDF and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  13. OCCAM: a flexible, multi-purpose and extendable HPC cluster

    Science.gov (United States)

    Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.

    2017-10-01

    The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.

  14. Simplifying the Development, Use and Sustainability of HPC Software

    Directory of Open Access Journals (Sweden)

    Jeremy Cohen

    2014-07-01

    Full Text Available Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS cloud computing become more widely accepted for high-performance computing (HPC, scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013 we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support sustainability of scientific software and help to widen access to it.

  15. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  16. Self-service for software development projects and HPC activities

    International Nuclear Information System (INIS)

    Husejko, M; Høimyr, N; Gonzalez, A; Koloventzos, G; Asbury, D; Trzcinska, A; Agtzidis, I; Botrel, G; Otto, J

    2014-01-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source developments such as Sourceforge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  17. I/O load balancing for big data HPC applications

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Arnab K. [Virginia Polytechnic Institute and State University; Goyal, Arpit [Virginia Polytechnic Institute and State University; Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Butt, Ali R. [Virginia Tech, Blacksburg, VA; Brim, Michael J. [ORNL; Srinivasa, Sangeetha B. [Virginia Polytechnic Institute and State University

    2018-01-01

    High Performance Computing (HPC) big data problems require efficient distributed storage systems. However, at scale, such storage systems often experience load imbalance and resource contention due to two factors: the bursty nature of scientific application I/O, and the complex I/O path that lacks centralized arbitration and control. For example, the extant Lustre parallel file system, which supports many HPC centers, comprises numerous components connected via custom network topologies, and serves varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, which creates bottlenecks and reduces overall application I/O performance. Existing solutions typically focus on per-application load balancing, and thus are not as effective given their lack of a global view of the system. In this paper, we propose a data-driven approach to load balance the I/O servers at scale, targeted at Lustre deployments. To this end, we design a global mapper on the Lustre Metadata Server, which gathers runtime statistics from key storage components on the I/O path, and applies Markov chain modeling and a minimum-cost maximum-flow algorithm to decide where data should be placed. Evaluation using a realistic system simulator and a real setup shows that our approach yields better load balancing, which in turn can improve end-to-end performance.
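
    The paper's placement decision combines Markov-chain load prediction with a minimum-cost maximum-flow solver; the sketch below is a deliberately simplified greedy stand-in (invented OST names and loads) that only conveys the basic idea of steering new file stripes toward the least-loaded storage targets:

```python
import heapq

# Invented current load (e.g., pending I/O in GB) per object storage target (OST)
ost_load = {"OST0": 120.0, "OST1": 40.0, "OST2": 85.0, "OST3": 10.0}

def place_stripes(stripe_sizes_gb, load):
    """Greedy placement: each new stripe goes to the currently least-loaded OST.
    (A simple stand-in for the min-cost max-flow formulation in the paper.)"""
    heap = [(l, name) for name, l in load.items()]
    heapq.heapify(heap)
    placement = []
    for size in stripe_sizes_gb:
        current, name = heapq.heappop(heap)
        placement.append((name, size))
        heapq.heappush(heap, (current + size, name))
    return placement

# Place eight 16 GB stripes of a new file
print(place_stripes([16.0] * 8, dict(ost_load)))
```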

  18. Sensor Selection and Data Validation for Reliable Integrated System Health Management

    Science.gov (United States)

    Garg, Sanjay; Melcher, Kevin J.

    2008-01-01

    For new access-to-space systems with challenging mission requirements, effective implementation of integrated system health management (ISHM) must be available early in the program to support the design of systems that are safe, reliable, and highly autonomous. Early ISHM availability is also needed to promote design for affordable operations; increased knowledge of functional health provided by ISHM supports construction of more efficient operations infrastructure. Lack of early ISHM inclusion in the system design process could result in retrofitting health management systems to augment and expand operational and safety requirements, thereby increasing program cost and risk due to increased instrumentation and computational complexity. Having the right sensors generating the required data to perform condition assessment, such as fault detection and isolation, with a high degree of confidence is critical to reliable operation of ISHM. Also, the data being generated by the sensors needs to be qualified to ensure that the assessments made by the ISHM are not based on faulty data. NASA Glenn Research Center has been developing technologies for sensor selection and data validation as part of the FDDR (Fault Detection, Diagnosis, and Response) element of the Upper Stage project of the Ares 1 launch vehicle development. This presentation will provide an overview of the GRC approach to sensor selection and data quality validation and will present recent results from applications that are representative of the complexity of propulsion systems for access-to-space vehicles. A brief overview of the sensor selection and data quality validation approaches is provided below. The NASA GRC-developed Systematic Sensor Selection Strategy (S4) is a model-based procedure for systematically and quantitatively selecting an optimal sensor suite to provide overall health assessment of a host system. S4 can be logically partitioned into three major subdivisions: the knowledge base, the down

  19. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  20. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  1. Integrated Reliability Estimation of a Nuclear Maintenance Robot including a Software

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kim, Jae Hee; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    Conventional reliability estimation techniques such as Fault Tree Analysis (FTA), Reliability Block Diagram (RBD), Markov Model, and Event Tree Analysis (ETA) have been widely used and approved in some industries. However, there are limitations when they are applied to complicated robot systems that include software, such as intelligent reactor inspection robots. Therefore an expert's judgment plays an important role in estimating the reliability of a complicated system in practice, because experts can deal with diverse evidence related to reliability and then perform an inference based on it. The proposed method in this paper combines qualitative and quantitative evidence and performs an inference like an expert would. Furthermore, unlike human experts, it does this work in a formal and quantitative way, by the benefit of Bayesian Nets (BNs)
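
    A hedged toy example of combining heterogeneous evidence in a Bayesian way, not the authors' network: a prior over the robot's per-mission failure probability is updated with quantitative test data and with a qualitative expert judgment encoded as a soft likelihood (all numbers are invented):

```python
import numpy as np
from scipy.stats import binom

# Discretized per-mission failure probability of the robot (invented setup)
p = np.linspace(1e-4, 0.2, 2000)
prior = np.ones_like(p)                      # flat prior over this range

# Quantitative evidence: 50 simulated missions with 1 observed failure (invented)
like_tests = binom.pmf(1, 50, p)

# Qualitative expert evidence encoded as a soft likelihood:
# "a failure probability above 5% is judged unlikely" (invented encoding)
like_expert = np.where(p <= 0.05, 1.0, 0.2)

posterior = prior * like_tests * like_expert
posterior /= posterior.sum()                 # normalize over the grid

mean_p = float((p * posterior).sum())
print(f"posterior mean failure probability: {mean_p:.4f}")
```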

  2. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    Science.gov (United States)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position, and other properties. Generally speaking, there are two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (a realization of the structure) represents one particle, and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template metaprogramming, which allow code for user-defined heterogeneous data structures to be generated automatically.
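
    SoAx itself is a C++ library and its interface is not reproduced in this record; the sketch below only illustrates the AoS-versus-SoA distinction in NumPy terms, with made-up particle attributes. In Python the measured gap is exaggerated by interpreter overhead, but the underlying argument is the same one SoAx exploits: per-attribute contiguous arrays allow vectorized, cache-friendly updates.

        import time
        import numpy as np

        N, DT = 1_000_000, 0.01

        # Array of Structures: one object per particle (identity, position, velocity).
        class ParticleAoS:
            __slots__ = ("pid", "x", "v")
            def __init__(self, pid, x, v):
                self.pid, self.x, self.v = pid, x, v

        aos = [ParticleAoS(i, float(i), 1.0) for i in range(N)]

        # Structure of Arrays: one contiguous array per particle property.
        soa = {"pid": np.arange(N), "x": np.arange(N, dtype=float), "v": np.ones(N)}

        t0 = time.perf_counter()
        for p in aos:                      # scattered objects, element-wise loop
            p.x += p.v * DT
        t_aos = time.perf_counter() - t0

        t0 = time.perf_counter()
        soa["x"] += soa["v"] * DT          # contiguous, vectorized update
        t_soa = time.perf_counter() - t0

        print(f"AoS update: {t_aos:.3f} s   SoA update: {t_soa:.3f} s")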

  3. SAPHIRE6.64, System Analysis Programs for Hands-on Integrated Reliability

    International Nuclear Information System (INIS)

    2001-01-01

    1 - Description of program or function: SAPHIRE is a collection of programs developed for the purpose of performing those functions necessary to create and analyze a complete Probabilistic Risk Assessment (PRA) primarily for nuclear power plants. The programs included in this suite are the Integrated Reliability and Risk Analysis System (IRRAS), the System Analysis and Risk Assessment (SARA) system, the Models And Results Database (MAR-D) system, and the Fault tree, Event tree and P and ID (FEP) editors. Previously these programs were released as separate packages. These programs include functions to allow the user to create event trees and fault trees, to define accident sequences and basic event failure data, to solve system and accident sequence fault trees, to quantify cut sets, and to perform uncertainty analysis on the results. Also included in this program are features to allow the analyst to generate reports and displays that can be used to document the results of an analysis. Since this software is a very detailed technical tool, the user of this program should be familiar with PRA concepts and the methods used to perform these analyses. 2 - Methods: SAPHIRE is written in MODULA-2 and uses an integrated commercial graphics package to interactively construct and edit fault trees. The fault tree solving methods used are industry recognized top down algorithms. For quantification, the program uses standard methods to propagate the failure information through the generated cut sets. SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE which automates the process for evaluating operational events at commercial nuclear power plants. Using GEM an analyst can estimate the risk associated with operational events (that is, perform a Level 1, Level 2, and Level 3 analysis for operational events) in a very efficient and expeditious manner. This on-line reference guide will
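
    SAPHIRE's own solving and quantification algorithms are not reproduced in this record; the sketch below only illustrates the standard quantification step the abstract refers to, using a hypothetical set of minimal cut sets and basic-event probabilities: the top-event probability is estimated with the rare-event approximation and with the min-cut upper bound.

        from math import prod

        # Illustrative basic-event failure probabilities (assumed values).
        basic_events = {"PUMP_A": 3e-3, "PUMP_B": 3e-3, "VALVE_C": 1e-4, "DG_FAIL": 5e-2}

        # Minimal cut sets of a hypothetical top event (any one set fails the system).
        minimal_cut_sets = [{"PUMP_A", "PUMP_B"}, {"VALVE_C"}, {"PUMP_A", "DG_FAIL"}]

        def cut_set_prob(cut_set):
            """Probability of one cut set, assuming independent basic events."""
            return prod(basic_events[e] for e in cut_set)

        cs_probs = [cut_set_prob(cs) for cs in minimal_cut_sets]

        p_rare = sum(cs_probs)                             # rare-event approximation
        p_mcub = 1.0 - prod(1.0 - p for p in cs_probs)     # min-cut upper bound

        print(f"rare-event approximation: {p_rare:.3e}")
        print(f"min-cut upper bound     : {p_mcub:.3e}")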

  4. La confiabilidad integral del activo. // The reliability of a physical asset.

    Directory of Open Access Journals (Sweden)

    L. F. Sexto Cabrera

    2008-01-01

    Full Text Available This article discusses several aspects that influence the reliability of a physical asset. Different classifications of failures are proposed, and some elements of human reliability analysis and of the types of errors are presented. By way of introduction, tolerated chronic defects and the Juran trilogy model are discussed, and reflections are offered on the processes of gradual deterioration that any asset may undergo. Finally, the costs of reliability are discussed. Key words: reliability, failure, failure mode, chronic defects, human reliability, reliability costs.

  5. Optimal integrated sizing and planning of hubs with midsize/large CHP units considering reliability of supply

    International Nuclear Information System (INIS)

    Moradi, Saeed; Ghaffarpour, Reza; Ranjbar, Ali Mohammad; Mozaffari, Babak

    2017-01-01

    Highlights: • New hub planning formulation is proposed to exploit assets of midsize/large CHPs. • Linearization approaches are proposed for the two-variable nonlinear CHP fuel function. • Efficient operation of the addressed CHPs & hub devices at contingencies is considered. • Reliability-embedded integrated planning & sizing is formulated as one single MILP. • Noticeable results for costs & reliability-embedded planning due to mid/large CHPs. - Abstract: Use of multi-carrier energy systems and the energy hub concept has recently become a widespread trend worldwide. However, most of the related research specializes in CHP systems with constant electricity/heat ratios and linear operating characteristics. In this paper, integrated energy hub planning and sizing is developed for energy systems with mid-scale and large-scale CHP units, taking their wide operating range into consideration. The proposed formulation aims to make the best use of the beneficial degrees of freedom associated with these units for decreasing total costs and increasing reliability. High-accuracy piecewise linearization techniques with approximation errors of about 1% are introduced for the nonlinear two-dimensional CHP input-output function, making it possible to successfully integrate the CHP sizing. Efficient operation of the CHP units and the hub at contingencies is captured via a new formulation, which is developed to be incorporated into the planning and sizing problem. Optimal operation, planning, sizing, and contingency operation of hub components are integrated and formulated as a single comprehensive MILP problem. Results on a case study with midsize CHPs reveal a 33% reduction in total costs, and it is demonstrated that the proposed formulation eliminates the need for additional components/capacities for increasing reliability of supply.
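
    The paper's two-variable linearization of the CHP fuel function is not reproduced here; as a minimal sketch of the underlying idea, the code below approximates an assumed one-dimensional nonlinear fuel-consumption curve with a few linear segments between breakpoints and reports the worst-case approximation error, the kind of ~1% figure quoted in the abstract.

        import numpy as np

        # Assumed nonlinear fuel-consumption curve of a CHP unit (MW fuel vs MW electric).
        def fuel(p_el):
            return 0.8 + 1.9 * p_el + 0.004 * p_el ** 2

        p_min, p_max = 10.0, 100.0                  # operating range [MW]
        breakpoints = np.linspace(p_min, p_max, 6)  # 5 linear segments
        f_bp = fuel(breakpoints)

        def fuel_piecewise(p_el):
            """Piecewise-linear interpolation between the breakpoints."""
            return np.interp(p_el, breakpoints, f_bp)

        grid = np.linspace(p_min, p_max, 2001)
        rel_err = np.abs(fuel_piecewise(grid) - fuel(grid)) / fuel(grid)
        print(f"max relative error with {len(breakpoints) - 1} segments: {rel_err.max():.3%}")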

  6. Innovative HPC architectures for the study of planetary plasma environments

    Science.gov (United States)

    Amaya, Jorge; Wolf, Anna; Lembège, Bertrand; Zitz, Anke; Alvarez, Damian; Lapenta, Giovanni

    2016-04-01

    DEEP-ER is a European Commission funded project that develops a new type of High Performance Computer architecture. The revolutionary system is currently used by KU Leuven to study the effects of the solar wind on the global environments of the Earth and Mercury. The new architecture combines the versatility of Intel Xeon computing nodes with the power of the upcoming Intel Xeon Phi accelerators. Contrary to classical heterogeneous HPC architectures, where it is customary to find CPUs and accelerators in the same computing nodes, in the DEEP-ER system the CPU nodes are grouped together (the Cluster) separately from the accelerator nodes (the Booster). The system is equipped with a state-of-the-art interconnection network, a highly scalable and fast I/O subsystem, and a resiliency system for failure recovery. The final objective of the project is to introduce a scalable system that can be used to create the next generation of exascale supercomputers. The code iPic3D from KU Leuven is being adapted to this new architecture. This particle-in-cell code can now perform the computation of the electromagnetic fields on the Cluster while the particles are moved on the Booster side. Using fast and scalable Xeon Phi accelerators in the Booster, we can introduce many more particles per cell in the simulation than is possible in the current generation of HPC systems, allowing fully kinetic plasmas to be calculated with very low interpolation noise. The system will be used to perform fully kinetic, low-noise, 3D simulations of the interaction of the solar wind with the magnetospheres of the Earth and Mercury. Preliminary simulations have been performed in other HPC centers in order to compare the results on different systems. In this presentation we show the complexity of the plasma flow around the planets, including the development of hydrodynamic instabilities at the flanks, the presence of the collision-less shock, the magnetosheath, the magnetopause, reconnection zones, the formation of the

  7. Utilizing HPC Network Technologies in High Energy Physics Experiments

    CERN Document Server

    AUTHOR|(CDS)2088631; The ATLAS collaboration

    2017-01-01

    Because of their performance characteristics, high-performance fabrics like InfiniBand or OmniPath are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on applications in high-performance computing with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems, a custom library, NetIO, is presented and compared against existing technologies. NetIO has a message queue-like interface which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable bac...

  8. Behavior of HPC with Fly Ash after Elevated Temperature

    Directory of Open Access Journals (Sweden)

    Huai-Shuai Shang

    2013-01-01

    Full Text Available For use in fire resistance calculations, the relevant thermal properties of high-performance concrete (HPC) with fly ash were determined through an experimental study. These properties included compressive strength, cubic compressive strength, cleavage strength, flexural strength, and the ultrasonic velocity at various temperatures (20, 100, 200, 300, 400 and 500 °C) for high-performance concrete. The effect of temperature on compressive strength, cubic compressive strength, cleavage strength, flexural strength, and the ultrasonic velocity of the high-performance concrete with fly ash was discussed according to the experimental results. The change of surface characteristics with temperature was observed. The results can serve as a reference for the maintenance, design, and life prediction of high-performance concrete engineering, such as high-rise buildings, subjected to elevated temperatures.

  9. Behaviour of slag HPC submitted to immersion-drying cycles

    Directory of Open Access Journals (Sweden)

    Rabah Chaid

    2016-04-01

    Full Text Available This article is part of a summary of work developed in conjunction with the Laboratory of Civil Engineering and Mechanical Engineering at INSA Rennes and the Research Unit: Materials, Processes and Environment, University of Boumerdes. One of the objectives was to promote, through studies of variants, the use of local cementitious additions in the formulation of high performance concretes (HPC). The binding contribution of mineral additions to the physical, mechanical, and durability properties of concrete was evaluated by an experimental methodology intended to highlight their original granular and pozzolanic effects. The results show that the contribution of the cement-slag couple to the intensification of the matrix is higher than that obtained when the cement is not substituted by the addition. Therefore, a significant improvement in the performance of the concretes was observed, despite the adverse action of immersion-drying cycles maintained for 365 days.

  10. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE)

    International Nuclear Information System (INIS)

    C. L. Smith

    2006-01-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system's response to initiating events and quantify associated consequential outcome frequencies. Specifically, for nuclear power plant applications, SAPHIRE can identify important contributors to core damage (Level 1 PRA) and containment failure during a severe accident which lead to releases (Level 2 PRA). It can be used for a PRA where the reactor is at full power, low power, or at shutdown conditions. Furthermore, it can be used to analyze both internal and external initiating events and has special features for transforming an internal events model to a model for external events, such as flooding and fire analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to the public and environment (Level 3 PRA). SAPHIRE also includes a separate module called the Graphical Evaluation Module (GEM). GEM is a special user interface linked to SAPHIRE that automates the SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events (for example, to calculate a conditional core damage probability) very efficiently and expeditiously. This report provides an overview of the functions

  11. Automated Energy Distribution and Reliability System: Validation Integration - Results of Future Architecture Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Buche, D. L.

    2008-06-01

    This report describes Northern Indiana Public Service Co.'s project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is the second in a series of reports detailing this effort.

  12. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    International Nuclear Information System (INIS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Quast, Günter; Janczyk, Michael; Von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-01-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  13. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  14. Passive safety systems reliability and integration of these systems in nuclear power plant PSA

    International Nuclear Information System (INIS)

    La Lumia, V.; Mercier, S.; Marques, M.; Pignatel, J.F.

    2004-01-01

    Innovative nuclear reactor concepts could lead to the use of passive safety features in combination with active safety systems. A passive system does not need active components, external energy, signals, or human interaction to operate. These are attractive advantages for nuclear plant safety improvements and economic competitiveness. However, specific reliability problems, linked to physical phenomena, can cause the physical process to stop. In this context, the European Commission (EC) started the RMPS (Reliability Methods for Passive Safety functions) programme. In this RMPS programme, a quantitative reliability evaluation of the RP2 system (Residual Passive heat Removal system on the Primary circuit) has been realised, and the results introduced into a simplified PSA (Probabilistic Safety Assessment). The aim is to gain experience in defining characteristic parameters for reliability evaluation and for PSAs that include passive systems. The simplified PSA, using the event tree method, is carried out for the total loss of power supplies initiating event leading to severe core damage. Both failures of components and failures of the physical process involved (e.g. natural convection) are taken into account by a specific method. The physical process failure probabilities are assessed through uncertainty analyses based on assumed probability density functions for the characteristic parameters of the RP2 system. The probabilities are calculated by Monte Carlo simulation coupled to the CATHARE thermal-hydraulic code. The yearly frequency of severe core damage is evaluated for each accident sequence. This analysis has identified the influence of the passive system RP2 and proposes a re-dimensioning of the RP2 system in order to satisfy the probabilistic safety objectives for severe reactor core damage. (authors)
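
    The RMPS methodology couples its sampling to the CATHARE thermal-hydraulic code, which cannot be reproduced here; the sketch below only illustrates the generic step the abstract describes, using an assumed algebraic surrogate in place of the code: characteristic parameters are sampled from assumed probability density functions, and the fraction of samples for which natural-circulation heat removal falls short of the decay power gives the failure probability of the passive function.

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000  # Monte Carlo samples

        # Assumed distributions of the characteristic parameters of the passive system.
        decay_power = rng.normal(5.0, 0.5, N)            # MW to be removed
        loop_resistance = rng.lognormal(0.0, 0.2, N)     # dimensionless friction factor
        delta_t = rng.normal(40.0, 5.0, N)               # hot/cold leg temperature difference [K]

        # Crude algebraic surrogate for natural-circulation heat removal (not CATHARE).
        removal_capacity = 0.14 * delta_t / np.sqrt(loop_resistance)   # MW, assumed relation

        failed = removal_capacity < decay_power
        p_fail = failed.mean()
        std_err = np.sqrt(p_fail * (1.0 - p_fail) / N)

        print(f"failure probability of the passive function: {p_fail:.4f} +/- {std_err:.4f}")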

  15. Probabilistic safety assessment of Tehran Research Reactor using systems analysis programs for hands-on integrated reliability evaluations

    International Nuclear Information System (INIS)

    Hosseini, M.H.; Nematollahi, M.R.; Sepanloo, K.

    2004-01-01

    Probabilistic safety assessment is found to be a practical tool for research reactor safety due to the intense involvement of human interactions in an experimental facility. In this document the application of probabilistic safety assessment to the Tehran Research Reactor is presented. The Level 1 probabilistic safety assessment application involved: familiarization with the plant, selection of accident initiators, mitigating functions and system definitions, event tree construction and quantification, fault tree construction and quantification, human reliability, component failure database development, and dependent failure analysis. Each of the steps of the analysis given above is discussed with highlights from the selected results. Quantification of the constructed models is done using the Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) software

  16. Energy efficient HPC on embedded SoCs : optimization techniques for mali GPU

    OpenAIRE

    Grasso, Ivan; Radojkovic, Petar; Rajovic, Nikola; Gelado Fernandez, Isaac; Ramírez Bellido, Alejandro

    2014-01-01

    A lot of effort from academia and industry has been invested in exploring the suitability of low-power embedded technologies for HPC. Although state-of-the-art embedded systems-on-chip (SoCs) inherently contain GPUs that could be used for HPC, their performance and energy capabilities have never been evaluated. Two reasons contribute to this. Primarily, embedded GPUs have, until now, not supported 64-bit floating point arithmetic - a requirement for HPC. Secondly, embedded GPUs did not pr...

  17. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    Science.gov (United States)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) designed by the conventional (ACI) method and the densified mixture design algorithm (DMDA) method, respectively. The engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and further showed better performance in terms of strength and durability compared to the OPC.

  18. International Energy Agency's Heat Pump Centre (IEA-HPC) Annual National Team Working Group Meeting

    Science.gov (United States)

    Broders, M. A.

    1992-09-01

    The traveler, serving as Delegate from the United States Advanced Heat Pump National Team, participated in the activities of the fourth IEA-HPC National Team Working Group meeting. Highlights of this meeting included review and discussion of 1992 IEA-HPC activities and accomplishments, introduction of the Switzerland National Team, and development of the 1993 IEA-HPC work program. The traveler also gave a formal presentation about the Development and Activities of the IEA Advanced Heat Pump U.S. National Team.

  19. Improving the high performance concrete (HPC behaviour in high temperatures

    Directory of Open Access Journals (Sweden)

    Cattelan Antocheves De Lima, R.

    2003-12-01

    Full Text Available High performance concrete (HPC) is an interesting material that has long been attracting the interest of the scientific and technical community, due to the clear advantages obtained in terms of mechanical strength and durability. Given these better characteristics, HPC, in its various forms, has been gradually replacing normal strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and low permeability typical of HPC can result in explosive spalling under certain thermal and mechanical conditions, such as when concrete is subject to rapid temperature rises during a fire. This behaviour is caused by the build-up of internal water pressure in the pore structure during heating, and by stresses originating from thermal deformation gradients. Although there are still a limited number of experimental programs in this area, some researchers have reported that the addition of polypropylene fibers to HPC is a suitable way to avoid explosive spalling under fire conditions. This change in behavior derives from the fact that polypropylene fibers melt at high temperatures and leave a pathway for heated gas to escape the concrete matrix, therefore allowing the outward migration of water vapor and resulting in the reduction of internal pore pressure. The present research investigates the behavior of high performance concrete at high temperatures, especially when polypropylene fibers are added to the mix.

    High-strength concrete (HSC) is a material of great interest to the scientific and technical community, owing to the clear advantages obtained in terms of mechanical strength and durability. Because of these characteristics, HSC, in its various forms, is in some applications gradually replacing normal-strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and the low permeability

  20. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher order data products, as well as user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high performance disk storage (SSD) for the hot areas and less expensive slower disk for the cold ones, thereby optimizing price to performance. From a compute perspective, OT is looking at cloud based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute intensive workloads like parallel computation of hydrologic routing on high resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user
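
    The record describes the hot/cold tiering idea without giving OT's implementation; the sketch below is only an illustration of that idea with a made-up access log and threshold: tiles of a dataset are classified by access count and assigned to an SSD tier or a slower-disk tier.

        from collections import Counter

        # Hypothetical access log: one (dataset, tile_id) entry per user request.
        access_log = [
            ("lidar_socal", "t_012"), ("lidar_socal", "t_012"), ("lidar_socal", "t_013"),
            ("lidar_socal", "t_012"), ("srtm_global", "t_871"), ("lidar_socal", "t_013"),
            ("srtm_global", "t_900"),
        ]

        HOT_THRESHOLD = 2  # accesses within the observation window (assumed policy)

        def tier_assignment(log, threshold):
            """Return {(dataset, tile): 'ssd' | 'disk'} based on access frequency."""
            counts = Counter(log)
            return {tile: ("ssd" if n >= threshold else "disk") for tile, n in counts.items()}

        for (dataset, tile), tier in sorted(tier_assignment(access_log, HOT_THRESHOLD).items()):
            print(f"{dataset}/{tile} -> {tier}")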

  1. Utilizing clad piping to improve process plant piping integrity, reliability, and operations

    International Nuclear Information System (INIS)

    Chakravarti, B.

    1996-01-01

    During the past four years, carbon steel piping clad with type 304L (UNS S30403) stainless steel has been used with exceptional success to solve the flow accelerated corrosion (FAC) problem in nuclear power plants. The product is designed to allow "like for like" replacement of damaged carbon steel components, where the carbon steel remains the pressure boundary and the type 304L (UNS S30403) stainless steel provides the corrosion allowance. More than 3000 feet of piping and 500 fittings in sizes from 6 to 36-in. NPS have been installed in the extraction steam and other lines of these power plants to improve reliability, eliminate inspection programs, reduce O and M costs, and provide operational benefits. This concept of utilizing clad piping, with a conservatively selected high-alloy material as the cladding, to solve various corrosion problems in industrial and process plants can provide similarly significant benefits in controlling corrosion problems, minimizing maintenance cost, and improving operation and reliability, thereby controlling performance and risks in a highly cost-effective manner. This paper will present various material combinations and applications that appear ideally suited for use of the clad piping components in process plants

  2. An integral equation approach to the interval reliability of systems modelled by finite semi-Markov processes

    International Nuclear Information System (INIS)

    Csenki, A.

    1995-01-01

    The interval reliability for a repairable system which alternates between working and repair periods is defined as the probability of the system being functional throughout a given time interval. In this paper, a set of integral equations is derived for this dependability measure, under the assumption that the system is modelled by an irreducible finite semi-Markov process. The result is applied to the semi-Markov model of a two-unit system with sequential preventive maintenance. The method used for the numerical solution of the resulting system of integral equations is a two-point trapezoidal rule. The implementation environment is the matrix computation package MATLAB on the Apple Macintosh SE/30. The numerical results are discussed and compared with those from simulation
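
    The paper's system of integral equations for semi-Markov interval reliability is not reproduced here; the sketch below applies the same two-point trapezoidal rule to a simpler renewal-type equation with a known closed form, the point availability of a unit with exponential up times (rate lam) and exponential repair times (rate mu), so the numerical error can be checked. The rates and step size are assumptions.

        import numpy as np

        # Two-point trapezoidal-rule solution of a renewal-type integral equation
        # (a simpler relative of the interval-reliability equations in the paper):
        #   A(t) = exp(-lam * t) + integral_0^t f(s) * A(t - s) ds
        # where f is the density of one failure-plus-repair cycle of a unit with
        # exponential up time (rate lam) and exponential repair time (rate mu).
        lam, mu = 0.1, 1.0        # assumed failure and repair rates [1/h]
        T, h = 50.0, 0.05         # time horizon [h] and step size
        t = np.arange(0.0, T + h, h)
        n = len(t)

        R = np.exp(-lam * t)                                               # survival of the up period
        f = lam * mu / (mu - lam) * (np.exp(-lam * t) - np.exp(-mu * t))   # cycle-length density

        A = np.empty(n)
        A[0] = 1.0
        for i in range(1, n):
            conv = 0.5 * f[i] * A[0] + np.dot(f[1:i], A[i - 1:0:-1])       # trapezoidal weights
            A[i] = (R[i] + h * conv) / (1.0 - 0.5 * h * f[0])

        exact = mu / (lam + mu) + lam / (lam + mu) * np.exp(-(lam + mu) * t)
        print(f"max abs error vs closed-form availability: {np.max(np.abs(A - exact)):.2e}")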

  3. Perm State University HPC-hardware and software services: capabilities for aircraft engine aeroacoustics problems solving

    Science.gov (United States)

    Demenev, A. G.

    2018-02-01

    The present work is devoted to analyzing the high-performance computing (HPC) infrastructure capabilities available at Perm State University for solving aircraft engine aeroacoustics problems. We explore the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems to handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including “UEC-Aviadvigatel” JSC (our industrial partner in Perm, Russia), require such methods/solvers to optimize aircraft engine geometry for fan noise reduction. We analysed the Perm State University HPC hardware resources and software services with a view to using them efficiently. The results demonstrate that the Perm State University HPC infrastructure is mature enough to face industrial-scale problems in developing a CAE system with HPC methods and CFD solvers.

  4. High Temperature Exposure of HPC – Experimental Analysis of Residual Properties and Thermal Response

    Directory of Open Access Journals (Sweden)

    Pavlík Zbyšek

    2016-01-01

    Full Text Available The effect of high temperature exposure on the properties of a newly designed High Performance Concrete (HPC) is studied in the paper. The HPC samples are exposed to temperatures of 200, 400, 600, 800, and 1000 °C, respectively. Among the basic physical properties, bulk density, matrix density and total open porosity are measured. The mechanical resistance against disruptive temperature action is characterised by compressive strength, flexural strength and dynamic modulus of elasticity. To study the chemical and physical processes in HPC during its high-temperature exposure, Simultaneous Thermal Analysis (STA) is performed. The linear thermal expansion coefficient is determined as a function of temperature using thermodilatometry (TDA). In order to describe the changes in the microstructure of HPC induced by high temperature loading, MIP measurement of the pore size distribution is done. An increase of the total open porosity and a related decrease of the mechanical parameters were identified for temperatures higher than 200 °C.

  5. Advanced High and Low Fidelity HPC Simulations of FCS Concept Designs for Dynamic Systems

    National Research Council Canada - National Science Library

    Sandhu, S. S; Kanapady, R; Tamma, K. K

    2004-01-01

    ...) resources of many Army initiatives. In this paper we present a new and advanced HPC based rigid and flexible modeling and simulation technology capable of adaptive high/low fidelity modeling that is useful in the initial design concept...

  6. Accelerating Memory-Access-Limited HPC Applications via Novel Fast Data Compression, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — A fast-paced, continual increase in the ratio of CPU to memory speed feeds an exponentially growing limitation on extracting performance from HPC systems. Breaking...

  7. Accelerating Memory-Access-Limited HPC Applications via Novel Fast Data Compression, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A fast-paced, continual increase in the ratio of CPU to memory speed feeds an exponentially growing limitation on extracting performance from HPC systems. Ongoing...

  8. STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC

    Science.gov (United States)

    Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark

    2017-10-01

    As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a necessity. Linux containers may very well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach vast elastic resources on such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow on the Cori HPC system at NERSC. The STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, which was carried over ESnet after optimizing the end points, and 2) the scalable deployment of the conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.

  9. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in the analysis of very...
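
    The record names two application areas, structural reliability via numerical integration and systems reliability via Monte Carlo simulation; the sketch below illustrates only the first, for the textbook case of a normally distributed resistance R and load S, where the numerically integrated failure probability P(R < S) can be checked against the closed form Phi(-beta). The distribution parameters are assumptions.

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.stats import norm

        # Structural reliability: failure when the load effect S exceeds the resistance R.
        mu_r, sigma_r = 320.0, 25.0   # resistance [MPa], assumed
        mu_s, sigma_s = 250.0, 30.0   # load effect [MPa], assumed

        # Numerical integration: P_f = integral of f_S(s) * P(R < s) ds.
        s = np.linspace(mu_s - 8 * sigma_s, mu_s + 8 * sigma_s, 20001)
        integrand = norm.pdf(s, mu_s, sigma_s) * norm.cdf(s, mu_r, sigma_r)
        p_f_numeric = trapezoid(integrand, s)

        # Closed form for two independent normal variables: P_f = Phi(-beta).
        beta = (mu_r - mu_s) / np.sqrt(sigma_r ** 2 + sigma_s ** 2)
        p_f_exact = norm.cdf(-beta)

        print(f"numerical integration : {p_f_numeric:.3e}")
        print(f"closed form Phi(-beta): {p_f_exact:.3e}  (beta = {beta:.2f})")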

  10. A New Biobjective Model to Optimize Integrated Redundancy Allocation and Reliability-Centered Maintenance Problems in a System Using Metaheuristics

    Directory of Open Access Journals (Sweden)

    Shima MohammadZadeh Dogahe

    2015-01-01

    Full Text Available A novel integrated model is proposed to optimize the redundancy allocation problem (RAP) and reliability-centered maintenance (RCM) simultaneously. A system of both repairable and nonrepairable components has been considered. In this system, electronic components are nonrepairable while mechanical components are mostly repairable. For nonrepairable components, a redundancy allocation problem is dealt with to determine the optimal redundancy strategy and the number of redundant components to be implemented in each subsystem. In addition, a maintenance scheduling problem is considered for repairable components in order to identify the best maintenance policy and optimize system reliability. Both active and cold standby redundancy strategies have been taken into account for the electronic components (see the sketch after this abstract). Also, the net present value of the secondary cost, including operational and maintenance costs, has been calculated. The problem is formulated as a biobjective mathematical programming model aiming to reach a tradeoff between system reliability and cost. Three metaheuristic algorithms are employed to solve the proposed model: the Nondominated Sorting Genetic Algorithm (NSGA-II), Multiobjective Particle Swarm Optimization (MOPSO), and the Multiobjective Firefly Algorithm (MOFA). Several test problems are solved using the mentioned algorithms to test the efficiency and effectiveness of the solution approaches, and the obtained results are analyzed.
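
    The paper's biobjective model and metaheuristics are not reproduced here; the sketch below only illustrates the two redundancy strategies the abstract mentions, comparing the mission reliability of a subsystem of n identical exponential components under active (hot) redundancy and under cold standby with perfect switching. The failure rate and mission time are assumptions.

        from math import exp, factorial

        lam = 2e-4   # component failure rate [1/h], assumed
        t = 5000.0   # mission time [h], assumed

        def r_active(n: int) -> float:
            """1-out-of-n active (hot) redundancy: the system fails when all units fail."""
            return 1.0 - (1.0 - exp(-lam * t)) ** n

        def r_cold_standby(n: int) -> float:
            """Cold standby with perfect, instantaneous switching: Erlang survival function."""
            return exp(-lam * t) * sum((lam * t) ** k / factorial(k) for k in range(n))

        for n in (1, 2, 3):
            print(f"n={n}: active={r_active(n):.6f}  cold standby={r_cold_standby(n):.6f}")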

  11. Using Formal Grammars to Predict I/O Behaviors in HPC: The Omnisc'IO Approach

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Ibrahim, Shadi; Antoniu, Gabriel; Ross, Rob

    2016-08-01

    The increasing gap between the computation performance of post-petascale machines and the performance of their I/O subsystem has motivated many I/O optimizations including prefetching, caching, and scheduling. In order to further improve these techniques, modeling and predicting spatial and temporal I/O patterns of HPC applications as they run has become crucial. In this paper we present Omnisc'IO, an approach that builds a grammar-based model of the I/O behavior of HPC applications and uses it to predict when future I/O operations will occur, and where and how much data will be accessed. To infer grammars, Omnisc'IO is based on StarSequitur, a novel algorithm extending Nevill-Manning's Sequitur algorithm. Omnisc'IO is transparently integrated into the POSIX and MPI I/O stacks and does not require any modification in applications or higher-level I/O libraries. It works without any prior knowledge of the application and converges to accurate predictions of any N future I/O operations within a couple of iterations. Its implementation is efficient in both computation time and memory footprint.

  12. Modeling the Performance of Fast Mulipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and address these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.

  13. A ``Cyber Wind Facility'' for HPC Wind Turbine Field Experiments

    Science.gov (United States)

    Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue

    2013-03-01

    The Penn State "Cyber Wind Facility" (CWF) is a high-fidelity multi-scale high performance computing (HPC) environment in which "cyber field experiments" are designed and "cyber data" collected from wind turbines operating within the atmospheric boundary layer (ABL) environment. Conceptually the "facility" is akin to a high-tech wind tunnel with controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field and forced by true atmospheric turbulence with controlled stability state. The CWF is created from state-of-the-art high-accuracy technology geometry and grid design and numerical methods, and with high-resolution simulation strategies that blend unsteady RANS near the surface with high fidelity large-eddy simulation (LES) in separated boundary layer, blade and rotor wake regions, embedded within high-resolution LES of the ABL. CWF experiments complement physical field facility experiments that can capture wider ranges of meteorological events, but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.

  14. HPC Co-operation between industry and university

    International Nuclear Information System (INIS)

    Ruhle, R.

    2003-01-01

    The full text of publication follows. Some years ago industry and universities were using the same kind of high performance computers. Therefore it seemed appropriate to run the systems in common. The synergies achieved are larger systems with better capabilities, shared skills in operating and using the system, and lower operating cost because of the larger scale of operations. An example of a business model which allows that kind of co-operation will be demonstrated. Recently more and more simulations, especially in the automotive industry, are using PC clusters. A small number of PCs are used for one simulation, but the cluster is used for a large number of simulations as a throughput device. These devices are easily installed at the department level, and it is difficult to achieve better cost at a central site, mainly because of the cost of the network. This is in contrast to scientific needs, which still require capability computing. In the presentation, strategies will be discussed for which cooperation potential in HPC (high performance computing) still exists. These are: to install heterogeneous computer farms, which allow the best computer to be used for each application; to improve the quality of large scale simulation models to be used in design calculations; or to form expert teams from industry and university to solve difficult problems in industry applications. Some examples of this co-operation are shown

  15. Spark and HPC for High Energy Physics Data Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sehrish, Saba; Kowalkowski, Jim; Paterno, Marc

    2017-05-01

    A full High Energy Physics (HEP) data analysis is divided into multiple data reduction phases. Processing within these phases is extremely time consuming; therefore intermediate results are stored in files held in mass storage systems and referenced as part of large datasets. This processing model limits what can be done with interactive data analytics. Growth in the size and complexity of experimental datasets, along with emerging big data tools, is beginning to cause changes to the traditional ways of doing data analyses. The use of big data tools for HEP analysis looks promising, mainly because extremely large HEP datasets can be represented and held in memory across a system and accessed interactively by encoding an analysis using high-level programming abstractions. The mainstream tools, however, are not designed for scientific computing or for exploiting the available HPC platform features. We use an example from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in Geneva, Switzerland. The LHC is the highest energy particle collider in the world. Our use case focuses on searching for new types of elementary particles explaining Dark Matter in the universe. We use HDF5 as our input data format, and Spark to implement the use case. We show the benefits and limitations of using Spark with HDF5 on Edison at NERSC.
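
    The CMS analysis code itself is not part of this record; the sketch below only illustrates the general pattern of reading event-level quantities from HDF5 files into Spark and applying a simple selection. The file names, the HDF5 dataset path, and the missing-energy threshold are made-up assumptions.

        # Hypothetical sketch: pull event-level quantities from HDF5 files into Spark
        # and apply a simple missing-energy selection.
        import h5py
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("dark-matter-selection").getOrCreate()
        sc = spark.sparkContext

        hdf5_files = ["events_part1.h5", "events_part2.h5"]   # assumed file layout

        def read_missing_et(path):
            """Yield the missing transverse energy of each event in one HDF5 file."""
            with h5py.File(path, "r") as f:
                for met in f["events/missing_et"][:]:         # assumed dataset name
                    yield float(met)

        met_rdd = sc.parallelize(hdf5_files, len(hdf5_files)).flatMap(read_missing_et)

        n_selected = met_rdd.filter(lambda met: met > 200.0).count()   # GeV cut, assumed
        print(f"events passing the missing-ET cut: {n_selected}")

        spark.stop()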

  16. HPC Colony II Consolidated Annual Report: July-2010 to June-2011

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Terry R [ORNL

    2011-06-01

    This report provides a brief progress synopsis of the HPC Colony II project for the period of July 2010 to June 2011. HPC Colony II is a 36-month project and this report covers project months 10 through 21. It includes a consolidated view of all partners (Oak Ridge National Laboratory, IBM, and the University of Illinois at Urbana-Champaign) as well as detail for Oak Ridge. Highlights are noted and fund status data (burn rates) are provided.

  17. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    OpenAIRE

    Netto, Marco A. S.; Calheiros, Rodrigo N.; Rodrigues, Eduardo R.; Cunha, Renato L. F.; Buyya, Rajkumar

    2017-01-01

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources - steady (and sensitive) workloads can run on on-pr...

  18. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0, technical reference manual

    International Nuclear Information System (INIS)

    Russell, K.D.; Atwood, C.L.; Galyean, W.J.; Sattison, M.B.; Rasmuson, D.M.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume provides information on the principles used in the construction and operation of Version 5.0 of the Integrated Reliability and Risk Analysis System (IRRAS) and the System Analysis and Risk Assessment (SARA) system. It summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms that these programs use to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that are appropriate under various assumptions concerning repairability and mission time. It defines the measures of basic event importance that these programs can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by these programs to generate random basic event probabilities from various distributions. Further references are given, and a detailed example of the reduction and quantification of a simple fault tree is provided in an appendix
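
    SAPHIRE's uncertainty engine is not reproduced in this record; the sketch below only mirrors the simple Monte Carlo variant described here, sampling lognormally distributed basic-event probabilities (specified by a median and an error factor) and propagating them through the min-cut upper bound of a small, hypothetical set of minimal cut sets.

        import numpy as np

        rng = np.random.default_rng(7)
        N = 20_000  # Monte Carlo samples

        # Illustrative minimal cut sets over three basic events A, B, C.
        cut_sets = [("A", "B"), ("C",)]

        # Lognormal uncertainty on each basic-event probability: (median, error factor).
        params = {"A": (1e-3, 3.0), "B": (2e-3, 3.0), "C": (5e-5, 10.0)}

        def sample_probs(name, size):
            median, error_factor = params[name]
            sigma = np.log(error_factor) / 1.645     # EF defined as 95th percentile / median
            return rng.lognormal(np.log(median), sigma, size)

        samples = {name: sample_probs(name, N) for name in params}

        # Min-cut upper bound of the top-event probability, evaluated per sample.
        q_top = np.ones(N)
        for cs in cut_sets:
            q_cs = np.ones(N)
            for event in cs:
                q_cs *= samples[event]
            q_top *= 1.0 - q_cs
        q_top = 1.0 - q_top

        p05, p95 = np.percentile(q_top, [5, 95])
        print(f"top event: mean={q_top.mean():.2e}  5th={p05:.2e}  95th={p95:.2e}")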

  19. Integrated Life Cycle Management: A Strategy for Plants to Extend Operating Lifetimes Safely with High Operational Reliability

    International Nuclear Information System (INIS)

    Esselman, Thomas; Bruck, Paul; Mengers, Charles

    2012-01-01

    Nuclear plant operators are studying the possibility of extending their existing generating facilities' operating lifetimes to 60 years and beyond. Many nuclear plants have been granted licenses to operate their facilities beyond the original 40 year term; however, in order to optimize the long term operating strategies, plant decision-makers need a consistent approach to support their options. This paper proposes a standard methodology to support effective decision-making for the long-term management of selected station assets. The methods detailed are intended to be used by nuclear plant site management, equipment reliability personnel, long term planners, capital asset planners, license renewal staff, and others that intend to look at operation between the current time and the end of operation. This methodology, named Integrated Life Cycle Management (ILCM), will provide a technical basis to assist decision makers regarding the timing of large capital investments required to get to the end of operation safely and with high plant reliability. ILCM seeks to identify end-of-life-cycle failure probabilities for individual large capital plant assets and the attendant costs associated with their refurbishment or replacement. It will provide a standard basis for evaluation of replacement and refurbishment options for these components. ILCM will also develop methods to integrate the individual assets over the entire plant, thus assisting nuclear plant decision-makers in their facility long term operating strategies. (author)

  20. Reliability and validity of the Japanese version of the Community Integration Measure for community-dwelling people with schizophrenia.

    Science.gov (United States)

    Shioda, Ai; Tadaka, Etsuko; Okochi, Ayako

    2017-01-01

    Community integration is an essential right for people with schizophrenia that affects their well-being and quality of life, but no valid instrument exists to measure it in Japan. The aim of the present study is to develop and evaluate the reliability and validity of the Japanese version of the Community Integration Measure (CIM) for people with schizophrenia. The Japanese version of the CIM was developed as a self-administered questionnaire based on the original version of the CIM, which was developed by McColl et al. This study of the Japanese CIM had a cross-sectional design. Construct validity was determined using a confirmatory factor analysis (CFA) and data from 291 community-dwelling people with schizophrenia in Japan. Internal consistency was calculated using Cronbach's alpha. The Lubben Social Network Scale (LSNS-6), the Rosenberg Self-Esteem Scale (RSE) and the UCLA Loneliness Scale, version 3 (UCLALS) were administered to assess the criterion-related validity of the Japanese version of the CIM. The participants were 263 people with schizophrenia who provided valid responses. The Cronbach's alpha was 0.87, and CFA identified one domain with ten items that demonstrated the following values: goodness of fit index = 0.924, adjusted goodness of fit index = 0.881, comparative fit index = 0.925, and root mean square error of approximation = 0.085. The correlation coefficients were 0.43 (p ...). The results indicate reliability and validity for assessing community integration for people with schizophrenia in Japan.
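
    The study's response data are not part of this record; the sketch below only shows the standard Cronbach's alpha computation the abstract refers to, applied to a small made-up respondents-by-items score matrix for a ten-item scale.

        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            """scores: respondents x items matrix of item scores."""
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

        # Made-up responses: 8 respondents x 10 items rated 1-5 (the CIM has ten items).
        rng = np.random.default_rng(0)
        base = rng.integers(2, 5, size=(8, 1))        # a common "trait" level per respondent
        noise = rng.integers(-1, 2, size=(8, 10))     # item-level noise
        responses = np.clip(base + noise, 1, 5).astype(float)

        print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")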

  1. Impact of Thresholds and Load Patterns when Executing HPC Applications with Cloud Elasticity

    Directory of Open Access Journals (Sweden)

    Vinicius Facco Rodrigues

    2016-04-01

    Full Text Available Elasticity is one of the best-known capabilities related to cloud computing, and it is largely deployed reactively using thresholds. In this approach, maximum and minimum limits are used to drive resource allocation and deallocation actions, leading to the following problem statements: How can cloud users set the threshold values to enable elasticity in their cloud applications? And what is the impact of the application’s load pattern on the elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive and PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. Here, we present an analysis of best practices and possible optimizations regarding the elasticity and HPC pair. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration even when activating it earlier could be pertinent for reducing the application runtime.
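
    AutoElastic's actual decision rules are not given in this record; the sketch below is only a generic reactive, threshold-driven controller of the kind discussed: one node is added when the observed average CPU load exceeds the upper threshold and one is released when it falls below the lower threshold. The threshold values, node limits, and load trace are assumptions.

        # Minimal sketch of a reactive, threshold-driven elasticity rule (not the
        # actual AutoElastic algorithm).
        UPPER, LOWER = 0.90, 0.40        # assumed thresholds (fractions of CPU load)
        MIN_NODES, MAX_NODES = 2, 16

        def decide(avg_load: float, nodes: int) -> int:
            """Return the new node count after one monitoring period."""
            if avg_load > UPPER and nodes < MAX_NODES:
                return nodes + 1          # allocate one more VM/node
            if avg_load < LOWER and nodes > MIN_NODES:
                return nodes - 1          # release one VM/node
            return nodes

        # Replay a synthetic trace of average CPU load observations.
        trace = [0.55, 0.80, 0.95, 0.97, 0.92, 0.70, 0.35, 0.30, 0.60]
        nodes = 2
        for load in trace:
            nodes = decide(load, nodes)
            print(f"load={load:.2f} -> nodes={nodes}")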

  2. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  3. Novel Material Integration for Reliable and Energy-Efficient NEM Relay Technology

    Science.gov (United States)

    Chen, I.-Ru

    Energy-efficient switching devices have become ever more important with the emergence of ubiquitous computing. NEM relays are promising to complement CMOS transistors as circuit building blocks for future ultra-low-power information processing, and as such have recently attracted significant attention from the semiconductor industry and researchers. Relay technology potentially can overcome the energy efficiency limit for conventional CMOS technology due to several key characteristics, including zero OFF-state leakage, abrupt switching behavior, and potentially very low active energy consumption. However, two key issues must be addressed for relay technology to reach its full potential: surface oxide formation at the contacting surfaces leading to increased ON-state resistance after switching, and high switching voltages due to strain gradient present within the relay structure. This dissertation advances NEM relay technology by investigating solutions to both of these pressing issues. Ruthenium, whose native oxide is conductive, is proposed as the contacting material to improve relay ON-state resistance stability. Ruthenium-contact relays are fabricated after overcoming several process integration challenges, and show superior ON-state resistance stability in electrical measurements and extended device lifetime. The relay structural film is optimized via stress matching among all layers within the structure, to provide lower strain gradient (below 10⁻³ μm⁻¹) and hence lower switching voltage. These advancements in relay technology, along with the integration of a metallic interconnect layer, enable complex relay-based circuit demonstration. In addition to the experimental efforts, this dissertation theoretically analyzes the energy efficiency limit of a NEM switch, which is generally believed to be limited by the surface adhesion energy. New compact (electronic device technology.

  4. 24. MPA-seminar: safety and reliability of plant technology with special emphasis on integrity and life management. Vol. 2. Papers 28-63

    International Nuclear Information System (INIS)

    1999-01-01

    The second volume is dedicated to the safety and reliability of plant technology with special emphasis on the integrity and life management. The following topics are discussed: 1. Integrity of vessels, pipes and components. 2. Fracture mechanics. 3. Measures for the extension of service life, and 4. Online Monitoring. All 30 contributions are separately analyzed for this database. (orig.)

  5. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  6. Computational approaches to standard-compliant biofilm data for reliable analysis and integration.

    Science.gov (United States)

    Sousa, Ana Margarida; Ferreira, Andreia; Azevedo, Nuno F; Pereira, Maria Olivia; Lourenço, Anália

    2012-12-01

    The study of microorganism consortia, also known as biofilms, is associated with a number of applications in the biotechnology, ecotechnology and clinical domains. Nowadays, biofilm studies are heterogeneous and data-intensive, encompassing different levels of analysis. Computational modelling of biofilm studies has thus become a requirement to make sense of these vast and ever-expanding biofilm data volumes. The rationale of the present work is a machine-readable format for representing biofilm studies and supporting biofilm data interchange and data integration. This format is supported by the Biofilm Science Ontology (BSO), the first ontology on biofilm information. The ontology is decomposed into a number of areas of interest, namely: the Experimental Procedure Ontology (EPO), which describes biofilm experimental procedures; the Colony Morphology Ontology (CMO), which characterises microorganism colonies morphologically; and other modules concerning biofilm phenotype, antimicrobial susceptibility and virulence traits. The overall objective behind BSO is to develop semantic resources to capture, represent and share data on biofilms and related experiments in a regularized manner. Furthermore, the present work also introduces a framework to assist biofilm data interchange and analysis - BiofOmics (http://biofomics.org) - and a public repository of colony morphology signatures - MorphoCol (http://stardust.deb.uminho.pt/morphocol).

  7. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 5.0: Data loading manual. Volume 10

    International Nuclear Information System (INIS)

    VanHorn, R.L.; Wolfram, L.M.; Fowler, R.D.; Beck, S.T.; Smith, C.L.

    1995-04-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) suite of programs can be used to organize and standardize, in an electronic format, information from probabilistic risk assessments or individual plant examinations. The Models and Results Database (MAR-D) program of the SAPHIRE suite serves as the repository for probabilistic risk assessment and individual plant examination data and information. This report demonstrates by example the common electronic and manual methods used to load these types of data. It is not a stand-alone document but references documents that contribute information relevant to the data loading process. This document provides a more detailed discussion and instructions for using SAPHIRE 5.0 only when enough information on a specific topic is not provided by another available source.

  8. Application to nuclear turbines of high-efficiency and reliable 3D-designed integral shrouded blades

    International Nuclear Information System (INIS)

    Watanabe, Eiichiro; Ohyama, Hiroharu; Tashiro, Hikaru; Sugitani, Toshio; Kurosawa, Masaru

    1999-01-01

    Mitsubishi Heavy Industries, Ltd. (MHI) has recently developed new blades for nuclear turbines, in order to achieve higher efficiency and higher reliability. The three-dimensional aerodynamic design for 41-inch and 46-inch blades, their one piece structural design (integral shrouded blades: ISB), and the verification test results using a model steam turbine are described in this paper. The predicted efficiency and lower vibratory stress have been verified. On the basis of these 60 Hz ISB, 50 Hz ISB series are under development using 'the law of similarity' without changing their thermodynamic performance and mechanical stress levels. Our 3D-designed reaction blades which are used for the high pressure and low pressure upstream stages, are also briefly mentioned. (author)

  9. Research on the Reliability Analysis of the Integrated Modular Avionics System Based on the AADL Error Model

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2018-01-01

    Full Text Available In recent years, the integrated modular avionics (IMA) concept has been introduced to replace traditional federated avionics. Different avionics functions are hosted on a shared IMA platform, and IMA adopts partition technologies to provide logical isolation among different functions. The IMA architecture can provide more sophisticated and powerful avionics functionality; meanwhile, the failure propagation patterns in IMA are more complex. The feature of resource sharing introduces unintended interconnections among different functions, which makes the failure propagation modes more complex. Therefore, this paper proposes an architecture analysis and design language (AADL) based method to establish the reliability model of an IMA platform. The error behavior of individual software and hardware components in the IMA system is modeled. The corresponding AADL error model of failure propagation among components and between software and hardware is given. Finally, the display function of an IMA platform is taken as an example to illustrate the effectiveness of the proposed method.

  10. The rise of HPC accelerators: towards a common vision for a petascale future

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Nowadays new exciting scientific discoveries are mainly driven by large, challenging simulations. An analysis of the trends in High Performance Computing clearly shows that we have hit several barriers (CPU frequency, power consumption, technological limits, limitations of the present paradigms) that we cannot easily overcome. In this context, accelerators have become the concrete alternative for increasing the compute capabilities of the HPC clusters deployed at universities and research centres across Europe. Within the EC-funded "Partnership for Advanced Computing in Europe" (PRACE) project, several actions have been taken and will be taken to enable community codes to exploit accelerators in modern HPC architectures. In this talk, the vision and the strategy adopted by the PRACE project will be presented, focusing on new HPC programming models and paradigms. Accelerators are a fundamental piece for innovating in this direction, from both the hardware and the software point of view. This work started dur...

  11. Reliability of nine programs of topological predictions and their application to integral membrane channel and carrier proteins.

    Science.gov (United States)

    Reddy, Abhinay; Cho, Jaehoon; Ling, Sam; Reddy, Vamsee; Shlykov, Maksim; Saier, Milton H

    2014-01-01

    We evaluated topological predictions for nine different programs, HMMTOP, TMHMM, SVMTOP, DAS, SOSUI, TOPCONS, PHOBIUS, MEMSAT-SVM (hereinafter referred to as MEMSAT), and SPOCTOPUS. These programs were first evaluated using four large topologically well-defined families of secondary transporters, and the three best programs were further evaluated using topologically more diverse families of channels and carriers. In the initial studies, the order of accuracy was: SPOCTOPUS > MEMSAT > HMMTOP > TOPCONS > PHOBIUS > TMHMM > SVMTOP > DAS > SOSUI. Some families, such as the Sugar Porter Family (2.A.1.1) of the Major Facilitator Superfamily (MFS; TC #2.A.1) and the Amino Acid/Polyamine/Organocation (APC) Family (TC #2.A.3), were correctly predicted with high accuracy while others, such as the Mitochondrial Carrier (MC) (TC #2.A.29) and the K(+) transporter (Trk) families (TC #2.A.38), were predicted with much lower accuracy. For small, topologically homogeneous families, SPOCTOPUS and MEMSAT were generally most reliable, while with large, more diverse superfamilies, HMMTOP often proved to have the greatest prediction accuracy. We next developed a novel program, TM-STATS, that tabulates HMMTOP, SPOCTOPUS or MEMSAT-based topological predictions for any subdivision (class, subclass, superfamily, family, subfamily, or any combination of these) of the Transporter Classification Database (TCDB; www.tcdb.org) and examined the following subclasses: α-type channel proteins (TC subclasses 1.A and 1.E), secreted pore-forming toxins (TC subclass 1.C) and secondary carriers (subclass 2.A). Histograms were generated for each of these subclasses, and the results were analyzed according to subclass, family and protein. The results provide an update of topological predictions for integral membrane transport proteins as well as guides for the development of more reliable topological prediction programs, taking family-specific characteristics into account. © 2014 S. Karger AG, Basel.

  12. Enviro-HIRLAM/ HARMONIE Studies in ECMWF HPC EnviroAerosols Project

    Science.gov (United States)

    Hansen Sass, Bent; Mahura, Alexander; Nuterman, Roman; Baklanov, Alexander; Palamarchuk, Julia; Ivanov, Serguei; Pagh Nielsen, Kristian; Penenko, Alexey; Edvardsson, Nellie; Stysiak, Aleksander Andrzej; Bostanbekov, Kairat; Amstrup, Bjarne; Yang, Xiaohua; Ruban, Igor; Bergen Jensen, Marina; Penenko, Vladimir; Nurseitov, Daniyar; Zakarin, Edige

    2017-04-01

    The EnviroAerosols on ECMWF HPC project (2015-2017), "Enviro-HIRLAM/HARMONIE model research and development for online integrated meteorology-chemistry-aerosols feedbacks and interactions in weather and atmospheric composition forecasting", is aimed at analysing the importance of meteorology-chemistry/aerosol interactions and at developing efficient techniques for on-line coupling of numerical weather prediction and atmospheric chemical transport via process-oriented parameterizations and feedback algorithms, which will improve both numerical weather prediction and atmospheric composition forecasts. Two main application areas of the on-line integrated modelling are considered: (i) improved numerical weather prediction with short-term feedbacks of aerosols and chemistry on the formation and development of meteorological variables, and (ii) improved atmospheric composition forecasting with an on-line integrated meteorological forecast and two-way feedbacks between aerosols/chemistry and meteorology. During 2015-2016 several research projects were realized. First, the study on "On-line Meteorology-Chemistry/Aerosols Modelling and Integration for Risk Assessment: Case Studies" focused on the assessment of scenarios with accidental and continuous emissions of sulphur dioxide for case studies for Atyrau (Kazakhstan) near the northern part of the Caspian Sea and metallurgical enterprises on the Kola Peninsula (Russia), with GIS integration of modelling results into the RANDOM (Risk Assessment of Nature Detriment due to Oil spill Migration) system. Second, the studies on "The sensitivity of precipitation simulations to the soot aerosol presence" and "The precipitation forecast sensitivity to data assimilation on a very high resolution domain" focused on sensitivity and changes in the precipitation life-cycle under black-carbon-polluted conditions over Scandinavia. Third, studies on "Aerosol effects over China investigated with a high resolution

  13. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    Science.gov (United States)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within an HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale, high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that

  14. Using three-dimension virtual reality main control room for integrated system validation and human reliability analysis

    International Nuclear Information System (INIS)

    Yang Chihwei; Cheng Tsungchieh

    2011-01-01

    This study proposes a performance assessment in a three-dimensional virtual reality (3D-VR) main control room (MCR). The assessment is conducted for integrated system validation (ISV) purposes, and also for human reliability analyses (HRA). This paper describes the latest developments in 3D-VR applications designed for familiarization with the MCR, especially taking into account ISV and HRA. The experience with 3D-VR applications and the benefits and advantages of using VR in the training and maintenance work of MCR operators in the target NPP are also presented in this paper. Results gathered from the performance measurements lead to hazard mitigation and reduce the risk of human error in the operation and maintenance of nuclear equipment. The latest developments in simulation techniques, including 3D presentation, enhance the above-mentioned benefits and bring the MCR simulators closer to reality. In the near future, this type of 3D solution should be applied more and more often in the design of MCR simulators. The presented 3D-VR environment is related to the MCR in NPPs, but the concept of composition and navigation through the system's elements can easily be applied to any type of technical equipment and should contribute in a similar manner to hazard prevention. (author)

  15. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book begins by asking what reliability is, covering the origin of reliability problems, the definition of reliability, and the use of reliability. It then deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions used in reliability, estimation of MTBF, stochastic processes, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
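    As a worked illustration of the quantities listed above, the sketch below uses the common constant-failure-rate (exponential) model; the numeric values are assumptions, and the book itself may treat other distributions as well.

    import math

    failure_rate = 2e-4          # lambda, failures per hour (assumed value)
    mtbf = 1.0 / failure_rate    # for the exponential model, MTBF = 1/lambda

    def reliability(t_hours, lam=failure_rate):
        """R(t) = exp(-lambda * t): probability of surviving t hours without failure."""
        return math.exp(-lam * t_hours)

    def availability(mtbf_h, mttr_h):
        """Steady-state availability = MTBF / (MTBF + MTTR)."""
        return mtbf_h / (mtbf_h + mttr_h)

    print(f"MTBF = {mtbf:.0f} h, R(1000 h) = {reliability(1000):.3f}, "
          f"A(MTTR = 8 h) = {availability(mtbf, 8):.4f}")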

  16. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operating lifetime, with the purpose of improving safety or reliability. Due to plant complexity and to safety and availability requirements, sophisticated tools that are flexible and efficient are needed. Such tools have been developed over the last 20 years and have to be continuously refined to meet growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for calculating the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for the implementation of importance sampling are suggested. (author)
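    The sketch below illustrates the two ideas mentioned above: Monte Carlo estimation of a small failure probability and variance reduction via importance sampling. The exponential lifetime model, the mission time and the biased sampling rate are illustrative assumptions, not the report's actual cases.

    import math
    import random

    lam, T = 1e-4, 10.0                     # failure rate (1/h) and mission time (h)
    p_true = 1.0 - math.exp(-lam * T)       # rare failure probability (~1e-3)

    def crude_mc(n):
        """Plain Monte Carlo: fraction of sampled lifetimes shorter than T."""
        return sum(1 for _ in range(n) if random.expovariate(lam) < T) / n

    def importance_sampling_mc(n, lam_biased=0.1):
        """Sample from a biased (higher-rate) density and reweight each hit."""
        total = 0.0
        for _ in range(n):
            t = random.expovariate(lam_biased)
            if t < T:
                # likelihood ratio f(t)/g(t) corrects for the biased sampling
                total += (lam * math.exp(-lam * t)) / (lam_biased * math.exp(-lam_biased * t))
        return total / n

    random.seed(0)
    print(f"true p = {p_true:.2e}, crude MC = {crude_mc(100_000):.2e}, "
          f"importance sampling = {importance_sampling_mc(100_000):.2e}")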

  17. 24. MPA-seminar: safety and reliability of plant technology with special emphasis on integrity and life management. Vol. 1. Papers 1-27

    International Nuclear Information System (INIS)

    1999-01-01

    The first volume is dedicated to the safety and reliability of plant technology with special emphasis on the integrity and life management. The main topic in the volume is the contribution of nondestructive testing to the reactor safety from an international point of view. All 20 papers are separately analyzed for this database. (orig.)

  18. Human Reliability Program Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  19. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large-scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  20. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    Energy Technology Data Exchange (ETDEWEB)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro; Kuhn, Michael; Carns, Philip; Ludwig, Thomas

    2017-09-05

    The increasingly growing data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
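    As a minimal illustration of the blob abstraction contrasted with a hierarchical file system, the sketch below implements a flat put/get/delete namespace in memory; it is not any of the systems evaluated in the paper, whose APIs are distributed and far richer.

    class BlobStore:
        """Toy object store: a flat key space with no directories or permissions."""

        def __init__(self):
            self._objects = {}                      # key (str) -> value (bytes)

        def put(self, key, data):
            self._objects[key] = data               # overwrite-on-put semantics

        def get(self, key):
            return self._objects[key]

        def delete(self, key):
            self._objects.pop(key, None)

    store = BlobStore()
    store.put("sim/run42/field.bin", b"\x00" * 1024)    # '/' is just part of the key
    print(len(store.get("sim/run42/field.bin")))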

  1. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods to the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.

  2. A harmonic polynomial cell (HPC) method for 3D Laplace equation with application in marine hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.

    2014-10-01

    We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by the linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. By its definition, the method is named the Harmonic Polynomial Cell (HPC) method. The accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons are made with other existing methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM) and a fourth-order Finite Difference Method (FDM). To demonstrate its applications, the method is applied to studies relevant to marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
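    The sketch below is a deliberately simplified two-dimensional illustration of the cell-local idea described above: inside one cell the potential is expanded in harmonic polynomials and the coefficients are fitted from nodal values. The real method is three-dimensional, uses overlapping cells and couples them through boundary conditions; the node layout, polynomial order and test function here are assumptions.

    import numpy as np

    def harmonic_basis_2d(x, y, order=3):
        """Columns 1, Re(z), Im(z), ..., Re(z^order), Im(z^order) with z = x + i*y."""
        z = x + 1j * y
        cols = [np.ones_like(x)]
        for n in range(1, order + 1):
            cols += [np.real(z**n), np.imag(z**n)]
        return np.column_stack(cols)

    # Nodes of one small cell and samples of a known harmonic function phi = x*y.
    nodes = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0],
                      [0, 0.5], [1, 0.5], [0.5, 1], [0.5, 0.5]], dtype=float)
    phi_nodes = nodes[:, 0] * nodes[:, 1]          # x*y = Im(z^2)/2 is harmonic

    # Least-squares fit of the expansion coefficients from the nodal values.
    A = harmonic_basis_2d(nodes[:, 0], nodes[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, phi_nodes, rcond=None)

    # The fitted expansion reproduces the potential at an interior point of the cell.
    xt, yt = 0.3, 0.7
    approx = harmonic_basis_2d(np.array([xt]), np.array([yt])) @ coeffs
    print(float(approx[0]), xt * yt)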

  3. The Requirement for Acquisition and Logistics Integration: An Examination of Reliability Management Within the Marine Corps Acquisition Process

    National Research Council Canada - National Science Library

    Norcross, Marvin

    2002-01-01

    Combat system reliability is central to creating combat power, determining logistics supportability requirements, and determining systems' total ownership costs, yet the Marine Corps typically monitors...

  4. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    Directory of Open Access Journals (Sweden)

    Chie Takahashi

    2011-10-01

    Full Text Available Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the 'raw' visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change in hand opening caused by a given change in object size. Here, we examine whether the brain appropriately adjusts the weights given to visual and haptic size signals when tool geometry changes. We first estimated each cue's reliability by measuring size-discrimination thresholds in vision-alone and haptics-alone conditions. We varied haptic reliability using tools with different object-size:hand-opening ratios (1:1, 0.7:1, and 1.4:1). We then measured the weights given to vision and haptics with each tool, using a cue-conflict paradigm. The weight given to haptics varied with tool type in a manner that was well predicted by the single-cue reliabilities (MLE model; Ernst and Banks, 2002). This suggests that the process of visual-haptic integration appropriately accounts for variations in haptic reliability introduced by different tool geometries.
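    The sketch below spells out the MLE cue-combination rule referred to above (Ernst and Banks, 2002), in which each cue is weighted by its relative reliability, i.e. the inverse of its variance. The numeric thresholds are illustrative assumptions, not the study's measured values.

    def mle_weights(sigma_vision, sigma_haptic):
        """Weights proportional to reliability r = 1/variance; they sum to 1."""
        r_v, r_h = 1.0 / sigma_vision**2, 1.0 / sigma_haptic**2
        return r_v / (r_v + r_h), r_h / (r_v + r_h)

    def combined_size_estimate(s_vision, s_haptic, sigma_vision, sigma_haptic):
        w_v, w_h = mle_weights(sigma_vision, sigma_haptic)
        return w_v * s_vision + w_h * s_haptic

    # A tool geometry that amplifies the hand-opening change for a given change in
    # object size yields more precise haptic estimates (smaller sigma_haptic) and
    # therefore a larger haptic weight, the pattern described in the abstract.
    for sigma_h in (4.0, 3.0, 2.0):
        print(sigma_h, mle_weights(sigma_vision=2.5, sigma_haptic=sigma_h))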

  5. Freva - Freie Univ Evaluation System Framework for Scientific HPC Infrastructures in Earth System Modeling

    Science.gov (United States)

    Kadow, C.; Illing, S.; Schartner, T.; Grieger, J.; Kirchner, I.; Rust, H.; Cubasch, U.; Ulbrich, U.

    2017-12-01

    The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science (e.g. www-miklip.dkrz.de, cmip-eval.dkrz.de). Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the metadata of the self-describing model, reanalysis and observational data sets in a database. This metadata system, with its advanced but easy-to-handle search tool, enables users, developers and their plugins to retrieve the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gateway to the research project's HPC system. Plugins can integrate their results, e.g. post-processed data, into the user's database. This allows, for example, post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via the shell or web system. Furthermore, if configurations match

  6. Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2014-12-01

    The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources. The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent Cyber

  7. Thermosyphon Cooler Hybrid System for Water Savings in an Energy-Efficient HPC Data Center: Modeling and Installation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Thomas; Liu, Zan; Sickinger, David; Regimbal, Kevin; Martinez, David

    2017-02-01

    The Thermosyphon Cooler Hybrid System (TCHS) integrates the control of a dry heat rejection device, the thermosyphon cooler (TSC), with an open cooling tower. A combination of equipment and controls, this new heat rejection system embraces the 'smart use of water,' using evaporative cooling when it is most advantageous and then saving water and modulating toward increased dry sensible cooling as system operations and ambient weather conditions permit. Innovative fan control strategies ensure the most economical balance between water savings and parasitic fan energy. The unique low-pressure-drop design of the TSC allows water to be cooled directly by the TSC evaporator without risk of bursting tubes in subfreezing ambient conditions. Johnson Controls partnered with the National Renewable Energy Laboratory (NREL) and Sandia National Laboratories to deploy the TSC as a test bed at NREL's high-performance computing (HPC) data center in the first half of 2016. Located in NREL's Energy Systems Integration Facility (ESIF), this HPC data center has achieved an annualized average power usage effectiveness rating of 1.06 or better since 2012. Warm-water liquid cooling is used to capture heat generated by computer systems direct to water; that waste heat is either reused as the primary heat source in the ESIF building or rejected using evaporative cooling. This data center is the single largest source of water and power demand on the NREL campus, using about 7,600 m3 (2.0 million gal) of water during the past year with an hourly average IT load of nearly 1 MW (3.4 million Btu/h) -- so dramatically reducing water use while continuing efficient data center operations is of significant interest. Because Sandia's climate is similar to NREL's, this new heat rejection system being deployed at NREL has gained interest at Sandia. Sandia's data centers utilize an hourly average of 8.5 MW (29 million Btu/h) and are also one of the largest consumers of

  8. Thermal performance of capillary micro tubes integrated into the sandwich element made of concrete

    DEFF Research Database (Denmark)

    Mikeska, Tomas; Svendsen, Svend

    2013-01-01

    The thermal performance of radiant heating and cooling systems (RHCS) composed of capillary micro tubes (CMT) integrated into the inner plate of sandwich elements made of High Performance Concrete (HPC) was investigated in the article. Temperature distribution in HPC elements around the integrated CMT was analysed. CMT integrated into the thin plate of a sandwich element made of HPC can supply the energy needed for heating and cooling. The investigations were conceived as a low-temperature concept, where the difference between the temperature of the circulating fluid and the air in the room was kept in the range of 1 to 4°C.

  9. Improving the Reliability of Network Metrics in Structural Brain Networks by Integrating Different Network Weighting Strategies into a Single Graph

    Directory of Open Access Journals (Sweden)

    Stavros I. Dimitriadis

    2017-12-01

    Full Text Available Structural brain networks estimated from diffusion MRI (dMRI) via tractography have been widely studied in healthy controls and patients with neurological and psychiatric diseases. However, few studies have addressed the reliability of derived network metrics, both node-specific and network-wide. Different network weighting strategies (NWS) can be adopted to weight the strength of connection between two nodes, yielding structural brain networks that are almost fully-weighted. Here, we scanned five healthy participants five times each, using a diffusion-weighted MRI protocol, and computed edges between 90 regions of interest (ROI) from the Automated Anatomical Labeling (AAL) template. The edges were weighted according to nine different methods. We propose a linear combination of these nine NWS into a single graph using an appropriate diffusion distance metric. We refer to the resulting weighted graph as an Integrated Weighted Structural Brain Network (ISWBN). Additionally, we consider a topological filtering scheme that maximizes the information flow in the brain network under the constraint of the overall cost of the surviving connections. We compared each of the nine NWS and the ISWBN based on the improvement of: (a) the intra-class correlation coefficient (ICC) of well-known network metrics, both node-wise and at the network level; and (b) the recognition accuracy of each subject compared to the remainder of the cohort, as an attempt to assess the uniqueness of the structural brain network for each subject, after first applying our proposed topological filtering scheme. Based on a threshold where the network level ICC should be >0.90, our findings revealed that six out of nine NWS lead to unreliable results at the network level, while all nine NWS were unreliable at the node level. In comparison, our proposed ISWBN performed as well as the best performing individual NWS at the network level, and the ICC was higher compared to all individual NWS at the node
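    The reliability criterion used above is the intra-class correlation coefficient (ICC) of a network metric across repeated scans. The sketch below computes a one-way ICC for a subjects-by-sessions matrix of some metric; the ICC variant and the simulated five-participant, five-scan data are assumptions and may differ from the exact form used in the study.

    import numpy as np

    def icc_oneway(measures):
        """ICC(1,1) for an (n subjects x k sessions) matrix, one-way random effects."""
        n, k = measures.shape
        grand_mean = measures.mean()
        subj_means = measures.mean(axis=1)
        msb = k * ((subj_means - grand_mean) ** 2).sum() / (n - 1)           # between subjects
        msw = ((measures - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(0)
    subject_effect = rng.normal(0.0, 1.0, size=(5, 1))                  # 5 participants
    metric = 10.0 + subject_effect + rng.normal(0.0, 0.3, size=(5, 5))  # 5 repeat scans
    print(round(icc_oneway(metric), 3))   # ICC close to 1 -> the metric is reproducible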

  10. BSLD threshold driven power management policy for HPC centers

    OpenAIRE

    Etinski, Maja; Corbalán González, Julita; Labarta Mancho, Jesús José; Valero Cortés, Mateo

    2010-01-01

    In this paper, we propose a power-aware parallel job scheduler assuming DVFS-enabled clusters. A CPU frequency assignment algorithm is integrated into the well-established EASY backfilling job scheduling policy. Running a job at a lower frequency results in a reduction in power dissipation and accordingly in energy consumption. However, lower frequencies introduce a penalty in performance. Our frequency assignment algorithm has two adjustable parameters in order to enable fine-grain energy-perf...
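    The sketch below illustrates the kind of threshold-driven frequency assignment this record describes, choosing the lowest DVFS frequency whose predicted bounded slowdown (BSLD) stays under a threshold. It is not the authors' algorithm: the linear runtime-scaling model, the BSLD definition and all parameter values are illustrative assumptions.

    FREQS_GHZ = [2.6, 2.2, 1.8, 1.4]          # available DVFS states, highest first

    def predicted_bsld(wait_s, run_nominal_s, f_ghz, bound_s=600.0):
        """Bounded slowdown if the job is slowed by running at f_ghz instead of f_max."""
        run_scaled = run_nominal_s * FREQS_GHZ[0] / f_ghz   # runtime grows at lower f
        return (wait_s + run_scaled) / max(run_nominal_s, bound_s)

    def pick_frequency(wait_s, run_nominal_s, threshold):
        chosen = FREQS_GHZ[0]                 # default: highest frequency
        for f in FREQS_GHZ:                   # frequencies in decreasing order
            if predicted_bsld(wait_s, run_nominal_s, f) <= threshold:
                chosen = f                    # keep lowering f while BSLD is acceptable
        return chosen

    # A job that waited 20 min with a 1 h nominal runtime is scaled down to 2.2 GHz
    # under a BSLD threshold of 1.6.
    print(pick_frequency(wait_s=1200.0, run_nominal_s=3600.0, threshold=1.6))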

  11. Using CyberShake Workflows to Manage Big Seismic Hazard Data on Large-Scale Open-Science HPC Resources

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2015-12-01

    The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. We will explain how we modified CyberShake software components, including GPU implementations and

  12. Development of a methodology for conducting an integrated HRA/PRA. Task 1: An assessment of human reliability influences during LP&S conditions in PWRs

    Energy Technology Data Exchange (ETDEWEB)

    Luckas, W.J.; Barriere, M.T.; Brown, W.S. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [Wreathall (John) and Co., Dublin, OH (United States); Cooper, S.E. [Science Applications International Corp., McLean, VA (United States)

    1993-06-01

    During Low Power and Shutdown (LP&S) conditions in a nuclear power plant (i.e., when the reactor is subcritical or at less than 10-15% power), human interactions with the plant's systems will be more frequent and more direct. Control is typically not mediated by automation, and there are fewer protective systems available. Therefore, an assessment of LP&S-related risk should include a greater emphasis on human reliability than such an assessment made for power operation conditions. In order to properly account for the increase in human interaction and thus be able to perform a probabilistic risk assessment (PRA) applicable to operations during LP&S, it is important that a comprehensive human reliability assessment (HRA) methodology be developed and integrated into the LP&S PRA. The tasks comprising the comprehensive HRA methodology development are as follows: (1) identification of the human reliability related influences and associated human actions during LP&S, (2) identification of potentially important LP&S related human actions and appropriate HRA framework and quantification methods, and (3) incorporation and coordination of methodology development with other integrated PRA/HRA efforts. This paper describes the first task, i.e., the assessment of human reliability influences and any associated human actions during LP&S conditions for a pressurized water reactor (PWR).

  13. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    OpenAIRE

    Chie Takahashi; Simon J Watt

    2011-01-01

    Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the ‘raw’ visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change ...

  14. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  15. Educational program on HPC technologies based on the heterogeneous cluster HybriLIT (LIT JINR

    Directory of Open Access Journals (Sweden)

    Vladimir V. Korenkov

    2017-12-01

    Full Text Available The article highlights the issues of training personnel to work with high-performance computing (HPC) systems, as well as of supporting the software and information environment necessary for the efficient use of heterogeneous computing resources and the development of parallel and hybrid applications. The heterogeneous computing cluster HybriLIT, which is one of the components of the Multifunctional Information and Computing Complex of JINR, is used as the main platform for training and re-training specialists, as well as for training students, graduate students and young scientists. The HybriLIT cluster is a dynamic, actively developing structure, incorporating the most advanced HPC computing architectures (graphics accelerators, Intel Xeon Phi coprocessors), and it has a well-developed software and information environment, which, in turn, makes it possible to keep educational programs up to date and enables learners to master both modern computing platforms and modern IT technologies.

  16. Reliability-Based and Cost-Oriented Product Optimization Integrating Fuzzy Reasoning Petri Nets, Interval Expert Evaluation and Cultural-Based DMOPSO Using Crowding Distance Sorting

    Directory of Open Access Journals (Sweden)

    Zhaoxi Hong

    2017-08-01

    Full Text Available In reliability-based and cost-oriented product optimization, the target product reliability is apportioned to subsystems or components to achieve the maximum reliability and minimum cost. The main challenges in conducting such optimization design lie in how to simultaneously consider subsystem division, uncertain evaluations provided by experts for essential factors, and the dynamic propagation of product failure. To overcome these problems, a reliability-based and cost-oriented product optimization method integrating fuzzy reasoning Petri nets (FRPN), interval expert evaluation and cultural-based dynamic multi-objective particle swarm optimization (DMOPSO) using crowding distance sorting is proposed in this paper. Subsystem division is performed based on failure decoupling, and then subsystem weights are calculated with FRPN, reflecting dynamic and uncertain failure propagation, as well as with interval expert evaluation considering six essential factors. A mathematical model of reliability-based and cost-oriented product optimization is established, and the cultural-based DMOPSO with crowding distance sorting is utilized to obtain the optimized design scheme. The efficiency and effectiveness of the proposed method are demonstrated by the numerical example of the optimization design for a computer numerically controlled (CNC) machine tool.
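    The sketch below illustrates only the apportionment step mentioned above, splitting a target series-system reliability across subsystems in proportion to weights. The weighted-logarithm allocation rule and the weight values stand in for the FRPN/expert-derived importances and are assumptions, not the paper's actual algorithm.

    import math

    def apportion(target_reliability, weights):
        """Allocate subsystem reliabilities so that their product equals the target.

        A larger weight marks a subsystem that is harder (costlier) to make reliable,
        so it receives a looser individual target: ln R_i is proportional to its
        weight, and sum(ln R_i) = ln R_target.
        """
        total = sum(weights.values())
        ln_target = math.log(target_reliability)
        return {name: math.exp(ln_target * w / total) for name, w in weights.items()}

    subsystem_weights = {"spindle": 0.40, "feed_drive": 0.35, "controller": 0.25}
    allocation = apportion(0.95, subsystem_weights)
    print(allocation)
    print(math.prod(allocation.values()))   # the product recovers the 0.95 target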

  17. Survey on Projects at DLR Simulation and Software Technology with Focus on Software Engineering and HPC

    OpenAIRE

    Schreiber, Andreas; Basermann, Achim

    2013-01-01

    We introduce the DLR institute “Simulation and Software Technology” (SC) and present current activities regarding software engineering and high performance computing (HPC) in German or international projects. Software engineering at SC focusses on data and knowledge management as well as tools for studies and experiments. We discuss how we apply software configuration management, validation and verification in our projects. Concrete research topics are traceability of (software devel...

  18. When to Renew Software Licences at HPC Centres? A Mathematical Analysis

    International Nuclear Information System (INIS)

    Baolai, Ge; MacIsaac, Allan B

    2010-01-01

    In this paper we study a common problem faced by many high performance computing (HPC) centres: when and how to renew commercial software licences. Software vendors often sell perpetual licences along with forward update and support contracts at an additional, annual cost. Every year or so, software support personnel and the budget units of HPC centres are required to decide whether or not to renew such support, and usually such decisions are made intuitively. The total cost of a continuing support contract can, however, be considerable. One might therefore want a rational answer to the question of whether the option for a renewal should be exercised and when. In an attempt to study this problem within a market framework, we present the mathematical problem derived for the day-to-day operation of a hypothetical HPC centre that charges for the use of software packages. In the mathematical model, we assume that the uncertainty comes from the demand, i.e. the number of users using the packages, as well as from the price. Further, we assume that the availability of up-to-date software versions may also affect the demand. We develop a renewal strategy that aims to maximize the expected profit from the use of the software under consideration. The derived problem involves a decision tree, which constitutes a numerical procedure that can be processed in parallel.
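    The toy sketch below captures the flavour of the renew-or-not comparison set up in the paper: compare the expected profit of renewing the support contract with that of staying on the current version under uncertain demand. The demand scenarios, the prices and the demand drop for an outdated version are all illustrative assumptions, not the paper's model.

    renewal_cost = 8_000.0                  # annual support/update fee (assumed)
    price_per_user = 450.0                  # charge-back per licensed user per year

    demand_scenarios = [(0.3, 30), (0.5, 55), (0.2, 90)]   # (probability, users)
    stale_demand_factor = 0.5               # demand retained if the version goes stale

    def expected_profit(renew):
        factor = 1.0 if renew else stale_demand_factor
        revenue = sum(p * users * factor * price_per_user
                      for p, users in demand_scenarios)
        return revenue - (renewal_cost if renew else 0.0)

    # Exercise the renewal option only when it yields the higher expected profit.
    print("renew:", expected_profit(True), "skip:", expected_profit(False))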

  19. Optimizing new components of PanDA for ATLAS production on HPC resources

    CERN Document Server

    Maeno, Tadashi; The ATLAS collaboration

    2017-01-01

    The Production and Distributed Analysis system (PanDA) has been used for workload management in the ATLAS Experiment for over a decade. It uses pilots to retrieve jobs from the PanDA server and execute them on worker nodes. While PanDA has been used mostly on Worldwide LHC Computing Grid (WLCG) resources for production operations, R&D work has been ongoing on cloud and HPC resources for many years. These efforts have led to significant usage of large-scale HPC resources in the past couple of years. In this talk we will describe the changes to the pilot which enabled the use of HPC sites by PanDA, specifically the Titan supercomputer at Oak Ridge National Laboratory. Furthermore, it was decided in 2016 to start a fresh redesign of the Pilot with a more modern approach to better serve present and future needs from ATLAS and other collaborations that are interested in using the PanDA System. Another new project for the development of a resource-oriented service, PanDA Harvester, was also launched in 2016. The...

  20. When to Renew Software Licences at HPC Centres? A Mathematical Analysis

    Science.gov (United States)

    Baolai, Ge; MacIsaac, Allan B.

    2010-11-01

    In this paper we study a common problem faced by many high performance computing (HPC) centres: when and how to renew commercial software licences. Software vendors often sell perpetual licences along with forward update and support contracts at an additional, annual cost. Every year or so, software support personnel and the budget units of HPC centres are required to decide whether or not to renew such support, and usually such decisions are made intuitively. The total cost of a continuing support contract can, however, be considerable. One might therefore want a rational answer to the question of whether the option for a renewal should be exercised and when. In an attempt to study this problem within a market framework, we present the mathematical problem derived for the day-to-day operation of a hypothetical HPC centre that charges for the use of software packages. In the mathematical model, we assume that the uncertainty comes from the demand, i.e. the number of users using the packages, as well as from the price. Further, we assume that the availability of up-to-date software versions may also affect the demand. We develop a renewal strategy that aims to maximize the expected profit from the use of the software under consideration. The derived problem involves a decision tree, which constitutes a numerical procedure that can be processed in parallel.

  1. A task analysis-linked approach for integrating the human factor in reliability assessments of nuclear power plants

    International Nuclear Information System (INIS)

    Ryan, T.G.

    1988-01-01

    This paper describes an emerging Task Analysis-Linked Evaluation Technique (TALENT) for assessing the contributions of human error to nuclear power plant systems unreliability and risk. Techniques such as TALENT reflect the recognition that human error is a primary contributor to plant risk, yet it has remained a peripheral consideration in plant reliability evaluations. TALENT also recognizes that the involvement of persons with behavioral science expertise is required to support plant reliability and risk analyses. A number of state-of-knowledge human reliability analysis tools that support the TALENT process are also discussed. The core of TALENT comprises task, timeline and interface analysis data, which provide the technology base for event and fault tree development, serve as criteria for selecting and evaluating performance shaping factors, and provide a basis for auditing TALENT results. Finally, programs and case studies used to refine the TALENT process are described along with future research needs in the area. (author)

  2. Strategies and Decision Support Systems for Integrating Variable Energy Resources in Control Centers for Reliable Grid Operations

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Lawrence E.

    2012-01-05

    A variety of studies have recently evaluated the opportunities for the large-scale integration of wind energy into the US power system. These studies have included, but are not limited to, "20 Percent Wind Energy by 2030: Increasing Wind Energy's Contribution to US Electricity Supply", the "Western Wind and Solar Integration Study", and the "Eastern Wind Integration and Transmission Study." Each of these US-based studies has evaluated a variety of activities that can be undertaken by utilities to help integrate wind energy.

  3. Improvement in reliability and accuracy of heater tube eddy current testing by integration with an appropriate destructive test

    International Nuclear Information System (INIS)

    Giovanelli, F.; Gabiccini, S.; Tarli, R.; Motta, P.

    1988-01-01

    A specially developed destructive test is described showing how the reliability and accuracy of a non-destructive technique can be improved if it is suitably accompanied by an appropriate destructive test. The experiment was carried out on samples of AISI 304L tubes from the low-pressure (LP) preheaters of a BWR 900 MW nuclear plant. (author)

  4. Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures

    Energy Technology Data Exchange (ETDEWEB)

    Brust, Frederick W. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Punch, Edward F. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Twombly, Elizabeth Kurth [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kalyanam, Suresh [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kennedy, James [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Hattery, Garty R. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Dodds, Robert H. [Professional Consulting Services, Inc., Lisle, IL (United States); Mach, Justin C [Caterpillar, Peoria, IL (United States); Chalker, Alan [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Nicklas, Jeremy [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Gohar, Basil M [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Hudak, David [Ohio Supercomputer Center (OSC), Columbus, OH (United States)

    2016-12-30

    Through VFT®, manufacturing companies can avoid costly design changes after fabrication. This leads to the concept of joint design/fabrication, where these important disciplines are intimately linked to minimize fabrication costs. Finally, service performance (such as fatigue, corrosion, and fracture/damage) can be improved using this product. Emc2's DOE SBIR Phase II effort successfully adapted VFT® to perform efficiently in an HPC environment, independent of commercial software, on a platform that permits easy and cost-effective access to the code. This provides the key for SMEs to access this sophisticated and proven methodology that is quick, accurate, cost-effective and available "on-demand" to address weld-simulation and fabrication problems prior to manufacture. In addition, other organizations, such as Government agencies and large companies, may have a need for spot use of such a tool. The open source code, WARP3D, a high performance finite element code used in fracture and damage assessment of structures, was significantly modified so computational weld problems can be solved efficiently on multiple processors and threads with VFT®. The thermal solver for VFT®, based on a series of closed form solution approximations, was extensively enhanced for solution on multiple processors, greatly increasing overall speed. In addition, the graphical user interface (GUI) was re-written to permit SMEs access to an HPC environment at the Ohio Super Computer Center (OSC) to integrate these solutions with WARP3D. The GUI is used to define all weld pass descriptions, number of passes, material properties, consumable properties, weld speed, etc. for the structure to be modeled. The GUI was enhanced to make it more user-friendly so that non-experts can perform weld modeling. Finally, an extensive outreach program to market this capability to fabrication companies was performed. This access will permit SMEs to perform weld modeling to improve their competitiveness at a

  5. LDRD HPC4Energy Wrapup Report - LDRD 12-ERD-074

    Energy Technology Data Exchange (ETDEWEB)

    Dube, E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grosh, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-23

    High-performance computing and simulation has the potential to optimize production, distribution, and conversion of energy. Although a number of concepts have been discussed, a comprehensive research project to establish and quantify the effectiveness of computing and simulation at scale to core energy problems has not been conducted. We propose to perform the basic research to adapt existing high-performance computing tools and simulation approaches to two selected classes of problems common across the energy sector. The first, applying uncertainty quantification and contingency analysis techniques to energy optimization, allows us to assess the effectiveness of LLNL core competencies to problems such as grid optimization and building-system efficiency. The second, applying adaptive meshing and numerical analysis techniques to physical problems at fine scale, could allow immediate impacts in key areas such as efficient combustion and fracture and spallation. By creating an integrated project team with the necessary expertise, we can efficiently address these issues, delivering both near-term results as well as quantifying developments needed to address future energy challenges.

  6. The Reliability and User-Feasibility of Materials and Procedures for Monitoring the Implementation Integrity of a Reading Intervention

    Science.gov (United States)

    Begeny, John C.; Easton, Julia E.; Upright, James J.; Tunstall, Kali R.; Ehrenbock, Cassia A.

    2014-01-01

    Within the realm of school-based interventions, implementation integrity is important for practical, legal, and ethical purposes. Unfortunately, evidence suggests that proper monitoring of implementation integrity is often absent from both research and practice. School psychology practitioners and researchers have reported that a major barrier to…

  7. The integrated North American electricity market : a bi-national model for securing a reliable supply of electricity

    International Nuclear Information System (INIS)

    Egan, T.

    2004-03-01

    The 50 million people who experienced the power blackout on August 14, 2003 in southern Ontario and the U.S. Midwest and Northeast understood how vital electricity is in our day-to-day lives, but they also saw the resiliency of the North American electricity system. More than 65 per cent of the power generation was restored to service within 12 hours and no damage was caused to the generation or transmission facilities. Although the interconnected North American electricity system is among the most reliable in the world, it is threatened by an aging infrastructure, lack of new generation and transmission to meet demand, and growing regulatory pressures. This report suggests that any measures that respond to the threat of ongoing reliability should be bi-national in scope due to the interconnected nature of the system. Currently, the market, regulatory and administrative systems are different in each country. The full engagement and cooperation of both Canada and the United States is important to ensure future cross-border trade and power reliability. The Canadian Electricity Association proposes the following 7 measures: (1) support an open debate on all the supply options available to meet growing power demands, (2) promote bi-national cooperation in the construction of new transmission capacity to ensure a reliable continental electricity system, (3) examine opportunities for bi-national cooperation for investment in advanced transmission technologies and transmission research and development, (4) promote new generation technology and demand-side measures to relieve existing transmission constraints and reduce the need for new transmission facilities, (5) endorse a self-governing international organization for developing and enforcing mandatory reliability standards for the electricity industry, (6) coordinate measures to promote critical infrastructure protection, and (7) harmonize U.S. and Canadian efforts to streamline or clarify regulation of electricity

  8. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It covers the definition and importance of reliability; the development of reliability engineering; the failure rate and failure probability density function and their types; CFR and the exponential distribution; IFR, the normal distribution and the Weibull distribution; maintainability and availability; reliability testing and reliability estimation for the exponential, normal and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by FTA.

  9. Strategies and Decision Support Systems for Integrating Variable Energy Resources in Control Centers for Reliable Grid Operations

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Lawrence E. [Alstom Grid Inc., Washington, DC (United States)

    2011-11-01

    This report provides findings from the field regarding the best ways in which to guide operational strategies, business processes and control room tools to support the integration of renewable energy into electrical grids.

  10. Strategies and Decision Support Systems for Integrating Variable Energy Resources in Control Centers for Reliable Grid Operations. Executive Summary

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Lawrence E. [Alstom Grid, Inc., Washington, DC (United States)

    2011-11-01

    This is the executive summary for a report that provides findings from the field regarding the best ways in which to guide operational strategies, business processes and control room tools to support the integration of renewable energy into electrical grids.

  11. A critical review of frameworks used for evaluating reliability and relevance of (eco)toxicity data: Perspectives for an integrated eco-human decision-making framework.

    Science.gov (United States)

    Roth, N; Ciffroy, P

    2016-10-01

    Considerable efforts have been invested so far to evaluate and rank the quality and relevance of (eco)toxicity data for their use in regulatory risk assessment to assess chemical hazards. Many frameworks have been developed to improve robustness and transparency in the evaluation of reliability and relevance of individual tests, but these frameworks typically focus on either environmental risk assessment (ERA) or human health risk assessment (HHRA), and there is little cross talk between them. There is a need to develop a common approach that would support a more consistent, transparent and robust evaluation and weighting of the evidence across ERA and HHRA. This paper explores the applicability of existing Data Quality Assessment (DQA) frameworks for integrating environmental toxicity hazard data into human health assessments and vice versa. We performed a comparative analysis of the strengths and weaknesses of eleven frameworks for evaluating reliability and/or relevance of toxicity and ecotoxicity hazard data. We found that a frequent shortcoming is the lack of a clear separation between reliability and relevance criteria. A further gaps and needs analysis revealed that none of the reviewed frameworks satisfy the needs of a common eco-human DQA system. Based on our analysis, some key characteristics, perspectives and recommendations are identified and discussed for building a common DQA system as part of a future integrated eco-human decision-making framework. This work lays the basis for developing a common DQA system to support the further development and promotion of Integrated Risk Assessment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Experience-based design for integrating the patient care experience into healthcare improvement: Identifying a set of reliable emotion words.

    Science.gov (United States)

    Russ, Lauren R; Phillips, Jennifer; Brzozowicz, Keely; Chafetz, Lynne A; Plsek, Paul E; Blackmore, C Craig; Kaplan, Gary S

    2013-12-01

    Experience-based design is an emerging method used to capture the emotional content of patient and family member healthcare experiences, and can serve as the foundation for patient-centered healthcare improvement. However, a core tool, the experience-based design questionnaire, requires words with consistent emotional meaning. Our objective was to identify and evaluate an emotion word set reliably categorized across the demographic spectrum as expressing positive, negative, or neutral emotions for experience-based design improvement work. We surveyed 407 patients, family members, and healthcare workers in 2011. Participants designated each of 67 potential emotion words as positive, neutral, or negative based on their emotional perception of the word. Overall agreement was assessed using the kappa statistic. Words were selected for retention in the final emotion word set based on 80% simple agreement on classification of meaning across subgroups. The participants were 47.9% (195/407) patients, 19.4% (79/407) family members and 32.7% (133/407) healthcare staff. Overall agreement adjusted for chance was moderate (k=0.55). However, agreement for positive (k=0.69) and negative emotions (k=0.68) was substantially higher, while agreement in the neutral category was low (k=0.11). There were 20 positive, 1 neutral, and 14 negative words retained for the final experience-based design emotion word set. We identified a reliable set of emotion words for experience questionnaires to serve as the foundation for patient-centered, experience-based redesign of healthcare. Incorporation of patient and family member perspectives in healthcare requires reliable tools to capture the emotional content of care touch points. Copyright © 2013 Elsevier Inc. All rights reserved.
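
    A minimal sketch of the retention rule summarized above: a word is kept only when at least 80% of respondents in every subgroup assign it the same category. The subgroup names and response counts below are invented for illustration and do not reproduce the study data.

      # Illustrative per-subgroup agreement check for emotion word retention.
      from collections import Counter

      def agreement(labels):
          """Fraction of respondents who chose the modal category."""
          counts = Counter(labels)
          return counts.most_common(1)[0][1] / len(labels)

      def retain_word(responses_by_subgroup, threshold=0.80):
          """Keep a word only if every subgroup reaches the agreement threshold."""
          return all(agreement(labels) >= threshold
                     for labels in responses_by_subgroup.values())

      if __name__ == "__main__":
          word_responses = {
              "patients": ["positive"] * 170 + ["neutral"] * 15 + ["negative"] * 10,
              "family":   ["positive"] * 30 + ["neutral"] * 3,
              "staff":    ["positive"] * 120 + ["neutral"] * 13,
          }
          print("retain:", retain_word(word_responses))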

  13. Novel HPC-ibuprofen conjugates: synthesis, characterization, thermal analysis and degradation kinetics

    International Nuclear Information System (INIS)

    Hussain, M.A.; Lodhi, B.A.; Abbas, K.

    2014-01-01

    Naturally occurring hydrophilic polysaccharides are advantageously used as drug carriers because they provide a mechanism to improve drug action. Hydroxypropylcellulose (HPC) is water-soluble, biocompatible and bears hydroxyl groups for drug conjugation outside the parent polymeric chains. This unique geometry allows the attachment of drug molecules with higher covalent loading. The HPC-ibuprofen conjugates, as macromolecular prodrugs, were therefore synthesized employing a homogeneous, one-pot reaction methodology using p-toluenesulfonyl chloride in N,N-dimethylacetamide solvent at 80 °C for 24 h under a nitrogen atmosphere. Imidazole was used as a base for neutralization of acidic impurities. The present strategy proved effective, giving high yields (77-81%) and a high degree of drug substitution (DS 0.88-1.40) onto the HPC polymer, as determined by acid-base titration and verified by 1H-NMR spectroscopy. Gel permeation chromatography showed unimodal absorption, which indicates no significant degradation of the polymer during the reaction. Macromolecular prodrugs with different DS of ibuprofen were synthesized, purified, characterized and found to be soluble in organic solvents. From thermogravimetric analysis, the initial, maximum and final degradation temperatures of the conjugates were calculated and compared for relative thermal stability. Thermal degradation kinetics were also studied, and the results indicated that degradation of the conjugates follows approximately first-order kinetics, as calculated by the Kissinger model. The activation energy was moderate (92.38, 99.34 and 87.34 kJ/mol) as calculated using the Friedman, Broido and Chang models. These novel ibuprofen prodrugs were found to be thermally stable and may therefore have potential pharmaceutical applications. (author)

  14. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    Directory of Open Access Journals (Sweden)

    Won Cheol Yim

    2017-06-01

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST) and BLAST+ suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it still has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from 1 to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. This freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
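
    As a rough illustration of the query-distribution idea described in the abstract, the sketch below splits a FASTA query file into chunks and runs an independent BLAST search on each chunk in parallel. It is not the DCBLAST tool itself: the file names, the database name ("nt") and the use of local worker processes instead of per-node scheduler jobs are illustrative assumptions.

      # Minimal sketch of query splitting plus parallel BLAST runs (illustrative only).
      import subprocess
      from concurrent.futures import ProcessPoolExecutor

      def split_fasta(path, n_chunks):
          """Distribute the records of a FASTA file round-robin into n_chunks files."""
          chunks = [[] for _ in range(n_chunks)]
          record, idx = [], 0
          with open(path) as fh:
              for line in fh:
                  if line.startswith(">") and record:
                      chunks[idx % n_chunks].append("".join(record))
                      idx += 1
                      record = []
                  record.append(line)
          if record:
              chunks[idx % n_chunks].append("".join(record))
          names = []
          for i, recs in enumerate(chunks):
              name = f"query.chunk{i}.fasta"
              with open(name, "w") as out:
                  out.writelines(recs)
              names.append(name)
          return names

      def run_blast(chunk):
          """Run blastn on one chunk; assumes blastn and the 'nt' database are available."""
          out = chunk + ".tsv"
          subprocess.run(["blastn", "-query", chunk, "-db", "nt",
                          "-outfmt", "6", "-out", out], check=True)
          return out

      if __name__ == "__main__":
          chunk_files = split_fasta("queries.fasta", n_chunks=8)
          with ProcessPoolExecutor(max_workers=8) as pool:
              results = list(pool.map(run_blast, chunk_files))
          print("Per-chunk result files:", results)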

  15. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance the portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate provisioning and aggregation of multifaceted resources from the resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.

  16. Development and assessment of a fiber reinforced HPC container for radioactive waste

    International Nuclear Information System (INIS)

    Roulet, A.; Pineau, F.; Chanut, S.; Thibaux, Th.

    2007-01-01

    As part of its research into solutions for concrete disposal containers for long-lived radioactive waste, Andra defined requirements for high-performance concretes with enhanced porosity, diffusion, and permeability characteristics. This is the starting point for further research into severe conditions of containment and durability. To meet these objectives, Eiffage TP consequently developed a highly fibered High Performance Concrete (HPC) design mix using CEM V cement and silica fume. Mockups were then produced to characterize the performance of various container concepts with this new concrete mix. These mockups helped to identify possible manufacturing problems, particularly the risk of cracking due to restrained shrinkage. (authors)

  17. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    Science.gov (United States)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes in complex systems at molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting are problems related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software that provides automatic and unified monitoring of these computations. A complex computational task can be performed over different HPC systems. This requires output data synchronization between the storage chosen by a scientist and the HPC system used for the computations. The design of the computational domain is also a significant problem in itself, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.
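
    As a hedged illustration of the kind of initial-data generation step such a service would automate, the sketch below builds a simple cubic lattice of atoms and writes it as an XYZ file that a molecular dynamics code could read. The lattice type, spacing, element symbol and file name are assumptions for illustration, not details taken from the paper.

      # Illustrative initial-configuration generator (not the service described above).
      import itertools

      def simple_cubic_lattice(n_cells, spacing=3.5):
          """Atom coordinates (angstrom) for an n_cells^3 simple cubic lattice."""
          return [(i * spacing, j * spacing, k * spacing)
                  for i, j, k in itertools.product(range(n_cells), repeat=3)]

      def write_xyz(path, symbol, coords):
          """Write coordinates in the plain XYZ format: count, comment, then one atom per line."""
          with open(path, "w") as fh:
              fh.write(f"{len(coords)}\n")
              fh.write("generated initial configuration\n")
              for x, y, z in coords:
                  fh.write(f"{symbol} {x:.3f} {y:.3f} {z:.3f}\n")

      if __name__ == "__main__":
          atoms = simple_cubic_lattice(n_cells=10)   # 1000 atoms
          write_xyz("initial.xyz", "Ar", atoms)
          print(f"wrote {len(atoms)} atoms to initial.xyz")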

  18. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    Directory of Open Access Journals (Sweden)

    Puzyrkov Dmitry

    2018-01-01

    At the present stage of computer technology development it is possible to study the properties and processes in complex systems at molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting are problems related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software that provides automatic and unified monitoring of these computations. A complex computational task can be performed over different HPC systems. This requires output data synchronization between the storage chosen by a scientist and the HPC system used for the computations. The design of the computational domain is also a significant problem in itself, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  19. HPC Applications

    Czech Academy of Sciences Publication Activity Database

    Blaheta, Radim; Georgiev, I.; Georgiev, K.; Jakl, Ondřej; Kohut, Roman; Margenov, S.; Starý, Jiří

    2017-01-01

    Vol. 17, No. 5 (2017), pp. 5-16. ISSN 1311-9702. R&D Projects: GA MŠk LQ1602. Institutional support: RVO:68145535. Keywords: analysis of fiber-reinforced concrete; homogenization; identification of parameters; parallelizable solver; additive Schwarz method. Subject RIV: BA - General Mathematics. OECD field: Applied mathematics. http://www.cit.iit.bas.bg/cit_online_contents.html

  20. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 5.0: Verification and validation (V&V) manual. Volume 9

    International Nuclear Information System (INIS)

    Jones, J.L.; Calley, M.B.; Capps, E.L.; Zeigler, S.L.; Galyean, W.J.; Novack, S.D.; Smith, C.L.; Wolfram, L.M.

    1995-03-01

    A verification and validation (V&V) process has been performed for the System Analysis Programs for Hands-on Integrated Reliability Evaluation (SAPHIRE) Version 5.0. SAPHIRE is a set of four computer programs that the NRC developed for performing probabilistic risk assessments. They allow an analyst to perform many of the functions necessary to create, quantify, and evaluate the risk associated with a facility or process being analyzed. The programs are the Integrated Reliability and Risk Analysis System (IRRAS), System Analysis and Risk Assessment (SARA), Models And Results Database (MAR-D), and the Fault tree, Event tree, and Piping and instrumentation diagram (FEP) graphical editor. The intent of this program is to perform a V&V of successive versions of SAPHIRE. Previous efforts have been the V&V of SAPHIRE Version 4.0. The SAPHIRE 5.0 V&V plan is based on the SAPHIRE 4.0 V&V plan, with revisions to incorporate lessons learned from the previous effort. Also, the SAPHIRE 5.0 vital and nonvital test procedures are based on the test procedures from SAPHIRE 4.0, with revisions to include the new SAPHIRE 5.0 features as well as to incorporate lessons learned from the previous effort. Most results from the testing were acceptable; however, some discrepancies between expected code operation and actual code operation were identified. Modifications made to SAPHIRE are identified.

  1. Design reliability engineering

    International Nuclear Information System (INIS)

    Buden, D.; Hunt, R.N.M.

    1989-01-01

    Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method to integrate into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance and operator training. 1 fig

  2. Validity and reliability of the Nintendo Wii Balance Board to assess standing balance and sensory integration in highly functional older adults.

    Science.gov (United States)

    Scaglioni-Solano, Pietro; Aragón-Vargas, Luis F

    2014-06-01

    Standing balance is an important motor task. Postural instability associated with age typically arises from deterioration of peripheral sensory systems. The modified Clinical Test of Sensory Integration for Balance and the Tandem test have been used to screen for balance. Timed tests present some limitations, whereas quantification of the motions of the center of pressure (CoP) with portable and inexpensive equipment may help to improve the sensitivity of these tests and give the possibility of widespread use. This study determines the validity and reliability of the Wii Balance Board (Wii BB) to quantify CoP motions during the mentioned tests. Thirty-seven older adults completed three repetitions of five balance conditions: eyes open, eyes closed, eyes open on a compliant surface, eyes closed on a compliant surface, and tandem stance, all performed on a force plate and a Wii BB simultaneously. Twenty participants repeated the trials for reliability purposes. CoP displacement was the main outcome measure. Regression analysis indicated that the Wii BB has excellent concurrent validity, and Bland-Altman plots showed good agreement between devices with small mean differences and no relationship between the difference and the mean. Intraclass correlation coefficients (ICCs) indicated modest-to-excellent test-retest reliability (ICC=0.64-0.85). Standard error of measurement and minimal detectable change were similar for both devices, except the 'eyes closed' condition, with greater standard error of measurement for the Wii BB. In conclusion, the Wii BB is shown to be a valid and reliable method to quantify CoP displacement in older adults.
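
    As a hedged illustration of how CoP displacement can be quantified from a balance board, the sketch below derives CoP coordinates from four corner load values and sums the point-to-point displacement over a trial. The corner naming and the nominal sensing-area dimensions (about 433 x 228 mm) are assumptions for illustration, not details taken from the study.

      # Illustrative CoP and path-length computation from four corner load cells.
      import math

      def cop(tr, br, tl, bl, width=433.0, length=228.0):
          """CoP coordinates (mm) from top-right, bottom-right, top-left, bottom-left loads."""
          total = tr + br + tl + bl
          x = (width / 2.0) * ((tr + br) - (tl + bl)) / total
          y = (length / 2.0) * ((tr + tl) - (br + bl)) / total
          return x, y

      def cop_path_length(samples):
          """Total CoP displacement (mm) over a trial of corner-load samples."""
          points = [cop(*s) for s in samples]
          return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

      if __name__ == "__main__":
          trial = [(20.1, 19.8, 20.0, 20.1), (20.4, 19.6, 19.9, 20.1), (20.2, 19.9, 20.1, 19.8)]
          print(f"CoP path length: {cop_path_length(trial):.2f} mm")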

  3. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. The treatment draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It provides a history of human reliability analysis and includes examples of the application of the systems approach.

  4. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of a shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
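
    To make the XOR-leading-zero idea concrete, the sketch below counts the leading zero bits in the XOR of the IEEE-754 encodings of consecutive values and searches a small grid of shift offsets for the one that maximizes the total number of elidable bits. It is an illustration of the principle under stated assumptions, not the authors' algorithm; the data, the shift grid and the bit-level encoding details are placeholders.

      # Illustrative XOR-leading-zero counting with a brute-force shift search.
      import struct

      def leading_zero_bits(a: float, b: float) -> int:
          """Leading zero bits of the XOR of the 64-bit IEEE-754 encodings of a and b."""
          ia = struct.unpack(">Q", struct.pack(">d", a))[0]
          ib = struct.unpack(">Q", struct.pack(">d", b))[0]
          x = ia ^ ib
          return 64 if x == 0 else 64 - x.bit_length()

      def total_elidable_bits(values, shift=0.0):
          """Sum of XOR-leading-zero bits between consecutive (optionally shifted) values."""
          shifted = [v + shift for v in values]
          return sum(leading_zero_bits(x, y) for x, y in zip(shifted, shifted[1:]))

      if __name__ == "__main__":
          data = [1.0001, 1.0002, 1.0004, 0.9998, 1.0010]
          best_shift = max((s / 10 for s in range(-10, 11)),
                           key=lambda s: total_elidable_bits(data, s))
          print("best shift:", best_shift,
                "elidable bits:", total_elidable_bits(data, best_shift))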

  5. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wadhwa, Bharti [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science; Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Butt, Ali R. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk- and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically rich data abstractions for scientific data management on large-scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next-generation scalable computing systems by presenting the mapping of data I/O from two real-world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes and, compared to the state of the art such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.

  6. Degradation of 2,4,6-Trinitrophenol (TNP) by Arthrobacter sp. HPC1223 Isolated from Effluent Treatment Plant

    OpenAIRE

    Qureshi, Asifa; Kapley, Atya; Purohit, Hemant J.

    2012-01-01

    Arthrobacter sp. HPC1223 (GenBank Accession No. AY948280), isolated from the activated biomass of an effluent treatment plant, was capable of utilizing 2,4,6-trinitrophenol (TNP) as a nitrogen source under aerobic conditions at 30 °C and pH 7. It was observed that the isolate utilized TNP up to 70% (1 mM) in R2A medium with nitrite release. The culture growth medium turned an orange-red color, attributed to a hydride-Meisenheimer complex, at 24 h as detected by HPLC. Oxygen uptake of Arthrobacter HPC1223 towa...

  7. Integrated approach for combining sustainability and safety into a RAM analysis, RAM2S (Reliability, Availability, Maintainability, Sustainability and Safety) towards greenhouse gases emission targets

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga, Tobias V. [Det Norske Veritas (DNV), Hovik, Oslo (Norway)

    2009-07-01

    This paper aims to present an approach for integrating sustainability and safety concerns on top of a typical RAM analysis, to support new enterprises in finding alternatives to align themselves with greenhouse gas emission targets, measured as CO2 (carbon dioxide) equivalent. This approach can be used to measure the impact of potential CO2-equivalent emission levels, mainly for new enterprises with high CO2 content, on the environment and on production, for example the extraction of oil and gas from the Brazilian pre-salt layers. In this sense, this integrated approach, combining sustainability and safety into a RAM analysis, RAM2S (Reliability, Availability, Maintainability, Sustainability and Safety), can be used to assess the impact of CO2 'production' along the entire enterprise life-cycle, including the impact of possible facility shutdowns due to emission restriction limits, as well as additional failure modes related to CO2 corrosion. Thus, in the end, this integrated approach would allow companies to find a more cost-effective alternative for adapting their business to the reality of global warming, overcoming the inherent threats of greenhouse gases. (author)

  8. ISC High Performance 2017 International Workshops, DRBSD, ExaComm, HCPM, HPC-IODC, IWOPH, IXPUG, P^3MA, VHPC, Visualization at Scale, WOPSSS

    CERN Document Server

    Yokota, Rio; Taufer, Michela; Shalf, John

    2017-01-01

    This book constitutes revised selected papers from 10 workshops that were held at the ISC High Performance 2017 conference in Frankfurt, Germany, in June 2017. The 59 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They stem from the following workshops: Workshop on Virtualization in High-Performance Cloud Computing (VHPC); Visualization at Scale: Deployment Case Studies and Experience Reports; International Workshop on Performance Portable Programming Models for Accelerators (P^3MA); OpenPOWER for HPC (IWOPH); International Workshop on Data Reduction for Big Scientific Data (DRBSD); International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale; Workshop on HPC Computing in a Post Moore's Law World (HCPM); HPC I/O in the Data Center (HPC-IODC); Workshop on Performance and Scalability of Storage Systems (WOPSSS); IXPUG: Experiences on Intel Knights Landing at the One Year Mark; International Workshop on Communicati...

  9. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0. Volume 5, Systems Analysis and Risk Assessment (SARA) tutorial manual

    International Nuclear Information System (INIS)

    Sattison, M.B.; Russell, K.D.; Skinner, N.L.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs) primarily for nuclear power plants. This volume is the tutorial manual for the Systems Analysis and Risk Assessment (SARA) System Version 5.0, a microcomputer-based system used to analyze the safety issues of a "family" [i.e., a power plant, a manufacturing facility, or any facility on which a probabilistic risk assessment (PRA) might be performed]. A series of lessons is provided that guides the user through some basic steps common to most analyses performed with SARA. The example problems presented in the lessons build on one another and, in combination, lead the user through all aspects of SARA sensitivity analysis capabilities.

  10. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Models and Results Database (MAR-D) reference manual. Volume 8

    International Nuclear Information System (INIS)

    Russell, K.D.; Skinner, N.L.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The primary function of MAR-D is to create a data repository for completed PRAs and Individual Plant Examinations (IPEs) by providing input, conversion, and output capabilities for data used by IRRAS, SARA, SETS, and FRANTIC software. As probabilistic risk assessments and individual plant examinations are submitted to the NRC for review, MAR-D can be used to convert the models and results from the study for use with IRRAS and SARA. Then, these data can be easily accessed by future studies and will be in a form that will enhance the analysis process. This reference manual provides an overview of the functions available within MAR-D and step-by-step operating instructions

  11. Shock and vibration effects on performance reliability and mechanical integrity of proton exchange membrane fuel cells: A critical review and discussion

    Science.gov (United States)

    Haji Hosseinloo, Ashkan; Ehteshami, Mohsen Mousavi

    2017-10-01

    Performance reliability and mechanical integrity are the main bottlenecks in the mass commercialization of PEMFCs for applications with inherently harsh environments, such as automotive and aerospace applications. Shock and vibration imparted to the fuel cell in such applications can bring about numerous issues, including clamping torque loosening, gas leakage, increased electrical resistance, and structural damage and breakage. Here, we provide a comprehensive review and critique of the literature focusing on the effects of mechanically harsh environments on PEMFCs, and at the end, we suggest two main future directions in FC technology research that need immediate attention: (i) developing a generic and adequately accurate dynamic model of PEMFCs to assess the dynamic response of FC devices, and (ii) designing effective and robust shock and vibration protection systems based on the models developed in (i).

  12. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    International Nuclear Information System (INIS)

    Boring, Ronald; Mandelli, Diego; Rasmussen, Martin; Ulrich, Thomas; Groth, Katrina; Smith, Curtis

    2016-01-01

    This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model the dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals: • Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients • Consideration of a PRA context • Incorporation of a solid psychological basis for operator performance • Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response. This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  13. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rasmussen, Martin [Norwegian Univ. of Science and Technology, Trondheim (Norway). Social Research; Herberger, Sarah [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ulrich, Thomas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-06-01

    This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model the dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals: • Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients • Consideration of a PRA context • Incorporation of a solid psychological basis for operator performance • Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response. This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  14. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    CERN Document Server

    Kennedy, John; The ATLAS collaboration; Mazzaferro, Luca; Walker, Rodney

    2015-01-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. This high luminosity phase will be accompanied by a need for increasing amounts of simulated data, which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP and RZG to provide access to...

  15. Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Canon, Shane

    2011-10-12

    DOE JGI's Zhong Wang, chair of the High-performance Computing session, gives a brief introduction before Berkeley Lab's Shane Canon talks about "Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  16. HPC Colony II: FAST_OS II: Operating Systems and Runtime Systems at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, Jose [IBM, Armonk, NY (United States)

    2013-11-13

    HPC Colony II has been a 36-month project focused on providing portable performance for leadership class machines—a task made difficult by the emerging variety of more complex computer architectures. The project attempts to move the burden of portable performance to adaptive system software, thereby allowing domain scientists to concentrate on their field rather than the fine details of a new leadership class machine. To accomplish our goals, we focused on adding intelligence into the system software stack. Our revised components include: new techniques to address OS jitter; new techniques to dynamically address load imbalances; new techniques to map resources according to architectural subtleties and application dynamic behavior; new techniques to dramatically improve the performance of checkpoint-restart; and new techniques to address membership service issues at scale.

  17. Final Report for File System Support for Burst Buffers on HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Yu, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-11-27

    Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations is of great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV: a Key-Value Store for Metadata Management of Distributed Burst Buffers; a user-level file system with multiple backends; and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.

  18. Charliecloud: Unprivileged containers for user-defined software stacks in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Priedhorsky, Reid [Los Alamos National Laboratory; Randles, Timothy C. [Los Alamos National Laboratory

    2016-08-09

    Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.

  19. Study of thermal performance of capillary micro tubes integrated into the building sandwich element made of high performance concrete

    DEFF Research Database (Denmark)

    Mikeska, Tomas; Svendsen, Svend

    2013-01-01

    The thermal performance of radiant heating and cooling systems (RHCS) composed of capillary micro tubes (CMT) integrated into the inner plate of sandwich elements made of high performance concrete (HPC) was investigated in the article. Temperature distribution in HPC elements around the integrated CMT ... HPC layer covering the CMT. The investigations were conceived as a low temperature concept, where the difference between the temperature of the circulating fluid and the air in the room was kept in the range of 1–4 °C. This paper shows that CMT integrated into the thin plate of a sandwich element made of HPC can supply the energy needed for heating (cooling) and at the same time create a comfortable and healthy environment for the occupants. This solution is very suitable for heating and cooling purposes of future low energy buildings.

  20. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion of failure rate in software reliability growth models.

  1. 25. MPA-seminar: safety and reliability of plant technology with special emphasis on safety and reliability - integrity proofs, qualification of components, damage prevention. Vol. 1. Papers 1-29

    International Nuclear Information System (INIS)

    1999-01-01

    The proceedings of the 25th MPA Seminar on 'Safety and Reliability of Plant Technology' were issued in two volumes. The main topics of the first volume are: 1. Structural and safety analysis, 2. Reliability analysis, 3. Fracture mechanics, and 4. Nondestructive Testing.

  2. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. To this end, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment; the nature of human error; classification of errors in man-machine systems; practical aspects; human reliability modelling in complex situations; quantification and examination of human reliability; judgement-based approaches; holistic techniques; and decision analytic approaches. (UK)

  3. Integration of the functional reliability of two passive safety systems to mitigate a SBLOCA+BO in a CAREM-like reactor PSA

    Energy Technology Data Exchange (ETDEWEB)

    Mezio, Federico, E-mail: federico.mezio@cab.cnea.gov.ar [CNEA, Sede Central, Av. Del Libertador 8250, CABA (Argentina); Grinberg, Mariela [CNEA, Centro Atómico Bariloche, S.C. de Bariloche, Río Negro (Argentina); Lorenzo, Gabriel [CNEA, Sede Central, Av. Del Libertador 8250, CABA (Argentina); Giménez, Marcelo [CNEA, Centro Atómico Bariloche, S.C. de Bariloche, Río Negro (Argentina)

    2014-04-01

    Highlights: • An estimation of the functional unreliability (FU) was performed using the RMPS methodology. • The methodology uses an improved response surface in order to estimate the FU. • The FU may become relevant to analyze in passive safety systems. • Two ways were proposed to incorporate the FU into a PSA. - Abstract: This paper describes a case study of a methodological approach for assessing the functional reliability of passive safety systems (PSS) and its treatment within a probabilistic safety assessment (PSA). The functional unreliability (FU) can be understood as the probability that a PSS fails to fulfill its mission due to the impairment of the related passive safety function. The accomplishment of the safety function is characterized and quantified by a performance indicator (PI), which is a measure of how far the system is from fulfilling its mission. PI uncertainties are estimated from the uncertainty propagation of selected parameters. A methodology based on the reliability methodology for passive systems (RMPS) is used to estimate the FU associated with the isolation condensers (ICs) in combination with the accumulators (medium pressure injection system) of a CAREM-like integral advanced reactor. A small break loss of coolant accident with blackout is selected as the evaluation case. This implies success of reactor shutdown (inherent) and failure of residual heat removal by active systems. The safety function to accomplish is to refill the reactor pressure vessel (RPV) in order to avoid core damage. For this case, to allow the discharge of the accumulators into the RPV, the pressure must be reduced by the ICs. The methodology for passive safety function assessment considers uncertainties in code parameters, besides uncertainties in engineering parameters (design, construction, operation and maintenance), in order to perform Monte Carlo simulations based on a best estimate (B-E) plant model. Then, response surfaces based on the PI are used for improving the

  4. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques have been developed subsequently as a need of the diverse engineering disciplines, nevertheless they are not few those that think they have been work a lot on reliability before the same word was used in the current context. Military, space and nuclear industries were the first ones that have been involved in this topic, however not only in these environments it is that it has been carried out this small great revolution in benefit of the increase of the reliability figures of the products of those industries, but rather it has extended to the whole industry. The fact of the massive production, characteristic of the current industries, drove four decades ago, to the fall of the reliability of its products, on one hand, because the massively itself and, for other, to the recently discovered and even not stabilized industrial techniques. Industry should be changed according to those two new requirements, creating products of medium complexity and assuring an enough reliability appropriated to production costs and controls. Reliability began to be integral part of the manufactured product. Facing this philosophy, the book describes reliability techniques applied to electronics systems and provides a coherent and rigorous framework for these diverse activities providing a unifying scientific basis for the entire subject. It consists of eight chapters plus a lot of statistical tables and an extensive annotated bibliography. Chapters embrace the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4-Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. This book is in Spanish language and has a potentially diverse audience as a text book from academic to industrial courses. (author)

  5. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though the methods should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation in order to anticipate the future memory access pattern. Access pattern forecasts can then be used to formulate optimization decisions during application execution which improve the performance of the application at a future time step based on the observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver a good aggregate performance. We used these optimization techniques and the anticipation strategy to design a cache-aware, memory-efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.
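
    To make the anticipation strategy concrete, here is a minimal, hypothetical sketch (not the dissertation's code): per-item memory-access offsets observed at earlier time steps are extrapolated one step ahead, and the forecast is used to pre-sort work items so that neighbouring threads touch neighbouring memory. The linear-extrapolation choice and all names are illustrative assumptions.

      # Hedged sketch of the anticipation idea: forecast the next time step's
      # access offsets from past steps, then reorder work items accordingly.
      import numpy as np

      def forecast_offsets(history):
          """history: array (n_steps, n_items) of past access offsets.
          Returns a linearly extrapolated offset per item for the next step."""
          steps = np.arange(history.shape[0])
          coeffs = np.polyfit(steps, history, deg=1)   # shape (2, n_items)
          return coeffs[0] * history.shape[0] + coeffs[1]

      def schedule(history):
          """Order items by predicted offset so adjacent threads access
          adjacent memory (better coalescing on GPUs / Xeon Phi)."""
          predicted = forecast_offsets(np.asarray(history, dtype=float))
          return np.argsort(predicted)

      if __name__ == "__main__":
          past = [[0, 40, 10, 30], [2, 44, 11, 33], [4, 48, 12, 36]]
          print(schedule(past))   # -> [0 2 3 1]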

  6. FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment

    Science.gov (United States)

    Loewe, P.; Klump, J.; Thaler, J.

    2012-12-01

    High performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables us to take on geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as a Geographic Information System (GIS), provided they are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing numbers of data- or computation-intense tasks undertaken, these tasks do not get even close to the requirements needed for access to "top shelf" national cluster facilities. So until recently such geocomputation research was effectively barred by a lack of access to adequate resources. In this paper we report on the experiences gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free Open Source (FOSS) GIS project. During ramp up in 2011, multiple versions of GRASS GIS (v 6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and allow all 3084 cores to be used for GRASS-based geocomputation work. However, in practice applications are limited to the resources assigned to their respective queue. Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing and the generation of maps of simulated tsunamis
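
    As an illustration of deploying a scripted geocomputation task onto a dedicated LSF processing queue, the following is a hedged sketch; the queue name, paths, core count and the GRASS 7-style batch invocation are assumptions for the example, not the GFZ configuration.

      # Hypothetical wrapper around LSF's bsub for scripted GRASS GIS tasks.
      import subprocess

      def submit_grass_task(script, gisdb, location, mapset,
                            queue="geocomp", cores=8, log="grass_%J.log"):
          """Submit one GRASS GIS batch script to an LSF processing queue."""
          cmd = [
              "bsub", "-q", queue, "-n", str(cores), "-o", log,
              # GRASS 7-style batch execution (assumed); GRASS 6 setups used
              # the GRASS_BATCH_JOB environment variable instead.
              "grass", f"{gisdb}/{location}/{mapset}", "--exec", "bash", script,
          ]
          subprocess.run(cmd, check=True)

      if __name__ == "__main__":
          submit_grass_task("hydro_analysis.sh", "/data/grassdata", "world_ll",
                            "PERMANENT", queue="long_queue", cores=16)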

  7. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    Science.gov (United States)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the

  8. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Earl C. [IDC Research Inc., Framingham, MA (United States); Conway, Steve [IDC Research Inc., Framingham, MA (United States); Dekate, Chirag [IDC Research Inc., Framingham, MA (United States)

    2013-09-30

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index. The research also developed an expansive list of HPC success stories.
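
    A stylised reading of the revenue-based ratio such a model reports (an illustrative simplification, not IDC's actual formulation) is

      \[
        \mathrm{ROI}_{\text{revenue}} \;=\; \frac{\sum_{p} \Delta R_{p}}{I_{\mathrm{HPC}}},
      \]

    where $\Delta R_{p}$ is the revenue gain attributed to HPC by project $p$ and $I_{\mathrm{HPC}}$ is the corresponding HPC investment; analogous ratios can be formed for profits (or cost savings) and for jobs created per unit of investment.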

  9. Hawaii Electric System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
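
    The integration of system cost with outage cost described above can be written, in one common textbook form (an assumed formalisation, not a quotation from the report), as the choice of reserve capacity $R$ that minimises total cost:

      \[
        \min_{R}\; C_{\mathrm{total}}(R) \;=\; c_{\mathrm{cap}}\,R \;+\; \mathrm{VOLL}\cdot\mathrm{EUE}(R),
      \]

    where $c_{\mathrm{cap}}$ is the annualised cost per MW of reserve capacity, $\mathrm{VOLL}$ is the customers' value of lost load (\$/MWh), and $\mathrm{EUE}(R)$ is the expected unserved energy at reserve level $R$. At the optimum, the marginal cost of additional capacity just equals the marginal outage cost it avoids.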

  10. Hawaii electric system reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  11. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    Science.gov (United States)

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  12. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Packet-Level Analysis

    Science.gov (United States)

    2015-09-01

    Only fragments of this report are indexed: individual packet fragments are matched using a hash-based method, and fragments generally appear in order and relatively close to each other in the capture file; one data product derived from the data model is a Google Earth Keyhole Markup Language (KML) file containing aggregate information. The remaining indexed text is an acronym list (BLOb: binary large object; FPGA: field-programmable gate array; HPC: high-performance computing; IP: Internet Protocol; KML: Keyhole Markup Language).

  13. Mechanisms of adhesion and subsequent actions of a haematopoietic stem cell line, HPC-7, in the injured murine intestinal microcirculation in vivo.

    Directory of Open Access Journals (Sweden)

    Dean P J Kavanagh

    Although haematopoietic stem cells (HSCs) migrate to injured gut, therapeutic success clinically remains poor. This has been partially attributed to limited local HSC recruitment following systemic injection. Identifying site specific adhesive mechanisms underpinning HSC-endothelial interactions may provide important information on how to enhance their recruitment and thus potentially improve therapeutic efficacy. This study determined (i) the integrins and inflammatory cyto/chemokines governing HSC adhesion to injured gut and muscle, (ii) whether pre-treating HSCs with these cyto/chemokines enhanced their adhesion, and (iii) whether the degree of HSC adhesion influenced their ability to modulate leukocyte recruitment. Adhesion of HPC-7, a murine HSC line, to ischaemia-reperfused (IR) injured mouse gut or cremaster muscle was monitored intravitally. Critical adhesion molecules were identified by pre-treating HPC-7 with blocking antibodies to CD18 and CD49d. To identify cyto/chemokines capable of recruiting HPC-7, adhesion was monitored following tissue exposure to TNF-α, IL-1β or CXCL12. The effects of pre-treating HPC-7 with these cyto/chemokines on surface integrin expression/clustering, adhesion to ICAM-1/VCAM-1 and recruitment in vivo was also investigated. Endogenous leukocyte adhesion following HPC-7 injection was again determined intravitally. IR injury increased HPC-7 adhesion in vivo, with intestinal adhesion dependent upon CD18 and muscle adhesion predominantly relying on CD49d. Only CXCL12 pre-treatment enhanced HPC-7 adhesion within injured gut, likely by increasing CD18 binding to ICAM-1 and/or CD18 surface clustering on HPC-7. Leukocyte adhesion was reduced at 4 hours post-reperfusion, but only when local HPC-7 adhesion was enhanced using CXCL12. This data provides evidence that site-specific molecular mechanisms govern HPC-7 adhesion to injured tissue. Importantly, we show that HPC-7 adhesion is a modulatable event in IR injury and

  14. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be
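
    For background on the failure-rate discussion the record breaks off on, the standard constant-failure-rate model used throughout such texts (stated here as a general reminder, not quoted from the book) is

      \[
        R(t) \;=\; e^{-\lambda t}, \qquad
        \mathrm{MTBF} \;=\; \int_{0}^{\infty} R(t)\,\mathrm{d}t \;=\; \frac{1}{\lambda},
      \]

    where $\lambda$ is the (assumed constant) failure rate and $R(t)$ is the probability that the item survives to time $t$ without failure.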

  15. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.
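
    The basic reliability theory referred to above rests on the series and parallel system relations; as a standard refresher (not material quoted from the training itself):

      \[
        R_{\mathrm{series}} \;=\; \prod_{i=1}^{n} R_{i}, \qquad
        R_{\mathrm{parallel}} \;=\; 1 - \prod_{i=1}^{n} \left(1 - R_{i}\right),
      \]

    so, for example, two independent components of reliability 0.9 yield a system reliability of 0.81 in series but 0.99 in parallel, which is why redundancy is the basic tool for controlling and eliminating failures.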

  16. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    Directory of Open Access Journals (Sweden)

    Piero Colli Franzone

    2018-04-01

    We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks.

  17. Network Traffic Analysis With Query Driven Visualization - SC 2005 HPC Analytics Results

    Energy Technology Data Exchange (ETDEWEB)

    Stockinger, Kurt; Wu, Kesheng; Campbell, Scott; Lau, Stephen; Fisk, Mike; Gavrilov, Eugene; Kent, Alex; Davis, Christopher E.; Olinger, Rick; Young, Rob; Prewett, Jim; Weber, Paul; Caudell, Thomas P.; Bethel, E. Wes; Smith, Steve

    2005-09-01

    Our analytics challenge is to identify, characterize, and visualize anomalous subsets of large collections of network connection data. We use a combination of HPC resources, advanced algorithms, and visualization techniques. To effectively and efficiently identify the salient portions of the data, we rely on a multi-stage workflow that includes data acquisition, summarization (feature extraction), novelty detection, and classification. Once these subsets of interest have been identified and automatically characterized, we use a state-of-the-art high-dimensional query system to extract data subsets for interactive visualization. Our approach is equally useful for other large-data analysis problems where it is more practical to identify interesting subsets of the data for visualization than to render all data elements. By reducing the size of the rendering workload, we enable highly interactive and useful visualizations. As a result of this work we were able to analyze six months' worth of data interactively with response times two orders of magnitude shorter than with conventional methods.
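
    A minimal, hypothetical sketch of that multi-stage idea follows: summarise connection records, flag the anomalous subset, and hand only that subset to the visualization stage. The field layout, the toy records and the z-score rule are illustrative assumptions, not the components of the actual workflow.

      # Toy pipeline: summarisation (feature extraction) -> novelty detection
      # -> selection of the subset that would be sent on for visualization.
      import numpy as np

      def summarise(connections):
          """Feature extraction: total bytes transferred per source host."""
          totals = {}
          for src, dst, nbytes in connections:
              totals[src] = totals.get(src, 0) + nbytes
          return totals

      def novel_subset(totals, z_cut=3.0):
          """Novelty detection: hosts more than z_cut std devs above the mean."""
          values = np.array(list(totals.values()), dtype=float)
          mean, std = values.mean(), values.std() or 1.0
          return {host: v for host, v in totals.items() if (v - mean) / std > z_cut}

      if __name__ == "__main__":
          records = [(f"10.0.0.{i}", "10.0.1.9", 1_000 + i) for i in range(1, 40)]
          records.append(("10.0.9.66", "10.0.1.9", 5_000_000))   # the anomaly
          print(novel_subset(summarise(records)))                # -> only 10.0.9.66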

  18. A graphical user interface for real-time analysis of XPCS using HPC

    Energy Technology Data Exchange (ETDEWEB)

    Sikorski, M., E-mail: sikorski@aps.anl.gov [Argonne National Laboratory, Advanced Photon Source, 9700 S Cass Ave, Argonne, IL 60439 (United States); Jiang, Z. [Argonne National Laboratory, Advanced Photon Source, 9700 S Cass Ave, Argonne, IL 60439 (United States); Sprung, M. [HASYLAB at DESY, Notkestr. 85, D 22-607 Hamburg (Germany); Narayanan, S.; Sandy, A.R.; Tieman, B. [Argonne National Laboratory, Advanced Photon Source, 9700 S Cass Ave, Argonne, IL 60439 (United States)

    2011-09-01

    With the development of third generation synchrotron radiation sources, X-ray photon correlation spectroscopy has emerged as a powerful technique for characterizing equilibrium and non-equilibrium dynamics in complex materials at nanometer length scales over a wide range of time-scales (0.001-1000 s). Moreover, the development of powerful new direct detection CCD cameras has allowed investigation of faster dynamical processes. A consequence of these technical improvements is the need to reduce a very large amount of area detector data within a short time. This problem can be solved by utilizing a large number of processors (32-64) in the cluster architecture to improve the efficiency of the calculations by 1-2 orders of magnitude (Tieman et al., this issue). However, to make such a data analysis system operational, powerful and user-friendly control software needs to be developed. As a part of the effort to maintain a high data acquisition and reduction rate, we have developed a Matlab-based software that acts as an interface between the user and the high performance computing (HPC) cluster.
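
    The central quantity such an XPCS reduction pipeline computes on the cluster is the intensity autocorrelation function g2. The following is a textbook direct estimator for a single q-bin of detector pixels, written as an illustrative Python sketch rather than the Matlab/HPC code described above.

      # Direct (non-multi-tau) estimator of g2(tau) for one q-bin of pixels.
      # Simplified normalisation by the global mean intensity; production codes
      # normalise per pixel and per time block.
      import numpy as np

      def g2(frames):
          """frames: array (n_times, n_pixels) of intensities in one q ring.
          Returns g2 for lag times tau = 1 .. n_times-1 frames."""
          frames = np.asarray(frames, dtype=float)
          n_t = frames.shape[0]
          mean_i = frames.mean()
          out = np.empty(n_t - 1)
          for tau in range(1, n_t):
              out[tau - 1] = (frames[:-tau] * frames[tau:]).mean() / mean_i**2
          return out

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          demo = rng.poisson(5.0, size=(200, 64))   # uncorrelated test frames
          print(g2(demo)[:5])                       # values close to 1.0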

  19. Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Christopher H.; Long, Hai; Sides, Scott; Vaidhynathan, Deepthi; Jones, Wesley

    2015-10-15

    Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion from this study is that the additional cores are of little value to individual task performance--limitations to application parallelism, or resource contention among concurrently running but independent tasks, limits effective utilization of these added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance to procurement of future HPC systems at NREL. First, raw core count must be balanced with available resources, particularly memory bandwidth. Balance-of-system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might occur through enabling multiple concurrent jobs per node. Given the right type and size of workload, more may be achieved by doing many slow things at once, than fast things in order.

  20. Engraftment Outcomes after HPC Co-Culture with Mesenchymal Stromal Cells and Osteoblasts

    Directory of Open Access Journals (Sweden)

    Matthew M. Cook

    2013-09-01

    Haematopoietic stem cell (HSC) transplantation is an established cell-based therapy for a number of haematological diseases. To enhance this therapy, there is considerable interest in expanding HSCs in artificial niches prior to transplantation. This study compared murine HSC expansion supported through co-culture on monolayers of either undifferentiated mesenchymal stromal cells (MSCs) or osteoblasts. Sorted Lineage− Sca-1+ c-kit+ (LSK) haematopoietic stem/progenitor cells (HPCs) demonstrated proliferative capacity on both stromal monolayers with the greatest expansion of LSK shown in cultures supported by osteoblast monolayers. After transplantation, both types of bulk-expanded cultures were capable of engrafting and repopulating lethally irradiated primary and secondary murine recipients. LSKs co-cultured on MSCs showed comparable, but not superior, reconstitution ability to that of freshly isolated LSKs. Surprisingly, however, osteoblast co-cultured LSKs showed significantly poorer haematopoietic reconstitution compared to LSKs co-cultured on MSCs, likely due to a delay in short-term reconstitution. We demonstrated that stromal monolayers can be used to maintain, but not expand, functional HSCs without a need for additional haematopoietic growth factors. We also demonstrated that despite apparently superior in vitro performance, co-injection of bulk cultures of osteoblasts and LSKs in vivo was detrimental to recipient survival and should be avoided in translation to clinical practice.

  1. A graphical user interface for real-time analysis of XPCS using HPC

    International Nuclear Information System (INIS)

    Sikorski, M.; Jiang, Z.; Sprung, M.; Narayanan, S.; Sandy, A.R.; Tieman, B.

    2011-01-01

    With the development of third generation synchrotron radiation sources, X-ray photon correlation spectroscopy has emerged as a powerful technique for characterizing equilibrium and non-equilibrium dynamics in complex materials at nanometer length scales over a wide range of time-scales (0.001-1000 s). Moreover, the development of powerful new direct detection CCD cameras has allowed investigation of faster dynamical processes. A consequence of these technical improvements is the need to reduce a very large amount of area detector data within a short time. This problem can be solved by utilizing a large number of processors (32-64) in the cluster architecture to improve the efficiency of the calculations by 1-2 orders of magnitude (Tieman et al., this issue). However, to make such a data analysis system operational, powerful and user-friendly control software needs to be developed. As a part of the effort to maintain a high data acquisition and reduction rate, we have developed a Matlab-based software that acts as an interface between the user and the high performance computing (HPC) cluster.

  2. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    Science.gov (United States)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed — meaning cooling by appropriate technology — with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features — dense packing, distributed storage and high performance interconnect — represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  3. Synergy between the CIMENT tier-2 HPC centre and the HEP community at LPSC in Grenoble (France)

    International Nuclear Information System (INIS)

    Biscarat, C; Bzeznik, B

    2014-01-01

    Two of the most pressing questions in current research in Particle Physics are the characterisation of the newly discovered Higgs-like boson at the LHC and the search for New Phenomena beyond the Standard Model of Particle Physics. Physicists at LPSC in Grenoble are leading the search for one type of New Phenomena in ATLAS. Given the rich multitude of physics studies proceeding in parallel in ATLAS, one limiting factor in the timely analysis of data is the availability of computing resources. Another LPSC team suffers from the same limitation. This team is leading the ultimate precision measurement of the W boson mass with DØ data, which yields an indirect constraint on the Higgs boson mass which can be compared with the direct measurements of the mass of the newly discovered boson at LHC. In this paper, we describe the synergy between CIMENT, a regional multidisciplinary HPC centre, and the HEP community in Grenoble in the context of the analysis of data recorded by the ATLAS experiment at the LHC collider and the D0 experiment at the Tevatron collider. CIMENT is a federation of twelve HPC clusters, of about 90 TFlop/s, one of the most powerful HPC tier-2 centres in France. The sharing of resources between different scientific fields, like the ones discussed in this article, constitutes a great asset because the spikes in need of computing resources are uncorrelated in time between different fields.

  4. Multidisciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
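
    The fast probability integration referred to above is conventionally a first-order reliability approximation; as a standard statement of the idea (background convention, not a formula quoted from the study):

      \[
        P_{f} \;=\; P\bigl(g(\mathbf{X}) \le 0\bigr) \;\approx\; \Phi(-\beta),
      \]

    where $g(\mathbf{X})$ is the limit-state function of a failure mode (structural, thermal, fluid or electrical once the mechanical equivalence is exploited), $\beta$ is the reliability index returned by the probability-integration algorithm, and $\Phi$ is the standard normal cumulative distribution function; the per-mode probabilities are then combined into the system reliability.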

  5. Integration

    DEFF Research Database (Denmark)

    Emerek, Ruth

    2004-01-01

    The contribution discusses the different conceptions of integration in Denmark - and what can be understood by successful integration.

  6. Role of W and Mn for reliable 1X nanometer-node ultra-large-scale integration Cu interconnects proved by atom probe tomography

    Energy Technology Data Exchange (ETDEWEB)

    Shima, K.; Shimizu, H.; Momose, T.; Shimogaki, Y. [Department of Materials Engineering, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Tu, Y. [The Oarai Center, Institute for Materials Research, Tohoku University, Oarai, Ibaraki 311-1313 (Japan); Key Laboratory of Polar Materials and Devices, Ministry of Education, East China Normal University, Shanghai 200241 (China); Takamizawa, H.; Shimizu, Y.; Inoue, K.; Nagai, Y. [The Oarai Center, Institute for Materials Research, Tohoku University, Oarai, Ibaraki 311-1313 (Japan)

    2014-09-29

    We used atom probe tomography (APT) to study the use of Cu(Mn) as a seed layer for Cu and a Co(W) single layer as a reliable Cu diffusion barrier for future interconnects in ultra-large-scale integration. The Co(W) layer enhances adhesion of Cu to prevent electromigration and stress-induced voiding failures. The use of Cu(Mn) as the seed layer may enhance the diffusion barrier performance of Co(W) by stuffing the Cu diffusion paths with Mn. APT was used to visualize the distribution of W and Mn in three dimensions with sub-nanometer resolution. W was found to segregate at the grain boundaries of Co, which prevents diffusion of Cu via the grain boundaries. Mn was found to diffuse from the Cu(Mn) layer to the Co(W) layer and selectively segregate at the Co(W) grain boundaries with W, reinforcing the barrier properties of the Co(W) layer. Hence, a Co(W) barrier coupled with a Cu(Mn) seed layer can form a sufficient diffusion barrier with a film that is less than 2.0 nm thick. The diffusion barrier behavior was preserved following a 1-h annealing at 400 °C. The underlayer of the Cu interconnects requires a large adhesion strength with the Cu, as well as low electrical resistivity. The use of Co(W) has previously been shown to satisfy these requirements, and the addition of Mn is not expected to degrade these properties.

  7. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variations over the lifetime of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  8. Reliability and validity of clinical tests to assess the anatomical integrity of the cervical spine in adults with neck pain and its associated disorders: Part 1-A systematic review from the Cervical Assessment and Diagnosis Research Evaluation (CADRE) Collaboration.

    Science.gov (United States)

    Lemeunier, Nadège; da Silva-Oolup, S; Chow, N; Southerst, D; Carroll, L; Wong, J J; Shearer, H; Mastragostino, P; Cox, J; Côté, E; Murnaghan, K; Sutton, D; Côté, P

    2017-09-01

    To determine the reliability and validity of clinical tests to assess the anatomical integrity of the cervical spine in adults with neck pain and its associated disorders. We updated the systematic review of the 2000-2010 Bone and Joint Decade Task Force on Neck Pain and its Associated Disorders. We also searched the literature to identify studies on the reliability and validity of Doppler velocimetry for the evaluation of cervical arteries. Two independent reviewers screened and critically appraised studies. We conducted a best evidence synthesis of low risk of bias studies and ranked the phases of investigations using the classification proposed by Sackett and Haynes. We screened 9022 articles and critically appraised 8 studies; all 8 studies had low risk of bias (three reliability and five validity Phase II-III studies). Preliminary evidence suggests that the extension-rotation test may be reliable and has adequate validity to rule out pain arising from facet joints. The evidence suggests variable reliability and preliminary validity for the evaluation of cervical radiculopathy including neurological examination (manual motor testing, dermatomal sensory testing, deep tendon reflexes, and pathological reflex testing), Spurling's and the upper limb neurodynamic tests. No evidence was found for doppler velocimetry. Little evidence exists to support the use of clinical tests to evaluate the anatomical integrity of the cervical spine in adults with neck pain and its associated disorders. We found preliminary evidence to support the use of the extension-rotation test, neurological examination, Spurling's and the upper limb neurodynamic tests.

  9. Electronics reliability calculation and design

    CERN Document Server

    Dummer, Geoffrey W A; Hiller, N

    1966-01-01

    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  10. Characterization, integration and reliability of HfO2 and LaLuO3 high-κ/metal gate stacks for CMOS applications

    International Nuclear Information System (INIS)

    Nichau, Alexander

    2013-01-01

    A lower limit found was EOT = 5 Å for Al doping inside TiN. The doping of TiN on LaLuO3 is proven by electron energy loss spectroscopy (EELS) studies to modify the interfacial silicate layer to La-rich silicates or even reduce the layer. The oxide quality in Si/HfO2/TiN gate stacks is characterized by charge pumping and carrier mobility measurements on 3d MOSFETs a.k.a. FinFETs. The oxide quality in terms of the number of interface (and oxide) traps on top- and sidewall of FinFETs is compared for three different annealing processes. A high temperature anneal of HfO2 improves significantly the oxide quality and mobility. The gate oxide integrity (GOI) of gate stacks below 1 nm EOT is determined by time-dependent dielectric breakdown (TDDB) measurements on FinFETs with HfO2/TiN gate stacks. A successful EOT scaling has always to consider the oxide quality and resulting reliability. Degraded oxide quality leads to mobility degradation and earlier soft-breakdown, i.e. leakage current increase.

  11. Characterization, integration and reliability of HfO2 and LaLuO3 high-κ/metal gate stacks for CMOS applications

    Energy Technology Data Exchange (ETDEWEB)

    Nichau, Alexander

    2013-07-15

    gate electrode to decrease the EOT of HfO2 gate stacks. A lower limit found was EOT = 5 Å for Al doping inside TiN. The doping of TiN on LaLuO3 is proven by electron energy loss spectroscopy (EELS) studies to modify the interfacial silicate layer to La-rich silicates or even reduce the layer. The oxide quality in Si/HfO2/TiN gate stacks is characterized by charge pumping and carrier mobility measurements on 3d MOSFETs a.k.a. FinFETs. The oxide quality in terms of the number of interface (and oxide) traps on top- and sidewall of FinFETs is compared for three different annealing processes. A high temperature anneal of HfO2 improves significantly the oxide quality and mobility. The gate oxide integrity (GOI) of gate stacks below 1 nm EOT is determined by time-dependent dielectric breakdown (TDDB) measurements on FinFETs with HfO2/TiN gate stacks. A successful EOT scaling has always to consider the oxide quality and resulting reliability. Degraded oxide quality leads to mobility degradation and earlier soft-breakdown, i.e. leakage current increase.

  12. Human reliability

    International Nuclear Information System (INIS)

    Bubb, H.

    1992-01-01

    This book resulted from the activity of Task Force 4.2 - 'Human Reliability'. This group was established on February 27th, 1986, at the plenary meeting of the Technical Reliability Committee of VDI, within the framework of the joint committee of VDI on industrial systems technology - GIS. It is composed of representatives of industry, representatives of research institutes, of technical control boards and universities, whose job it is to study how man fits into the technical side of the world of work and to optimize this interaction. In a total of 17 sessions, information from the part of ergonomy dealing with human reliability in using technical systems at work was exchanged, and different methods for its evaluation were examined and analyzed. The outcome of this work was systematized and compiled in this book. (orig.) [de

  13. Microelectronics Reliability

    Science.gov (United States)

    2017-01-17

    Only fragments of this report are indexed: figure references ("inverters connected in a chain"; "Figure 3: typical graph showing frequency versus square root of ...") and text describing the development of an experimental reliability-estimating methodology that could illuminate the lifetime reliability of advanced devices and circuits and yield an accurate estimate of device lifetime, and thus the failures-in-time (FIT) rate and reliability of the device.

  14. [Integrity].

    Science.gov (United States)

    Gómez Rodríguez, Rafael Ángel

    2014-01-01

    To say that someone possesses integrity is to claim that that person is almost predictable in his or her responses to specific situations, and that he or she can judge prudently and act correctly. There is a close interrelationship between integrity and autonomy, and autonomy rests on the deeper moral claim of all humans to integrity of the person. Integrity has two senses of significance for medical ethics: one refers to the integrity of the person in its bodily, psychosocial and intellectual elements; in the second sense, integrity is a virtue. Another facet of integrity of the person is the integrity of the values we cherish and espouse. The physician must be a person of integrity if the integrity of the patient is to be safeguarded. Autonomy has reduced violations in the past, but the character and virtues of the physician are the ultimate safeguard of the autonomy of the patient. A very important field in medicine is scientific research. It is the character of the investigator that determines the moral quality of research. The problem arises when legitimate self-interests are replaced by selfish ones, particularly when human subjects are involved. The final safeguard of the moral quality of research is the character and conscience of the investigator. Teaching must be relevant in the scientific field, but the most effective way to teach virtue ethics is through the example of a respected scientist.

  15. Understanding the compaction behaviour of low-substituted HPC: macro, micro, and nano-metric evaluations.

    Science.gov (United States)

    ElShaer, Amr; Al-Khattawi, Ali; Mohammed, Afzal R; Warzecha, Monika; Lamprou, Dimitrios A; Hassanin, Hany

    2018-06-01

    The fast development in materials science has resulted in the emergence of new pharmaceutical materials with superior physical and mechanical properties. Low-substituted hydroxypropyl cellulose (L-HPC) is an ether derivative of cellulose and is praised for its multi-functionality as a binder, disintegrant, film coating agent and as a suitable material for medical dressings. Nevertheless, very little is known about the compaction behaviour of this polymer. The aim of the current study was to evaluate the compaction and disintegration behaviour of four grades of L-HPC, namely LH32, LH21, LH11, and LHB1. The macrometric properties of the four powders were studied and the compaction behaviour was evaluated using the out-of-die method. LH11 and LH22 showed poor flow properties as the powders were dominated by fibrous particles with high aspect ratios, which reduced the powder flow. LH32 showed a weak compressibility profile and demonstrated a large elastic region, making it harder for this polymer to deform plastically. These findings are supported by AFM, which revealed the high roughness of the LH32 powder (100.09 ± 18.84 nm), resulting in a small area of contact but promoting mechanical interlocking. On the contrary, the LH21 and LH11 powders had smooth surfaces which enabled a larger contact area and higher adhesion forces of 21.01 ± 11.35 nN and 9.50 ± 5.78 nN, respectively. This promoted bond formation during compression as the LH21 and LH11 powders had a low yield strength.
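
    Out-of-die compaction profiles of this kind are commonly analysed with the Heckel relation; the abstract does not name the model used, so the following is offered only as the standard background equation:

      \[
        \ln\!\left(\frac{1}{1-D}\right) \;=\; K P + A, \qquad P_{y} = \frac{1}{K},
      \]

    where $D$ is the relative density of the compact at compaction pressure $P$, $K$ is the slope of the linear region, and the yield pressure $P_{y}$ (inverse slope) increases for materials, such as the LH32 grade described above, that resist plastic deformation.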

  16. Processing of poultry feathers by alkaline keratin hydrolyzing enzyme from Serratia sp. HPC 1383.

    Science.gov (United States)

    Khardenavis, Anshuman A; Kapley, Atya; Purohit, Hemant J

    2009-04-01

    The present study describes the production and characterization of a feather hydrolyzing enzyme by Serratia sp. HPC 1383 isolated from tannery sludge, which was identified by the ability to form clear zones around colonies on milk agar plates. The proteolytic activity was expressed in terms of the micromoles of tyrosine released from substrate casein per mL per min (U/mL min). Induction of the inoculum with protein was essential to stimulate higher activity of the enzyme, with 0.03% feathermeal in the inoculum resulting in increased enzyme activity (45 U/mL) that further increased to 90 U/mL when a 3-day-old inoculum was used. The highest enzyme activity, 130 U/mL, was observed in the presence of 0.2% yeast extract. The optimum assay temperature and pH for the enzyme were found to be 60 degrees C and 10.0, respectively. The enzyme had a half-life of 10 min at 60 degrees C, which improved slightly to 18 min in the presence of 1 mM Ca(2+). Inhibition of the enzyme by phenylmethyl sulfonyl fluoride (PMSF) indicated that the enzyme was a serine protease. The enzyme was also partially inhibited (39%) by the reducing agent beta-mercaptoethanol and by divalent metal ions such as Zn(2+) (41% inhibition). However, Ca(2+) and Fe(2+) resulted in increases in enzyme activity of 15% and 26%, respectively. The kinetic constants of the keratinase were found to be 3.84 microM (K(m)) and 108.7 microM/mL min (V(max)). These results suggest that this extracellular keratinase may be a useful alternative and eco-friendly route for handling the abundant amount of waste feathers or for applications in other industrial processes.
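
    The kinetic constants quoted above are, by definition, the parameters of the Michaelis-Menten rate law; stated as background with the reported values substituted (the abstract itself does not write the equation out):

      \[
        v \;=\; \frac{V_{\max}\,[S]}{K_{m} + [S]}
          \;=\; \frac{108.7\ \mu\mathrm{M\,mL^{-1}\,min^{-1}}\;[S]}{3.84\ \mu\mathrm{M} + [S]},
      \]

    so the reaction velocity $v$ reaches half of $V_{\max}$ at a substrate concentration $[S] = K_{m} = 3.84\ \mu\mathrm{M}$.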

  17. MRI inter-reader and intra-reader reliabilities for assessing injury morphology and posterior ligamentous complex integrity of the spine according to the thoracolumbar injury classification system and severity score

    International Nuclear Information System (INIS)

    Lee, Guen Young; Lee, Joon Woo; Choi, Seung Woo; Lim, Hyun Jin; Sun, Hye Young; Kang, Yu Suhn; Kang, Heung Sik; Chai, Jee Won; Kim, Su Jin

    2015-01-01

    To evaluate spine magnetic resonance imaging (MRI) inter-reader and intra-reader reliabilities using the thoracolumbar injury classification system and severity score (TLICS) and to analyze the effects of reader experience on reliability and the possible reasons for discordant interpretations. Six radiologists (two senior radiologists, two junior radiologists, and two residents) independently scored 100 MRI examinations of thoracolumbar spine injuries to assess injury morphology and posterior ligamentous complex (PLC) integrity according to the TLICS. Inter-reader and intra-reader agreements were determined and analyzed according to the number of years of radiologist experience. Inter-reader agreement between the six readers was moderate (k = 0.538 for the first and 0.537 for the second review) for injury morphology and fair to moderate (k = 0.440 for the first and 0.389 for the second review) for PLC integrity. No significant difference in inter-reader agreement was observed according to the number of years of radiologist experience. Intra-reader agreements showed a wide range (k = 0.538-0.822 for injury morphology and 0.423-0.616 for PLC integrity). Agreement among all readers was achieved in 44 of the 100 examinations for the first and 45 for the second review of injury morphology, as well as in 41 for the first and 38 for the second review of PLC integrity. A positive correlation was detected between injury morphology score and PLC integrity. The reliability of MRI for assessing thoracolumbar spinal injuries according to the TLICS was moderate for injury morphology and fair to moderate for PLC integrity, which may not be influenced by radiologists' experience
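
    The k values quoted above are, in all likelihood, kappa statistics (the abstract does not spell out the variant), computed as

      \[
        \kappa \;=\; \frac{p_{o} - p_{e}}{1 - p_{e}},
      \]

    where $p_{o}$ is the observed proportion of agreement between readers and $p_{e}$ the agreement expected by chance; under the customary Landis-Koch benchmarks, 0.21-0.40 is "fair", 0.41-0.60 "moderate" and 0.61-0.80 "substantial" agreement, which is how the qualitative labels above map onto the reported numbers.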

  18. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) Version 5.0. Fault tree, event tree, and piping and instrumentation diagram (FEP) editors reference manual: Volume 7

    International Nuclear Information System (INIS)

    McKay, M.K.; Skinner, N.L.; Wood, S.T.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Fault Tree, Event Tree, and Piping and Instrumentation Diagram (FEP) editors allow the user to graphically build and edit fault trees, event trees, and piping and instrumentation diagrams (P and IDs). The software is designed to enable the independent use of the graphical-based editors found in the Integrated Reliability and Risk Assessment System (IRRAS). FEP is comprised of three separate editors (Fault Tree, Event Tree, and Piping and Instrumentation Diagram) and a utility module. This reference manual provides a screen-by-screen guide of the entire FEP System

  19. Redefining reliability

    International Nuclear Information System (INIS)

    Paulson, S.L.

    1995-01-01

    Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas

  20. Construction of the energy matrix for complex atoms. Part VIII: Hyperfine structure HPC calculations for terbium atom

    Science.gov (United States)

    Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy

    2017-11-01

    A parametric analysis of the hyperfine structure (hfs) for the even parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4fN-core states in our high-performance computing (HPC) calculations. For calculations of the huge hyperfine structure matrix, requiring approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VMs). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.

  1. Usage of OpenStack Virtual Machine and MATLAB HPC Add-on leads to faster turnaround

    KAUST Repository

    Van Waveren, Matthijs

    2017-03-16

    We need to run hundreds of MATLAB® simulations while changing the parameters between each simulation. These simulations need to be run sequentially, and the parameters are defined manually from one simulation to the next. This makes this type of workload unsuitable for a shared cluster. For this reason we are using a cluster running in an OpenStack® Virtual Machine and are using the MATLAB HPC Add-on for submitting jobs to the cluster. As a result we are now able to have a turnaround time for the simulations of the order of a few hours, instead of the 24 hours needed on a local workstation.

  2. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  3. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications.

    Science.gov (United States)

    D'Angelo, Gianni; Rampone, Salvatore

    2014-01-01

    The huge quantity of data produced in Biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load-balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), which form a probability distribution, allowing the selection of the term literals. The great versatility that characterizes it makes U-BRAIN applicable in many fields in which there are data to be analyzed. However, the memory and execution time required are of order O(n^3) and O(n^5), respectively, so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions able to lead us towards the implementation of the algorithm U-BRAIN on parallel computers. First we give a dynamic programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use the mass memory, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of
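
    To make the term-construction idea concrete, the following is a hypothetical greedy sketch of DNF learning in the spirit described above. It is not the U-BRAIN algorithm: it omits the relevance coefficients and the handling of missing bits, and all names are illustrative.

      # Greedy DNF learning sketch: build conjunctive terms that cover positive
      # instances while excluding negative ones (toy stand-in, not U-BRAIN).
      def learn_dnf(positives, negatives, n_bits):
          """Return a list of terms; each term is a dict {bit_index: required_value}."""
          terms, uncovered = [], list(positives)
          while uncovered:
              term, neg_left = {}, list(negatives)
              pos_left = list(uncovered)
              # Add literals until no negative instance satisfies the term
              # (or no separating literal remains).
              while neg_left:
                  best = None
                  for i in range(n_bits):
                      for val in (0, 1):
                          if i in term:
                              continue
                          kept = [p for p in pos_left if p[i] == val]
                          killed = [n for n in neg_left if n[i] != val]
                          if kept and killed:
                              score = (len(killed), len(kept))
                              if best is None or score > best[0]:
                                  best = (score, i, val)
                  if best is None:
                      break
                  _, i, val = best
                  term[i] = val
                  pos_left = [p for p in pos_left if p[i] == val]
                  neg_left = [n for n in neg_left if n[i] == val]
              terms.append(term)
              uncovered = [p for p in uncovered
                           if not all(p[i] == v for i, v in term.items())]
          return terms

      if __name__ == "__main__":
          pos = [(1, 1, 0), (1, 1, 1)]
          neg = [(0, 1, 0), (1, 0, 1)]
          print(learn_dnf(pos, neg, 3))   # -> [{0: 1, 1: 1}]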

  4. High-pressure coolant effect on the surface integrity of machining titanium alloy Ti-6Al-4V: a review

    Science.gov (United States)

    Liu, Wentao; Liu, Zhanqiang

    2018-03-01

    Improving the machinability of titanium alloy Ti-6Al-4V is a challenging task in academic and industrial applications owing to its low thermal conductivity, low elasticity modulus and high chemical affinity at high temperatures. The surface integrity of Ti-6Al-4V is prominent in estimating the quality of machined components. The surface topography (surface defects and surface roughness) and the residual stress induced by machining Ti-6Al-4V play pivotal roles in the sustainability of Ti-6Al-4V components. High-pressure coolant (HPC) is a potential choice for meeting the requirements for the manufacture and application of Ti-6Al-4V. This paper reviews progress towards the improvement of Ti-6Al-4V surface integrity under HPC. Various studies of surface integrity characteristics are reviewed. In particular, surface roughness, surface defects, residual stress and work hardening are examined in order to evaluate machined surface quality. Several coolant parameters (including coolant type, coolant pressure and injection position) warrant investigation to provide guidance towards a satisfactory machined surface. The review also provides a clear roadmap for applications of HPC in machining Ti-6Al-4V. Experimental studies and analyses are reviewed to better understand surface integrity under the HPC machining process. A discussion is presented regarding the limitations of, and prospects for, machining Ti-6Al-4V under HPC.

  5. ISC High Performance 2016 International Workshops, ExaComm, E-MuCoCoS, HPC-IODC, IXPUG, IWOPH, P^3MA, VHPC, WOPSSS

    CERN Document Server

    Mohr, Bernd; Kunkel, Julian M

    2016-01-01

    This book constitutes revised selected papers from 7 workshops that were held in conjunction with the ISC High Performance 2016 conference in Frankfurt, Germany, in June 2016. The 45 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They stem from the following workshops: Workshop on Exascale Multi/Many Core Computing Systems, E-MuCoCoS; Second International Workshop on Communication Architectures at Extreme Scale, ExaComm; HPC I/O in the Data Center Workshop, HPC-IODC; International Workshop on OpenPOWER for HPC, IWOPH; Workshop on the Application Performance on Intel Xeon Phi – Being Prepared for KNL and Beyond, IXPUG; Workshop on Performance and Scalability of Storage Systems, WOPSSS; and International Workshop on Performance Portable Programming Models for Accelerators, P3MA.

  6. Architectural improvements and 28 nm FPGA implementation of the APEnet+ 3D Torus network for hybrid HPC systems

    International Nuclear Information System (INIS)

    Ammendola, Roberto; Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Paolucci, Pier Stanislao; Lonardo, Alessandro; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero

    2014-01-01

    Modern Graphics Processing Units (GPUs) are now considered accelerators for general purpose computation. A tight interaction between the GPU and the interconnection network is the strategy to express the full potential on capability computing of a multi-GPU system on large HPC clusters; that is the reason why an efficient and scalable interconnect is a key technology to finally deliver GPUs for scientific HPC. In this paper we show the latest architectural and performance improvement of the APEnet+ network fabric, a FPGA-based PCIe board with 6 fully bidirectional off-board links with 34 Gbps of raw bandwidth per direction, and X8 Gen2 bandwidth towards the host PC. The board implements a Remote Direct Memory Access (RDMA) protocol that leverages upon peer-to-peer (P2P) capabilities of Fermi- and Kepler-class NVIDIA GPUs to obtain real zero-copy, low-latency GPU-to-GPU transfers. Finally, we report on the development activities for 2013 focusing on the adoption of the latest generation 28 nm FPGAs and the preliminary tests performed on this new platform.

  7. Architectural improvements and 28 nm FPGA implementation of the APEnet+ 3D Torus network for hybrid HPC systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, Roberto [INFN Sezione Roma Tor Vergata (Italy); Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Paolucci, Pier Stanislao; Lonardo, Alessandro; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero [INFN Sezione Roma (Italy)

    2014-06-11

    Modern Graphics Processing Units (GPUs) are now considered accelerators for general purpose computation. A tight interaction between the GPU and the interconnection network is the strategy to express the full potential on capability computing of a multi-GPU system on large HPC clusters; that is the reason why an efficient and scalable interconnect is a key technology to finally deliver GPUs for scientific HPC. In this paper we show the latest architectural and performance improvement of the APEnet+ network fabric, a FPGA-based PCIe board with 6 fully bidirectional off-board links with 34 Gbps of raw bandwidth per direction, and X8 Gen2 bandwidth towards the host PC. The board implements a Remote Direct Memory Access (RDMA) protocol that leverages upon peer-to-peer (P2P) capabilities of Fermi- and Kepler-class NVIDIA GPUs to obtain real zero-copy, low-latency GPU-to-GPU transfers. Finally, we report on the development activities for 2013 focusing on the adoption of the latest generation 28 nm FPGAs and the preliminary tests performed on this new platform.

  8. Reliability practice

    NARCIS (Netherlands)

    Kuper, F.G.; Fan, X.J.; Zhang, G.Q.; van Driel, W.D.; Fan, X.J.

    2006-01-01

    The technology trends of Microelectronics and Microsystems are mainly characterized by miniaturization down to the nano-scale, increasing levels of system and function integration, and the introduction of new materials, while the business trends are mainly characterized by cost reduction,

  9. Safety and reliability of pressure components with special emphasis on the contribution of component and large specimen testing to structural integrity assessment methodology. Vol. 1 and 2

    International Nuclear Information System (INIS)

    1987-01-01

    The 51 papers of the 13. MPA-seminar contribute to structural integrity assessment methodology with special emphasis on component and large specimen testing. 8 of the papers deal with fracture mechanics, 6 papers with dynamic loading, 13 papers with nondestructive testing, 2 papers with radiation embrittlement, 5 papers with pipe failure, 4 papers with components, 2 papers with thermal shock loading, 5 papers with high temperature behaviour, 4 papers with the integrity of vessels and 3 papers with the integrity of welded joints. The fracture behaviour of steel materials in particular is also verified. All papers are separately indexed and analysed for the database. (DG) [de

  10. An Introduction To Reliability

    International Nuclear Information System (INIS)

    Park, Kyoung Su

    1993-08-01

    This book introduces reliability, covering the definition of reliability, reliability requirements, the system life cycle and reliability, reliability and failure rate (overview, reliability characteristics, chance failures, time-varying failure rates, failure modes and replacement), reliability in engineering design, reliability testing under failure-rate assumptions, plotting of reliability data, prediction of system reliability, conservation of systems, failure (overview and failure relay) and analysis of system safety.

  11. 'Integration'

    DEFF Research Database (Denmark)

    Olwig, Karen Fog

    2011-01-01

    … while the countries have adopted disparate policies and ideologies, differences in the actual treatment and attitudes towards immigrants and refugees in everyday life are less clear, due to parallel integration programmes based on strong similarities in the welfare systems and in cultural notions … of equality in the three societies. Finally, it shows that family relations play a central role in immigrants’ and refugees’ establishment of a new life in the receiving societies, even though the welfare society takes on many of the social and economic functions of the family …

  12. High Possibility Classrooms as a Pedagogical Framework for Technology Integration in Classrooms: An Inquiry in Two Australian Secondary Schools

    Science.gov (United States)

    Hunter, Jane

    2017-01-01

    Understanding how well teachers integrate digital technology in learning is the subject of considerable debate in education. High Possibility Classrooms (HPC) is a pedagogical framework drawn from research on exemplary teachers' knowledge of technology integration in Australian school classrooms. The framework is being used to support teachers who…

  13. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karthik, Rajasekar [ORNL

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment For High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks is among the key open-source, industry-standard choices adopted in this architecture.

  14. Assessment of Material Solutions of Multi-level Garage Structure Within Integrated Life Cycle Design Process

    Science.gov (United States)

    Wałach, Daniel; Sagan, Joanna; Gicala, Magdalena

    2017-10-01

    The paper presents an environmental and economic analysis of material solutions for a multi-level garage. The construction project considered a reinforced concrete structure built with either ordinary concrete or high-performance concrete (HPC). The use of HPC allowed a significant reduction of reinforcing steel, mainly in compression elements (columns) of the structure. The analysis includes elements of the methodology of integrated life cycle design (ILCD). Through a multi-criteria analysis based on established weights for the economic and environmental parameters, three solutions were evaluated and compared within the material production phase (information modules A1-A3).

  15. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiments, mixture of Weibull

  16. A knowledge-based operator advisor system for integration of fault detection, control, and diagnosis to enhance the safe and reliable operation of nuclear power plants

    International Nuclear Information System (INIS)

    Bhatnagar, R.

    1989-01-01

    A Knowledge-Based Operator Advisor System has been developed for enhancing the complex task of maintaining safe and reliable operation of nuclear power plants. The operator's activities have been organized into the four tasks of data interpretation for abstracting high-level information from sensor data, plant state monitoring for identification of faults, plan execution for controlling the faults, and diagnosis for determination of root causes of faults. The Operator Advisor System is capable of identifying abnormal functioning of the plant in terms of: (1) deviations from normality, (2) pre-enumerated abnormal events, and (3) safety threats. The classification of abnormal functioning into the three categories of deviations from normality, abnormal events, and safety threats allows the detection of faults at three levels: (1) developing faults, (2) developed faults, and (3) safety-threatening faults. After the identification of abnormal functioning, the system identifies the procedures to be executed to mitigate the consequences and helps the operator by displaying the procedure steps and monitoring the success of actions taken. The system is also capable of diagnosing the root causes of abnormal functioning. The identification and diagnosis of root causes are done in parallel with procedure execution, allowing the detection of more critical safety threats while executing procedures to control abnormal events

  17. Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines

    Science.gov (United States)

    2014-11-01

    architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architectures, all... evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will... architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges

  18. Experiments to Understand HPC Time to Development (Final report for Department of Energy contract DE-FG02-04ER25633) Report DOE/ER/25633-1

    Energy Technology Data Exchange (ETDEWEB)

    Basili, Victor, R.; Zelkowitz, Marvin, V.

    2007-11-14

    In order to understand how high performance computing (HPC) programs are developed, a series of experiments, involving students in graduate-level HPC classes and various research centers, was conducted at several locations in the US. In this report, we discuss this research, give some of the early results of those experiments, and describe a web-based Experiment Manager we are developing that allows us to run studies more easily and consistently at universities and laboratories, enabling us to generate results that more accurately reflect the process of building HPC programs.

  19. Use of reliability in the LMFBR industry

    International Nuclear Information System (INIS)

    Penland, J.R.; Smith, A.M.; Goeser, D.K.

    1977-01-01

    The mission of a Reliability Program for an LMFBR should be to enhance the design and operational characteristics relative to safety and to plant availability. Successful accomplishment of this mission requires proper integration of several reliability engineering tasks--analysis, testing, parts controls and program controls. Such integration requires, in turn, that the program be structured, planned and managed. This paper describes the technical integration necessary and the management activities required to achieve mission success for LMFBRs

  20. Reliable effective number of breeders/adult census size ratios in seasonal-breeding species: Opportunity for integrative demographic inferences based on capture-mark-recapture data and multilocus genotypes.

    Science.gov (United States)

    Sánchez-Montes, Gregorio; Wang, Jinliang; Ariño, Arturo H; Vizmanos, José Luis; Martínez-Solano, Iñigo

    2017-12-01

    The ratio of the effective number of breeders (Nb) to the adult census size (Na), Nb/Na, approximates the departure from the standard capacity of a population to maintain genetic diversity in one reproductive season. This information is relevant for assessing population status, understanding evolutionary processes operating at local scales, and unraveling how life-history traits affect these processes. However, our knowledge on Nb/Na ratios in nature is limited because estimation of both parameters is challenging. The sibship frequency (SF) method is adequate for reliable Nb estimation because it is based on sibship and parentage reconstruction from genetic marker data, thereby providing demographic inferences that can be compared with field-based information. In addition, capture-mark-recapture (CMR) robust design methods are well suited for Na estimation in seasonal-breeding species. We used tadpole genotypes of three pond-breeding amphibian species (Epidalea calamita, Hyla molleri, and Pelophylax perezi; n = 73-96 single-cohort tadpoles/species genotyped at 15-17 microsatellite loci) and candidate parental genotypes (n = 94-300 adults/species) to estimate Nb by the SF method. To assess the reliability of Nb estimates, we compared sibship and parentage inferences with field-based information and checked for the convergence of results in replicated subsampled analyses. Finally, we used CMR data from a 6-year monitoring program to estimate annual Na in the three species and calculate the Nb/Na ratio. Reliable ratios were obtained for E. calamita (Nb/Na = 0.18-0.28) and P. perezi (0.5), but in H. molleri, Na could not be estimated and genetic information proved insufficient for reliable Nb estimation. Integrative demographic studies taking full advantage of SF and CMR methods can provide accurate estimates of the Nb/Na ratio in seasonal-breeding species. Importantly, the SF method provides results that can be

  1. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g. for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policies, shock models, spares, group maintenance and periodic inspection), analysis of common cause failure, and models of repair effects.
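
    For the quantities listed (exponential reliability, MTBF, steady-state availability), the standard textbook relations are reproduced below for reference; they are generic formulas stated under a constant failure rate λ and constant repair rate μ, not excerpts from the book itself.

```latex
% Generic relations for constant failure rate \lambda and repair rate \mu.
R(t) = e^{-\lambda t}, \qquad \mathrm{MTBF} = \frac{1}{\lambda}, \qquad
A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}} = \frac{\mu}{\lambda + \mu}
```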

  2. 24. MPA-seminar: safety and reliability of plant technology with special emphasis on integrity and life management. Vol. 2. Papers 28-63; 24. MPA-Seminar: Sicherheit und Verfuegbarkeit in der Anlagentechnik mit dem Schwerpunk Integritaet und Lebensdauermanagement. Bd. 2. Vortraege 28-63

    Energy Technology Data Exchange (ETDEWEB)

    1999-09-01

    The second volume is dedicated to the safety and reliability of plant technology with special emphasis on the integrity and life management. The following topics are discussed: 1. Integrity of vessels, pipes and components. 2. Fracture mechanics. 3. Measures for the extension of service life, and 4. Online Monitoring. All 30 contributions are separately analyzed for this database. (orig.)

  3. Measuring maternal satisfaction with maternity care: A systematic integrative review: What is the most appropriate, reliable and valid tool that can be used to measure maternal satisfaction with continuity of maternity care?

    Science.gov (United States)

    Perriman, Noelyn; Davis, Deborah

    2016-06-01

    The objective of this systematic integrative review is to identify, summarise and communicate the findings of research relating to tools that measure maternal satisfaction with continuity of maternity care models. In so doing the most appropriate, reliable and valid tool that can be used to measure maternal satisfaction with continuity of maternity care will be determined. A systematic integrative review of published and unpublished literature was undertaken using selected databases. Research papers were included if they measured maternal satisfaction in a continuity model of maternity care, were published in English after 1999 and if they included (or made available) the instrument used to measure satisfaction. Six hundred and thirty two unique papers were identified and after applying the selection criteria, four papers were included in the review. Three of these originated in Australia and one in Canada. The primary focus of all papers was not on the development of a tool to measure maternal satisfaction but on the comparison of outcomes in different models of care. The instruments developed varied in terms of the degree to which they were tested for validity and reliability. Women's satisfaction with maternity services is an important measure of quality. Most satisfaction surveys in maternity appear to reflect fragmented models of care though continuity of care models are increasing in line with the evidence demonstrating their effectiveness. It is important that robust tools are developed for this context and that there is some consistency in the way this is measured and reported for the purposes of benchmarking and quality improvement. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  4. User's and Programmer's Guide for HPC Platforms in CIEMAT; Guia de Utilizacion y programacion de las Plataformas de Calculo del CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Munoz Roldan, A.

    2003-07-01

    This Technical Report presents a description of the High Performance Computing platforms available to researchers in CIEMAT and dedicated mainly to scientific computing. It targets users and programmers and aims to help in the processes of developing new code and porting code across platforms. A brief review is also presented of the historical evolution of HPC, i.e., the programming paradigms and underlying architectures. (Author) 32 refs.

  5. User's and Programmer's Guide for HPC Platforms in CIEMAT; Guia de Utilizacion y programacion de las Plataformas de Calculo del CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Munoz Roldan, A

    2003-07-01

    This Technical Report presents a description of the High Performance Computing platforms available to researchers in CIEMAT and dedicated mainly to scientific computing. It targets users and programmers and aims to help in the processes of developing new code and porting code across platforms. A brief review is also presented of the historical evolution of HPC, i.e., the programming paradigms and underlying architectures. (Author) 32 refs.

  6. Exploiting Redundancy and Application Scalability for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2

    International Nuclear Information System (INIS)

    Marathe, Aniruddha P.; Harris, Rachel A.; Lowenthal, David K.; Supinski, Bronis R. de; Rountree, Barry L.; Schulz, Martin

    2015-01-01

    The use of clouds to execute high-performance computing (HPC) applications has greatly increased recently. Clouds provide several potential advantages over traditional supercomputers and in-house clusters. The most popular cloud is currently Amazon EC2, which provides fixed-cost and variable-cost, auction-based options. The auction market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 auction market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to seven times cheaper than using the on-demand market and up to 44 percent cheaper than the best non-redundant, auction-market algorithm. We extend our adaptive algorithm to incorporate application scalability characteristics for further cost savings. In conclusion, we show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56 percent cost savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale.
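
    To make the cost trade-off concrete, the toy Python sketch below compares the cost of an on-demand run with the expected cost of a checkpointed auction-market (spot) run under a simple interruption model. All prices, interruption rates and overheads are hypothetical placeholders; the paper's adaptive scheduling and bidding algorithm is not reproduced here.

```python
# Toy cost comparison: on-demand vs. auction-market execution with checkpointing.
# All numbers are hypothetical; this is not the paper's adaptive algorithm.

def on_demand_cost(runtime_h, price_per_h):
    """Fixed-price execution: pay for every hour of the run."""
    return runtime_h * price_per_h

def spot_expected_cost(runtime_h, spot_price_per_h, interrupt_rate_per_h,
                       checkpoint_overhead_frac, rework_h_per_interrupt):
    """Expected cost when interruptions force redoing some work from a checkpoint."""
    expected_interrupts = interrupt_rate_per_h * runtime_h
    effective_runtime = (runtime_h * (1 + checkpoint_overhead_frac)
                         + expected_interrupts * rework_h_per_interrupt)
    return effective_runtime * spot_price_per_h

if __name__ == "__main__":
    base = on_demand_cost(runtime_h=100, price_per_h=1.00)
    spot = spot_expected_cost(runtime_h=100, spot_price_per_h=0.30,
                              interrupt_rate_per_h=0.05,
                              checkpoint_overhead_frac=0.05,
                              rework_h_per_interrupt=0.5)
    print(f"on-demand: ${base:.2f}   expected spot: ${spot:.2f}")
```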

  7. Cognitive and organizational ergonomics in the transition of the new integrated center of control of an oil refinery: human reliability and administration of changes.

    Science.gov (United States)

    Bau, Lucy M S; Puquirre, Magda S E S; Buso, Sandro A; Ogasawara, Érika L; Marcon Passero, Carolina R; Bianchi, Marcos C

    2012-01-01

    The conception of a product is closely tied to how well it is adapted to its users. In this view, designers are increasingly oriented towards surveying the needs and characteristics of the users. This paper aims at developing a diagnosis of employees working in high-complexity activities in a petrochemical company, in light of the physical and operating changes in the Integrated Center of Control; assessing sensitivity to the reception of changes; assessing the cognitive pattern of the group; and making suggestions that might eliminate or minimize the difficulties in the transition process, in order to shorten the adaptation period. The field of study comprised 111 production, transfer and storage operators, forming 5 groups of desktop activities. The stages of the study followed this flow: survey of the prescribed tasks and organizational structure; Concentrated Attention test; application of the Work and Disease Risks Inventory (ITRA, Portuguese acronym); and structured psychological interview. The ITRA results pointed to a serious cognitive cost (3.83) for all five groups, this being the largest intervention focus. The items division of task contents (3.52), social-professional relationships (2.93), quality of the physical environment (2.91), physical cost (3.24), emotional cost (2.71), freedom of expression (3.77), professional fulfillment (3.41), experience and suffering (2.75), lack of recognition (2.18) and physical injuries (2.07) were considered critical. Meanwhile, social damages (1.64) and psychological injuries (1.35) are bearable. In the Concentrated Attention test, most workers registered an average level. In the individual interviews, workers indicated that greater involvement in the process of physical, organizational and operational change in the desktops and in field work was required, as well as follow-up of the implementations, so as to shorten the adaptation process and prevent rework (furniture, equipment, noise, form of communication with the

  8. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    Science.gov (United States)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up performance by utilizing multiple computing resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, there is tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means of accessing computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and such an application is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics and cross-disciplinary studies.

  9. AMSAA Reliability Growth Guide

    National Research Council Canada - National Science Library

    Broemm, William

    2000-01-01

    ... has developed reliability growth methodology for all phases of the process, from planning to tracking to projection. The report presents this methodology and associated reliability growth concepts.

  10. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

    Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum supply plan using an economic criterion, together with a model for evaluating the probability of non-failure operation of the supply chain. Methods: The mathematical model and algorithms to find the optimum supply plan were developed and formulated using an economic criterion and a model for evaluating the probability of non-failure operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability is also presented.
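
    As a pointer to the problem class described above, the following is a minimal Python sketch: a toy two-supplier plan found with a small linear program, plus the non-failure probability of a serial chain as the product of link reliabilities. The costs, capacities and reliabilities are hypothetical, and the sketch is not the authors' model.

```python
# Toy illustration: cost-optimal supply split plus serial-chain reliability.
# The two-supplier LP and the link reliabilities below are hypothetical numbers.
import numpy as np
from scipy.optimize import linprog

cost = [4.0, 5.5]            # unit cost at supplier 1 and supplier 2
demand = 100.0
capacity = [70.0, 80.0]

# Minimize cost subject to x1 + x2 >= demand and per-supplier capacity bounds.
res = linprog(c=cost,
              A_ub=[[-1.0, -1.0]], b_ub=[-demand],
              bounds=[(0.0, capacity[0]), (0.0, capacity[1])])
print("optimal plan:", res.x, " total cost:", res.fun)

# Probability of non-failure operation of a serial supply chain.
link_reliability = [0.98, 0.95, 0.99]   # supplier -> carrier -> warehouse
print("chain reliability:", np.prod(link_reliability))
```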

  11. Reliability evaluation of smart distribution grids

    OpenAIRE

    Kazemi, Shahram

    2011-01-01

    The term "Smart Grid" generally refers to a power grid equipped with the advanced technologies dedicated for purposes such as reliability improvement, ease of control and management, integrating of distributed energy resources and electricity market operations. Improving the reliability of electric power delivered to the end users is one of the main targets of employing smart grid technologies. The smart grid investments targeted for reliability improvement can be directed toward the generati...

  12. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)

  13. Reliability issues : a Canadian perspective

    International Nuclear Information System (INIS)

    Konow, H.

    2004-01-01

    A Canadian perspective on power reliability issues was presented. Reliability depends on adequacy of supply and a framework for standards. The challenges facing the electric power industry include new demand, plant replacement and exports. It is expected that demand will be 670 TWh by 2020, with 205 TWh coming from new plants. Canada will require an investment of $150 billion to meet this demand and the need is comparable in the United States. As trade grows, the challenge becomes a continental issue and investment in the bi-national transmission grid will be essential. The 5 point plan of the Canadian Electricity Association is to: (1) establish an investment climate to ensure future electricity supply, (2) move government and industry towards smart and effective regulation, (3) work to ensure a sustainable future for the next generation, (4) foster innovation and accelerate skills development, and (5) build on the strengths of an integrated North American system to maximize opportunity for Canadians. The CEA's 7 measures that enhance North American reliability were listed with emphasis on its support for a self-governing international organization for developing and enforcing mandatory reliability standards. CEA also supports the creation of a binational Electric Reliability Organization (ERO) to identify and solve reliability issues in the context of a bi-national grid. tabs., figs

  14. Energy Systems Integration Facility Videos | Energy Systems Integration

    Science.gov (United States)

    Videos from NREL's Energy Systems Integration Facility (ESIF), including: NREL + SolarCity: Maximizing Solar Power on Electrical Grids; Redefining What's Possible for Renewable Energy: Grid Integration; and Robot-Powered Reliability Testing at NREL's ESIF Microgrid.

  15. APEnet+: a 3D Torus network optimized for GPU-based HPC Systems

    International Nuclear Information System (INIS)

    Ammendola, R; Biagioni, A; Frezza, O; Lo Cicero, F; Lonardo, A; Paolucci, P S; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P

    2012-01-01

    In the supercomputing arena, the strong rise of GPU-accelerated clusters is a matter of fact. Within INFN, we proposed an initiative — the QUonG project — whose aim is to deploy a high performance computing system dedicated to scientific computations leveraging on commodity multi-core processors coupled with latest generation GPUs. The inter-node interconnection system is based on a point-to-point, high performance, low latency 3D torus network which is built in the framework of the APEnet+ project. It takes the form of an FPGA-based PCIe network card exposing six full bidirectional links running at 34 Gbps each that implements the RDMA protocol. In order to enable significant access latency reduction for inter-node data transfer, a direct network-to-GPU interface was built. The specialized hardware blocks, integrated in the APEnet+ board, provide support for GPU-initiated communications using the so called PCIe peer-to-peer (P2P) transactions. This development is made in close collaboration with the GPU vendor NVIDIA. The final shape of a complete QUonG deployment is an assembly of standard 42U racks, each one capable of 80 TFLOPS/rack of peak performance, at a cost of 5 k€/TFLOPS and for an estimated power consumption of 25 kW/rack. In this paper we report on the status of final rack deployment and on the R and D activities for 2012 that will focus on performance enhancement of the APEnet+ hardware through the adoption of new generation 28 nm FPGAs allowing the implementation of PCIe Gen3 host interface and the addition of new fault tolerance-oriented capabilities.

  16. APEnet+: a 3D Torus network optimized for GPU-based HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R [INFN Tor Vergata (Italy); Biagioni, A; Frezza, O; Lo Cicero, F; Lonardo, A; Paolucci, P S; Rossetti, D; Simula, F; Tosoratto, L; Vicini, P [INFN Roma (Italy)

    2012-12-13

    In the supercomputing arena, the strong rise of GPU-accelerated clusters is a matter of fact. Within INFN, we proposed an initiative - the QUonG project - whose aim is to deploy a high performance computing system dedicated to scientific computations leveraging on commodity multi-core processors coupled with latest generation GPUs. The inter-node interconnection system is based on a point-to-point, high performance, low latency 3D torus network which is built in the framework of the APEnet+ project. It takes the form of an FPGA-based PCIe network card exposing six full bidirectional links running at 34 Gbps each that implements the RDMA protocol. In order to enable significant access latency reduction for inter-node data transfer, a direct network-to-GPU interface was built. The specialized hardware blocks, integrated in the APEnet+ board, provide support for GPU-initiated communications using the so called PCIe peer-to-peer (P2P) transactions. This development is made in close collaboration with the GPU vendor NVIDIA. The final shape of a complete QUonG deployment is an assembly of standard 42U racks, each one capable of 80 TFLOPS/rack of peak performance, at a cost of 5 k€/TFLOPS and for an estimated power consumption of 25 kW/rack. In this paper we report on the status of final rack deployment and on the R and D activities for 2012 that will focus on performance enhancement of the APEnet+ hardware through the adoption of new generation 28 nm FPGAs allowing the implementation of PCIe Gen3 host interface and the addition of new fault tolerance-oriented capabilities.
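
    From the figures quoted in the two records above, the implied rack-level efficiency and cost follow directly (on a peak-performance basis):

```latex
% Derived from the quoted 80 TFLOPS/rack, 25 kW/rack and 5 kEUR/TFLOPS figures.
\frac{80~\mathrm{TFLOPS/rack}}{25~\mathrm{kW/rack}} = 3.2~\mathrm{GFLOPS/W},
\qquad
80~\mathrm{TFLOPS/rack} \times 5~\mathrm{kEUR/TFLOPS} = 400~\mathrm{kEUR/rack}
```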

  17. Programs for Increasing the Engagement of Underrepresented Ethnic Groups and People with Disabilities in HPC. Final assessment report

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Valerie

    2012-12-23

    Given the significant impact of computing on society, it is important that all cultures, especially underrepresented cultures, are fully engaged in the field of computing to ensure that everyone benefits from the advances in computing. This proposal is focused on the field of high performance computing. The lack of cultural diversity in computing, in particular high performance computing, is especially evident with respect to the following ethnic groups – African Americans, Hispanics, and Native Americans – as well as People with Disabilities. The goal of this proposal is to organize and coordinate a National Laboratory Career Development Workshop focused on underrepresented cultures (ethnic cultures and disability cultures) in high performance computing. It is expected that the proposed workshop will increase the engagement of underrepresented cultures in HPC through increased exposure to the excellent work at the national laboratories. The National Laboratory Workshops are focused on the recruitment of senior graduate students and the retention of junior lab staff through the various panels and discussions at the workshop. Further, the workshop will include a community building component that extends beyond the workshop. The workshop was held at the Lawrence Livermore National Laboratory campus in Livermore, CA, from June 14-15, 2012. The grant provided funding for 25 participants from underrepresented groups. The workshop also included another 25 local participants in the summer programs at Lawrence Livermore National Laboratory. Below are some key results from the assessment of the workshop: 86% of the participants indicated strongly agree or agree to the statement "I am more likely to consider/continue a career at a national laboratory as a result of participating in this workshop." 77% indicated strongly agree or agree to the statement "I plan to pursue a summer internship at a national laboratory." 100% of the participants indicated strongly

  18. Unusual social behavior in HPC-1/syntaxin1A knockout mice is caused by disruption of the oxytocinergic neural system.

    Science.gov (United States)

    Fujiwara, Tomonori; Sanada, Masumi; Kofuji, Takefumi; Akagawa, Kimio

    2016-07-01

    HPC-1/syntaxin1A (STX1A), a neuronal soluble N-ethylmaleimide-sensitive fusion attachment protein receptor, contributes to neural function in the CNS by regulating transmitter release. Recent studies reported that STX1A is associated with human neuropsychological disorders, such as autism spectrum disorder and attention deficit hyperactivity disorder. Previously, we showed that STX1A null mutant mice (STX1A KO) exhibit neuropsychological abnormalities, such as fear memory deficits, attenuation of latent inhibition, and unusual social behavior. These observations suggested that STX1A may be involved in the neuropsychological basis of these abnormalities. Here, to study the neural basis of social behavior, we analyzed the profile of unusual social behavior in STX1A KO with a social novelty preference test, which is a useful method for quantification of social behavior. Interestingly, the unusual social behavior in STX1A KO was partially rescued by intracerebroventricular administration of oxytocin (OXT). In vivo microdialysis studies revealed that the extracellular OXT concentration in the CNS of STX1A KO was significantly lower compared with wild-type mice. Furthermore, dopamine-induced OXT release was reduced in STX1A KO. These results suggested that STX1A plays an important role in social behavior through regulation of the OXTergic neural system. Dopamine (DA) release is reduced in the CNS of syntaxin1A null mutant mice (STX1A KO). Unusual social behavior was observed in STX1A KO. We found that DA-stimulated OXT release was reduced, and that the unusual social behavior in STX1A KO was rescued by OXT. These results indicated that STX1A plays an important role in promoting social behavior through regulation of DA-induced OXT release in the amygdala. © 2016 International Society for Neurochemistry.

  19. Reliability data banks

    International Nuclear Information System (INIS)

    Cannon, A.G.; Bendell, A.

    1991-01-01

    Following an introductory chapter on reliability (what it is, why it is needed, and how it is achieved and measured), the principles of reliability databases and analysis methodologies are the subject of the next two chapters. Achievements due to the development of data banks are mentioned for different industries in the next chapter. FACTS, a comprehensive information system for industrial safety and reliability data collection in process plants, is covered next. CREDO, the Central Reliability Data Organization, is described in the next chapter and is indexed separately, as is the chapter on DANTE, the fabrication reliability data analysis system. Reliability data banks at Electricite de France and the IAEA's experience in compiling a generic component reliability database are also separately indexed. The European reliability data system, ERDS, and the development of a large data bank come next. The last three chapters look at 'Reliability data banks - friend, foe or a waste of time?' and future developments. (UK)

  20. Suncor maintenance and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Little, S. [Suncor Energy, Calgary, AB (Canada)

    2006-07-01

    Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.

  1. Reliability analysis of reactor pressure vessel intensity

    International Nuclear Information System (INIS)

    Zheng Liangang; Lu Yongbo

    2012-01-01

    This paper performs a reliability analysis of a reactor pressure vessel (RPV) with ANSYS. The analysis methods include the direct Monte Carlo simulation method, Latin Hypercube Sampling, central composite design and Box-Behnken matrix design. The RPV integrity reliability under given input conditions is presented. The results show that the factors affecting the RPV base material reliability are internal pressure, allowable basic stress and elasticity modulus of the base material, in descending order, and the factors affecting the bolt reliability are allowable basic stress of the bolt material, bolt preload and internal pressure, in descending order. (authors)
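
    As an indication of what the sampling methods listed above compute, the short Python sketch below estimates a failure probability for a generic stress-versus-strength limit state by direct Monte Carlo and by Latin Hypercube Sampling. The distributions and limit state are illustrative placeholders, not the RPV model of the paper.

```python
# Generic stress-vs-strength failure probability by Monte Carlo and LHS.
# Distributions and limit state are illustrative, not the paper's RPV model.
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(0)
n = 100_000

def limit_state(stress, strength):
    return strength - stress               # failure when g < 0

# Direct Monte Carlo sampling of the two random variables (hypothetical MPa values).
stress = rng.normal(300.0, 30.0, n)
strength = rng.normal(450.0, 40.0, n)
pf_mc = np.mean(limit_state(stress, strength) < 0.0)

# Latin Hypercube Sampling mapped through the same marginal distributions.
u = qmc.LatinHypercube(d=2, seed=0).random(n)
stress_lhs = norm.ppf(u[:, 0], loc=300.0, scale=30.0)
strength_lhs = norm.ppf(u[:, 1], loc=450.0, scale=40.0)
pf_lhs = np.mean(limit_state(stress_lhs, strength_lhs) < 0.0)

print(f"Pf (Monte Carlo) = {pf_mc:.5f}")
print(f"Pf (LHS)         = {pf_lhs:.5f}")
```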

  2. Safety and reliability of automatization software

    Energy Technology Data Exchange (ETDEWEB)

    Kapp, K; Daum, R [Karlsruhe Univ. (TH) (Germany, F.R.). Lehrstuhl fuer Angewandte Informatik, Transport- und Verkehrssysteme

    1979-02-01

    Automated technical systems have to meet very high requirements concerning safety, security and reliability. Today, modern computers, especially microcomputers, are used as integral parts of those systems. In consequence, computer programs must work in a safe and reliable manner. Methods are discussed which allow the construction of safe and reliable software for automatic systems such as reactor protection systems, and which allow proof that the safety requirements are met. As a result, it is shown that only the method of total software diversification can satisfy all safety requirements at tolerable cost. In order to achieve a high degree of reliability, structured and modular programming in conjunction with high-level programming languages is recommended.

  3. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

    High reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator, one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution describes the forum and advertises its use in the community.

  4. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
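
    A minimal double-loop sketch of the aleatory/epistemic distinction is given below (the paper's single-loop auxiliary-variable formulation and its FORM/SORM machinery are not reproduced). Distribution parameters are sampled first to represent epistemic uncertainty from sparse data, then aleatory samples are drawn for each candidate parameter value, yielding a family of failure-probability estimates. All numbers are hypothetical.

```python
# Double-loop illustration of aleatory + epistemic uncertainty (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(1)

def failure_prob(mean_load, n_aleatory=20_000):
    """Inner (aleatory) loop: P(load > capacity) for a fixed load-mean parameter."""
    load = rng.normal(mean_load, 15.0, n_aleatory)
    capacity = rng.normal(200.0, 10.0, n_aleatory)
    return np.mean(load > capacity)

# Outer (epistemic) loop: the load mean is only known as an interval from sparse data.
candidate_means = rng.uniform(140.0, 160.0, 200)
pf_samples = np.array([failure_prob(m) for m in candidate_means])

print(f"Pf ranges from {pf_samples.min():.4f} to {pf_samples.max():.4f}; "
      f"mean over epistemic samples = {pf_samples.mean():.4f}")
```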

  5. Developing Reliable Life Support for Mars

    Science.gov (United States)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and
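
    To make the spares argument concrete: if failures follow a Poisson process with rate λ, the probability that k spares cover a mission of length t is the Poisson CDF at k evaluated at λt. The sketch below uses hypothetical numbers to show how an underestimated failure rate erodes that probability.

```python
# Probability that a stock of spares covers all failures, Poisson failure model.
# Failure rates, mission length and spare count below are hypothetical.
from scipy.stats import poisson

mission_hours = 3 * 365 * 24          # roughly a three-year mission
assumed_rate = 1.0e-4                 # failures per hour assumed at design time
actual_rate = 2.0e-4                  # true rate if the estimate was off by 2x
spares = 5

for rate in (assumed_rate, actual_rate):
    expected_failures = rate * mission_hours
    p_enough = poisson.cdf(spares, expected_failures)
    print(f"rate = {rate:.1e}/h  expected failures = {expected_failures:.2f}  "
          f"P({spares} spares suffice) = {p_enough:.3f}")
```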

  6. Modularly Integrated MEMS Technology

    National Research Council Canada - National Science Library

    Eyoum, Marie-Angie N

    2006-01-01

    Process design, development and integration to fabricate reliable MEMS devices on top of VLSI-CMOS electronics without damaging the underlying circuitry have been investigated throughout this dissertation...

  7. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and recent developments of AGIS functionality related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible use of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the declaration of unified storage protocols required for the PanDA Pilot site movers, among others.

  8. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
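
    To illustrate the second approach mentioned above, deriving system reliability from component reliabilities via the functional structure, here is a toy series/parallel roll-up in Python with hypothetical component values; an actual fault-tree analysis would of course model the real device.

```python
# Toy series/parallel reliability roll-up with hypothetical component reliabilities.
from math import prod

def series(*r):
    """All components in series must work."""
    return prod(r)

def parallel(*r):
    """At least one of the redundant components must work."""
    return 1.0 - prod(1.0 - x for x in r)

# Hypothetical power-electronics block: controller and DC bus in series,
# with two redundant switching modules in parallel.
r_controller, r_bus, r_switch = 0.995, 0.999, 0.97

r_system = series(r_controller, r_bus, parallel(r_switch, r_switch))
print(f"system reliability = {r_system:.4f}")
```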

  9. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

    Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long term operating behavior. (HP) [de

  10. Reliable Design Versus Trust

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges of verifying a reliable design versus a trusted design?

  11. Pocket Handbook on Reliability

    Science.gov (United States)

    1975-09-01

    exponential distributions, Weibull distribution, estimating reliability, confidence intervals, reliability growth, OC curves, Bayesian analysis. ...an introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. ...includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future

  12. 24. MPA-seminar: safety and reliability of plant technology with special emphasis on integrity and life management. Vol. 1. Papers 1-27; 24. MPA-Seminar: Sicherheit und Verfuegbarkeit in der Anlagentechnik mit dem Schwerpunkt Integritaet und Lebensdauermanagement. Bd. 1. Vortraege 1-27

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-08-01

    The first volume is dedicated to the safety and reliability of plant technology with special emphasis on the integrity and life management. The main topic in the volume is the contribution of nondestructive testing to the reactor safety from an international point of view. All 20 papers are separately analyzed for this database. (orig.)

  13. System Reliability Analysis Considering Correlation of Performances

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Saekyeol; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Lim, Woochul [Mando Corporation, Seongnam (Korea, Republic of)

    2017-04-15

    Reliability analysis of mechanical systems has been developed in order to consider the uncertainties in product design that may arise from the tolerances of design variables, uncertainties of noise, environmental factors, and material properties. In most previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system, which may lead to a difference between the reliability of the entire system and the reliability of each individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula, which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of the joint PDF of the performances and is compared with the individual reliability of each performance using mathematical examples and a two-bar truss example.

  14. System Reliability Analysis Considering Correlation of Performances

    International Nuclear Information System (INIS)

    Kim, Saekyeol; Lee, Tae Hee; Lim, Woochul

    2017-01-01

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.

  15. Solid State Lighting Reliability Components to Systems

    CERN Document Server

    Fan, XJ

    2013-01-01

    Solid State Lighting Reliability: Components to Systems begins with an explanation of the major benefits of solid state lighting (SSL) when compared to conventional lighting systems, including but not limited to long useful lifetimes of 50,000 (or more) hours and high efficacy. When designing effective devices that take advantage of SSL capabilities, the reliability of internal components (optics, drive electronics, controls, thermal design) takes on critical importance. As such, a detailed discussion of reliability from performance at the device level to subcomponents is included, as well as the integrated systems of SSL modules, lamps and luminaires, including various failure modes, reliability testing and reliability performance. This book also: Covers the essential reliability theories and practices for current and future development of Solid State Lighting components and systems; Provides a systematic overview for not only the state-of-the-art, but also future roadmap and perspectives of Solid State Lighting r...

  16. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.

  17. Reliability in engineering '87

    International Nuclear Information System (INIS)

    Tuma, M.

    1987-01-01

    The participants heard 51 papers dealing with the reliability of engineering products. Two of the papers were incorporated in INIS, namely ''Reliability comparison of two designs of low pressure regeneration of the 1000 MW unit at the Temelin nuclear power plant'' and ''Use of probability analysis of reliability in designing nuclear power facilities.''(J.B.)

  18. HPC Insights, Fall 2011

    Science.gov (United States)

    2011-01-01

    Power 6 (Davinci) systems. We have also made use of the Air Force Research Laboratory DSRC Altix (Hawk) and the Engineer Research and Development... the design and development of high performance gas turbine combustion systems, both as a pretest analysis tool to predict static and dynamic... application while gaining insight into MATLAB's value as an engineering tool. I would like to thank the MHPCC and the Akamai Workforce Initiative

  19. HPC Annual Report: Emulytics.

    Energy Technology Data Exchange (ETDEWEB)

    Crussell, Jonathan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boote, Jeffrey W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fritz, David Jakob [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-10-01

    Networked Information Technology systems play a key role in supporting critical government, military, and private computer installations. Many of today's critical infrastructure systems have strong dependencies on secure information exchange among geographically dispersed facilities. As operations become increasingly dependent on the information exchange, they also become targets for exploitation. The need to protect data and defend these systems from external attack has become increasingly vital, while the nature of the threats has become sophisticated and pervasive, making the challenges daunting. Enter Emulytics.

  20. Equipment Reliability Program in NPP Krsko

    International Nuclear Information System (INIS)

    Skaler, F.; Djetelic, N.

    2006-01-01

    Operation that is safe, reliable, effective and acceptable to the public is the common message in the mission statements of commercial nuclear power plants (NPPs). To fulfill these goals, the nuclear industry, among other areas, has to focus on: (1) Human Performance (HU) and (2) Equipment Reliability (EQ). The performance objective of HU is as follows: the behaviors of all personnel result in safe and reliable station operation. While unwanted human behaviors in operations mostly result directly in an event, behavior flaws in the areas of maintenance or engineering usually cause decreased equipment reliability. Unsatisfactory human performance has led even the best designed power plants into significant operating events, well-known examples of which can be found in the nuclear industry. Equipment reliability is today recognized as the key to success. While human performance at most NPPs has been improving since the start of WANO / INPO / IAEA evaluations, the open energy market has forced nuclear plants to reduce production costs and operate more reliably and effectively. The balance between these two (opposite) goals has made equipment reliability even more important for safe, reliable and efficient production. In a well-developed safety culture and human performance environment, the cost of insisting on on-line operation while ignoring safety principles can nowadays exceed the cost of electricity production losses. In the last decade the leading USA nuclear companies have put a lot of effort into improving equipment reliability at their stations, primarily based on the INPO Equipment Reliability Program AP-913. The Equipment Reliability Program is the key program not only for safe and reliable operation, but also for Life Cycle Management and Aging Management on the way to nuclear power plant life extension. The purpose of the Equipment Reliability process is to identify, organize, integrate and coordinate equipment reliability activities (preventive and predictive maintenance, maintenance

  1. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  2. Human factor reliability program

    International Nuclear Information System (INIS)

    Knoblochova, L.

    2017-01-01

    The human factor reliability program was introduced at the Slovenske elektrarne, a.s. (SE) nuclear power plants as one of the components of the Excellent Performance initiatives in 2011. The initiative's goal was to increase the reliability of both people and facilities, in response to three major areas of improvement: the need for improvement of results, troubleshooting support, and supporting the achievement of the company's goals. In practice, the human factor reliability program includes: tools to prevent human error; managerial observation and coaching; human factor analysis; quick information about events involving a human factor; human reliability timelines and performance indicators; and basic, periodic and extraordinary training in human factor reliability. (authors)

  3. The reliability process as an integral part of the product creation process. A contribution to assure the maturity level; Der Zuverlaessigkeitsprozess als integraler Bestandteil des Produktentstehungsprozesses. Ein Beitrag zur Reifegradabsicherung

    Energy Technology Data Exchange (ETDEWEB)

    Savic, R.; Kusenic, D. [ZF Friedrichshafen AG (Germany)

    2007-07-01

    The reliability process in the automotive and supplier industry covers all phases of the product creation process. The objective of the main tasks of the reliability process is to meet the requirements of the customer, authorities and law. The reliability process therefore assures a high readiness for a stable and undisturbed start of production in line with the product creation process. The paper presents a reliability-based model to assure the maturity level in the form of the reliability growth methodology and its verification, i.e. monitoring of the progress during the product creation process. As a consequence, the performance of the product creation process is required to be monitored and reported on. In this case the monitoring is based on the reliability parameters of the reliability growth in the form of the achieved MTBF or MTTF, from which the maturity level is derived during the product creation process. This makes it possible for reliability management to track the reliability targets, the efficiency of corrective actions, and the product creation process itself for all phases of the process. (orig.)
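    The MTBF-based reliability-growth tracking described here can be illustrated with one common formulation, the Duane postulate, in which the cumulative MTBF grows as a power law of accumulated test time. The sketch below is not the authors' method and uses assumed failure times; it fits the growth slope by linear regression in log-log space and projects the achieved MTBF at an assumed test horizon.

      # Duane reliability-growth sketch: cumulative MTBF (T / n(T)) grows as T**alpha.
      # Failure times below are assumed for illustration.
      import numpy as np

      failure_times = np.array([35., 110., 250., 530., 990., 1650., 2500.])  # hours, assumed
      n = np.arange(1, len(failure_times) + 1)
      cum_mtbf = failure_times / n

      # Linear fit in log-log space: log(cum_mtbf) = intercept + alpha * log(T)
      alpha, intercept = np.polyfit(np.log(failure_times), np.log(cum_mtbf), 1)
      print(f"growth slope alpha ~ {alpha:.2f}")

      # Projected cumulative and instantaneous MTBF at an assumed test horizon
      T_target = 5000.0
      mtbf_cum = np.exp(intercept) * T_target ** alpha
      mtbf_inst = mtbf_cum / (1.0 - alpha)   # instantaneous MTBF under the Duane model
      print(f"projected cumulative MTBF ~ {mtbf_cum:.0f} h, instantaneous ~ {mtbf_inst:.0f} h")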

  4. Extrapolation Method for System Reliability Assessment

    DEFF Research Database (Denmark)

    Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro

    2012-01-01

    The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals. It is shown that the proposed scheme is efficient and adds to generality for this class of approximations for probability integrals.
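    A much-simplified sketch of the extrapolation idea follows (it is not the authors' fitting function or example): failure probabilities are estimated by Monte Carlo for a sequence of artificially relaxed limit states, where failures are frequent and the estimates cheap, and the trend is then extrapolated to the actual limit state. Here an assumed R - S margin is used and log p(lambda) is fitted with a quadratic; the published scheme uses a tail-asymptotic fitting form instead.

      # Simplified illustration of extrapolation-based failure probability estimation.
      # Limit state g(x) = R - S with assumed lognormal resistance and normal load.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 200_000
      R = rng.lognormal(mean=np.log(12.0), sigma=0.1, size=N)   # assumed resistance
      S = rng.normal(loc=6.0, scale=1.0, size=N)                # assumed load
      g = R - S
      mu_g = g.mean()

      # Relaxed limit states: g_lambda = g - (1 - lam) * mu_g, lam in (0, 1]
      lams = np.linspace(0.3, 0.7, 5)
      p = np.array([(g - (1 - lam) * mu_g < 0).mean() for lam in lams])

      # Fit log p(lambda) with a quadratic and extrapolate to lambda = 1 (true limit state)
      coeffs = np.polyfit(lams, np.log(p), 2)
      p_extrapolated = np.exp(np.polyval(coeffs, 1.0))

      print("extrapolated failure probability:", p_extrapolated)
      print("direct MC estimate              :", (g < 0).mean())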

  5. Land Use Management in the Panama Canal Watershed to Maximize Hydrologic Ecosystem Services Benefits: Explicit Simulation of Preferential Flow Paths in an HPC Environment

    Science.gov (United States)

    Regina, J. A.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Cheng, Y.; Zhu, J.

    2017-12-01

    Preferential flow paths (PFP) resulting from biotic and abiotic factors contribute significantly to the generation of runoff in moist lowland tropical watersheds. Flow through PFPs represents the dominant mechanism by which land use choices affect hydrological behavior. The relative influence of PFPs varies depending upon land-use management practices. Assessing the possible effects of land-use and landcover change on flows, and other ecosystem services, in the humid tropics partially depends on adequate simulation of PFPs across different land uses. Currently, 5% of global trade passes through the Panama Canal, which is supplied with fresh water from the Panama Canal Watershed. A third set of locks, recently constructed, is expected to double the capacity of the Canal. We incorporated explicit simulation of PFPs into the ADHydro HPC distributed hydrological model to simulate the effects of land-use and landcover change due to land management incentives on water resources availability in the Panama Canal Watershed. These simulations help to test hypotheses related to the effectiveness of various proposed payments for ecosystem services schemes. This presentation will focus on hydrological model formulation and performance in an HPC environment.

  6. Current Capabilities at SNL for the Integration of Small Modular Reactors onto Smart Microgrids Using Sandia's Smart Microgrid Technology High Performance Computing and Advanced Manufacturing.

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, Salvador B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    Smart grids are a crucial component for enabling the nation’s future energy needs, as part of a modernization effort led by the Department of Energy. Smart grids and smart microgrids are being considered in niche applications, and as part of a comprehensive energy strategy to help manage the nation’s growing energy demands, for critical infrastructures, military installations, small rural communities, and large populations with limited water supplies. As part of a far-reaching strategic initiative, Sandia National Laboratories (SNL) presents herein a unique, three-pronged approach to integrate small modular reactors (SMRs) into microgrids, with the goal of providing economically-competitive, reliable, and secure energy to meet the nation’s needs. SNL’s triad methodology involves an innovative blend of smart microgrid technology, high performance computing (HPC), and advanced manufacturing (AM). In this report, Sandia’s current capabilities in those areas are summarized, as well as paths forward that will enable DOE to achieve its energy goals. In the area of smart grid/microgrid technology, Sandia’s current computational capabilities can model the entire grid, including temporal aspects and cyber security issues. Our tools include system development, integration, testing and evaluation, monitoring, and sustainment.

  7. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz. electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  8. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts for the design of structures, but the problems of structural engineering are better understood through them. Some of the main methods for the estimation of the probability of failure are exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper, because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer-aided simulations of a large number of tests. The procedures of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes have been demonstrated in this paper.
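    A minimal Monte Carlo sketch of the relationship between the probability of failure and the reliability index follows; the resistance and load distributions are assumed values, not the bridge-pier model of the paper.

      # Monte Carlo estimate of the failure probability and reliability index
      # for a simple limit state g = R - S. Distributions are assumed.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(42)
      N = 1_000_000
      R = rng.normal(30.0, 3.0, N)     # assumed resistance
      S = rng.normal(18.0, 2.5, N)     # assumed load effect

      pf = np.mean(R - S < 0.0)        # probability of failure
      beta = -norm.ppf(pf)             # reliability index, beta = -Phi^{-1}(pf)
      print(f"pf ~ {pf:.2e}, beta ~ {beta:.2f}")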

  9. Operational safety reliability research

    International Nuclear Information System (INIS)

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant

  10. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  11. A reliability program approach to operational safety

    International Nuclear Information System (INIS)

    Mueller, C.J.; Bezella, W.A.

    1985-01-01

    A Reliability Program (RP) model based on proven reliability techniques is being formulated for potential application in the nuclear power industry. Methods employed under NASA and military direction, commercial airline and related FAA programs were surveyed and a review of current nuclear risk-dominant issues conducted. The need for a reliability approach to address dependent system failures, operating and emergency procedures and human performance, and develop a plant-specific performance data base for safety decision making is demonstrated. Current research has concentrated on developing a Reliability Program approach for the operating phase of a nuclear plant's lifecycle. The approach incorporates performance monitoring and evaluation activities with dedicated tasks that integrate these activities with operation, surveillance, and maintenance of the plant. The detection, root-cause evaluation and before-the-fact correction of incipient or actual systems failures as a mechanism for maintaining plant safety is a major objective of the Reliability Program. (orig./HP)

  12. Quality and reliability management and its applications

    CERN Document Server

    2016-01-01

    Integrating development processes, policies, and reliability predictions from the beginning of the product development lifecycle to ensure high levels of product performance and safety, this book helps companies overcome the challenges posed by increasingly complex systems in today’s competitive marketplace.   Examining both research on and practical aspects of product quality and reliability management with an emphasis on applications, the book features contributions written by active researchers and/or experienced practitioners in the field, so as to effectively bridge the gap between theory and practice and address new research challenges in reliability and quality management in practice.    Postgraduates, researchers and practitioners in the areas of reliability engineering and management, amongst others, will find the book to offer a state-of-the-art survey of quality and reliability management and practices.

  13. Computer-aided reliability and risk assessment

    International Nuclear Information System (INIS)

    Leicht, R.; Wingender, H.J.

    1989-01-01

    Activities in the fields of reliability and risk analyses have led to the development of particular software tools which now are combined in the PC-based integrated CARARA system. The options available in this system cover a wide range of reliability-oriented tasks, like organizing raw failure data in the component/event data bank FDB, performing statistical analysis of those data with the program FDA, managing the resulting parameters in the reliability data bank RDB, and performing fault tree analysis with the fault tree code FTL or evaluating the risk of toxic or radioactive material release with the STAR code. (orig.)

  14. Challenges Regarding IP Core Functional Reliability

    Science.gov (United States)

    Berg, Melanie D.; LaBel, Kenneth A.

    2017-01-01

    For many years, intellectual property (IP) cores have been incorporated into field programmable gate array (FPGA) and application specific integrated circuit (ASIC) design flows. However, the usage of large complex IP cores was limited within products that required a high level of reliability. This is no longer the case. IP core insertion has become mainstream, including use in highly reliable products. Due to limited visibility and control, challenges exist when using IP cores that can subsequently compromise product reliability. We discuss challenges and suggest potential solutions to critical application IP insertion.

  15. CADRIGS--computer aided design reliability interactive graphics system

    International Nuclear Information System (INIS)

    Kwik, R.J.; Polizzi, L.M.; Sticco, S.; Gerrard, P.B.; Yeater, M.L.; Hockenbury, R.W.; Phillips, M.A.

    1982-01-01

    An integrated reliability analysis program combining graphic representation of fault trees, automated data base loadings and reference, and automated construction of reliability code input files was developed. The functional specifications for CADRIGS, the computer aided design reliability interactive graphics system, are presented. Previously developed fault tree segments used in auxiliary feedwater system safety analysis were constructed on CADRIGS and, when combined, yielded results identical to those resulting from manual input to the same reliability codes

  16. Improving machinery reliability

    CERN Document Server

    Bloch, Heinz P

    1998-01-01

    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  17. LED system reliability

    NARCIS (Netherlands)

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.

    2011-01-01

    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific

  18. Reliability of neural encoding

    DEFF Research Database (Denmark)

    Alstrøm, Preben; Beierholm, Ulrik; Nielsen, Carsten Dahl

    2002-01-01

    The reliability with which a neuron is able to create the same firing pattern when presented with the same stimulus is of critical importance to the understanding of neuronal information processing. We show that reliability is closely related to the process of phaselocking. Experimental results f...

  19. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
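    As a toy counterpart to the simulation framework described above (not the ORNL code), the sketch below estimates the mean time to data loss of a two-way mirrored storage group with assumed exponential disk failure and rebuild times; data are lost when the surviving copy fails before the rebuild completes.

      # Toy MTTDL simulation for a mirrored pair with exponential failure/rebuild.
      # Parameter values are assumed for illustration.
      import random

      MTTF_DISK = 1.0e5      # hours, assumed mean time to failure of one disk
      MTTR_DISK = 24.0       # hours, assumed mean rebuild time

      def time_to_data_loss(rng):
          t = 0.0
          while True:
              t += rng.expovariate(2.0 / MTTF_DISK)       # first failure in the pair
              rebuild = rng.expovariate(1.0 / MTTR_DISK)  # time to rebuild onto a spare
              second = rng.expovariate(1.0 / MTTF_DISK)   # failure of the surviving disk
              if second < rebuild:
                  return t + second                       # data loss
              t += rebuild                                # rebuilt; pair healthy again

      rng = random.Random(1)
      trials = [time_to_data_loss(rng) for _ in range(200)]
      mttdl = sum(trials) / len(trials)
      print(f"simulated MTTDL ~ {mttdl:.3e} h "
            f"(analytic approximation MTTF^2/(2*MTTR) = {MTTF_DISK**2 / (2*MTTR_DISK):.3e} h)")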

  20. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
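    A small conjugate-prior example in the spirit of these methods follows (the prior parameters and observed data are assumed): a Gamma prior on a constant failure rate is updated with an observed failure count over an exposure time, giving a posterior mean and a 90% credible interval.

      # Bayesian update of a failure rate: Gamma prior + Poisson likelihood.
      # Prior parameters and observed data are assumed for illustration.
      from scipy.stats import gamma

      a0, b0 = 2.0, 1000.0             # prior: shape a0, rate b0 (mean a0/b0 per hour)
      failures, exposure = 3, 5000.0   # observed failures over the exposure time (hours)

      a_post, b_post = a0 + failures, b0 + exposure
      posterior = gamma(a_post, scale=1.0 / b_post)

      print("posterior mean failure rate:", a_post / b_post)
      print("90% credible interval      :", posterior.ppf([0.05, 0.95]))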

  1. Design for reliability: NASA reliability preferred practices for design and test

    Science.gov (United States)

    Lalli, Vincent R.

    1994-01-01

    This tutorial summarizes reliability experience from both NASA and industry and reflects engineering practices that support current and future civil space programs. These practices were collected from various NASA field centers and were reviewed by a committee of senior technical representatives from the participating centers (members are listed at the end). The material for this tutorial was taken from the publication issued by the NASA Reliability and Maintainability Steering Committee (NASA Reliability Preferred Practices for Design and Test. NASA TM-4322, 1991). Reliability must be an integral part of the systems engineering process. Although both disciplines must be weighed equally with other technical and programmatic demands, the application of sound reliability principles will be the key to the effectiveness and affordability of America's space program. Our space programs have shown that reliability efforts must focus on the design characteristics that affect the frequency of failure. Herein, we emphasize that these identified design characteristics must be controlled by applying conservative engineering principles.

  2. Equipment Reliability Process in Krsko NPP

    International Nuclear Information System (INIS)

    Gluhak, M.

    2016-01-01

    To ensure long-term safe and reliable plant operation, equipment operability and availability must be ensured by establishing a group of processes within the nuclear power plant. The equipment reliability process represents the integration and coordination of important equipment reliability activities into one process, which enables equipment performance and condition monitoring, preventive maintenance activity development, implementation and optimization, continuous improvement of the processes, and long-term planning. The initiative for introducing a systematic approach to assuring equipment reliability came from the US nuclear industry, guided by INPO (Institute of Nuclear Power Operations) and with the participation of several US nuclear utilities. As a result of the initiative, the first edition of INPO document AP-913, 'Equipment Reliability Process Description', was issued and became the basic document for implementation of the equipment reliability process for the whole nuclear industry. The scope of the equipment reliability process in Krsko NPP consists of the following programs: equipment criticality classification, preventive maintenance program, corrective action program, system health reports and long-term investment plan. By implementation, supervision and continuous improvement of those programs, guided by more than thirty years of operating experience, Krsko NPP will continue to be on a track of safe and reliable operation until the end of its prolonged lifetime. (author).

  3. Reliability engineering for nuclear and other high technology systems

    International Nuclear Information System (INIS)

    Lakner, A.A.; Anderson, R.T.

    1985-01-01

    This book is written for the reliability instructor, program manager, system engineer, design engineer, reliability engineer, nuclear regulator, probability risk assessment (PRA) analyst, general manager and others who are involved in system hardware acquisition, design and operation and are concerned with plant safety and operational cost-effectiveness. It provides criteria, guidelines and comprehensive engineering data affecting reliability; it covers the key aspects of system reliability as it relates to conceptual planning, cost tradeoff decisions, specification, contractor selection, design, test and plant acceptance and operation. It treats reliability as an integrated methodology, explicitly describing life cycle management techniques as well as the basic elements of a total hardware development program, including: reliability parameters and design improvement attributes, reliability testing, reliability engineering and control. It describes how these elements can be defined during procurement, and implemented during design and development to yield reliable equipment. (author)

  4. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  5. Reliability of construction materials

    International Nuclear Information System (INIS)

    Merz, H.

    1976-01-01

    One can also speak of reliability with respect to materials. While for the reliability of components the MTBF (mean time between failures) is regarded as the main criterion, with regard to materials this is replaced by possible failure mechanisms such as physical/chemical reaction mechanisms, disturbances of physical or chemical equilibrium, or other interactions or changes of the system. The main tasks of the reliability analysis of materials are therefore the prediction of the various failure causes, the identification of interactions, and the development of nondestructive testing methods. (RW) [de]

  6. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.

  7. Reliability and mechanical design

    International Nuclear Information System (INIS)

    Lemaire, Maurice

    1997-01-01

    Many results in mechanical design are obtained from a modelling of physical reality and from a numerical solution which leads to the evaluation of needs and resources. The goal of the reliability analysis is to evaluate the confidence which it is possible to grant to the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: the sensitivity analysis and the reliability analysis. Approximate methods are applicable to problems related to reliability, availability, maintainability and safety (RAMS).

  8. RTE - 2013 Reliability Report

    International Nuclear Information System (INIS)

    Denis, Anne-Marie

    2014-01-01

    RTE publishes a yearly reliability report based on a standard model to facilitate comparisons and highlight long-term trends. The 2013 report does not only state the facts of the Significant System Events (ESS); it moreover underlines the main elements dealing with the reliability of the electrical power system. It highlights the various elements which contribute to present and future reliability and provides an overview of the interaction between the various stakeholders of the Electrical Power System on the scale of the European Interconnected Network. (author)

  9. A reliability analysis tool for SpaceWire network

    Science.gov (United States)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft. Therefore, it is very important to analyze and improve the reliability performance of a SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the function division of a distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to the system reliability matrix, and the reliability of the network system can be deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. Using this tool, we analyze several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will have a direct influence on both task division and topology selection in the design phase of SpaceWire network systems.
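    The benefit of the dual-redundancy scheme mentioned above can be sketched with elementary series/parallel reliability algebra. This is not the tool's task-matrix method; the link and router reliabilities are assumed values and the elements are assumed independent.

      # Series vs. dual-redundant path reliability (independent elements assumed).
      # Element reliabilities over the mission time are assumed values.
      from functools import reduce

      def series(rel):            # all elements must work
          return reduce(lambda acc, r: acc * r, rel, 1.0)

      def parallel(rel):          # at least one element must work
          return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rel, 1.0)

      link, router = 0.995, 0.990                        # assumed element reliabilities
      single_path = series([link, router, link])         # link -> router -> link
      dual_path = parallel([single_path, single_path])   # two independent paths

      print("single path   :", round(single_path, 6))
      print("dual redundant:", round(dual_path, 6))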

  10. Approach to reliability assessment

    International Nuclear Information System (INIS)

    Green, A.E.; Bourne, A.J.

    1975-01-01

    Experience has shown that reliability assessments can play an important role in the early design and subsequent operation of technological systems where reliability is at a premium. The approaches to and techniques for such assessments, which have been outlined in the paper, have been successfully applied in a variety of applications ranging from individual equipment to large and complex systems. The general approach involves the logical and systematic establishment of the purpose, performance requirements and reliability criteria of systems. This is followed by an appraisal of likely system achievement based on the understanding of different types of variational behavior. A fundamental reliability model emerges from the correlation between the appropriate Q and H functions for performance requirement and achievement. This model may cover the complete spectrum of performance behavior in all the system dimensions.

  11. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

    Abstract Background: Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods: The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, which the program uploads to the server for calculating the reliability and other statistics describing the ratings. Results: When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally the program estimates the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion: This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
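    The Spearman-Brown step mentioned in the abstract is simple enough to show directly; this is a generic sketch, not the PHP utility itself, and the single-rating reliability is an assumed value. Given the reliability of one rating, the formula predicts the reliability of the average of k ratings.

      # Spearman-Brown prophecy formula: reliability of the mean of k ratings,
      # given the single-rating reliability r. Values below are assumed.
      def spearman_brown(r, k):
          return k * r / (1.0 + (k - 1.0) * r)

      single_rating_reliability = 0.45     # assumed, e.g. obtained from Ebel's method
      for k in (2, 3, 5):
          print(k, "raters:", round(spearman_brown(single_rating_reliability, k), 3))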

  12. Structural systems reliability analysis

    International Nuclear Information System (INIS)

    Frangopol, D.

    1975-01-01

    For an exact evaluation of the reliability of a structure it appears necessary to determine the distribution densities of the loads and resistances and to calculate the correlation coefficients between loads and between resistances. These statistical characteristics can be obtained only on the basis of a long activity period. In case such studies are missing, the statistical properties formulated here give upper and lower bounds of the reliability. (orig./HP) [de]

  13. Reliability and maintainability

    International Nuclear Information System (INIS)

    1994-01-01

    Several communications in this conference are concerned with nuclear plant reliability and maintainability; their titles are: maintenance optimization of stand-by Diesels of 900 MW nuclear power plants; CLAIRE: an event-based simulation tool for software testing; reliability as one important issue within the periodic safety review of nuclear power plants; design of nuclear building ventilation by the means of functional analysis; operation characteristic analysis for a power industry plant park, as a function of influence parameters

  14. Reliability data book

    International Nuclear Information System (INIS)

    Bento, J.P.; Boerje, S.; Ericsson, G.; Hasler, A.; Lyden, C.O.; Wallin, L.; Poern, K.; Aakerlund, O.

    1985-01-01

    The main objective for the report is to improve failure data for reliability calculations as parts of safety analyses for Swedish nuclear power plants. The work is based primarily on evaluations of failure reports as well as information provided by the operation and maintenance staff of each plant. In the report are presented charts of reliability data for: pumps, valves, control rods/rod drives, electrical components, and instruments. (L.E.)

  15. Multi-Disciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  16. Analysis and Application of Reliability

    International Nuclear Information System (INIS)

    Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju

    1999-05-01

    This book covers the analysis and application of reliability, including the definition, importance and historical background of reliability; the reliability function and failure rate; life distributions and reliability assumptions; the reliability of non-repairable systems; the reliability of repairable systems; reliability sampling tests; failure analysis (such as by FMEA and FTA) with cases; accelerated life testing, covering basic concepts, acceleration and acceleration factors, and the analysis of accelerated life test data; and maintenance policies covering alternation and inspection.
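    Two of the building blocks listed above, the reliability function for a constant failure rate and the acceleration factor used in accelerated life testing, can be sketched as follows; the Arrhenius form is one common choice, and all numeric values (failure rate, activation energy, temperatures) are assumed for illustration.

      # Constant-failure-rate reliability and an Arrhenius acceleration factor.
      # All parameter values are assumed for illustration.
      import math

      lam = 2.0e-5                       # assumed failure rate per hour
      t = 10_000.0                       # mission time, hours
      reliability = math.exp(-lam * t)   # R(t) = exp(-lambda * t)
      mttf = 1.0 / lam                   # MTTF for the exponential model
      print(f"R({t:.0f} h) = {reliability:.3f}, MTTF = {mttf:.0f} h")

      # Arrhenius acceleration factor between use and stress temperatures
      Ea = 0.7                           # assumed activation energy, eV
      k_B = 8.617e-5                     # Boltzmann constant, eV/K
      T_use, T_stress = 328.0, 398.0     # 55 C and 125 C, in kelvin
      AF = math.exp(Ea / k_B * (1.0 / T_use - 1.0 / T_stress))
      print(f"acceleration factor ~ {AF:.1f}")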

  17. SIERRA Mechanics, an emerging massively parallel HPC capability, for use in coupled THMC analyses of HLW repositories in clay/shale

    International Nuclear Information System (INIS)

    Bean, J.E.; Sanchez, M.; Arguello, J.G.

    2012-01-01

    Document available in extended abstract form only. Because, until recently, U.S. efforts had been focused on the volcanic tuff site at Yucca Mountain, radioactive waste disposal in U.S. clay/shale formations has not been considered for many years. However, advances in multi-physics computational modeling and research into clay mineralogy continue to improve the scientific basis for assessing nuclear waste repository performance in such formations. Disposal of high-level radioactive waste (HLW) in suitable clay/shale formations is attractive because the material is essentially impermeable and self-sealing, conditions are chemically reducing, and sorption tends to prevent radionuclide transport. Vertically and laterally extensive shale and clay formations exist in multiple locations in the contiguous 48 states. This paper describes an emerging massively parallel (MP) high performance computing (HPC) capability - SIERRA Mechanics - that is applicable to the simulation of coupled-physics processes occurring within a potential clay/shale repository for disposal of HLW within the U.S. The SIERRA Mechanics code development project has been underway at Sandia National Laboratories for approximately the past decade under the auspices of the U.S. Department of Energy's Advanced Scientific Computing (ASC) program. SIERRA Mechanics was designed and developed from its inception to run on the latest and most sophisticated massively parallel computing hardware, with the capability to span the hardware range from single workstations to systems with thousands of processors. The foundation of SIERRA Mechanics is the SIERRA tool-kit, which provides finite element application-code services such as: (1) mesh and field data management, both parallel and distributed; (2) transfer operators for mapping field variables from one mechanics application to another; (3) a solution controller for code coupling; and (4) included third party libraries (e.g., solver libraries, communications

  18. Safety and reliability criteria

    International Nuclear Information System (INIS)

    O'Neil, R.

    1978-01-01

    Nuclear power plants and, in particular, reactor pressure boundary components have unique reliability requirements, in that usually no significant redundancy is possible, and a single failure can give rise to possible widespread core damage and fission product release. Reliability may be required for availability or safety reasons, but in the case of the pressure boundary and certain other systems safety may dominate. Possible Safety and Reliability (S and R) criteria are proposed which would produce acceptable reactor design. Without some S and R requirement the designer has no way of knowing how far he must go in analysing his system or component, or whether his proposed solution is likely to gain acceptance. The paper shows how reliability targets for given components and systems can be individually considered against the derived S and R criteria at the design and construction stage. Since in the case of nuclear pressure boundary components there is often very little direct experience on which to base reliability studies, relevant non-nuclear experience is examined. (author)

  19. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CERs and sometimes possible CTRs, in devising a suitable cost-effective policy.

  20. Issues in cognitive reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Hitchler, M.J.; Rumancik, J.A.

    1984-01-01

    This chapter examines some problems in current methods to assess reactor operator reliability at cognitive tasks and discusses new approaches to solve these problems. The two types of human failures are errors in the execution of an intention and errors in the formation/selection of an intention. Topics considered include the types of description, error correction, cognitive performance and response time, the speed-accuracy tradeoff function, function based task analysis, and cognitive task analysis. One problem of human reliability analysis (HRA) techniques in general is the question of what are the units of behavior whose reliability are to be determined. A second problem for HRA is that people often detect and correct their errors. The use of function based analysis, which maps the problem space for plant control, is recommended

  1. Reliability issues in PACS

    Science.gov (United States)

    Taira, Ricky K.; Chan, Kelby K.; Stewart, Brent K.; Weinberg, Wolfram S.

    1991-07-01

    Reliability is an increasing concern when moving PACS from the experimental laboratory to the clinical environment. Any system downtime may seriously affect patient care. The authors report on the several classes of errors encountered during the pre-clinical release of the PACS during the past several months and present the solutions implemented to handle them. The reliability issues discussed include: (1) environmental precautions, (2) database backups, (3) monitor routines of critical resources and processes, (4) hardware redundancy (networks, archives), and (5) development of a PACS quality control program.

  2. Reliability Parts Derating Guidelines

    Science.gov (United States)

    1982-06-01

    "Reliability of GaAs Injection Lasers", De Loach, B. C., Jr., 1973 IEEE/OSA Conference on Laser Engineering and Applications; IEEE Trans. Reliability, Vol. R-23, No. 4, pp. 226-230, October 1974. ... operation at [illegible] deg C, mounted on a 4-inch square, 0.250-inch thick aluminum alloy panel. This mounting technique should be taken into consideration.

  3. Reliability evaluation programmable logic devices

    International Nuclear Information System (INIS)

    Srivani, L.; Murali, N.; Thirugnana Murthy, D.; Satya Murty, S.A.V.

    2014-01-01

    Programmable Logic Devices (PLDs) are widely used as basic building modules in high integrity systems, considering their robust features such as gate density, performance, speed etc. PLDs are used to implement digital designs such as bus interface logic, control logic, sequencing logic, glue logic etc. Due to semiconductor evolution, new PLDs with state-of-the-art features are arriving on the market. Since these devices are reliable as per the manufacturer's specification, they were used in the design of safety systems. But due to their reduced market life, the availability of performance data is limited. Evaluating a PLD before deploying it in a safety system is therefore very important. This paper presents a survey on the use of PLDs in the nuclear domain and the steps involved in the evaluation of PLDs using Quantitative Accelerated Life Testing. (author)

  4. Columbus safety and reliability

    Science.gov (United States)

    Longhurst, F.; Wessels, H.

    1988-10-01

    Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes, effects, and criticality analysis is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

  5. Reliability versus reproducibility

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.

    1976-01-01

    Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination, or the reproducibility of results is very poor. On the other hand, a defect can be detected on each subsequent examination, for higher reliability, and still have poor reproducibility of results.

  6. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has

  7. Designing reliability into accelerators

    International Nuclear Information System (INIS)

    Hutton, A.

    1992-08-01

    For the next generation of high performance, high average luminosity colliders, the ''factories,'' reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis

  8. Proof tests on reliability

    International Nuclear Information System (INIS)

    Mishima, Yoshitsugu

    1983-01-01

    In order to obtain public understanding of nuclear power plants, tests should be carried out to prove the reliability and safety of present LWR plants. For example, the aseismicity of nuclear power plants must be verified by using a large scale earthquake simulator. Reliability testing began in fiscal 1975, and the proof tests on steam generators and on PWR support and flexure pins against stress corrosion cracking have already been completed; the results have been highly appreciated internationally. The capacity factor of nuclear power plant operation in Japan rose to 80% in the summer of 1983, which, considering the period of regular inspection, means operation at almost full capacity. Japanese LWR technology has now risen to the top place in the world after having overcome its defects. The significance of the reliability tests is to secure functioning until the age limit is reached, to confirm the correct forecast of deterioration processes, to confirm the effectiveness of remedies to defects, and to confirm the accuracy of predicting the behavior of facilities. The reliability of nuclear valves, fuel assemblies, the heat affected zones in welding, reactor cooling pumps and electric instruments has been tested or is being tested. (Kako, I.)

  9. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  10. Reliability of Plastic Slabs

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    1989-01-01

    In the paper it is shown how upper and lower bounds for the reliability of plastic slabs can be determined. For the fundamental case it is shown that optimal bounds of a deterministic and a stochastic analysis are obtained on the basis of the same failure mechanisms and the same stress fields....

  11. Reliability based structural design

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2014-01-01

    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A

  12. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," Transportation Research Record: Journal of the Transportation Research Board, n 2188, pp. 46-54. 2. Park S.,...

  13. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  14. Parametric Mass Reliability Study

    Science.gov (United States)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  15. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... This paper presents a reliability analysis of such a system using reliability ... Keywords-compressor system, reliability, reliability block diagram, RBD .... the same structure has been kept with the three subsystems: air flow, oil flow and .... and Safety in Engineering Design", Springer, 2009. [3] P. O'Connor ...

  16. Reliability analysis of prestressed concrete containment structures

    International Nuclear Information System (INIS)

    Jiang, J.; Zhao, Y.; Sun, J.

    1993-01-01

    The reliability analysis of prestressed concrete containment structures subjected to combinations of static and dynamic loads, with consideration of uncertainties in structural and load parameters, is presented. Limit state probabilities for given parameters are calculated using the procedure developed at BNL, while those accounting for parameter uncertainties are calculated by a fast integration method for time-variant structural reliability. The limit state surface of the prestressed concrete containment is constructed directly, incorporating the prestress. The sensitivities of the Cholesky decomposition matrix and the natural vibration characteristics are calculated by simplified procedures. (author)

  17. Assessing the Impact of Imperfect Diagnosis on Service Reliability

    DEFF Research Database (Denmark)

    Grønbæk, Lars Jesper; Schwefel, Hans-Peter; Kjærgaard, Jens Kristian

    2010-01-01

    , representative diagnosis performance metrics have been defined and their closed-form solutions obtained for the Markov model. These equations enable model parameterization from traces of implemented diagnosis components. The diagnosis model has been integrated in a reliability model assessing the impact...... of the diagnosis functions for the studied reliability problem. In a simulation study we finally analyze trade-off properties of diagnosis heuristics from literature, map them to the analytic Markov model, and investigate its suitability for service reliability optimization....

  18. Simulation Approach to Mission Risk and Reliability Analysis, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

  19. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm as a non-trivial extension of the Cloud is considered and the reliability of the networks of smart devices is discussed. Combining the reliability...... requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible....

  20. Reliability analysis of wind embedded power generation system for ...

    African Journals Online (AJOL)

    This paper presents a method for Reliability Analysis of wind energy embedded in power generation system for Indian scenario. This is done by evaluating the reliability index, loss of load expectation, for the power generation system with and without integration of wind energy sources in the overall electric power system.

  1. RTE - Reliability report 2016

    International Nuclear Information System (INIS)

    2017-06-01

    Every year, RTE produces a reliability report for the past year. This document lays out the main factors that affected the electrical power system's operational reliability in 2016 and the initiatives currently under way intended to ensure its reliability in the future. Within the context of the energy transition, changes to the European interconnected network mean that RTE has to adapt on an on-going basis. These changes include the increase in the share of renewables injecting an intermittent power supply into networks, resulting in a need for flexibility, and a diversification in the numbers of stakeholders operating in the energy sector and changes in the ways in which they behave. These changes are dramatically altering the structure of the power system of tomorrow and the way in which it will operate, particularly the way in which voltage and frequency are controlled, as well as the distribution of flows, the power system's stability, the level of reserves needed to ensure supply-demand balance, network studies, assets' operating and control rules, the tools used and the expertise of operators. The results obtained in 2016 are evidence of an overall satisfactory level of reliability for RTE's operations in somewhat demanding circumstances: more complex supply-demand balance management, cross-border schedules at interconnections indicating operation that is closer to its limits and, most notably, having to manage a cold spell just as several nuclear power plants had been shut down. In a drive to keep pace with the changes expected to occur in these circumstances, RTE implemented numerous initiatives to ensure high levels of reliability: maintaining investment levels of 1.5 billion euros per year; increasing cross-zonal capacity at borders with neighbouring countries, thus bolstering the security of our electricity supply; implementing new mechanisms (demand response, capacity mechanism, interruptibility, etc.); involvement in tests or projects

  2. Waste package reliability analysis

    International Nuclear Information System (INIS)

    Pescatore, C.; Sastre, C.

    1983-01-01

    Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table

  3. Accelerator reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, L; Duru, Ph; Koch, J M; Revol, J L; Van Vaerenbergh, P; Volpe, A M; Clugnet, K; Dely, A; Goodhew, D

    2002-07-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  4. Human Reliability Program Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  5. Reliability and construction control

    Directory of Open Access Journals (Sweden)

    Sherif S. AbdelSalam

    2016-06-01

    The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using the EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that quantifies the variation between the pile nominal capacities calculated using static versus dynamic formulas. Among the major outcomes, the lowest coefficient of variation is associated with Davisson's criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.
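
    To make the calibration idea concrete, a minimal sketch is given below; the pile capacities, target reliability index and simplified lognormal resistance-factor formula are illustrative assumptions only, not values or methods taken from the study or from the EGYPT database.

      import math
      import statistics

      # Hypothetical (measured, static-formula, dynamic-formula) pile capacities in kN;
      # illustrative numbers only, not entries from the EGYPT database.
      piles = [(1800, 1500, 1650), (2100, 1900, 2000), (1600, 1400, 1550), (2500, 2300, 2350)]

      # Resistance bias = measured / predicted (static formula); its mean and COV drive calibration.
      bias = [m / s for m, s, _ in piles]
      lam = statistics.mean(bias)
      cov = statistics.stdev(bias) / lam

      # Simplified lognormal, resistance-only FOSM-style resistance factor for a target reliability index.
      beta_target = 2.33
      phi = lam * math.exp(-beta_target * math.sqrt(math.log(1.0 + cov ** 2)))

      # Construction control factor: ratio of static to dynamic nominal capacities for the same piles.
      ccf = statistics.mean(s / d for _, s, d in piles)

      print(f"bias mean = {lam:.2f}, COV = {cov:.2f}, phi = {phi:.2f}, control factor = {ccf:.2f}")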

  6. Scyllac equipment reliability analysis

    International Nuclear Information System (INIS)

    Gutscher, W.D.; Johnson, K.J.

    1975-01-01

    Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional down time. The cable-tip problem may not be easy or even desirable to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The balance of the failures have such a low occurrence rate that they do not cause much down time and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace

  7. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage sides of a half-bridge IGBT separately in every fundamental...... The on-state voltage is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation...
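
    The underlying idea can be sketched numerically (the calibration points and measured voltages below are hypothetical, not taken from the article): the on-state collector-emitter voltage at a fixed sense current is first calibrated against known temperatures off-line, and the inverted calibration then converts voltages sampled during operation into junction temperature estimates.

      import numpy as np

      # Hypothetical off-line calibration: Vce,on at a fixed sense current while the module
      # is held at known temperatures (degC -> V). Numbers are for illustration only.
      calib_T   = np.array([25.0, 50.0, 75.0, 100.0, 125.0])
      calib_vce = np.array([1.45, 1.41, 1.37, 1.33, 1.29])

      # Vce,on is close to linear in temperature at a fixed current: Vce = a * T + b.
      a, b = np.polyfit(calib_T, calib_vce, 1)

      def junction_temperature(vce_on):
          """Invert the calibration to estimate junction temperature from a measured Vce,on."""
          return (vce_on - b) / a

      # On-state voltages sampled once per fundamental cycle during operation (hypothetical).
      for v in (1.356, 1.349, 1.342):
          print(f"Vce,on = {v:.3f} V  ->  Tj approx {junction_temperature(v):.1f} degC")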

  8. Accelerator reliability workshop

    International Nuclear Information System (INIS)

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D.

    2002-01-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop

  9. Safety and reliability assessment

    International Nuclear Information System (INIS)

    1979-01-01

    This report contains the papers delivered at the course on safety and reliability assessment held at the CSIR Conference Centre, Scientia, Pretoria. The following topics were discussed: safety standards; licensing; biological effects of radiation; what is a PWR; safety principles in the design of a nuclear reactor; radio-release analysis; quality assurance; the staffing, organisation and training for a nuclear power plant project; event trees, fault trees and probability; Automatic Protective Systems; sources of failure-rate data; interpretation of failure data; synthesis and reliability; quantification of human error in man-machine systems; dispersion of noxious substances through the atmosphere; criticality aspects of enrichment and recovery plants; and risk and hazard analysis. Extensive examples are given as well as case studies

  10. Reliability and protection against failure in computer systems

    International Nuclear Information System (INIS)

    Daniels, B.K.

    1979-01-01

    Computers are being increasingly integrated into the control and safety systems of large and potentially hazardous industrial processes. This development introduces problems which are particular to computer systems and opens the way to new techniques of solving conventional reliability and availability problems. References to the developing fields of software reliability, human factors and software design are given, and these subjects are related, where possible, to the quantified assessment of reliability. Original material is presented in the areas of reliability growth and computer hardware failure data. The report draws on the experience of the National Centre of Systems Reliability in assessing the capability and reliability of computer systems both within the nuclear industry, and from the work carried out in other industries by the Systems Reliability Service. (author)

  11. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs: Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG). The instruments were applied in 17 German-speaking samples (29 subsamples, grouped by self-report, other report, and metaperception assessments). The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale specificity for 1% to 28%, respectively. Reliability estimates varied considerably, from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificity components but otherwise works effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.
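
    As a rough numerical illustration of the variance decomposition (the component values below are invented, and reading the axes reliability as the axes share of total item variance is a deliberate simplification, not necessarily the exact estimator used in the study):

      # Hypothetical variance components from a tau-equivalent CFA decomposition.
      components = {
          "general_factor": 0.20,
          "axes":           0.25,
          "scale_specific": 0.15,
          "block_specific": 0.10,
          "item_specific":  0.30,   # includes measurement error
      }

      total = sum(components.values())
      # Simplified summary: the share of total item variance carried by the axes component.
      axes_reliability = components["axes"] / total
      print(f"axes reliability (simplified) = {axes_reliability:.2f}")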

  12. The cost of reliability

    International Nuclear Information System (INIS)

    Ilic, M.

    1998-01-01

    In this article the restructuring process under way in the US power industry is revisited from the point of view of transmission system provision and reliability. While in the past the cost of reliability was rolled into the average cost of electricity to all customers, it is not so obvious how this cost is managed in the new industry. A new MIT approach to transmission pricing is suggested here as a possible solution. [it]

  13. Software reliability studies

    Science.gov (United States)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  14. A part of patients with autism spectrum disorder has haploidy of HPC-1/syntaxin1A gene that possibly causes behavioral disturbance as in experimentally gene ablated mice.

    Science.gov (United States)

    Kofuji, Takefumi; Hayashi, Yuko; Fujiwara, Tomonori; Sanada, Masumi; Tamaru, Masao; Akagawa, Kimio

    2017-03-22

    Autism spectrum disorder (ASD) is highly heritable and encompasses a varied set of neuropsychiatric disorders with a wide-ranging presentation. HPC-1/syntaxin1A (STX1A) encodes a neuronal plasma membrane protein that regulates the secretion of neurotransmitters and neuromodulators. STX1A gene ablated mice (null and heterozygote mutants) exhibit abnormal behavioral profiles similar to human autistic symptoms, accompanied by reduced monoamine secretion. To determine whether copy number variation of the STX1A gene and changes in its expression correlate with ASD as in STX1A gene ablated mice, we performed a copy number assay and real-time quantitative RT-PCR using blood or saliva samples from ASD patients. We found that some ASD patients were haploid for the STX1A gene, similar to STX1A heterozygote mutant mice. However, the copy number of the STX1A gene was normal in the parents and siblings of ASD patients with STX1A gene haploidy. In ASD patients with gene haploidy, STX1A mRNA expression was reduced to about half that of their parents. Thus, a subset of ASD patients had haploidy of the STX1A gene and lower STX1A gene expression. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Investment in new product reliability

    International Nuclear Information System (INIS)

    Murthy, D.N.P.; Rausand, M.; Virtanen, S.

    2009-01-01

    Product reliability is of great importance to both manufacturers and customers. Building reliability into a new product is costly, but the consequences of inadequate product reliability can be costlier. This implies that manufacturers need to decide on the optimal investment in new product reliability by achieving a suitable trade-off between the two costs. This paper develops a framework and proposes an approach to help manufacturers decide on the investment in new product reliability.

  16. Experimental research of fuel element reliability

    International Nuclear Information System (INIS)

    Cech, B.; Novak, J.; Chamrad, B.

    1980-01-01

    The rate and extent of damage to the integrity of the can, which retains the fission products, is the basic criterion of reliability. The extent of damage is measurable by the fission product leakage into the reactor coolant circuit. An analysis is made of the causes of fuel element can damage and a model is proposed for testing fuel element reliability. Special experiments should be carried out to assess partial processes, such as heat transfer and fuel element surface temperature, fission gas liberation and pressure changes inside the element, corrosion weakening of the can wall, and can deformation as a result of mechanical interactions. The irradiation probe for reliability testing of fuel elements is described. (M.S.)

  17. Linkage reliability in local area network

    International Nuclear Information System (INIS)

    Buissson, J.; Sanchis, P.

    1984-11-01

    Local area networks for industrial applications, e.g. in nuclear power plants, differ from their counterparts intended for office use in that they are required to meet more stringent requirements in terms of reliability, security and availability. The designers of such networks take full advantage of office-oriented developments (more specifically the integrated circuits) and increase their performance capabilities with respect to the industrial requirements. [fr]

  18. Systems integration.

    Science.gov (United States)

    Siemieniuch, C E; Sinclair, M A

    2006-01-01

    The paper presents a view of systems integration, from an ergonomics/human factors perspective, emphasising the process of systems integration as is carried out by humans. The first section discusses some of the fundamental issues in systems integration, such as the significance of systems boundaries, systems lifecycle and systems entropy, issues arising from complexity, the implications of systems immortality, and so on. The next section outlines various generic processes for executing systems integration, to act as guides for practitioners. These address both the design of the system to be integrated and the preparation of the wider system in which the integration will occur. Then the next section outlines some of the human-specific issues that would need to be addressed in such processes; for example, indeterminacy and incompleteness, the prediction of human reliability, workload issues, extended situation awareness, and knowledge lifecycle management. For all of these, suggestions and further readings are proposed. Finally, the conclusions section reiterates in condensed form the major issues arising from the above.

  19. Final Technical Report: Integrated Distribution-Transmission Analysis for Very High Penetration Solar PV

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.; Jones, Wesley; Biagioni, David; Baker, Kyri; Wu, Hongyu; Giraldez, Julieta; Sorensen, Harry; Lunacek, Monte; Merket, Noel; Jorgenson, Jennie; Hodge, Bri-Mathias [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]

    2016-01-29

    Transmission and distribution simulations have historically been conducted separately, echoing their division in grid operations and planning while avoiding inherent computational challenges. Today, however, rapid growth in distributed energy resources (DERs)--including distributed generation from solar photovoltaics (DGPV)--requires understanding the unprecedented interactions between distribution and transmission. To capture these interactions, especially for high-penetration DGPV scenarios, this research project developed a first-of-its-kind, high performance computer (HPC) based, integrated transmission-distribution tool, the Integrated Grid Modeling System (IGMS). The tool was then used in initial explorations of system-wide operational interactions of high-penetration DGPV.

  20. Nuclear performance and reliability

    International Nuclear Information System (INIS)

    Rothwell, G.

    1993-01-01

    There has been a significant improvement in nuclear power plant performance, due largely to a decline in the forced outage rate and a dramatic drop in the average number of forced outages per fuel cycle. If fewer forced outages are a sign of improved safety, nuclear power plants have become safer and more productive over time. To encourage further increases in performance, regulatory incentive schemes should reward reactor operators for improved reliability and safety, as well as for improved performance

  1. [How Reliable is Neuronavigation?].

    Science.gov (United States)

    Stieglitz, Lennart Henning

    2016-02-17

    Neuronavigation plays a central role in modern neurosurgery. It allows instruments and three-dimensional image data to be visualized intraoperatively and supports spatial orientation, thereby reducing surgical risks and speeding up complex surgical procedures. The growing availability and importance of neuronavigation make clear how relevant it is to know about its reliability and accuracy. Different factors may influence the accuracy during surgery unnoticed, misleading the surgeon. Besides the best possible optimization of the systems themselves, a good knowledge of their weaknesses is mandatory for every neurosurgeon.

  2. The value of reliability

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Karlström, Anders

    2010-01-01

    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless...... of the form of the standardised distribution of trip durations. This insight provides a unification of the scheduling model and models that include the standard deviation of trip duration directly as an argument in the cost or utility function. The results generalise approximately to the case where the mean...
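
    In the notation commonly used for this class of scheduling models (the symbols below are generic and not necessarily those of the paper), the result can be summarized as

      \max_{d}\, E\big[U(d, T)\big] \;=\; \alpha + \beta\,\mu_T + \gamma\,\sigma_T ,

    where T is the random trip duration with mean \mu_T and standard deviation \sigma_T, d is the chosen departure time, and \alpha, \beta, \gamma are constants fixed by the scheduling preferences and by the standardized shape of the duration distribution; the term \gamma\,\sigma_T is then read as the cost of unreliability.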

  3. Product reliability and thin-film photovoltaics

    Science.gov (United States)

    Gaston, Ryan; Feist, Rebekah; Yeung, Simon; Hus, Mike; Bernius, Mark; Langlois, Marc; Bury, Scott; Granata, Jennifer; Quintana, Michael; Carlson, Carl; Sarakakis, Georgios; Ogden, Douglas; Mettas, Adamantios

    2009-08-01

    Despite significant growth in photovoltaics (PV) over the last few years, only approximately 1.07 billion kWhr of electricity is estimated to have been generated from PV in the US during 2008, or 0.27% of total electrical generation. PV market penetration is set for a paradigm shift, as fluctuating hydrocarbon prices and an acknowledgement of the environmental impacts associated with their use, combined with breakthrough new PV technologies, such as thin-film and BIPV, are driving the cost of energy generated with PV to parity or cost advantage versus more traditional forms of energy generation. In addition to reaching cost parity with grid supplied power, a key to the long-term success of PV as a viable energy alternative is the reliability of systems in the field. New technologies may or may not have the same failure modes as previous technologies. Reliability testing and product lifetime issues continue to be one of the key bottlenecks in the rapid commercialization of PV technologies today. In this paper, we highlight the critical need for moving away from relying on traditional qualification and safety tests as a measure of reliability and focus instead on designing for reliability and its integration into the product development process. A drive towards quantitative predictive accelerated testing is emphasized and an industrial collaboration model addressing reliability challenges is proposed.
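
    One building block of the quantitative predictive accelerated testing called for here is an acceleration factor that maps chamber hours onto field time. A minimal Arrhenius-style sketch follows; the activation energy and temperatures are illustrative assumptions, not values from the paper.

      import math

      K_B = 8.617e-5  # Boltzmann constant in eV/K

      def arrhenius_af(e_a_ev, t_use_c, t_stress_c):
          """Acceleration factor between a field (use) temperature and a chamber (stress) temperature."""
          t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
          return math.exp((e_a_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

      # Illustrative assumptions: 0.7 eV activation energy, 45 degC field, 85 degC chamber.
      af = arrhenius_af(0.7, 45.0, 85.0)
      print(f"acceleration factor = {af:.0f}; 1000 chamber hours = {1000.0 * af / 8760.0:.1f} field years")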

  4. On the reliability of seasonal climate forecasts

    Science.gov (United States)

    Weisheimer, A.; Palmer, T. N.

    2014-01-01

    Seasonal climate forecasts are being used increasingly across a range of application sectors. A recent UK governmental report asked: how good are seasonal forecasts on a scale of 1–5 (where 5 is very good), and how good can we expect them to be in 30 years' time? Seasonal forecasts are made from ensembles of integrations of numerical models of climate. We argue that ‘goodness’ should be assessed first and foremost in terms of the probabilistic reliability of these ensemble-based forecasts; reliable inputs are essential for any forecast-based decision-making. We propose that a ‘5’ should be reserved for systems that are not only reliable overall, but where, in particular, small ensemble spread is a reliable indicator of low ensemble forecast error. We study the reliability of regional temperature and precipitation forecasts of the current operational seasonal forecast system of the European Centre for Medium-Range Weather Forecasts, universally regarded as one of the world-leading operational institutes producing seasonal climate forecasts. A wide range of ‘goodness’ rankings, depending on region and variable (with summer forecasts of rainfall over Northern Europe performing exceptionally poorly) is found. Finally, we discuss the prospects of reaching ‘5’ across all regions and variables in 30 years' time. PMID:24789559
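
    Probabilistic reliability in this sense is commonly checked with a reliability (calibration) diagram, comparing binned forecast probabilities with observed frequencies. The sketch below uses synthetic data, not ECMWF output.

      import random

      random.seed(1)

      # Synthetic probabilistic forecasts of a binary event (e.g. above-normal seasonal rainfall)
      # with the observed outcome; roughly calibrated by construction. Not real forecast data.
      events = []
      for _ in range(500):
          p_true = random.random()
          forecast_p = min(max(p_true + random.gauss(0.0, 0.1), 0.0), 1.0)
          outcome = 1 if random.random() < p_true else 0
          events.append((forecast_p, outcome))

      # Reliability diagram data: mean forecast probability vs. observed frequency per bin.
      n_bins = 5
      for b in range(n_bins):
          lo, hi = b / n_bins, (b + 1) / n_bins
          in_bin = [(p, o) for p, o in events if lo <= p < hi or (b == n_bins - 1 and p == 1.0)]
          if in_bin:
              mean_p = sum(p for p, _ in in_bin) / len(in_bin)
              obs_freq = sum(o for _, o in in_bin) / len(in_bin)
              print(f"bin [{lo:.1f}, {hi:.1f}): forecast {mean_p:.2f}  observed {obs_freq:.2f}  (n={len(in_bin)})")
      # A reliable system has observed frequencies close to the mean forecast probability in every bin.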

  5. Reliability demonstration test planning using bayesian analysis

    International Nuclear Information System (INIS)

    Chandran, Senthil Kumar; Arul, John A.

    2003-01-01

    In nuclear power plants, the reliability of all the safety systems is very critical from the safety viewpoint and it is essential that the required reliability be met while satisfying the design constraints. From practical experience, it is found that the reliability of complex systems such as the Safety Rod Drive Mechanism is of the order of 10^-4 with an uncertainty factor of 10. Demonstrating the reliability of such systems is prohibitive in terms of cost and time, as the number of tests needed is very large. The purpose of this paper is to develop a Bayesian reliability demonstration testing procedure for exponentially distributed failure times with a gamma prior distribution on the failure rate, which can be easily and effectively used to demonstrate component/subsystem/system reliability conformance to stated requirements. The important questions addressed in this paper are: with zero failures, how long should one perform the tests and how many components are required to conclude, with a given degree of confidence, that the component under test meets the reliability requirement. The procedure is explained with an example and can also be extended to demonstrations involving a larger number of failures. The approach presented is applicable for deriving test plans for demonstrating component failure rates of nuclear power plants, as failure data for similar components are becoming available from existing plants elsewhere. The advantages of this procedure are that the criterion upon which the procedure is based is simple and pertinent, that the fitting of the prior distribution is an integral part of the procedure and is based on the use of information regarding two percentiles of this distribution, and finally that the procedure is straightforward and easy to apply in practice. (author)
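
    A minimal numerical sketch of the zero-failure case follows; the prior parameters, required failure rate and confidence level are illustrative assumptions, not values from the paper. With exponentially distributed failure times and a gamma prior on the failure rate, the posterior after T failure-free unit-hours is again gamma, and the required T is the smallest value at which the posterior probability of meeting the requirement reaches the stated confidence.

      import math

      # Illustrative assumptions: gamma prior with shape a0 = 1 (an exponential prior on the
      # failure rate) and rate b0 expressed as unit-hours of prior "pseudo test time".
      a0, b0 = 1.0, 2000.0
      lam_req = 1.0e-4        # required failure rate, per hour
      confidence = 0.90       # required posterior probability that lambda <= lam_req

      # With zero failures in T cumulative unit-hours, the posterior is Gamma(a0, b0 + T).
      # For a0 = 1 this is exponential, so the posterior probability has a closed form and the
      # required zero-failure test time can be solved directly.
      def posterior_prob(total_unit_hours):
          return 1.0 - math.exp(-(b0 + total_unit_hours) * lam_req)

      t_required = -math.log(1.0 - confidence) / lam_req - b0

      print(f"required zero-failure test time: {t_required:,.0f} unit-hours "
            f"({t_required / 8760:.1f} unit-years)")
      print(f"check: posterior P(lambda <= {lam_req:g}) = {posterior_prob(t_required):.3f}")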

  6. NASA Applications and Lessons Learned in Reliability Engineering

    Science.gov (United States)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of the case studies discussed are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbopump development, the impact of External Tank (ET) foam reliability on the Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  7. Load Control System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)

    2015-04-03

    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech April 2006. Follow-on DOE awards and expansions to the project scope occurred August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also consisted of matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: real-time power-system load control methodologies, and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  8. Microprocessor hardware reliability

    Energy Technology Data Exchange (ETDEWEB)

    Wright, R I

    1982-01-01

    Microprocessor-based technology has had an impact in nearly every area of industrial electronics and many applications have important safety implications. Microprocessors are being used for the monitoring and control of hazardous processes in the chemical, oil and power generation industries, for the control and instrumentation of aircraft and other transport systems and for the control of industrial machinery. Even in the field of nuclear reactor protection, where designers are particularly conservative, microprocessors are used to implement certain safety functions and may play increasingly important roles in protection systems in the future. Where microprocessors are simply replacing conventional hard-wired control and instrumentation systems no new hazards are created by their use. In the field of robotics, however, the microprocessor has opened up a totally new technology and with it has created possible new and as yet unknown hazards. The paper discusses some of the design and manufacturing techniques which may be used to enhance the reliability of microprocessor-based systems and examines the available reliability data on LSI/VLSI microcircuits. 12 references.

  9. HPC Institutional Computing Project: W15_lesreactiveflow KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Carrington, David Bradley [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Waters, Jiajia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-05

    KIVA-hpFE is a high performance computing software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing an MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and faster by many factors than our previous generation of parallel engine modeling software. The 5th generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations along with a two-equation k-ω Reynolds Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, this LES method does not require special hybrid or blending treatment near walls. The FEM projection method also uses a Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas associated with relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, used in fluid structure interaction problems, solidification, porous media

  10. Wind integration in Alberta

    International Nuclear Information System (INIS)

    Frost, W.

    2007-01-01

    This presentation described the role of the Alberta Electric System Operator (AESO) for Alberta's interconnected electric system, with particular reference to wind integration in Alberta. The challenges of wind integration were discussed along with the requirements for implementing the market and operational framework. The AESO is an independent system operator that directs the reliable operation of Alberta's power grid; develops and operates Alberta's real-time wholesale energy market to promote open competition; plans and develops the province's transmission system to ensure reliability; and provides transmission system access for both generation and load customers. Alberta has over 280 power generating stations, with a total generating capacity of 11,742 MW, of which 443 MW is wind generation. Since 2004, the AESO has been working with industry on wind integration issues, such as operating limits, the need for mitigation measures, and market rules. In April 2006, the AESO implemented a temporary 900 MW reliability threshold to ensure reliability. In 2006, a Wind Forecasting Working Group was created in collaboration with industry and the Canadian Wind Energy Association in an effort to integrate as much wind as is feasible without compromising the system's reliability or the competitive operation of the market. The challenges facing wind integration include reliability issues; the predictability of wind power; the need for dispatchable generation; transmission upgrades; and defining a market and operational framework for the large wind potential in Alberta. It was noted that 1,400 MW of installed wind energy capacity can be accommodated in Alberta with approved transmission upgrades.

  11. OSS reliability measurement and assessment

    CERN Document Server

    Yamada, Shigeru

    2016-01-01

    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
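
    As a pointer to what the NHPP class of models looks like in practice, the sketch below evaluates the Goel-Okumoto mean value function, used here only as a representative example; the parameter values are illustrative and not fitted to any real OSS project.

      import math

      # Goel-Okumoto NHPP mean value function: m(t) = a * (1 - exp(-b t)).
      # a = expected total number of faults, b = fault detection rate per day (illustrative values).
      a, b = 120.0, 0.025

      def m(t_days):
          return a * (1.0 - math.exp(-b * t_days))

      def conditional_reliability(t_days, x_days):
          """Probability of observing no failure in (t, t + x], given testing up to time t."""
          return math.exp(-(m(t_days + x_days) - m(t_days)))

      t = 90.0  # days of testing so far
      print(f"faults expected by day {t:.0f}: {m(t):.1f} of about {a:.0f}")
      print(f"reliability over the next day: {conditional_reliability(t, 1.0):.3f}")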

  12. Transit ridership, reliability, and retention.

    Science.gov (United States)

    2008-10-01

    This project explores two major components that affect transit ridership: travel time reliability and rider retention. It has been recognized that transit travel time reliability may have a significant impact on attractiveness of transit to many ...

  13. Travel reliability inventory for Chicago.

    Science.gov (United States)

    2013-04-01

    The overarching goal of this research project is to enable state DOTs to document and monitor the reliability performance of their highway networks. To this end, a computer tool, TRIC, was developed to produce travel reliability inventories from ...

  14. STARS software tool for analysis of reliability and safety

    International Nuclear Information System (INIS)

    Poucet, A.; Guagnini, E.

    1989-01-01

    This paper reports on the STARS (Software Tool for the Analysis of Reliability and Safety) project, which aims at developing an integrated set of computer-aided reliability analysis tools for the various tasks involved in systems safety and reliability analysis, including hazard identification, qualitative analysis, and logic model construction and evaluation. Expert system technology offers the most promising perspective for developing a computer-aided reliability analysis tool. Combined with graphics and analysis capabilities, it can provide a natural, engineering-oriented environment for computer-assisted reliability and safety modelling and analysis. For hazard identification and fault tree construction, a frame/rule-based expert system is used, in which the deductive (goal-driven) reasoning and the heuristics applied during manual fault tree construction are modelled. Expert systems can explain their reasoning so that the analyst can become aware of why and how results are being obtained. Hence, the learning aspect involved in manual reliability and safety analysis can be maintained and improved
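
    The logic-model evaluation step can be pictured with a toy fault tree; the events, probabilities and gate structure below are invented for illustration and are not part of STARS, which is an expert-system environment rather than a script.

      # Toy fault tree: TOP = (pump_fails OR valve_stuck) AND power_lost  (purely illustrative)
      p = {"pump_fails": 0.02, "valve_stuck": 0.01, "power_lost": 0.005}

      def p_or(*probs):
          """OR gate for independent basic events: 1 - product of (1 - p_i)."""
          out = 1.0
          for q in probs:
              out *= (1.0 - q)
          return 1.0 - out

      def p_and(*probs):
          """AND gate for independent basic events: product of p_i."""
          out = 1.0
          for q in probs:
              out *= q
          return out

      flow_blocked = p_or(p["pump_fails"], p["valve_stuck"])
      top = p_and(flow_blocked, p["power_lost"])
      print(f"P(flow blocked) = {flow_blocked:.4f},  P(top event) = {top:.6f}")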

  15. 2017 NREL Photovoltaic Reliability Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-15

    NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.

  16. AECL's reliability and maintainability program

    International Nuclear Information System (INIS)

    Wolfe, W.A.; Nieuwhof, G.W.E.

    1976-05-01

    AECL's reliability and maintainability program for nuclear generating stations is described. How the various resources of the company are organized to design and construct stations that operate reliably and safely is shown. Reliability and maintainability includes not only special mathematically oriented techniques, but also the technical skills and organizational abilities of the company. (author)

  17. Procedures for controlling the risks of reliability, safety, and availability of technical systems

    International Nuclear Information System (INIS)

    1987-01-01

    The reference book covers four sections. Apart from the fundamental aspects of the reliability problem, of risk and safety and the relevant criteria with regard to reliability, the material presented explains reliability in terms of maintenance, logistics and availability, and presents procedures for reliability assessment and determination of factors influencing the reliability, together with suggestions for systems technical integration. The reliability assessment consists of diagnostic and prognostic analyses. The section on factors influencing reliability discusses aspects of organisational structures, programme planning and control, and critical activities. (DG) [de]

  18. Business of reliability

    Science.gov (United States)

    Engel, Pierre

    1999-12-01

    The presentation is organized around three themes: (1) The decrease of reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved for the scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for the return on investment does conflict with traditional low-volume data use for most applications. Constant access to data sources supposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low-cost equipment is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value-added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

  19. Systems Integration Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-06-01

    This fact sheet is an overview of the Systems Integration subprogram at the U.S. Department of Energy SunShot Initiative. The Systems Integration subprogram enables the widespread deployment of safe, reliable, and cost-effective solar energy technologies by addressing the associated technical and non-technical challenges. These include timely and cost-effective interconnection procedures, optimal system planning, accurate prediction of solar resources, monitoring and control of solar power, maintaining grid reliability and stability, and many more. To address the challenges associated with interconnecting and integrating hundreds of gigawatts of solar power onto the electricity grid, the Systems Integration program funds research, development, and demonstration projects in four broad, interrelated focus areas: grid performance and reliability, dispatchability, power electronics, and communications.

  20. From the X-rays to a reliable “low cost” computational structure of caffeic acid: DFT, MP2, HF and integrated molecular dynamics-X-ray diffraction approach to condensed phases

    Science.gov (United States)

    Lombardo, Giuseppe M.; Portalone, Gustavo; Colapietro, Marcello; Rescifina, Antonio; Punzo, Francesco

    2011-05-01

    The ability of caffeic acid to act as an antioxidant against hyperoxo-radicals, as well as its recently found therapeutic properties in the treatment of hepatocarcinoma, still makes this compound, more than 20 years after the refinement of its crystal structure, an object of study. It belongs to the vast family of humic substances, which play a key role in biodegradation processes and easily form complexes with ions widely diffused in the environment. This class of compounds is therefore interesting for potential environmental chemistry applications concerning the possible complexation of heavy metals. Our study focused on the characterization of caffeic acid as a necessary starting step, which will be followed in the future by the application of our findings to the study of the interaction of the caffeate anion with heavy metal ions. To reach this goal, we applied a low-cost approach, in terms of computational time and resources, aimed at the achievement of a high-resolution, robust and trustworthy structure, using the X-ray single crystal data, recollected at higher resolution, as a touchstone for a detailed check. A comparison was performed between calculations carried out with density functional theory (DFT), the Hartree-Fock (HF) method, the post-SCF second-order Møller-Plesset perturbation method (MP2) at the 6-31G** level of theory, molecular mechanics (MM) and molecular dynamics (MD). As a consequence, we explained on one hand the possible reasons for the pitfalls of the DFT approach and on the other the benefits of using a good and robust force field developed for condensed phases, such as AMBER, with MM and MD. The reliability of the latter, highlighted by the overall agreement extending to the anisotropic displacement parameters calculated by means of MD and those gathered from the X-ray measurements, makes it very promising for the above-mentioned goals.