WorldWideScience

Sample records for integrated hpc reliability

  1. Modeling Energy & Reliability of a CNT based WSN on an HPC Setup

    Directory of Open Access Journals (Sweden)

    Rohit Pathak

    2010-07-01

    We have analyzed the effect of innovations in nanotechnology on Wireless Sensor Networks (WSN) and have modeled Carbon Nanotube (CNT) based sensor nodes from a device perspective. A WSN model has been programmed in Simulink/MATLAB and a library has been developed. Integration of CNTs into WSN modules such as sensors, microprocessors and batteries has been shown. The average energy consumption of the system has also been formulated and its reliability assessed holistically. A proposition has been put forward on the changes needed in the existing sensor node structure to improve its efficiency and to facilitate and enhance the assimilation of CNT-based devices in a WSN. Finally, we comment on the challenges that exist in this technology and describe the important factors that need to be considered when calculating reliability. This research will help in the practical implementation of CNT-based devices and the analysis of their key effects on the WSN environment. The work has been executed in Simulink and the Distributed Computing Toolbox of MATLAB. The proposal has been compared to recent developments and past experimental results reported in this field. This attempt to derive the energy consumption and reliability implications will help in the development of real devices using CNTs, a major hurdle in bringing this technology from the lab to the commercial market. Recent research on CNTs has been used to build an energy-efficient model that will also support the development of CAD tools. The library for reliability and energy consumption includes analysis of the various parts of a WSN system constructed from CNTs. Nano routing in a CNT system is also implemented with its dependencies. Finally, the computations were executed on an HPC setup and the model showed remarkable speedup.
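
    The original energy and reliability formulation was implemented in Simulink/MATLAB and is not reproduced in this record. Purely as an illustration of the kind of calculation described, the following Python sketch treats a sensor node's energy per duty cycle as the sum of sensing, processing and radio terms, and its reliability as a series combination of exponentially distributed module lifetimes; every parameter value and the module breakdown are assumptions for the example, not figures from the paper.

```python
import math

# Illustrative sketch (not the paper's Simulink model): energy and reliability
# of a single sensor node whose modules (sensor, CPU, radio, battery) are
# treated as a series system with constant failure rates.

# Assumed per-operation energies in joules (placeholder values).
E_SENSE = 50e-9      # one sensing operation
E_PROC  = 5e-9       # processing one bit
E_ELEC  = 50e-9      # radio electronics per bit
EPS_AMP = 100e-12    # amplifier energy per bit per m^2 (free-space model)

def node_energy(bits, distance_m, samples):
    """Energy for one duty cycle: sense, process, and transmit over distance_m."""
    e_sense = samples * E_SENSE
    e_proc = bits * E_PROC
    e_tx = bits * (E_ELEC + EPS_AMP * distance_m ** 2)  # first-order radio model
    return e_sense + e_proc + e_tx

def node_reliability(t_hours, failure_rates_per_hour):
    """Series-system reliability: product of exponential module reliabilities."""
    return math.prod(math.exp(-lam * t_hours) for lam in failure_rates_per_hour)

if __name__ == "__main__":
    e = node_energy(bits=4000, distance_m=30.0, samples=10)
    # Assumed module failure rates (per hour) for sensor, CPU, radio, battery.
    r = node_reliability(t_hours=8760, failure_rates_per_hour=[1e-6, 2e-6, 5e-6, 1e-5])
    print(f"energy per cycle: {e:.3e} J, one-year reliability: {r:.4f}")
```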

  2. ATLAS computing on CSCS HPC

    Science.gov (United States)

    Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.

    2015-12-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.

  3. ATLAS computing on CSCS HPC

    CERN Document Server

    Hostettler, Michael Artur; The ATLAS collaboration; Haug, Sigve; Walker, Rodney; Weber, Michele

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations has been implemented to justify the allocation request for this machine.

  4. ATLAS computing on CSCS HPC

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Weber, Michele; Walker, Rodney; Hostettler, Michael Artur

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Further, some GPU acceleration of the Geant4 detector simulations was implemented to justify the allocation request for this machine.

  5. HPC's Pivot to Data

    Energy Technology Data Exchange (ETDEWEB)

    Parete-Koon, Suzanne [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Caldwell, Blake A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Canon, Richard Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Sciences Network (ESnet); Hick, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Hill, Jason J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Layton, Chris [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Pelfrey, Daniel S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Shipman, Galen M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Nam, Hai Ah [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Zurawski, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Sciences Network (ESnet)

    2014-05-03

    Computer centers such as NERSC and OLCF have traditionally focused on delivering computational capability that enables breakthrough innovation in a wide range of science domains. Accessing that computational power has required services and tools to move the data from input and output to computation and storage. A "pivot to data" is occurring in HPC. Data transfer tools and services that were previously peripheral are becoming integral to scientific workflows. Emerging requirements from high-bandwidth detectors, high-throughput screening techniques, highly concurrent simulations, increased focus on uncertainty quantification, and an emerging open-data policy posture toward published research are among the data-drivers shaping the networks, file systems, databases, and overall compute and data environment. In this paper we explain the pivot to data in HPC through user requirements and the changing resources provided by HPC with particular focus on data movement. For WAN data transfers we present the results of a study of network performance between centers.

  6. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    Mercier, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier

    2017-01-01

    Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of the differences in their core concepts. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  7. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    OpenAIRE

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-01-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was f...

  8. Simplifying the Access to HPC Resources by Integrating them in the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-06-22

    The computing landscape of KAUST is increasing in complexity. Researchers have access to the 9th fastest supercomputer in the world (Shaheen II) and several other HPC clusters. They work on local Windows, Mac, or Linux workstations. In order to facilitate access to the HPC systems, we have developed interfaces to several research applications that automate input data transfer, job submission and retrieval of results. Users now submit their jobs to the cluster from within the application GUI on their workstation and no longer need to log in to the cluster directly.
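
    The abstract describes GUI-integrated automation of input transfer, job submission and result retrieval but does not publish the implementation. The sketch below only illustrates that general pattern for a hypothetical SLURM cluster reachable over SSH; the host name, remote directory and use of sbatch/scp are assumptions, not details from the KAUST interfaces.

```python
import subprocess
from pathlib import Path

# Hypothetical cluster settings; the scheduler (SLURM) and host are assumptions.
HOST = "user@hpc.example.org"
REMOTE_DIR = "/scratch/user/job001"

def run(cmd):
    """Run a local command, raising on failure, and return its stdout."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def submit(input_file: Path, job_script: Path) -> str:
    """Copy inputs, submit the batch job, and return the scheduler job id."""
    run(["ssh", HOST, f"mkdir -p {REMOTE_DIR}"])
    run(["scp", str(input_file), str(job_script), f"{HOST}:{REMOTE_DIR}/"])
    out = run(["ssh", HOST, f"cd {REMOTE_DIR} && sbatch {job_script.name}"])
    return out.strip().split()[-1]          # "Submitted batch job <id>"

def fetch_results(local_dir: Path):
    """Retrieve whatever the job wrote back to the remote work directory."""
    local_dir.mkdir(parents=True, exist_ok=True)
    run(["scp", "-r", f"{HOST}:{REMOTE_DIR}/results", str(local_dir)])

# A GUI front end would call submit() on a button press, poll the job status
# with squeue/sacct, and call fetch_results() once the job has finished.
```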

  9. HPC: Rent or Buy

    Science.gov (United States)

    Fredette, Michelle

    2012-01-01

    "Rent or buy?" is a question people ask about everything from housing to textbooks. It is also a question universities must consider when it comes to high-performance computing (HPC). With the advent of Amazon's Elastic Compute Cloud (EC2), Microsoft Windows HPC Server, Rackspace's OpenStack, and other cloud-based services, researchers now have…

  10. Leveraging HPC resources for High Energy Physics

    International Nuclear Information System (INIS)

    O'Brien, B; Washbrook, A; Walker, R

    2014-01-01

    High Performance Computing (HPC) supercomputers provide unprecedented computing power for a diverse range of scientific applications. The most powerful supercomputers now deliver petaflop peak performance with the expectation of 'exascale' technologies available in the next five years. More recent HPC facilities use x86-based architectures managed by Linux-based operating systems which could potentially allow unmodified HEP software to be run on supercomputers. There is now a renewed interest from both the LHC experiments and the HPC community to accommodate data analysis and event simulation production on HPC facilities. This study provides an outline of the challenges faced when incorporating HPC resources for HEP software by using the HECToR supercomputer as a demonstrator.

  11. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Gamel, Anton Josef; The ATLAS collaboration

    2017-01-01

    The shared HPC cluster NEMO at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. This concept makes it possible to run both data analysis and production on the HPC host system, which is connected to the existing Tier2/Tier3 infrastructure. Schedulers of the two clusters were integrated in a dynamic, on-demand way. An automatically generated, fully functional virtual machine image provides access to the local user environment. The performance in the virtualized environment is evaluated for typical High-Energy Physics applications.

  12. HPC Annual Report 2017

    Energy Technology Data Exchange (ETDEWEB)

    Dennig, Yasmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-10-01

    Sandia National Laboratories has a long history of significant contributions to the high performance community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier—propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.

  13. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Schnoor, Ulrike; The ATLAS collaboration

    2017-01-01

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. The talk describes the concept and implementation of virtualizing the ATLAS software environment to run both data analysis and production on the HPC host system which is connected to the existing Tier-3 infrastructure. Main challenges include the integration into the NEMO and Tier-3 schedulers in a dynamic, on-demand way, the scalability of the OpenStack infrastructure, as well as the automatic generation of a fully functional virtual machine image providing access to the local user environment, the dCache storage element and the parallel file sys...

  14. Bringing ATLAS production to HPC resources. A case study with SuperMuc and Hydra

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter; Walker, Rodney [LMU Muenchen (Germany); Kennedy, John; Mazzaferro, Luca [RZG Garching (Germany); Kluth, Stefan [Max-Planck-Institut fuer Physik, Muenchen (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    The possible usage of Supercomputer systems or HPC resources by ATLAS is now becoming viable due to the changing nature of these systems and it is also very attractive due to the need for increasing amounts of simulated data. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. The corresponding need for simulated data might potentially exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This contribution presents the results of two projects undertaken by LMU/LRZ and MPP/RZG to use the supercomputer facilities SuperMuc (LRZ) and Hydra (RZG). Both are Linux based supercomputers in the 100 k CPU-core category. The integration of such HPC resources into the ATLAS production system poses many challenges. Firstly, established techniques and features of standard WLCG operation are prohibited or much restricted on HPC systems, e.g. Grid middleware, software installation, outside connectivity, etc. Secondly, efficient use of available resources requires massive multi-core jobs, back-fill submission and check-pointing. We discuss the customization of these components and the strategies for HPC usage as well as possibilities for future directions.

  15. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov (United States)

    The system comprises a head node, a login node (WinHPC02) and worker/compute nodes, and runs Windows Server 2008 R2 HPC Edition. The head node acts as the file, DNS, and license server. Node 03 has dual Intel Xeon E5530 processors. The login node, WinHPC02, is where users log in to access the system.

  16. HPC Test Results Analysis with Splunk

    Energy Technology Data Exchange (ETDEWEB)

    Green, Jennifer Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-04-21

    This PowerPoint presentation details Los Alamos National Laboratory’s (LANL) outstanding computing division. LANL’s high performance computing (HPC) aims at having the first platform large and fast enough to accommodate resolved 3D calculations for full scale end-to-end calculations. Strategies for managing LANL’s HPC division are also discussed.

  17. Integrated analysis of hematopoietic differentiation outcomes and molecular characterization reveals unbiased differentiation capacity and minor transcriptional memory in HPC/HSC-iPSCs.

    Science.gov (United States)

    Gao, Shuai; Hou, Xinfeng; Jiang, Yonghua; Xu, Zijian; Cai, Tao; Chen, Jiajie; Chang, Gang

    2017-01-23

    Transcription factor-mediated reprogramming can reset the epigenetics of somatic cells into a pluripotency compatible state. Recent studies show that induced pluripotent stem cells (iPSCs) always inherit starting cell-specific characteristics, called epigenetic memory, which may be advantageous, as directed differentiation into specific cell types is still challenging; however, it also may be unpredictable when uncontrollable differentiation occurs. In consideration of biosafety in disease modeling and personalized medicine, the availability of high-quality iPSCs which lack a biased differentiation capacity and somatic memory could be indispensable. Herein, we evaluate the hematopoietic differentiation capacity and somatic memory state of hematopoietic progenitor and stem cell (HPC/HSC)-derived-iPSCs (HPC/HSC-iPSCs) using a previously established sequential reprogramming system. We found that HPC/HSCs are amenable to being reprogrammed into iPSCs with unbiased differentiation capacity to hematopoietic progenitors and mature hematopoietic cells. Genome-wide analyses revealed that no global epigenetic memory was detectable in HPC/HSC-iPSCs, but only a minor transcriptional memory of HPC/HSCs existed in a specific tetraploid complementation (4 N)-incompetent HPC/HSC-iPSC line. However, the observed minor transcriptional memory had no influence on the hematopoietic differentiation capacity, indicating the reprogramming of the HPC/HSCs was nearly complete. Further analysis revealed the correlation of minor transcriptional memory with the aberrant distribution of H3K27me3. This work provides a comprehensive framework for obtaining high-quality iPSCs from HPC/HSCs with unbiased hematopoietic differentiation capacity and minor transcriptional memory.

  18. ATLAS utilisation of the Czech national HPC center

    CERN Document Server

    Svatos, Michal; The ATLAS collaboration

    2018-01-01

    The Czech national HPC center IT4Innovations located in Ostrava provides two HPC systems, Anselm and Salomon. The Salomon HPC has been amongst the hundred most powerful supercomputers on Earth since its commissioning in 2015. Both clusters were tested for use by the ATLAS experiment for running simulation jobs. Several thousand core hours were allocated to the project for tests, but the main aim is to use free resources waiting for large parallel jobs of other users. Multiple strategies for ATLAS job execution were tested on the Salomon and Anselm HPCs. The solution described herein is based on the ATLAS experience with other HPC sites. An ARC Compute Element (ARC-CE) installed at the grid site in Prague is used for job submission to Salomon. The ATLAS production system submits jobs to the ARC-CE via ARC Control Tower (aCT). The ARC-CE processes job requirements from aCT and creates a script for a batch system which is then executed via ssh. Sshfs is used to share scripts and input files between the site and the HPC...

  19. WinHPC System Policies | High-Performance Computing | NREL

    Science.gov (United States)

    The WinHPC login node (WinHPC02) is intended to allow users with approved access to connect to the cluster; applications may also be run from the login node. Note that there is a single login node for this system.

  20. The clinical phenotype of hereditary versus sporadic prostate cancer: HPC definition revisited

    NARCIS (Netherlands)

    Cremers, R.G.H.M.; Aben, K.K.H.; Oort, I.M. van; Sedelaar, J.P.M.; Vasen, H.F.A.; Vermeulen, S.H.; Kiemeney, L.A.L.M.

    2016-01-01

    BACKGROUND: The definition of hereditary prostate cancer (HPC) is based on family history and age at onset. Intuitively, HPC is a serious subtype of prostate cancer but there are only limited data on the clinical phenotype of HPC. Here, we aimed to compare the prognosis of HPC to the sporadic form

  1. Modular HPC I/O characterization with Darshan

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, Shane; Carns, Philip; Harms, Kevin; Ross, Robert; Lockwood, Glenn K.; Wright, Nicholas J.

    2016-11-13

    Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem, providing a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.

  2. Programming Models in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Shipman, Galen M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-13

    These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.
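
    Since the slides list MPI among the covered programming models, a minimal single-program-multiple-data illustration is sketched below using the mpi4py bindings (an assumption for the example; the slides themselves are not tied to Python). Each rank integrates a slice of a simple quadrature and the pieces are combined with a collective reduction.

```python
# Minimal SPMD example of the MPI programming model using mpi4py.
# Run with e.g.:  mpirun -np 4 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's id
size = comm.Get_size()      # total number of processes

# Each rank integrates a strided slice of 4/(1+x^2) on [0,1]; the sum is ~pi.
n = 1_000_000
local = sum(4.0 / (1.0 + ((i + 0.5) / n) ** 2) for i in range(rank, n, size)) / n

pi = comm.reduce(local, op=MPI.SUM, root=0)   # collective reduction to rank 0
if rank == 0:
    print(f"pi ~= {pi:.6f} computed by {size} ranks")
```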

  3. Big Data and HPC: A Happy Marriage

    KAUST Repository

    Mehmood, Rashid

    2016-01-25

    International Data Corporation (IDC) defines Big Data technologies as “a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data produced every day, by enabling high velocity capture, discovery, and/or analysis”. High Performance Computing (HPC) most generally refers to “the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business”. Big data platforms are built primarily considering the economics and capacity of the system for dealing with the 4V characteristics of data. HPC traditionally has been more focussed on the speed of digesting (computing) the data. For these reasons, the two domains (HPC and Big Data) have developed their own paradigms and technologies. However, recently, these two have grown fond of each other. HPC technologies are needed by Big Data to deal with the ever increasing Vs of data in order to forecast and extract insights from existing and new domains, faster, and with greater accuracy. Increasingly more data is being produced by scientific experiments from areas such as bioscience, physics, and climate, and therefore, HPC needs to adopt data-driven paradigms. Moreover, there are synergies between them with unimaginable potential for developing new computing paradigms, solving long-standing grand challenges, and making new explorations and discoveries. Therefore, they must get married to each other. In this talk, we will trace the HPC and big data landscapes through time including their respective technologies, paradigms and major applications areas. Subsequently, we will present the factors that are driving the convergence of the two technologies, the synergies between them, as well as the benefits of their convergence to the biosciences field. The opportunities and challenges of the

  4. Interactive reliability assessment using an integrated reliability data bank

    International Nuclear Information System (INIS)

    Allan, R.N.; Whitehead, A.M.

    1986-01-01

    The logical structure, techniques and practical application of a computer-aided technique based on a microcomputer using floppy disc Random Access Files is described. This interactive computational technique is efficient if the reliability prediction program is coupled directly to a relevant source of data to create an integrated reliability assessment/reliability data bank system. (DG)

  5. COMPOSE-HPC: A Transformational Approach to Exascale

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E [ORNL; Allan, Benjamin A. [Sandia National Laboratories (SNL); Armstrong, Robert C. [Sandia National Laboratories (SNL); Chavarria-Miranda, Daniel [Pacific Northwest National Laboratory (PNNL); Dahlgren, Tamara L. [Lawrence Livermore National Laboratory (LLNL); Elwasif, Wael R [ORNL; Epperly, Tom [Lawrence Livermore National Laboratory (LLNL); Foley, Samantha S [ORNL; Hulette, Geoffrey C. [Sandia National Laboratories (SNL); Krishnamoorthy, Sriram [Pacific Northwest National Laboratory (PNNL); Prantl, Adrian [Lawrence Livermore National Laboratory (LLNL); Panyala, Ajay [Louisiana State University; Sottile, Matthew [Galois, Inc.

    2012-04-01

    The goal of the COMPOSE-HPC project is to 'democratize' tools for automatic transformation of program source code so that it becomes tractable for the developers of scientific applications to create and use their own transformations reliably and safely. This paper describes our approach to this challenge, the creation of the KNOT tool chain, which includes tools for the creation of annotation languages to control the transformations (PAUL), to perform the transformations (ROTE), and optimization and code generation (BRAID), which can be used individually and in combination. We also provide examples of current and future uses of the KNOT tools, which include transforming code to use different programming models and environments, providing tests that can be used to detect errors in software or its execution, as well as composition of software written in different programming languages, or with different threading patterns.

  6. MARIANE: MApReduce Implementation Adapted for HPC Environments

    Energy Technology Data Exchange (ETDEWEB)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan; Ramakrishnan, Lavanya

    2011-07-06

    MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
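
    MARIANE's own code is not shown in this record. As a toy illustration of the MapReduce pattern over a globally shared POSIX file system (the setting the paper targets, as opposed to HDFS), the sketch below counts words across files with Python's multiprocessing; the input path and word-count workload are placeholders.

```python
from collections import Counter
from multiprocessing import Pool
from pathlib import Path

# Toy MapReduce-style word count over a shared POSIX file system, the kind of
# setting (NFS/GPFS) the abstract contrasts with HDFS. Not MARIANE itself.

def map_count(path: str) -> Counter:
    """Map phase: count words in one input split (here, one file)."""
    words = Path(path).read_text(errors="ignore").split()
    return Counter(w.lower() for w in words)

def reduce_counts(partials) -> Counter:
    """Reduce phase: merge the per-split counters."""
    total = Counter()
    for c in partials:
        total.update(c)
    return total

if __name__ == "__main__":
    # Input splits live on a globally shared file system, so every worker
    # can read them directly; no HDFS-style data staging is needed.
    splits = [str(p) for p in Path("/shared/dataset").glob("*.txt")]
    with Pool() as pool:
        totals = reduce_counts(pool.map(map_count, splits))
    print(totals.most_common(10))
```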

  7. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    International Nuclear Information System (INIS)

    Arezzini, S; Carboni, A; Caruso, G; Ciampa, A; Coscetti, S; Mazzoni, E; Piras, S

    2014-01-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and Storage access, but also for a more interactive use of the resources in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStat) implemented in multicore systems. In particular a POSIX file storage access integrated with standard SRM access is provided. Therefore the unified storage infrastructure is described, based on GPFS and Xrootd, used both for the SRM data repository and interactive POSIX access. Such a common infrastructure allows transparent access to the Tier2 data for users performing interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (National INFN Infrastructure) to extend site access and use to a geographically distributed community. This infrastructure also serves a national computing facility used by the INFN theoretical community, enabling a synergic use of computing and storage resources. Our centre, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now upgrading this facility to provide resources for all the intermediate-level HPC computing needs of the national INFN theoretical community.

  8. Reliability criteria selection for integrated resource planning

    International Nuclear Information System (INIS)

    Ruiu, D.; Ye, C.; Billinton, R.; Lakhanpal, D.

    1993-01-01

    A study was conducted on the selection of a generating system reliability criterion that ensures a reasonable continuity of supply while minimizing the total costs to utility customers. The study was conducted using the Institute for Electronic and Electrical Engineers (IEEE) reliability test system as the study system. The study inputs and results for conditions and load forecast data, new supply resources data, demand-side management resource data, resource planning criterion, criterion value selection, supply side development, integrated resource development, and best criterion values, are tabulated and discussed. Preliminary conclusions are drawn as follows. In the case of integrated resource planning, the selection of the best value for a given type of reliability criterion can be done using methods similar to those used for supply side planning. The reliability criteria values previously used for supply side planning may not be economically justified when integrated resource planning is used. Utilities may have to revise and adopt new, and perhaps lower supply reliability criteria for integrated resource planning. More complex reliability criteria, such as energy related indices, which take into account the magnitude, frequency and duration of the expected interruptions are better adapted than the simpler capacity-based reliability criteria such as loss of load expectation. 7 refs., 5 figs., 10 tabs
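
    As a concrete illustration of the capacity-based criteria mentioned above (not the IEEE reliability test system study itself), the sketch below computes a loss of load expectation by convolving two-state generating units into a capacity outage probability table and evaluating it against a series of daily peak loads; all unit sizes, forced outage rates and loads are invented.

```python
from collections import defaultdict

# Illustrative loss-of-load expectation (LOLE) calculation: two-state units are
# convolved into a capacity outage probability table (COPT), which is then
# evaluated against a series of daily peak loads. Data below are made up.

units = [(200, 0.05), (150, 0.04), (150, 0.04), (100, 0.02)]  # (MW, forced outage rate)
daily_peaks_mw = [420, 450, 480, 500, 460, 430, 410] * 52      # one year of daily peaks

def build_copt(units):
    """Probability distribution of total capacity on outage."""
    copt = defaultdict(float)
    copt[0.0] = 1.0
    for cap, q in units:
        nxt = defaultdict(float)
        for out, p in copt.items():
            nxt[out] += p * (1.0 - q)       # unit available
            nxt[out + cap] += p * q         # unit on forced outage
        copt = nxt
    return dict(copt)

def lole_days_per_year(units, peaks):
    total_cap = sum(c for c, _ in units)
    copt = build_copt(units)
    lole = 0.0
    for load in peaks:
        # Loss of load occurs when outage capacity exceeds the reserve margin.
        margin = total_cap - load
        lole += sum(p for out, p in copt.items() if out > margin)
    return lole

print(f"LOLE ~ {lole_days_per_year(units, daily_peaks_mw):.2f} days/year")
```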

  9. The Fifth Workshop on HPC Best Practices: File Systems and Archives

    Energy Technology Data Exchange (ETDEWEB)

    Hick, Jason; Hules, John; Uselton, Andrew

    2011-11-30

    The workshop on High Performance Computing (HPC) Best Practices on File Systems and Archives was the fifth in a series sponsored jointly by the Department Of Energy (DOE) Office of Science and DOE National Nuclear Security Administration. The workshop gathered technical and management experts for operations of HPC file systems and archives from around the world. Attendees identified and discussed best practices in use at their facilities, and documented findings for the DOE and HPC community in this report.

  10. Integrated reliability condition monitoring and maintenance of equipment

    CERN Document Server

    Osarenren, John

    2015-01-01

    Consider a viable and cost-effective platform for the Industries of the Future (IOF). Benefit from improved safety, performance, and product deliveries to your customers. Achieve a higher rate of equipment availability, performance, product quality, and reliability. Integrated Reliability: Condition Monitoring and Maintenance of Equipment incorporates reliability engineering and mathematical modeling to help you move toward sustainable development in reliability condition monitoring and maintenance. This text introduces a cost-effective integrated reliability growth monitor, integrated reliability degradation monitor, technological inheritance coefficient sensors, and a maintenance tool that supplies real-time information for predicting and preventing potential failures of manufacturing processes and equipment. The author highlights five key elements that are essential to any improvement program: improving overall equipment and part effectiveness, quality, and reliability; improving process performance with maint...

  11. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations
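
    CODES/ROSS performs optimistic parallel discrete-event simulation at flit-level detail; that engine is not reproduced here. The fragment below is only a minimal serial event-queue skeleton showing the discrete-event style such network models are built on, with a single link modeled as a FIFO server and arbitrary packet sizes and rates.

```python
import heapq
import itertools

# Minimal serial discrete-event skeleton in the style network simulators build
# on. It models one link as a FIFO server; this is *not* the optimistic
# parallel (ROSS) engine described in the abstract, just the core idea.

LINK_BW = 1e9          # bytes/s, arbitrary
events = []            # priority queue of (time, seq, kind, packet)
seq = itertools.count()
link_free_at = 0.0
completed = []

def schedule(t, kind, packet):
    heapq.heappush(events, (t, next(seq), kind, packet))

# Inject ten 1 MB packets, one every 5 ms.
for i in range(10):
    schedule(i * 0.005, "arrive", {"id": i, "size": 1_000_000})

while events:
    now, _, kind, pkt = heapq.heappop(events)
    if kind == "arrive":
        start = max(now, link_free_at)          # wait if the link is busy
        finish = start + pkt["size"] / LINK_BW
        link_free_at = finish
        schedule(finish, "depart", pkt)
    else:  # depart
        completed.append((pkt["id"], now))

for pid, t in completed:
    print(f"packet {pid} delivered at t = {t*1e3:.3f} ms")
```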

  12. Energy efficient HPC on embedded SoCs: optimization techniques for Mali GPU

    OpenAIRE

    Grasso, Ivan; Radojkovic, Petar; Rajovic, Nikola; Gelado Fernandez, Isaac; Ramírez Bellido, Alejandro

    2014-01-01

    A lot of effort from academia and industry has been invested in exploring the suitability of low-power embedded technologies for HPC. Although state-of-the-art embedded systems-on-chip (SoCs) inherently contain GPUs that could be used for HPC, their performance and energy capabilities have never been evaluated. Two reasons contribute to the above. Primarily, embedded GPUs until now, have not supported 64-bit floating point arithmetic - a requirement for HPC. Secondly, embedded GPUs did not pr...

  13. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The used prototype is based on a system-on-chip Samsung Exynos 5 with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.
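
    EUTERPE is a production gyrokinetic particle-in-cell code and its port is not shown in this record. For readers unfamiliar with the method, the sketch below is a generic one-dimensional electrostatic PIC step (cloud-in-cell deposition, FFT Poisson solve, leapfrog push) written with NumPy; it shares only the algorithmic skeleton, not EUTERPE's physics or its ARM-specific optimizations.

```python
import numpy as np

# Generic 1D electrostatic particle-in-cell step (not the EUTERPE code itself):
# cloud-in-cell charge deposition, FFT Poisson solve, field gather, leapfrog push.

L, ng, dt, eps0 = 1.0, 64, 1e-3, 1.0
dx = L / ng
rng = np.random.default_rng(0)

n_p = 10_000
x = rng.uniform(0.0, L, n_p)                 # particle positions
v = rng.normal(0.0, 0.1, n_p)                # particle velocities
q, m, w = -1.0, 1.0, L / n_p                 # charge, mass, statistical weight

def step(x, v):
    # 1) deposit charge density on the grid with linear (CIC) weights
    g = x / dx
    i = np.floor(g).astype(int) % ng
    f = g - np.floor(g)
    rho = np.zeros(ng)
    np.add.at(rho, i, q * w * (1 - f) / dx)
    np.add.at(rho, (i + 1) % ng, q * w * f / dx)
    rho -= rho.mean()                        # neutralizing background

    # 2) solve Poisson's equation in Fourier space, then E = -dphi/dx
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / (eps0 * k[1:] ** 2)
    E_grid = np.real(np.fft.ifft(-1j * k * phi_k))

    # 3) gather E at particle positions and push (leapfrog)
    E_p = E_grid[i] * (1 - f) + E_grid[(i + 1) % ng] * f
    v = v + (q / m) * E_p * dt
    x = (x + v * dt) % L
    return x, v

for _ in range(100):
    x, v = step(x, v)
print("mean kinetic energy:", 0.5 * m * np.mean(v ** 2))
```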

  14. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    Energy Technology Data Exchange (ETDEWEB)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro; Kuhn, Michael; Carns, Philip; Ludwig, Thomas

    2017-09-05

    The increasingly growing data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems are simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.

  15. End-to-end experiment management in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M [Los Alamos National Laboratory; Kroiss, Ryan R [Los Alamos National Laboratory; Torrez, Alfred [Los Alamos National Laboratory; Wingate, Meghan [Los Alamos National Laboratory

    2010-01-01

    Experiment management in any domain is challenging. There is a perpetual feedback loop cycling through planning, execution, measurement, and analysis. The lifetime of a particular experiment can be limited to a single cycle although many require myriad more cycles before definite results can be obtained. Within each cycle, a large number of subexperiments may be executed in order to measure the effects of one or more independent variables. Experiment management in high performance computing (HPC) follows this general pattern but also has three unique characteristics. One, computational science applications running on large supercomputers must deal with frequent platform failures which can interrupt, perturb, or terminate running experiments. Two, these applications typically integrate in parallel using MPI as their communication medium. Three, there is typically a scheduling system (e.g. Condor, Moab, SGE, etc.) acting as a gate-keeper for the HPC resources. In this paper, we introduce LANL Experiment Management (LEM), an experimental management framework simplifying all four phases of experiment management. LEM simplifies experiment planning by allowing the user to describe their experimental goals without having to fully construct the individual parameters for each task. To simplify execution, LEM dispatches the subexperiments itself thereby freeing the user from remembering the often arcane methods for interacting with the various scheduling systems. LEM provides transducers for experiments that automatically measure and record important information about each subexperiment; these transducers can easily be extended to collect additional measurements specific to each experiment. Finally, experiment analysis is simplified by providing a general database visualization framework that allows users to quickly and easily interact with their measured data.
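
    LEM itself is not specified in detail in this abstract. The sketch below only illustrates the planning and execution half of such a workflow: factor levels are declared once, expanded into sub-experiments, dispatched, and a simple measurement is recorded per run. The benchmark command and the recorded metrics are placeholders, not LEM's transducers, and a real tool would hand the runs to the batch scheduler rather than run them locally.

```python
import csv
import itertools
import subprocess
import time

# Illustrative experiment-sweep dispatcher in the spirit of the framework the
# abstract describes (declare goals, expand sub-experiments, dispatch, record).
# The target command and recorded metrics are placeholders.

factors = {
    "nprocs": [16, 32, 64],
    "block_size_kb": [64, 256, 1024],
}

def subexperiments(factors):
    """Expand declared factor levels into the full set of sub-experiments."""
    names = list(factors)
    for values in itertools.product(*factors.values()):
        yield dict(zip(names, values))

with open("results.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=list(factors) + ["seconds", "rc"])
    writer.writeheader()
    for params in subexperiments(factors):
        cmd = ["./io_benchmark",                      # hypothetical application
               f"--np={params['nprocs']}",
               f"--block={params['block_size_kb']}k"]
        t0 = time.time()
        rc = subprocess.run(cmd).returncode           # run (or submit) the case
        writer.writerow({**params,
                         "seconds": round(time.time() - t0, 3), "rc": rc})
```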

  16. Limits of reliability for the measurement of integral count

    International Nuclear Information System (INIS)

    Erbeszkorn, L.

    1979-01-01

    A method is presented for exact and approximate calculation of reliability limits of measured nuclear integral count. The formulae are applicable in measuring conditions which assure the Poisson distribution of the counts. The coefficients of the approximate formulae for 90, 95, 98 and 99 per cent reliability levels are given. The exact reliability limits for 90 per cent reliability level are calculated up to 80 integral counts. (R.J.)
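
    The paper's own coefficient tables are not reproduced here. As an independent illustration, the sketch below shows how such reliability (confidence) limits for a Poisson-distributed integral count are commonly computed: exact limits via the chi-square relation and approximate limits from the normal approximation, using SciPy.

```python
from scipy.stats import chi2, norm

# Confidence ("reliability") limits for a measured integral count n that is
# assumed to follow a Poisson distribution. Exact limits use the chi-square
# relation; the approximate limits use the normal approximation n +/- z*sqrt(n).

def exact_limits(n, level=0.90):
    alpha = 1.0 - level
    lower = 0.0 if n == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * n)
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (n + 1))
    return lower, upper

def approx_limits(n, level=0.90):
    z = norm.ppf(1 - (1 - level) / 2)
    return n - z * n ** 0.5, n + z * n ** 0.5

for n in (10, 80):
    for level in (0.90, 0.95, 0.98, 0.99):
        lo, hi = exact_limits(n, level)
        alo, ahi = approx_limits(n, level)
        print(f"n={n:3d} {int(level*100)}%: exact [{lo:6.2f}, {hi:6.2f}] "
              f"approx [{alo:6.2f}, {ahi:6.2f}]")
```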

  17. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  18. Trends in Data Locality Abstractions for HPC Systems

    KAUST Repository

    Unat, Didem; Dubey, Anshu; Hoefler, Torsten; Shalf, John; Abraham, Mark; Bianco, Mauro; Chamberlain, Bradford L.; Cledat, Romain; Edwards, H. Carter; Finkel, Hal; Fuerlinger, Karl; Hannig, Frank; Jeannot, Emmanuel; Kamil, Amir; Keasler, Jeff; Kelly, Paul H J; Leung, Vitus; Ltaief, Hatem; Maruyama, Naoya; Newburn, Chris J.; Pericas, Miquel

    2017-01-01

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. This paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.

  19. Trends in Data Locality Abstractions for HPC Systems

    KAUST Repository

    Unat, Didem

    2017-05-12

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. This paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.

  20. Study of ageing side effects in the DELPHI HPC calorimeter

    CERN Document Server

    Bonivento, W

    1997-01-01

    The readout proportional chambers of the HPC electromagnetic calorimeter in the DELPHI experiment are affected by large ageing. In order to study the long-term behaviour of the calorimeter, one HPC module was extracted from DELPHI in 1992 and was brought to a test area where it was artificially aged during a period of two years; an ageing level exceeding the one expected for the HPC at the end of the LEP era was reached. During this period the performance of the module was periodically tested by means of dedicated beam tests whose results are discussed in this paper. These show that ageing has no significant effects on the response linearity and on the energy resolution for electromagnetic showers, once the analog response loss is compensated for by increasing the chamber gain through the anode voltage.

  1. BEAM: A computational workflow system for managing and modeling material characterization data in HPC environments

    Energy Technology Data Exchange (ETDEWEB)

    Lingerfelt, Eric J [ORNL; Endeve, Eirik [ORNL; Ovchinnikov, Oleg S [ORNL; Borreguero Calvo, Jose M [ORNL; Park, Byung H [ORNL; Archibald, Richard K [ORNL; Symons, Christopher T [ORNL; Kalinin, Sergei V [ORNL; Messer, Bronson [ORNL; Shankar, Mallikarjun [ORNL; Jesse, Stephen [ORNL

    2016-01-01

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales, many spectroscopic modes, and now with the rise of multimodal acquisition systems and the associated processing capability the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges best summarized as a necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides material scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation via an intuitive, cross-platform client user interface. This framework delivers authenticated, push-button execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing the converged compute-and-data infrastructure at Oak Ridge National Laboratory's (ORNL) Compute and Data Environment for Science (CADES) and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF). In this work we address the underlying HPC needs for characterization in the materials science community, elaborate how BEAM's design and infrastructure tackle those needs, and present a small subset of use cases where scientists utilized BEAM across a broad range of analytical techniques and analysis modes.

  2. STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC

    Science.gov (United States)

    Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark

    2017-10-01

    As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a need. Linux containers may very well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach vast elastic resources on such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow at the Cori HPC system at NERSC. STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, which was carried over ESnet after optimizing the end points, and 2) scalable deployment of the conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.

  3. Special Issue on Automatic Application Tuning for HPC Architectures

    Directory of Open Access Journals (Sweden)

    Siegfried Benkner

    2014-01-01

    High Performance Computing architectures have become incredibly complex and exploiting their full potential is becoming more and more challenging. As a consequence, automatic performance tuning (autotuning) of HPC applications is of growing interest and many research groups around the world are currently involved. Autotuning is still a rapidly evolving research field with many different approaches being taken. This special issue features selected papers presented at the Dagstuhl seminar on “Automatic Application Tuning for HPC Architectures” in October 2013, which brought together researchers from the areas of autotuning and performance analysis in order to exchange ideas and steer future collaborations.

  4. Towards Spherical Mesh Gravity and Magnetic Modelling in an HPC Environment

    Science.gov (United States)

    Lane, R. J.; Brodie, R. C.; de Hoog, M.; Navin, J.; Chen, C.; Du, J.; Liang, Q.; Wang, H.; Li, Y.

    2013-12-01

    Staff at Geoscience Australia (GA), Australia's Commonwealth Government geoscientific agency, have routinely performed 3D gravity and magnetic modelling as part of geoscience investigations. For this work, we have used software programs that have been based on a Cartesian mesh spatial framework. These programs have come as executable files that were compiled to operate in a Windows environment on single core personal computers (PCs). To cope with models with higher resolution and larger extents, we developed an approach whereby a large problem could be broken down into a number of overlapping smaller models ('tiles') that could be modelled separately, with the results combined back into a single output model. To speed up the processing, we established a Condor distributed network from existing desktop PCs. A number of factors have caused us to consider a new approach to this modelling work. The drivers for change include: 1) models with very large lateral extents where the effects of Earth curvature are a consideration, 2) a desire to ensure that the modelling of separate regions is carried out in a consistent and managed fashion, 3) migration of scientific computing to off-site High Performance Computing (HPC) facilities, and 4) development of virtual globe environments for integration and visualization of 3D spatial objects. Some of the more surprising realizations to emerge have been that: 1) there aren't any readily available commercial software packages for modelling gravity and magnetic data in a spherical mesh spatial framework, 2) there are many different types of HPC environments, 3) no two HPC environments are the same, and 4) the most common virtual globe environment (i.e., Google Earth) doesn't allow spatial objects to be displayed below the topographic/bathymetric surface. Our response has been to do the following: 1) form a collaborative partnership with researchers at the Colorado School of Mines (CSM) and the China University of Geosciences (CUG

  5. International Energy Agency's Heat Pump Centre (IEA-HPC) Annual National Team Working Group Meeting

    Science.gov (United States)

    Broders, M. A.

    1992-09-01

    The traveler, serving as Delegate from the United States Advanced Heat Pump National Team, participated in the activities of the fourth IEA-HPC National Team Working Group meeting. Highlights of this meeting included review and discussion of 1992 IEA-HPC activities and accomplishments, introduction of the Switzerland National Team, and development of the 1993 IEA-HPC work program. The traveler also gave a formal presentation about the Development and Activities of the IEA Advanced Heat Pump U.S. National Team.

  6. An integrated reliability management system for nuclear power plants

    International Nuclear Information System (INIS)

    Kimura, T.; Shimokawa, H.; Matsushima, H.

    1998-01-01

    The responsibility in the nuclear field of the Government, utilities and manufacturers has increased in the past years due to the need for stable operation and great reliability of nuclear power plants. The need to improve the reliability is not only for the new plants but also for those now running. So, several measures have been taken to improve reliability. In particular, the plant manufacturers have developed a reliability management system for each phase (planning, construction, maintenance and operation) and these have been integrated as a unified system. This integrated reliability management system for nuclear power plants contains information about plant performance, failures and incidents which have occurred in the plants. (author)

  7. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    OpenAIRE

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergio; Cela, José M.; Castejón, Francisco

    2015-01-01

    In this work, we will explore the feasibility of porting a Particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The used prototype is based on a system-on-chip Samsung Exynos 5 with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages. The research leading to these results has received funding from the European Community's Seventh...

  8. Project Final Report: HPC-Colony II

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Terry R [ORNL; Kale, Laxmikant V [University of Illinois, Urbana-Champaign; Moreira, Jose [IBM T. J. Watson Research Center

    2013-11-01

    This report recounts the HPC Colony II Project which was a computer science effort funded by DOE's Advanced Scientific Computing Research office. The project included researchers from ORNL, IBM, and the University of Illinois at Urbana-Champaign. The topic of the effort was adaptive system software for extreme scale parallel machines. A description of findings is included.

  9. Study of thermal performance of capillary micro tubes integrated into the building sandwich element made of high performance concrete

    DEFF Research Database (Denmark)

    Mikeska, Tomas; Svendsen, Svend

    2013-01-01

    The thermal performance of radiant heating and cooling systems (RHCS) composed of capillary micro tubes (CMT) integrated into the inner plate of sandwich elements made of high performance concrete (HPC) was investigated in the article. Temperature distribution in HPC elements around integrated CMT ... HPC layer covering the CMT. This paper shows that CMT integrated into the thin plate of sandwich element made of HPC can supply the energy needed for heating (cooling) and at the same time create a comfortable and healthy environment for the occupants. This solution is very suitable for heating and cooling purposes of future low energy buildings. The investigations were conceived as a low temperature concept, where the difference between the temperature of circulating fluid and air in the room was kept in the range of 1–4 °C.

  10. Towards anatomic scale agent-based modeling with a massively parallel spatially explicit general-purpose model of enteric tissue (SEGMEnT_HPC).

    Science.gov (United States)

    Cockrell, Robert Chase; Christley, Scott; Chang, Eugene; An, Gary

    2015-01-01

    Perhaps the greatest challenge currently facing the biomedical research community is the ability to integrate highly detailed cellular and molecular mechanisms to represent clinical disease states as a pathway to engineer effective therapeutics. This is particularly evident in the representation of organ-level pathophysiology in terms of abnormal tissue structure, which, through histology, remains a mainstay in disease diagnosis and staging. As such, being able to generate anatomic scale simulations is a highly desirable goal. While computational limitations have previously constrained the size and scope of multi-scale computational models, advances in the capacity and availability of high-performance computing (HPC) resources have greatly expanded the ability of computational models of biological systems to achieve anatomic, clinically relevant scale. Diseases of the intestinal tract are exemplars of pathophysiological processes that manifest at multiple scales of spatial resolution, with structural abnormalities present at the microscopic, macroscopic and organ levels. In this paper, we describe a novel, massively parallel computational model of the gut, the Spatially Explicit General-purpose Model of Enteric Tissue_HPC (SEGMEnT_HPC), which extends an existing model of the gut epithelium, SEGMEnT, in order to create cell-for-cell anatomic scale simulations. We present an example implementation of SEGMEnT_HPC that simulates the pathogenesis of ileal pouchitis, an important clinical entity that affects patients following remedial surgery for ulcerative colitis.

  11. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    Science.gov (United States)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) that were designed by the conventional ACI method and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and performed better than the OPC in terms of strength and durability.

  12. DOD HPC Insights. Spring 2012

    Science.gov (United States)

    2012-04-01

    petascale and exascale HPC concepts has led to new research thrusts including power efficiency. Now, power efficiency is an important area of expertise... exascale supercomputers. MHPCC is also working on the generation side of the energy equation. We have deployed a 100 kW research solar array... exascale supercomputers. Within the HPCMP, energy costs take an increasing amount of the limited budget that could be better used for service

  13. Perm State University HPC-hardware and software services: capabilities for aircraft engine aeroacoustics problems solving

    Science.gov (United States)

    Demenev, A. G.

    2018-02-01

    The present work analyses the high-performance computing (HPC) infrastructure capabilities available at Perm State University for solving aircraft engine aeroacoustics problems. We explore the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems to handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including “UEC-Aviadvigatel” JSC (our industrial partner in Perm, Russia), require such methods/solvers to optimize the geometry of aircraft engines for fan noise reduction. We analysed Perm State University's HPC hardware resources and software services with a view to using them efficiently. The results demonstrate that the Perm State University HPC infrastructure is mature enough to tackle industrial-scale problems of developing a CAE system with HPC methods and CFD solvers.

  14. HPC Colony II Consolidated Annual Report: July-2010 to June-2011

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Terry R [ORNL

    2011-06-01

    This report provides a brief progress synopsis of the HPC Colony II project for the period of July 2010 to June 2011. HPC Colony II is a 36-month project and this report covers project months 10 through 21. It includes a consolidated view of all partners (Oak Ridge National Laboratory, IBM, and the University of Illinois at Urbana-Champaign) as well as detail for Oak Ridge. Highlights are noted and fund status data (burn rates) are provided.

  15. Delivering LHC software to HPC compute elements

    CERN Document Server

    Blomer, Jakob; Hardi, Nikola; Popescu, Radu

    2017-01-01

    In recent years, there was a growing interest in improving the utilization of supercomputers by running applications of experiments at the Large Hadron Collider (LHC) at CERN when idle cores cannot be assigned to traditional HPC jobs. At the same time, the upcoming LHC machine and detector upgrades will produce some 60 times higher data rates and challenge LHC experiments to use so far untapped compute resources. LHC experiment applications are tailored to run on high-throughput computing resources and they have a different anatomy than HPC applications. LHC applications comprise a core framework that allows hundreds of researchers to plug in their specific algorithms. The software stacks easily accumulate to many gigabytes for a single release. New releases are often produced on a daily basis. To facilitate the distribution of these software stacks to world-wide distributed computing resources, LHC experiments use a purpose-built, global, POSIX file system, the CernVM File System. CernVM-FS pre-processes dat...

  16. Management systems for high reliability organizations. Integration and effectiveness; Managementsysteme fuer Hochzuverlaessigkeitsorganisationen. Integration und Wirksamkeit

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, Michael

    2015-03-09

    The scope of the thesis is the development of a method for improving efficient integrated management systems for high reliability organizations (HRO). A comprehensive analysis of severe accident prevention is performed; severe accident management, mitigation measures and business continuity management are not included. High reliability organizations are complex and potentially dynamic organization forms that can be inherently dangerous, such as nuclear power plants, offshore platforms, chemical facilities, large ships or large aircraft. A recursive generic management system model (RGM) was developed based on the following factors: systemic and cybernetic aspects; integration of different management fields; high decision quality; integration of efficient methods of safety and risk analysis; integration of human reliability aspects; and effectiveness evaluation and improvement.

  17. The VERCE Science Gateway: Enabling User Friendly HPC Seismic Wave Simulations.

    Science.gov (United States)

    Casarotti, E.; Spinuso, A.; Matser, J.; Leong, S. H.; Magnoni, F.; Krause, A.; Garcia, C. R.; Muraleedharan, V.; Krischer, L.; Anthes, C.

    2014-12-01

    The EU-funded project VERCE (Virtual Earthquake and seismology Research Community in Europe) aims to deploy technologies which satisfy the HPC and data-intensive requirements of modern seismology. As a result of VERCE's official collaboration with the EU project SCI-BUS, access to computational resources, like local clusters and international infrastructures (EGI and PRACE), is made homogeneous and integrated within a dedicated science gateway based on the gUSE framework. In this presentation we give a detailed overview of the progress achieved with the development of the VERCE Science Gateway, according to a use-case driven implementation strategy. More specifically, we show how the computational technologies and data services have been integrated within a tool for Seismic Forward Modelling, whose objective is to offer the possibility to perform simulations of seismic waves as a service to the seismological community. We will introduce the interactive components of the OGC map-based web interface and how it supports the user with setting up the simulation. We will go through the selection of input data, which are either fetched from federated seismological web services, adopting community standards, or provided by the users themselves by accessing their own document data store. The HPC scientific codes can be selected from a number of waveform simulators, currently available to the seismological community as batch tools or with limited configuration capabilities in their interactive online versions. The results will be staged out via a secure GridFTP transfer to a VERCE data layer managed by iRODS. The provenance information of the simulation will be automatically catalogued by the data layer via NoSQL technologies. Finally, we will show an example of how the visualisation output of the gateway could be enhanced by the connection with immersive projection technology at the Virtual Reality and Visualisation Centre of the Leibniz Supercomputing Centre (LRZ).

  18. Integrated approach to economical, reliable, safe nuclear power production

    International Nuclear Information System (INIS)

    1982-06-01

    An Integrated Approach to Economical, Reliable, Safe Nuclear Power Production is the latest evolution of a concept which originated with the Defense-in-Depth philosophy of the nuclear industry. As Defense-in-Depth provided a framework for viewing physical barriers and equipment redundancy, the Integrated Approach gives a framework for viewing nuclear power production in terms of functions and institutions. In the Integrated Approach, four plant Goals are defined (Normal Operation, Core and Plant Protection, Containment Integrity and Emergency Preparedness) with the attendant Functional and Institutional Classifications that support them. The Integrated Approach provides a systematic perspective that combines the economic objective of reliable power production with the safety objective of consistent, controlled plant operation

  19. High Temperature Exposure of HPC – Experimental Analysis of Residual Properties and Thermal Response

    Directory of Open Access Journals (Sweden)

    Pavlík Zbyšek

    2016-01-01

    Full Text Available The effect of high temperature exposure on properties of a newly designed High Performance Concrete (HPC) is studied in the paper. The HPC samples are exposed to temperatures of 200, 400, 600, 800, and 1000°C respectively. Among the basic physical properties, bulk density, matrix density and total open porosity are measured. The mechanical resistance to disruptive temperature action is characterised by compressive strength, flexural strength and dynamic modulus of elasticity. To study the chemical and physical processes in HPC during its high-temperature exposure, Simultaneous Thermal Analysis (STA) is performed. The linear thermal expansion coefficient is determined as a function of temperature using thermodilatometry (TDA). In order to describe the changes in the microstructure of HPC induced by high temperature loading, MIP measurement of pore size distribution is done. An increase in the total open porosity and a corresponding decrease in the mechanical parameters were identified for temperatures higher than 200 °C.

  20. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    OpenAIRE

    Netto, Marco A. S.; Calheiros, Rodrigo N.; Rodrigues, Eduardo R.; Cunha, Renato L. F.; Buyya, Rajkumar

    2017-01-01

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources---steady (and sensitive) workloads can run on on-pr...

  1. Thermal performance of capillary micro tubes integrated into the sandwich element made of concrete

    DEFF Research Database (Denmark)

    Mikeska, Tomas; Svendsen, Svend

    2013-01-01

    The thermal performance of radiant heating and cooling systems (RHCS) composed of capillary micro tubes (CMT) integrated into the inner plate of sandwich elements made of High Performance Concrete (HPC) was investigated in the article. Temperature distribution in HPC elements around integrated CMT ... CMT integrated into the thin plate of sandwich element made of HPC can supply the energy needed for heating and cooling. The investigations were conceived as a low temperature concept, where the difference between the temperature of circulating fluid and air in the room was kept in the range of 1 to 4°C.

  2. HPC Access Using KVM over IP

    Science.gov (United States)

    2007-06-08

    Lightwave VDE/200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required ... development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as ... visualization capabilities of a DoD High-Performance Computing facility, many advanced features are necessary. TARDEC-HPC's SBIR with IP Video Systems

  3. The VERCE Science Gateway: enabling user friendly seismic waves simulations across European HPC infrastructures

    Science.gov (United States)

    Spinuso, Alessandro; Krause, Amy; Ramos Garcia, Clàudia; Casarotti, Emanuele; Magnoni, Federica; Klampanos, Iraklis A.; Frobert, Laurent; Krischer, Lion; Trani, Luca; David, Mario; Leong, Siew Hoon; Muraleedharan, Visakh

    2014-05-01

    The EU-funded project VERCE (Virtual Earthquake and seismology Research Community in Europe) aims to deploy technologies which satisfy the HPC and data-intensive requirements of modern seismology. As a result of VERCE's official collaboration with the EU project SCI-BUS, access to computational resources, like local clusters and international infrastructures (EGI and PRACE), is made homogeneous and integrated within a dedicated science gateway based on the gUSE framework. In this presentation we give a detailed overview of the progress achieved with the development of the VERCE Science Gateway, according to a use-case driven implementation strategy. More specifically, we show how the computational technologies and data services have been integrated within a tool for Seismic Forward Modelling, whose objective is to offer the possibility to perform simulations of seismic waves as a service to the seismological community. We will introduce the interactive components of the OGC map-based web interface and how it supports the user with setting up the simulation. We will go through the selection of input data, which are either fetched from federated seismological web services, adopting community standards, or provided by the users themselves by accessing their own document data store. The HPC scientific codes can be selected from a number of waveform simulators, currently available to the seismological community as batch tools or with limited configuration capabilities in their interactive online versions. The results will be staged out from the HPC via a secure GridFTP transfer to a VERCE data layer managed by iRODS. The provenance information of the simulation will be automatically catalogued by the data layer via NoSQL technologies. We will try to demonstrate how data access, validation and visualisation can be supported by a general purpose provenance framework which, besides common provenance concepts imported from the OPM and the W3C-PROV initiatives, also offers

  4. Using Formal Grammars to Predict I/O Behaviors in HPC: The Omnisc'IO Approach

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Ibrahim, Shadi; Antoniu, Gabriel; Ross, Rob

    2016-08-01

    The increasing gap between the computation performance of post-petascale machines and the performance of their I/O subsystem has motivated many I/O optimizations including prefetching, caching, and scheduling. In order to further improve these techniques, modeling and predicting spatial and temporal I/O patterns of HPC applications as they run has become crucial. In this paper we present Omnisc'IO, an approach that builds a grammar-based model of the I/O behavior of HPC applications and uses it to predict when future I/O operations will occur, and where and how much data will be accessed. To infer grammars, Omnisc'IO is based on StarSequitur, a novel algorithm extending Nevill-Manning's Sequitur algorithm. Omnisc'IO is transparently integrated into the POSIX and MPI I/O stacks and does not require any modification in applications or higher-level I/O libraries. It works without any prior knowledge of the application and converges to accurate predictions of any N future I/O operations within a couple of iterations. Its implementation is efficient in both computation time and memory footprint.
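    As a rough illustration of the prediction idea (not the StarSequitur algorithm itself, and using a made-up stream of I/O event symbols), the sketch below builds a simple digram table over the sequence of observed operations and uses it to guess the most likely next operation; Omnisc'IO's actual grammar-based model is considerably more sophisticated.

```python
from collections import defaultdict, Counter

class DigramPredictor:
    """Toy next-symbol predictor over a stream of I/O event symbols.

    Each symbol might encode, for instance, a call site plus an access-size
    class; this is a simplified stand-in for the grammar model in the paper.
    """

    def __init__(self):
        # previous symbol -> counts of symbols observed immediately after it
        self.table = defaultdict(Counter)
        self.prev = None

    def observe(self, symbol):
        if self.prev is not None:
            self.table[self.prev][symbol] += 1
        self.prev = symbol

    def predict_next(self):
        if self.prev is None or not self.table[self.prev]:
            return None
        return self.table[self.prev].most_common(1)[0][0]

if __name__ == "__main__":
    # Hypothetical trace: small writes punctuated by periodic checkpoints.
    trace = ["write_small", "write_small", "checkpoint",
             "write_small", "write_small", "checkpoint",
             "write_small", "write_small"]
    model = DigramPredictor()
    for op in trace:
        model.observe(op)
    print("last op:", trace[-1], "-> predicted next:", model.predict_next())
```

    A first-order digram table obviously cannot capture longer periodic patterns; that is precisely what hierarchical grammar rules in Sequitur-style algorithms add.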

  5. Program integration of predictive maintenance with reliability centered maintenance

    International Nuclear Information System (INIS)

    Strong, D.K. Jr; Wray, D.M.

    1990-01-01

    This paper addresses improving the safety and reliability of power plants in a cost-effective manner by integrating the recently developed reliability centered maintenance techniques with the traditional predictive maintenance techniques of nuclear power plants. The topics of the paper include a description of reliability centered maintenance (RCM), enhancing RCM with predictive maintenance, predictive maintenance programs, condition monitoring techniques, performance test techniques, the mid-Atlantic Reliability Centered Maintenance Users Group, test guides and the benefits of shared guide development

  6. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance of such workflows, involving terabytes or petabytes of workflow data or execution measurements collected from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
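    To make the flavour of such feature extraction concrete, here is a minimal, standard-library-only Python sketch (with a made-up measurement format and hypothetical node names; it is not the LBNL framework) that aggregates a single performance feature per node and flags statistical outliers that might indicate an I/O bottleneck.

```python
import statistics
from collections import defaultdict

# Hypothetical per-task measurement lines: "node,task_id,io_wait_seconds"
RAW = """n001,1,12.0
n001,2,11.5
n002,3,48.9
n002,4,52.3
n003,5,12.8
n003,6,13.1"""

def mean_io_wait_per_node(lines):
    """Aggregate a simple performance feature (mean I/O wait) per node."""
    per_node = defaultdict(list)
    for line in lines.strip().splitlines():
        node, _task, io_wait = line.split(",")
        per_node[node].append(float(io_wait))
    return {node: statistics.mean(vals) for node, vals in per_node.items()}

def flag_outliers(per_node_means, threshold=1.0):
    """Flag nodes whose mean I/O wait is more than `threshold` std devs above the fleet mean."""
    values = list(per_node_means.values())
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    return [n for n, v in per_node_means.items() if sigma > 0 and (v - mu) / sigma > threshold]

if __name__ == "__main__":
    means = mean_io_wait_per_node(RAW)
    print(means)
    print("possible I/O bottleneck nodes:", flag_outliers(means))
```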

  7. The rise of HPC accelerators: towards a common vision for a petascale future

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Nowadays new exciting scientific discoveries are mainly driven by large challenging simulations. An analysis of the trends in High Performance Computing clearly shows that we have hit several barriers (CPU frequency, power consumption, technological limits, limitations of the present paradigms) that we cannot easily overcome. In this context, accelerators have become the concrete alternative for increasing the compute capabilities of the HPC clusters deployed in universities and research centers across Europe. Within the EC-funded "Partnership for Advanced Computing in Europe" (PRACE) project, several actions have been taken and will be taken to enable community codes to exploit accelerators in modern HPC architectures. In this talk, the vision and the strategy adopted by the PRACE project will be presented, focusing on new HPC programming models and paradigms. Accelerators are a fundamental piece to innovate in this direction, from both the hardware and the software point of view. This work started dur...

  8. Addressing Uniqueness and Unison of Reliability and Safety for a Better Integration

    Science.gov (United States)

    Huang, Zhaofeng; Safie, Fayssal

    2016-01-01

    Over time, it has been observed that Safety and Reliability have not been clearly differentiated, which leads to confusion, inefficiency, and, sometimes, counter-productive practices in executing each of these two disciplines. It is imperative to address this situation to help the Reliability and Safety disciplines improve their effectiveness and efficiency. The paper poses an important question to address: "Safety and Reliability - Are they unique or unisonous?" To answer the question, the paper reviewed several of the most commonly used analyses from each of the disciplines, namely FMEA, reliability allocation and prediction, reliability design involvement, system safety hazard analysis, Fault Tree Analysis, and Probabilistic Risk Assessment. The paper pointed out the uniqueness and unison of Safety and Reliability in their respective roles, requirements, approaches, and tools, and presented some suggestions for enhancing and improving the individual disciplines, as well as promoting the integration of the two. The paper concludes that Safety and Reliability are unique, but compensate for each other in many aspects, and need to be integrated. In particular, the individual roles of Safety and Reliability need to be differentiated: Safety is to ensure and assure the product meets safety requirements, goals, or desires, and Reliability is to ensure and assure maximum achievability of intended design functions. With the integration of Safety and Reliability, personnel can be shared, tools and analyses have to be integrated, and skill sets can be possessed by the same person, with the purpose of providing the best value to a product development.

  9. Optimizing new components of PanDA for ATLAS production on HPC resources

    CERN Document Server

    Maeno, Tadashi; The ATLAS collaboration

    2017-01-01

    The Production and Distributed Analysis system (PanDA) has been used for workload management in the ATLAS Experiment for over a decade. It uses pilots to retrieve jobs from the PanDA server and execute them on worker nodes. While PanDA has been mostly used on Worldwide LHC Computing Grid (WLCG) resources for production operations, R&D work has been ongoing on cloud and HPC resources for many years. These efforts have led to the significant usage of large-scale HPC resources in the past couple of years. In this talk we will describe the changes to the pilot which enabled the use of HPC sites by PanDA, specifically the Titan supercomputer at Oak Ridge National Laboratory. Furthermore, it was decided in 2016 to start a fresh redesign of the Pilot with a more modern approach to better serve present and future needs from ATLAS and other collaborations that are interested in using the PanDA System. Another new project for the development of a resource-oriented service, PanDA Harvester, was also launched in 2016. The...

  10. Redundancy and Reliability for an HPC Data Centre

    OpenAIRE

    Erhan Yılmaz

    2012-01-01

    Defining a level of redundancy is a strategic question when planning a new data centre, as it will directly impact the entire design of the building as well as the construction and operational costs. It will also affect how to integrate future extension plans into the design. Redundancy is also a key strategic issue when upgrading or retrofitting an existing facility. Redundancy is a central strategic question to any business that relies on data centres for its operation. In th...

  11. Easy Access to HPC Resources through the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-11-01

    The computing environment at the King Abdullah University of Science and Technology (KAUST) is growing in size and complexity. KAUST hosts the tenth fastest supercomputer in the world (Shaheen II) and several HPC clusters. Researchers can be inhibited by the complexity, as they need to learn new languages and execute many tasks in order to access the HPC clusters and the supercomputer. In order to simplify the access, we have developed an interface between the applications and the clusters and supercomputer that automates the transfer of input data and job submission and also the retrieval of results to the researcher’s local workstation. The innovation is that the user now submits his jobs from within the application GUI on his workstation, and does not have to directly log into the clusters or supercomputer anymore. This article details the solution and its benefits to the researchers.

  12. Achieving High Reliability Operations Through Multi-Program Integration

    Energy Technology Data Exchange (ETDEWEB)

    Holly M. Ashley; Ronald K. Farris; Robert E. Richards

    2009-04-01

    Over the last 20 years the Idaho National Laboratory (INL) has adopted a number of operations and safety-related programs, each of which has periodically taken its turn in the limelight. As new programs have come along there has been natural competition for resources, focus and commitment. In the last few years, the INL has made real progress in integrating all these programs and is starting to realize important synergies. Contributing to this integration are both collaborative individuals and an emerging shared vision and goal of the INL fully maturing in its high reliability operations. This goal is so powerful because the concept of high reliability operations (and the resulting organizations) is a masterful amalgam and orchestrator of the best of all the participating programs (i.e. conduct of operations, behavior based safety, human performance, voluntary protection, quality assurance, and integrated safety management). This paper is a brief recounting of the lessons learned, thus far, at the INL in bringing previously competing programs into harmony under the goal (umbrella) of seeking to perform regularly as a high reliability organization. In addition to a brief diagram-illustrated historical review, the authors will share the INL's primary successes (things already effectively stopped or started) and the gaps yet to be bridged.

  13. Continuous Security and Configuration Monitoring of HPC Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Lomeli, H. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bertsch, A. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fox, D. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-08

    Continuous security and configuration monitoring of information systems has been a time consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC Clusters. In conjunction with other configuration management systems, the reporting tool is to provide continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking
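    As an illustration of the kind of node-level collection such an agent performs, the sketch below (Python, with hypothetical field names; it is not the LLNL agent and uses no Splunk-specific API) gathers a handful of configuration facts from the local host and emits them as one JSON event that a central indexer could ingest.

```python
import json
import platform
import socket
import time

def collect_node_facts():
    """Gather a few illustrative configuration facts from the local node.

    A real compliance agent would check many more settings (package versions,
    kernel parameters, mount options, etc.); these fields are placeholders.
    """
    return {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "host": socket.gethostname(),
        "kernel": platform.release(),
        "python_version": platform.python_version(),
    }

def emit(facts):
    """Write one JSON event per collection, suitable for forwarding to a central indexer."""
    print(json.dumps(facts))

if __name__ == "__main__":
    emit(collect_node_facts())
```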

  14. Connecting to HPC VPN | High-Performance Computing | NREL

    Science.gov (United States)

    ... visualization, and file transfers. NREL users logging in to Peregrine: use SSH to log in to the system; your login and password will match your NREL network account login/password. From OS X or Linux, open a terminal. The login for the Windows HPC Cluster will match your NREL Active Directory login/password that you use to ...

  15. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    Science.gov (United States)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within a HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen from harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that

  16. Mechanisms of adhesion and subsequent actions of a haematopoietic stem cell line, HPC-7, in the injured murine intestinal microcirculation in vivo.

    Directory of Open Access Journals (Sweden)

    Dean P J Kavanagh

    Full Text Available Although haematopoietic stem cells (HSCs) migrate to injured gut, therapeutic success clinically remains poor. This has been partially attributed to limited local HSC recruitment following systemic injection. Identifying site-specific adhesive mechanisms underpinning HSC-endothelial interactions may provide important information on how to enhance their recruitment and thus potentially improve therapeutic efficacy. This study determined (i) the integrins and inflammatory cyto/chemokines governing HSC adhesion to injured gut and muscle, (ii) whether pre-treating HSCs with these cyto/chemokines enhanced their adhesion, and (iii) whether the degree of HSC adhesion influenced their ability to modulate leukocyte recruitment. Adhesion of HPC-7, a murine HSC line, to ischaemia-reperfused (IR) injured mouse gut or cremaster muscle was monitored intravitally. Critical adhesion molecules were identified by pre-treating HPC-7 with blocking antibodies to CD18 and CD49d. To identify cyto/chemokines capable of recruiting HPC-7, adhesion was monitored following tissue exposure to TNF-α, IL-1β or CXCL12. The effects of pre-treating HPC-7 with these cyto/chemokines on surface integrin expression/clustering, adhesion to ICAM-1/VCAM-1 and recruitment in vivo were also investigated. Endogenous leukocyte adhesion following HPC-7 injection was again determined intravitally. IR injury increased HPC-7 adhesion in vivo, with intestinal adhesion dependent upon CD18 and muscle adhesion predominantly relying on CD49d. Only CXCL12 pre-treatment enhanced HPC-7 adhesion within injured gut, likely by increasing CD18 binding to ICAM-1 and/or CD18 surface clustering on HPC-7. Leukocyte adhesion was reduced at 4 hours post-reperfusion, but only when local HPC-7 adhesion was enhanced using CXCL12. This data provides evidence that site-specific molecular mechanisms govern HPC-7 adhesion to injured tissue. Importantly, we show that HPC-7 adhesion is a modulatable event in IR injury and

  17. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side to virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  18. Optical packet switching in HPC : an analysis of applications performance

    NARCIS (Netherlands)

    Meyer, Hugo; Sancho, Jose Carlos; Mrdakovic, Milica; Miao, Wang; Calabretta, Nicola

    2018-01-01

    Optical Packet Switches (OPS) could provide the needed low latency transmissions in today large data centers. OPS can deliver lower latency and higher bandwidth than traditional electrical switches. These features are needed for parallel High Performance Computing (HPC) applications. For this

  19. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    Science.gov (United States)

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less than more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Integration of nondestructive examination reliability and fracture mechanics

    International Nuclear Information System (INIS)

    Doctor, S.R.; Bates, D.J.; Charlot, L.A.

    1985-01-01

    The primary pressure boundaries (pressure vessels and piping) of nuclear power plants are in-service inspected (ISI) according to the rules of ASME Boiler and Pressure Vessel Code, Section XI. Ultrasonic techniques are normally used for these inspections, which are periodically performed on a sampling of welds. The Integration of Nondestructive Examination (NDE) Reliability and Fracture Mechanics (FM) Program at Pacific Northwest Laboratory was established to determine the reliability of current ISI techniques and to develop recommendations that will ensure a suitably high inspection reliability. The objectives of this NRC program are to: 1) determine the reliability of ultrasonic ISI performed on commercial light-water reactor primary systems; 2) using probabilistic FM analysis, determine the impact of NDE unreliability on system safety and determine the level of inspection reliability required to ensure a suitably low failure probability; 3) evaluate the degree of reliability improvement that could be achieved using improved and advanced NDE techniques; and 4) based on material properties, service conditions, and NDE uncertainties, formulate recommended revisions to ASME Code, Section XI, and Regulatory Requirements needed to ensure suitably low failure probabilities

  1. Addressing Unison and Uniqueness of Reliability and Safety for Better Integration

    Science.gov (United States)

    Huang, Zhaofeng; Safie, Fayssal

    2015-01-01

    For a long time, both in theory and in practice, safety and reliability have not been clearly differentiated, which leads to confusion, inefficiency, and sometimes counter-productive practices in executing each of these two disciplines. It is imperative to address the uniqueness and the unison of these two disciplines to help both disciplines become more effective and to promote a better integration of the two for enhancing safety and reliability in our products as an overall objective. There are two purposes of this paper. First, it will investigate the uniqueness and unison of each discipline and discuss the interrelationship between the two for awareness and clarification. Second, after clearly understanding the unique roles and interrelationship between the two in a product design and development life cycle, we offer suggestions to enhance the disciplines with distinguished and focused roles, to better integrate the two, and to improve unique sets of skills and tools of reliability and safety processes. From the uniqueness aspect, the paper identifies and discusses the respective uniqueness of reliability and safety in their roles, accountability, nature of requirements, technical scopes, detailed technical approaches, and analysis boundaries. It is misleading to equate unreliable with unsafe, since a safety hazard may or may not be related to the component, sub-system, or system functions, which are primarily what reliability addresses. Similarly, failing-to-function may or may not lead to hazard events. Examples will be given in the paper from aerospace, defense, and consumer products to illustrate the uniqueness and differences between reliability and safety. From the unison aspect, the paper discusses what the commonalities between reliability and safety are, and how these two disciplines are linked, integrated, and supplement each other to accomplish the customer requirements and product goals. In addition to understanding the uniqueness in

  2. When to Renew Software Licences at HPC Centres? A Mathematical Analysis

    Science.gov (United States)

    Baolai, Ge; MacIsaac, Allan B.

    2010-11-01

    In this paper we study a common problem faced by many high performance computing (HPC) centres: when and how to renew commercial software licences. Software vendors often sell perpetual licences along with forward update and support contracts at an additional, annual cost. Every year or so, software support personnel and the budget units of HPC centres are required to decide whether or not to renew such support, and usually such decisions are made intuitively. A continuing support contract can, however, be costly. One might therefore want a rational answer to the question of whether the option for a renewal should be exercised and when. In an attempt to study this problem within a market framework, we present the mathematical problem derived for the day-to-day operation of a hypothetical HPC centre that charges for the use of software packages. In the mathematical model, we assume that the uncertainty comes from the demand, the number of users using the packages, as well as the price. Further, we assume the availability of up-to-date software versions may also affect the demand. We develop a renewal strategy that aims to maximize the expected profit from the use of the software under consideration. The derived problem involves a decision tree, which constitutes a numerical procedure that can be processed in parallel.
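    As a toy illustration of the expected-profit comparison behind such a strategy (a single-period sketch with made-up demand scenarios, prices and fees; the paper's actual formulation is a multi-stage decision tree under demand and price uncertainty), the following Python snippet compares the expected profit of renewing versus not renewing.

```python
# A minimal one-period sketch of the renew/don't-renew comparison, with
# hypothetical numbers; it does not reproduce the paper's decision-tree model.

RENEWAL_COST = 20000.0          # annual support/update fee (hypothetical)
PRICE_PER_USER = 300.0          # annual charge per licensed user (hypothetical)

# Demand scenarios: (probability, expected users if renewed, expected users if not).
# Not renewing is assumed to depress demand because the software version ages.
SCENARIOS = [
    (0.3, 120, 100),   # high demand
    (0.5,  80,  60),   # medium demand
    (0.2,  40,  35),   # low demand
]

def expected_profit(renew: bool) -> float:
    profit = 0.0
    for prob, users_renew, users_keep in SCENARIOS:
        users = users_renew if renew else users_keep
        revenue = users * PRICE_PER_USER
        cost = RENEWAL_COST if renew else 0.0
        profit += prob * (revenue - cost)
    return profit

if __name__ == "__main__":
    renew, keep = expected_profit(True), expected_profit(False)
    print(f"expected profit if renewed:     {renew:10.2f}")
    print(f"expected profit if not renewed: {keep:10.2f}")
    print("decision:", "renew" if renew > keep else "do not renew")
```

    Each leaf of the full decision tree would contain a computation of this kind, which is what makes the procedure naturally parallel.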

  3. When to Renew Software Licences at HPC Centres? A Mathematical Analysis

    International Nuclear Information System (INIS)

    Baolai, Ge; MacIsaac, Allan B

    2010-01-01

    In this paper we study a common problem faced by many high performance computing (HPC) centres: when and how to renew commercial software licences. Software vendors often sell perpetual licences along with forward update and support contracts at an additional, annual cost. Every year or so, software support personnel and the budget units of HPC centres are required to decide whether or not to renew such support, and usually such decisions are made intuitively. A continuing support contract can, however, be costly. One might therefore want a rational answer to the question of whether the option for a renewal should be exercised and when. In an attempt to study this problem within a market framework, we present the mathematical problem derived for the day-to-day operation of a hypothetical HPC centre that charges for the use of software packages. In the mathematical model, we assume that the uncertainty comes from the demand, the number of users using the packages, as well as the price. Further, we assume the availability of up-to-date software versions may also affect the demand. We develop a renewal strategy that aims to maximize the expected profit from the use of the software under consideration. The derived problem involves a decision tree, which constitutes a numerical procedure that can be processed in parallel.

  4. Review of methods for the integration of reliability and design engineering

    International Nuclear Information System (INIS)

    Reilly, J.T.

    1978-03-01

    A review of methods for the integration of reliability and design engineering was carried out to establish a reliability program philosophy, an initial set of methods, and procedures to be used by both the designer and reliability analyst. The report outlines a set of procedures which implements a philosophy that requires increased involvement by the designer in reliability analysis. Discussions of each method reviewed include examples of its application

  5. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Because failure-occurrence states are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to drawbacks of the Markovian model for steady-state reliability computations and of the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implication purposes, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implication is shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
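    To ground the Markovian side of such an approach, here is a minimal continuous-time Markov sketch in Python (a single repairable two-state unit with hypothetical failure and repair rates; it is not the paper's AGV model, which couples the Markov states to a neural network): the transient state probabilities follow p(t) = p(0) exp(Qt), and the probability of the operational state gives the point availability.

```python
import numpy as np
from scipy.linalg import expm

# Toy continuous-time Markov model of a single repairable unit:
# state 0 = operational, state 1 = failed. Rates are hypothetical.
FAILURE_RATE = 1e-3   # per hour
REPAIR_RATE = 1e-1    # per hour

# Generator (transition-rate) matrix Q; rows sum to zero.
Q = np.array([
    [-FAILURE_RATE,  FAILURE_RATE],
    [ REPAIR_RATE,  -REPAIR_RATE],
])

def availability(t_hours: float) -> float:
    """Probability of being operational at time t, starting from the operational state."""
    p0 = np.array([1.0, 0.0])            # initial distribution
    pt = p0 @ expm(Q * t_hours)          # transient solution p(t) = p(0) exp(Qt)
    return pt[0]

if __name__ == "__main__":
    for t in (10, 100, 1000):
        print(f"availability at t={t:5d} h: {availability(t):.6f}")
```

    The steady-state limit of this toy model is REPAIR_RATE / (FAILURE_RATE + REPAIR_RATE), approximately 0.990.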

  6. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

  7. Fire performance of basalt FRP mesh reinforced HPC thin plates

    DEFF Research Database (Denmark)

    Hulin, Thomas; Hodicky, Kamil; Schmidt, Jacob Wittrup

    2013-01-01

    An experimental program was carried out to investigate the influence of basalt FRP (BFRP) reinforcing mesh on the fire behaviour of thin high performance concrete (HPC) plates applied to sandwich elements. Samples with BFRP mesh were compared to samples with no mesh, samples with steel mesh...

  8. An integrated reliability-based design optimization of offshore towers

    International Nuclear Information System (INIS)

    Karadeniz, Halil; Togan, Vedat; Vrouwenvelder, Ton

    2009-01-01

    After recognizing the uncertainty in the parameters such as material, loading, geometry and so on in contrast with the conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful to perform an economical design implementation, which includes a reliability analysis and an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis both for optimization and for reliability. The efficiency of the RBDO system depends on the mentioned numerical algorithms. In this work, an integrated algorithms system is proposed to implement the RBDO of the offshore towers, which are subjected to the extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of towers are as follows: (a) a structural analysis program, SAPOS, (b) an optimization program, SQP and (c) a reliability analysis program based on FORM. A demonstration of an example tripod tower under the reliability constraints based on limit states of the critical stress, buckling and the natural frequency is presented.
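    As a pocket illustration of the reliability-analysis ingredient of such an RBDO loop (only the FORM step, for the special case of a linear limit state g = R - S with independent normal variables and made-up numbers; it does not involve SAPOS, SQP or the tripod tower model), the following Python snippet computes the Hasofer-Lind reliability index and the corresponding failure probability.

```python
from math import sqrt
from scipy.stats import norm

def form_linear_normal(mu_r, sigma_r, mu_s, sigma_s):
    """FORM result for the linear limit state g = R - S with independent normal R, S.

    For this special case the Hasofer-Lind reliability index has a closed form;
    a general FORM implementation would iterate to find the design point.
    """
    beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
    pf = norm.cdf(-beta)
    return beta, pf

if __name__ == "__main__":
    # Hypothetical resistance R and extreme wave load effect S (same units).
    beta, pf = form_linear_normal(mu_r=350.0, sigma_r=35.0, mu_s=200.0, sigma_s=40.0)
    print(f"reliability index beta = {beta:.3f}, failure probability = {pf:.2e}")
    # Inside an RBDO loop, such a result typically enters a constraint of the
    # form beta >= beta_target while the optimizer adjusts the design variables.
```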

  9. An integrated reliability-based design optimization of offshore towers

    Energy Technology Data Exchange (ETDEWEB)

    Karadeniz, Halil [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)], E-mail: h.karadeniz@tudelft.nl; Togan, Vedat [Department of Civil Engineering, Karadeniz Technical University, Trabzon (Turkey); Vrouwenvelder, Ton [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)

    2009-10-15

    After recognizing the uncertainty in the parameters such as material, loading, geometry and so on in contrast with the conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful to perform an economical design implementation, which includes a reliability analysis and an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis both for optimization and for reliability. The efficiency of the RBDO system depends on the mentioned numerical algorithms. In this work, an integrated algorithms system is proposed to implement the RBDO of the offshore towers, which are subjected to the extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of towers are as follows: (a) a structural analysis program, SAPOS, (b) an optimization program, SQP and (c) a reliability analysis program based on FORM. A demonstration of an example tripod tower under the reliability constraints based on limit states of the critical stress, buckling and the natural frequency is presented.

  10. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to the context. In many contemporary human reliability analysis (HRA) methods, this is not sufficiently taken into account. The aim of this article is to argue that integration between probabilistic and psychological approaches in human reliability should be attempted. This is achieved first by adopting methods that adequately reflect the essential features of the process control activity, and secondly by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first with the help of a common set of conceptual tools. The resulting descriptions of the context promote the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated in the context descriptions used as reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints of activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model gives a tool by which psychological methodology may be interpreted and utilized for reliability analysis.

  11. Using HPC within an operational forecasting configuration

    Science.gov (United States)

    Jagers, H. R. A.; Genseberger, M.; van den Broek, M. A. F. H.

    2012-04-01

    Various natural disasters are caused by high-intensity events: for example, extreme rainfall can in a short time cause major damage in river catchments, and storms can cause havoc in coastal areas. To assist emergency response teams in operational decisions, it is important to have reliable information and predictions as soon as possible. This starts before the event, by providing early warnings about imminent risks and estimated probabilities of possible scenarios. In the context of various applications worldwide, Deltares has developed an open and highly configurable forecasting and early warning system: Delft-FEWS. Finding the right balance between simulation time (and hence prediction lead time) and simulation accuracy and detail is challenging. Model resolution may be crucial to capture certain critical physical processes. Uncertainty in forcing conditions may require running large ensembles of models; data assimilation techniques may require additional ensembles and repeated simulations. The computational demand is steadily increasing and data streams are becoming bigger. Using HPC resources is a logical step; in different settings Delft-FEWS has been configured to take advantage of distributed computational resources to improve and accelerate the forecasting process (e.g. Montanari et al., 2006). We will illustrate the system by means of a couple of practical applications, including the real-time dynamic forecasting of wind-driven waves, flow of water, and wave overtopping at dikes of Lake IJssel and neighboring lakes in the center of The Netherlands. Montanari et al., 2006. Development of an ensemble flood forecasting system for the Po river basin, First MAP D-PHASE Scientific Meeting, 6-8 November 2006, Vienna, Austria.

  12. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    Science.gov (United States)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties of, and processes in, complex systems at the molecular and even atomic level, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Given how time-consuming these computational tasks are, software is needed for the automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems, which requires synchronizing output data between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also challenging, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  13. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    Directory of Open Access Journals (Sweden)

    Puzyrkov Dmitry

    2018-01-01

    Full Text Available At the present stage of computer technology development it is possible to study the properties of, and processes in, complex systems at the molecular and even atomic level, for example by means of molecular dynamics methods. The most interesting problems are those related to the study of complex processes under real physical conditions. Solving such problems requires the use of high performance computing systems of various types, for example GRID systems and HPC clusters. Given how time-consuming these computational tasks are, software is needed for the automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems, which requires synchronizing output data between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also challenging, requiring complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  14. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    CERN Document Server

    Kennedy, John; The ATLAS collaboration; Mazzaferro, Luca; Walker, Rodney

    2015-01-01

    The usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive given the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to more generic Linux-type platforms. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high-luminosity data taking in 2015. This high-luminosity phase will be accompanied by a need for increasing amounts of simulated data, which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS, the MPP and RZG to provide access to...

  15. ISC High Performance 2017 International Workshops, DRBSD, ExaComm, HCPM, HPC-IODC, IWOPH, IXPUG, P^3MA, VHPC, Visualization at Scale, WOPSSS

    CERN Document Server

    Yokota, Rio; Taufer, Michela; Shalf, John

    2017-01-01

    This book constitutes revised selected papers from 10 workshops that were held at the ISC High Performance 2017 conference in Frankfurt, Germany, in June 2017. The 59 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They stem from the following workshops: Workshop on Virtualization in High-Performance Cloud Computing (VHPC); Visualization at Scale: Deployment Case Studies and Experience Reports; International Workshop on Performance Portable Programming Models for Accelerators (P^3MA); OpenPOWER for HPC (IWOPH); International Workshop on Data Reduction for Big Scientific Data (DRBSD); International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale; Workshop on HPC Computing in a Post Moore's Law World (HCPM); HPC I/O in the Data Center (HPC-IODC); Workshop on Performance and Scalability of Storage Systems (WOPSSS); IXPUG: Experiences on Intel Knights Landing at the One Year Mark; International Workshop on Communicati...

  16. Simplifying the Development, Use and Sustainability of HPC Software

    Directory of Open Access Journals (Sweden)

    Jeremy Cohen

    2014-07-01

    Full Text Available Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of specialist domain knowledge and the software development skills needed to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for high-performance computing (HPC), scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013, we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support the sustainability of scientific software and help to widen access to it.

  17. Novel HPC-ibuprofen conjugates: synthesis, characterization, thermal analysis and degradation kinetics

    International Nuclear Information System (INIS)

    Hussain, M.A.; Lodhi, B.A.; Abbas, K.

    2014-01-01

    Naturally occurring hydrophilic polysaccharides are advantageously used as drug carriers because they provide a mechanism to improve drug action. Hydroxypropylcellulose (HPC) is water-soluble and biocompatible and bears hydroxyl groups for drug conjugation outside the parent polymeric chains. This unique geometry allows the attachment of drug molecules with higher covalent loading. HPC-ibuprofen conjugates were therefore synthesized as macromolecular prodrugs in a homogeneous, one-pot reaction using p-toluenesulfonyl chloride in N,N-dimethylacetamide at 80 °C for 24 h under a nitrogen atmosphere. Imidazole was used as a base to neutralize acidic impurities. The present strategy proved effective, giving high yields (77-81%) and a high degree of drug substitution (DS 0.88-1.40) onto the HPC polymer, as determined by acid-base titration and verified by 1H-NMR spectroscopy. Gel permeation chromatography showed a unimodal peak, which indicates no significant degradation of the polymer during the reaction. Macromolecular prodrugs with different DS of ibuprofen were synthesized, purified, characterized and found to be soluble in organic solvents. From thermogravimetric analysis, the initial, maximum and final degradation temperatures of the conjugates were calculated and compared for relative thermal stability. Thermal degradation kinetics was also studied; the results indicate that degradation of the conjugates follows approximately first-order kinetics as calculated by the Kissinger model. The activation energy was moderate (92.38, 99.34 and 87.34 kJ/mol) as calculated using the Friedman, Broido and Chang models. These novel ibuprofen prodrugs were found to be thermally stable and may therefore have potential pharmaceutical applications. (author)
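
    The abstract names the Kissinger model but gives no equations; for reference, the commonly used form of the Kissinger relation (supplied here as background, not quoted from the paper) estimates the activation energy E_a from the peak degradation temperature T_p observed at several heating rates β:

    ```latex
    \ln\!\left(\frac{\beta}{T_p^{2}}\right) = \ln\!\left(\frac{A R}{E_a}\right) - \frac{E_a}{R\,T_p}
    ```

    Plotting \ln(\beta/T_p^{2}) against 1/T_p then gives a straight line whose slope is -E_a/R, with A the pre-exponential factor and R the gas constant.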

  18. Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability.

    Science.gov (United States)

    De Martino, Benedetto; Bobadilla-Suarez, Sebastian; Nouguchi, Takao; Sharot, Tali; Love, Bradley C

    2017-06-21

    How much we like something, whether it be a bottle of wine or a new film, is affected by the opinions of others. However, the social information that we receive can be contradictory and vary in its reliability. Here, we tested whether the brain incorporates these statistics when judging value and confidence. Participants provided value judgments about consumer goods in the presence of online reviews. We found that participants updated their initial value and confidence judgments in a Bayesian fashion, taking into account both the uncertainty of their initial beliefs and the reliability of the social information. Activity in dorsomedial prefrontal cortex tracked the degree of belief update. Analogous to how lower-level perceptual information is integrated, we found that the human brain integrates social information according to its reliability when judging value and confidence. SIGNIFICANCE STATEMENT The field of perceptual decision making has shown that the sensory system integrates different sources of information according to their respective reliability, as predicted by a Bayesian inference scheme. In this work, we hypothesized that a similar coding scheme is implemented by the human brain to process social signals and guide complex, value-based decisions. We provide experimental evidence that the human prefrontal cortex's activity is consistent with a Bayesian computation that integrates social information that differs in reliability and that this integration affects the neural representation of value and confidence. Copyright © 2017 De Martino et al.
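
    A compact way to write the Bayesian updating the abstract describes, assuming Gaussian beliefs (the notation is ours, not the authors'): if the initial value estimate is \mu_0 with variance \sigma_0^{2} and the social information suggests \mu_s with reliability-related variance \sigma_s^{2}, then

    ```latex
    \mu_{\mathrm{post}} = \frac{\sigma_s^{2}\,\mu_0 + \sigma_0^{2}\,\mu_s}{\sigma_0^{2} + \sigma_s^{2}},
    \qquad
    \frac{1}{\sigma_{\mathrm{post}}^{2}} = \frac{1}{\sigma_0^{2}} + \frac{1}{\sigma_s^{2}}
    ```

    The update is pulled more strongly toward the social information when that information is reliable (small \sigma_s^{2}) or when the initial belief is uncertain (large \sigma_0^{2}), and the posterior precision, a natural proxy for confidence, always increases.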

  19. A harmonic polynomial cell (HPC) method for 3D Laplace equation with application in marine hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.

    2014-10-01

    We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by a linear superposition of a complete set of harmonic polynomials, which are elementary solutions of the Laplace equation. Accordingly, the method is named the Harmonic Polynomial Cell (HPC) method. The accuracy and efficiency of the HPC method are demonstrated on analytical cases. Comparisons are made with other existing boundary-element-based methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM), and a fourth-order Finite Difference Method (FDM). To demonstrate its applications, the method is applied to studies relevant to marine hydrodynamics: sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves. The comparisons with experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
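
    A sketch of the formulation behind the method (standard notation, not copied from the paper): the velocity potential \phi satisfies the 3D Laplace equation and is represented within each overlapping cell as a truncated superposition of harmonic polynomials,

    ```latex
    \nabla^{2}\phi = 0, \qquad
    \phi(\mathbf{x}) \approx \sum_{j=1}^{N} c_j\, P_j(\mathbf{x}) \quad \text{with} \quad \nabla^{2} P_j = 0
    ```

    Because each P_j is itself a solution of the Laplace equation, the governing equation is satisfied exactly inside every cell; the coefficients c_j are fixed by matching \phi at the cell's nodes, so only the boundary and inter-cell matching conditions are approximated.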

  20. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  1. An Integrated Approach to Establish Validity and Reliability of Reading Tests

    Science.gov (United States)

    Razi, Salim

    2012-01-01

    This study presents the process of developing a reading test and establishing its reliability and validity through an integrative approach, since conventional reliability and validity measures reveal the difficulty of a reading test only superficially. In this respect, analysing the vocabulary frequency of the test is regarded as a more eligible way…

  2. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    Science.gov (United States)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With the evolution of processor architecture, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in the modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, near-future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed (and therefore requiring appropriate cooling technology), tightly interconnected by a low-latency, high-performance network, and equipped with a distributed storage architecture. Each of these features (dense packing, distributed storage and a high-performance interconnect) represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  3. High-pressure coolant effect on the surface integrity of machining titanium alloy Ti-6Al-4V: a review

    Science.gov (United States)

    Liu, Wentao; Liu, Zhanqiang

    2018-03-01

    Improving the machinability of the titanium alloy Ti-6Al-4V is challenging in academic and industrial applications owing to its low thermal conductivity, low elastic modulus and high chemical affinity at high temperatures. The surface integrity of machined Ti-6Al-4V is a key indicator of component quality. The surface topography (surface defects and surface roughness) and the residual stress induced by machining Ti-6Al-4V play pivotal roles in the sustainability of Ti-6Al-4V components. High-pressure coolant (HPC) is a promising option for meeting the requirements of manufacturing and applying Ti-6Al-4V. This paper reviews progress towards improving the surface integrity of Ti-6Al-4V under HPC. Various studies of surface integrity characteristics are reviewed; in particular, surface roughness, surface defects, residual stress and work hardening are examined in order to evaluate machined surface quality. Several coolant parameters (including coolant type, coolant pressure and injection position) deserve investigation to provide guidance for achieving a satisfactory machined surface. The review also provides a clear roadmap for applications of HPC in machining Ti-6Al-4V. Experimental studies and analyses are reviewed to better understand surface integrity under HPC machining. A distinct discussion is presented regarding the limitations of, and prospects for, machining Ti-6Al-4V under HPC.

  4. Users and Programmers Guide for HPC Platforms in CIEMAT

    International Nuclear Information System (INIS)

    Munoz Roldan, A.

    2003-01-01

    This Technical Report describes the High Performance Computing platforms available to researchers at CIEMAT and dedicated mainly to scientific computing. It targets users and programmers and aims to help in the processes of developing new code and porting code across platforms. A brief review is also presented of the historical evolution of HPC, i.e., the programming paradigms and underlying architectures. (Author) 32 refs

  5. Behavior of HPC with Fly Ash after Elevated Temperature

    OpenAIRE

    Shang, Huai-Shuai; Yi, Ting-Hua

    2013-01-01

    For use in fire resistance calculations, the relevant thermal properties of high-performance concrete (HPC) with fly ash were determined through an experimental study. These properties included compressive strength, cubic compressive strength, cleavage strength, flexural strength, and the ultrasonic velocity at various temperatures (20, 100, 200, 300, 400 and 500 °C) for high-performance concrete. The effect of temperature on compressive strength, cubic compressive strength, cleavage strength,...

  6. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual loading conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but evaluating the required multiple integrals remains mathematically difficult. Therefore, a dual neural network method is proposed in this paper for calculating the multiple integrals. The dual neural network consists of two neural networks: network A is used to learn the integrand, and network B is used to represent the original (primitive) function. Using the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of the reliability calculation. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
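
    For context, the multiple integral that the direct integration method must evaluate is the standard failure-probability integral (textbook form, not reproduced from the paper):

    ```latex
    P_f = \int_{g(\mathbf{x}) \le 0} f_{\mathbf{X}}(\mathbf{x})\, \mathrm{d}\mathbf{x}
    ```

    Here g(\mathbf{x}) is the performance (limit-state) function and f_{\mathbf{X}} the joint density of the random variables; in the dual-network scheme described above, one network approximates the integrand and the other its primitive, so the integral can be evaluated from the second network rather than by direct numerical quadrature.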

  7. Accelerating Memory-Access-Limited HPC Applications via Novel Fast Data Compression, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — A fast-paced, continual increase in the ratio of CPU to memory speed feeds an exponentially growing limitation on extracting performance from HPC systems. Breaking...

  8. Accelerating Memory-Access-Limited HPC Applications via Novel Fast Data Compression, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A fast-paced, continual increase in the ratio of CPU to memory speed feeds an exponentially growing limitation on extracting performance from HPC systems. Ongoing...

  9. Engineering systems reliability, safety, and maintenance an integrated approach

    CERN Document Server

    Dhillon, B S

    2017-01-01

    Today, engineering systems are an important element of the world economy and each year billions of dollars are spent to develop, manufacture, operate, and maintain various types of engineering systems around the globe. Many of these systems are highly sophisticated and contain millions of parts. For example, a Boeing jumbo 747 is made up of approximately 4.5 million parts including fasteners. Needless to say, reliability, safety, and maintenance of systems such as this have become more important than ever before.  Global competition and other factors are forcing manufacturers to produce highly reliable, safe, and maintainable engineering products. Therefore, there is a definite need for the reliability, safety, and maintenance professionals to work closely during design and other phases. Engineering Systems Reliability, Safety, and Maintenance: An Integrated Approach eliminates the need to consult many different and diverse sources in the hunt for the information required to design better engineering syste...

  10. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, and satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  11. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that will help enhance the overall productivity of applications science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM) to enhance the portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end-user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate the provisioning and aggregation of multifaceted resources from the resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.

  12. OCCAM: a flexible, multi-purpose and extendable HPC cluster

    Science.gov (United States)

    Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.

    2017-10-01

    The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affects the methods and means used to allocate, manage, optimize, bill, and monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.

  13. Self-service for software development projects and HPC activities

    International Nuclear Information System (INIS)

    Husejko, M; Høimyr, N; Gonzalez, A; Koloventzos, G; Asbury, D; Trzcinska, A; Agtzidis, I; Botrel, G; Otto, J

    2014-01-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions by both users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our services by means of an end-user-facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source development such as Sourceforge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  14. Lightweight HPC beam OMEGA

    Science.gov (United States)

    Sýkora, Michal; Jedlinský, Petr; Komanec, Jan

    2017-09-01

    In the design and construction of precast bridge structures, a general goal is to achieve the maximum possible span length. Often, the weight of individual beams makes them difficult to handle, which may be a limiting factor in achieving the desired span. The design of the OMEGA beam aims to solve part of these problems. It is a thin-walled shell made of prestressed high-performance concrete (HPC) in the shape of an inverted Ω. The concrete shell with prestressed strands is fitted with a non-stressed tendon already in the casting yard and is therefore more easily transported and installed on site. The shells are subsequently completed with mild steel reinforcement, and the cores are cast in situ together with the deck. OMEGA beams can also be used as an alternative to steel-concrete composite bridges. Due to its higher production complexity, the OMEGA beam can hardly replace conventional prestressed beams such as T or PETRA beams completely, but it can be a useful alternative for specific construction needs.

  15. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures to provide efficient Web based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher order data products, as well as user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allows us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high performance disk storage (SSD) for the hot areas and less expensive slower disk for the cold ones, thereby optimizing price to performance. From a compute perspective, OT is looking at cloud based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute intensive workloads like parallel computation of hydrologic routing on high resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user

  16. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-03-18

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  17. HPC Cloud Applied To Lattice Optimization

    International Nuclear Information System (INIS)

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-01-01

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  18. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale, high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  19. HPC4Energy Final Report : GE Energy

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Steven G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Van Zandt, Devin T. [GE Energy Consulting, Schenectady, NY (United States); Thomas, Brian [GE Energy Consulting, Schenectady, NY (United States); Mahmood, Sajjad [GE Energy Consulting, Schenectady, NY (United States); Woodward, Carol S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-02-25

    Power System planning tools are being used today to simulate systems that are far larger and more complex than just a few years ago. Advances in renewable technologies and more pervasive control technology are driving planning engineers to analyze an increasing number of scenarios and system models with much more detailed network representations. Although the speed of individual CPUs has increased roughly according to Moore’s Law, the requirements for advanced models, increased system sizes, and larger sensitivities have outstripped CPU performance. This computational dilemma has reached a critical point and the industry needs to develop the technology to accurately model the power system of the future. The hpc4energy incubator program provided a unique opportunity to leverage the HPC resources available to LLNL and the power systems domain expertise of GE Energy to enhance the GE Concorda PSLF software. Well over 500 users worldwide, including all of the major California electric utilities, rely on Concorda PSLF software for their power flow and dynamics. This pilot project demonstrated that the GE Concorda PSLF software can perform contingency analysis in a massively parallel environment to significantly reduce the time to results. An analysis with 4,127 contingencies that would take 24 days on a single core was reduced to 24 minutes when run on 4,217 cores. A secondary goal of this project was to develop and test modeling techniques that will expand the computational capability of PSLF to efficiently deal with system sizes greater than 150,000 buses. Toward this goal, the matrix reordering implementation was sped up 9.5 times by optimizing the code and introducing threading.
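
    Taking the quoted figures at face value (24 days on one core versus 24 minutes on 4,217 cores), the implied scaling is roughly

    ```latex
    S \approx \frac{24 \times 24 \times 60\ \text{min}}{24\ \text{min}} = 1440, \qquad
    E \approx \frac{S}{4217} \approx 0.34
    ```

    i.e. about a 1,440-fold speedup at roughly 34% parallel efficiency. This is only a back-of-the-envelope estimate, since the 24-day single-core figure is itself an extrapolation and the individual contingency cases need not have equal runtimes.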

  20. Neural substrates of reliability-weighted visual-tactile multisensory integration

    Directory of Open Access Journals (Sweden)

    Michael S Beauchamp

    2010-06-01

    Full Text Available As sensory systems deteriorate in aging or disease, the brain must relearn the appropriate weights to assign each modality during multisensory integration. Using blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) of human subjects, we tested a model for the neural mechanisms of sensory weighting, termed “weighted connections”. This model holds that the connection weights between early and late areas vary depending on the reliability of the modality, independent of the level of early sensory cortex activity. When subjects detected viewed and felt touches to the hand, a network of brain areas was active, including visual areas in lateral occipital cortex, somatosensory areas in inferior parietal lobe, and multisensory areas in the intraparietal sulcus (IPS). In agreement with the weighted connection model, the connection weight measured with structural equation modeling between somatosensory cortex and IPS increased for somatosensory-reliable stimuli, and the connection weight between visual cortex and IPS increased for visual-reliable stimuli. This double dissociation of connection strengths was similar to the pattern of behavioral responses during incongruent multisensory stimulation, suggesting that weighted connections may be a neural mechanism for behavioral reliability weighting.

  1. ADVANCED COMPRESSOR ENGINE CONTROLS TO ENHANCE OPERATION, RELIABILITY AND INTEGRITY

    Energy Technology Data Exchange (ETDEWEB)

    Gary D. Bourn; Jess W. Gingrich; Jack A. Smith

    2004-03-01

    This document is the final report for the "Advanced Compressor Engine Controls to Enhance Operation, Reliability, and Integrity" project. SwRI conducted this project for DOE in conjunction with Cooper Compression, under DOE contract number DE-FC26-03NT41859. This report addresses an investigation of engine controls for integral compressor engines and the development of control strategies that implement closed-loop NOx emissions feedback.

  2. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    Science.gov (United States)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  3. I/O load balancing for big data HPC applications

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Arnab K. [Virginia Polytechnic Institute and State University; Goyal, Arpit [Virginia Polytechnic Institute and State University; Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Butt, Ali R. [Virginia Tech, Blacksburg, VA; Brim, Michael J. [ORNL; Srinivasa, Sangeetha B. [Virginia Polytechnic Institute and State University

    2018-01-01

    High Performance Computing (HPC) big data problems require efficient distributed storage systems. However, at scale, such storage systems often experience load imbalance and resource contention due to two factors: the bursty nature of scientific application I/O, and the complex I/O path that lacks centralized arbitration and control. For example, the extant Lustre parallel file system, which supports many HPC centers, comprises numerous components connected via custom network topologies and serves the varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, which creates bottlenecks and reduces overall application I/O performance. Existing solutions typically focus on per-application load balancing, and thus are not as effective given their lack of a global view of the system. In this paper, we propose a data-driven approach to load balance the I/O servers at scale, targeted at Lustre deployments. To this end, we design a global mapper on the Lustre Metadata Server, which gathers runtime statistics from key storage components on the I/O path, and applies Markov chain modeling and a minimum-cost maximum-flow algorithm to decide where data should be placed. Evaluation using a realistic system simulator and a real setup shows that our approach yields better load balancing, which in turn can improve end-to-end performance.
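
    The paper's placement decision relies on Markov chain modeling plus a minimum-cost maximum-flow algorithm; the sketch below deliberately substitutes a much simpler greedy policy (stripe new files over the currently least-loaded storage targets) purely to illustrate what turning gathered load statistics into placement decisions can look like. The struct names and the byte-based load metric are illustrative assumptions, not the paper's design.

    ```cpp
    #include <iostream>
    #include <queue>
    #include <vector>

    // One entry per object storage target (OST): its index and an estimate of
    // its current load, e.g. outstanding I/O bytes reported by monitoring.
    struct OstLoad {
        int ost;
        double load;
    };

    // Comparator turning std::priority_queue into a min-heap on load.
    struct HeavierLoad {
        bool operator()(const OstLoad& a, const OstLoad& b) const {
            return a.load > b.load;
        }
    };

    // Greedy placement: put each of the file's stripes on the OST that is
    // currently least loaded, then account for the load just added.
    std::vector<int> place_stripes(const std::vector<OstLoad>& loads,
                                   int stripes, double stripe_bytes) {
        std::priority_queue<OstLoad, std::vector<OstLoad>, HeavierLoad> heap(
            loads.begin(), loads.end());
        std::vector<int> chosen;
        for (int s = 0; s < stripes; ++s) {
            OstLoad best = heap.top();
            heap.pop();
            chosen.push_back(best.ost);
            best.load += stripe_bytes;  // update the estimate and reinsert
            heap.push(best);
        }
        return chosen;
    }

    int main() {
        std::vector<OstLoad> loads = {{0, 4e9}, {1, 1e9}, {2, 9e9}, {3, 2e9}};
        for (int ost : place_stripes(loads, 2, 1e9)) {
            std::cout << "stripe -> OST " << ost << '\n';
        }
        return 0;
    }
    ```

    A global mapper in the spirit of the paper would replace the greedy loop with an optimization over the whole set of pending placements, but the input (per-component load statistics) and the output (a placement decision per stripe) have the same shape.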

  4. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

  5. Degradation of 2,4,6-Trinitrophenol (TNP) by Arthrobacter sp. HPC1223 Isolated from Effluent Treatment Plant

    OpenAIRE

    Qureshi, Asifa; Kapley, Atya; Purohit, Hemant J.

    2012-01-01

    Arthrobacter sp. HPC1223 (GenBank Accession No. AY948280), isolated from the activated biomass of an effluent treatment plant, was capable of utilizing 2,4,6-trinitrophenol (TNP) as a nitrogen source under aerobic conditions at 30 °C and pH 7. The isolate utilized up to 70 % of TNP (1 mM) in R2A medium with release of nitrite. The culture medium turned orange-red at 24 h owing to formation of a hydride-Meisenheimer complex, as detected by HPLC. Oxygen uptake of Arthrobacter HPC1223 towa...

  6. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    International Nuclear Information System (INIS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Quast, Günter; Janczyk, Michael; Von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-01-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  7. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  8. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    Science.gov (United States)

    Noppeney, Uta

    2018-01-01

    Behaviorally, it is well established that human observers integrate signals near-optimally, weighted in proportion to their reliabilities, as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
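
    The maximum likelihood estimation prediction referred to above is conventionally written as follows (standard cue-combination formulas, supplied for reference rather than quoted from the paper): with auditory and visual location estimates \hat{S}_A, \hat{S}_V and variances \sigma_A^{2}, \sigma_V^{2},

    ```latex
    \hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V, \qquad
    w_A = \frac{1/\sigma_A^{2}}{1/\sigma_A^{2} + 1/\sigma_V^{2}}, \quad w_V = 1 - w_A, \qquad
    \sigma_{AV}^{2} = \frac{\sigma_A^{2}\,\sigma_V^{2}}{\sigma_A^{2} + \sigma_V^{2}}
    ```

    The study's point is that the measured neural and behavioral weights tracked these reliability-based predictions but were additionally shifted by modality-specific attention and report.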

  9. Economic Model For a Return on Investment Analysis of United States Government High Performance Computing (HPC) Research and Development (R & D) Investment

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, Earl C. [IDC Research Inc., Framingham, MA (United States); Conway, Steve [IDC Research Inc., Framingham, MA (United States); Dekate, Chirag [IDC Research Inc., Framingham, MA (United States)

    2013-09-30

    This study investigated how high-performance computing (HPC) investments can improve economic success and increase scientific innovation. This research focused on the common good and provided uses for DOE, other government agencies, industry, and academia. The study created two unique economic models and an innovation index: (1) a macroeconomic model that depicts the way HPC investments result in economic advancements in the form of ROI in revenue (GDP), profits (and cost savings), and jobs; (2) a macroeconomic model that depicts the way HPC investments result in basic and applied innovations, looking at variations by sector, industry, country, and organization size; and (3) a new innovation index that provides a means of measuring and comparing innovation levels. Key findings of the pilot study include: IDC collected the required data across a broad set of organizations, with enough detail to create these models and the innovation index; the research also developed an expansive list of HPC success stories.

  10. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA, at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  11. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to the physicists and computer scientists developing the simulation codes and runtimes, respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  12. Integrating generation and transmission networks reliability for unit commitment solution

    International Nuclear Information System (INIS)

    Jalilzadeh, S.; Shayeghi, H.; Hadadian, H.

    2009-01-01

    This paper presents a new method that integrates generation and transmission network reliability into the solution of the unit commitment (UC) problem. In order to obtain a more accurate assessment of the system reserve requirement, the unavailability of transmission lines is taken into account in addition to the unavailability of generation units. In this way, evaluation of the required spinning reserve (SR) capacity is performed by applying reliability constraints based on the loss of load probability (LOLP) and expected energy not supplied (EENS) indices. Calculation of these indices is accomplished by a novel procedure based on linear programming, which also minimizes them to achieve an optimum level of SR capacity and, consequently, a cost-benefit reliability-constrained UC schedule. In addition, a powerful solution technique called the 'integer-coded genetic algorithm (ICGA)' is used to solve the proposed formulation. Numerical results on the IEEE reliability test system show that consideration of transmission network unavailability has an important influence on the reliability indices of the UC schedules.
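    For readers unfamiliar with the reliability indices used above, the following minimal sketch (not the paper's linear-programming procedure) shows how a capacity outage probability table over assumed generating units yields the loss of load probability (LOLP) and expected energy not supplied (EENS) for a single load level. All unit parameters and the load value are illustrative.

```python
from itertools import product

# Illustrative generating units: (capacity in MW, forced outage rate).
# These numbers are assumptions for the sketch, not data from the paper.
units = [(200, 0.05), (150, 0.04), (100, 0.02)]
load = 320.0  # MW, hourly demand (assumed)

def outage_table(units):
    """Enumerate all unit up/down states with their available capacity and probability."""
    table = []  # list of (available_capacity, probability)
    for states in product([0, 1], repeat=len(units)):  # 1 = unit available
        cap = sum(c for (c, _), s in zip(units, states) if s)
        prob = 1.0
        for (_, q), s in zip(units, states):
            prob *= (1.0 - q) if s else q
        table.append((cap, prob))
    return table

def lolp_eens(units, load):
    """LOLP = P(available capacity < load); EENS = sum of probability * shortfall (MWh for 1 h)."""
    lolp, eens = 0.0, 0.0
    for cap, prob in outage_table(units):
        shortfall = max(0.0, load - cap)
        if shortfall > 0:
            lolp += prob
            eens += prob * shortfall
    return lolp, eens

if __name__ == "__main__":
    lolp, eens = lolp_eens(units, load)
    print(f"LOLP = {lolp:.4f}, EENS = {eens:.2f} MWh")
```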

  13. Plant Reliability - an Integrated System for Management (PR-ISM)

    International Nuclear Information System (INIS)

    Aukeman, M.C.; Leininger, E.G.; Carr, P.

    1984-01-01

    The Toledo Edison Company, located in Toledo, Ohio, United States of America, recently implemented a comprehensive maintenance management information system for the Davis-Besse Nuclear Power Station. The system is called PR-ISM, meaning Plant Reliability - An Integrated System for Management. PR-ISM provides the tools needed by station management to effectively plan and control maintenance and other plant activities. The PR-ISM system as it exists today consists of four integrated computer applications: equipment data base maintenance, maintenance work order control, administrative activity tracking, and technical specification compliance. PR-ISM is designed as an integrated on-line system and incorporates strong human factors features. PR-ISM provides each responsible person information to do his job on a daily basis and to look ahead towards future events. It goes beyond 'after the fact' reporting. In this respect, PR-ISM is an 'interactive' control system which: captures work requirements and commitments as they are identified, provides accurate and up-to-date status immediately to those who need it, simplifies paperwork and reduces the associated time delays, provides the information base for work management and reliability analysis, and improves productivity by replacing clerical tasks and consolidating maintenance activities. The functional and technical features of PR-ISM, the experience of Toledo Edison during the first year of operation, and the factors which led to the success of the development project are highlighted. (author)

  14. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    Science.gov (United States)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they carry, such as an identity, a position and other properties. There are, generally speaking, two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx that has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template metaprogramming, which allows code for user-defined heterogeneous data structures to be generated automatically.
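    SoAx itself is a C++ library; purely to illustrate the memory-layout distinction described above, the following Python/NumPy sketch contrasts an array-of-structures layout (a structured array with interleaved fields) against a structure-of-arrays layout (one contiguous array per field). The particle fields, counts, and update kernel are arbitrary choices for the sketch.

```python
import time
import numpy as np

n = 2_000_000  # number of particles (arbitrary)
fields = [("id", np.int64), ("x", np.float64), ("y", np.float64), ("z", np.float64)]

# Array of Structures (AoS): one record per particle, fields interleaved in memory.
aos = np.zeros(n, dtype=fields)

# Structure of Arrays (SoA): one contiguous array per field.
soa = {name: np.zeros(n, dtype=dt) for name, dt in fields}

def bench(arr, reps=50):
    """Update x += dt * y; identical arithmetic, different memory layout."""
    t0 = time.perf_counter()
    for _ in range(reps):
        arr["x"] += 1e-3 * arr["y"]
    return time.perf_counter() - t0

print(f"AoS (strided field access): {bench(aos):.3f} s")
print(f"SoA (unit-stride arrays):   {bench(soa):.3f} s")
```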

  15. Advanced High and Low Fidelity HPC Simulations of FCS Concept Designs for Dynamic Systems

    National Research Council Canada - National Science Library

    Sandhu, S. S; Kanapady, R; Tamma, K. K

    2004-01-01

    ...) resources of many Army initiatives. In this paper we present a new and advanced HPC based rigid and flexible modeling and simulation technology capable of adaptive high/low fidelity modeling that is useful in the initial design concept...

  16. Innovative HPC architectures for the study of planetary plasma environments

    Science.gov (United States)

    Amaya, Jorge; Wolf, Anna; Lembège, Bertrand; Zitz, Anke; Alvarez, Damian; Lapenta, Giovanni

    2016-04-01

    DEEP-ER is a European Commission-funded project that develops a new type of High Performance Computer architecture. The revolutionary system is currently used by KU Leuven to study the effects of the solar wind on the global environments of the Earth and Mercury. The new architecture combines the versatility of Intel Xeon computing nodes with the power of the upcoming Intel Xeon Phi accelerators. In contrast to classical heterogeneous HPC architectures, where it is customary to find CPUs and accelerators in the same computing nodes, in the DEEP-ER system CPU nodes are grouped together (Cluster) independently from the accelerator nodes (Booster). The system is equipped with a state-of-the-art interconnection network, highly scalable and fast I/O, and a failure-recovery resiliency system. The final objective of the project is to introduce a scalable system that can be used to create the next generation of exascale supercomputers. The code iPic3D from KU Leuven is being adapted to this new architecture. This particle-in-cell code can now perform the computation of the electromagnetic fields in the Cluster while the particles are moved on the Booster side. Using fast and scalable Xeon Phi accelerators in the Booster, we can introduce many more particles per cell in the simulation than is possible in the current generation of HPC systems, allowing the calculation of fully kinetic plasmas with very low interpolation noise. The system will be used to perform fully kinetic, low noise, 3D simulations of the interaction of the solar wind with the magnetosphere of the Earth and Mercury. Preliminary simulations have been performed at other HPC centers in order to compare the results across different systems. In this presentation we show the complexity of the plasma flow around the planets, including the development of hydrodynamic instabilities at the flanks, the presence of the collision-less shock, the magnetosheath, the magnetopause, reconnection zones, the formation of the

  17. A Distributed Python HPC Framework: ODIN, PyTrilinos, & Seamless

    Energy Technology Data Exchange (ETDEWEB)

    Grant, Robert [Enthought, Inc., Austin, TX (United States)

    2015-11-23

    Under this grant, three significant software packages were developed or improved, all with the goal of improving the ease-of-use of HPC libraries. The first component is a Python package, named DistArray (originally named Odin), that provides a high-level interface to distributed array computing. This interface is based on the popular and widely used NumPy package and is integrated with the IPython project for enhanced interactive parallel distributed computing. The second Python package is the Distributed Array Protocol (DAP) that enables separate distributed array libraries to share arrays efficiently without copying or sending messages. If a distributed array library supports the DAP, it is then automatically able to communicate with any other library that also supports the protocol. This protocol allows DistArray to communicate with the Trilinos library via PyTrilinos, which was also enhanced during this project. A third package, PyTrilinos, was extended to support distributed structured arrays (in addition to the unstructured arrays of its original design), allow more flexible distributed arrays (i.e., the restriction to double precision data was lifted), and implement the DAP. DAP support includes both exporting the protocol so that external packages can use distributed Trilinos data structures, and importing the protocol so that PyTrilinos can work with distributed data from external packages.

  18. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    Science.gov (United States)

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  19. Integrated Reliability and Risk Analysis System (IRRAS)

    International Nuclear Information System (INIS)

    Russell, K.D.; McKay, M.K.; Sattison, M.B.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.

    1992-01-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 4.0 and is the subject of this Reference Manual. Version 4.0 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance
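    Fault-tree quantification of the kind mentioned above typically reduces to evaluating minimal cut sets. The sketch below is not IRRAS code; it only illustrates, with assumed basic-event probabilities and cut sets, the standard rare-event and min-cut-upper-bound approximations of a top-event probability.

```python
from math import prod

# Assumed basic-event failure probabilities and minimal cut sets (illustrative only).
basic_events = {"PUMP_A": 1e-3, "PUMP_B": 1e-3, "VALVE_C": 5e-4, "POWER": 1e-4}
min_cut_sets = [{"PUMP_A", "PUMP_B"}, {"VALVE_C"}, {"PUMP_A", "POWER"}]

def cut_set_prob(cut, p):
    """Probability of a single minimal cut set (independent basic events)."""
    return prod(p[e] for e in cut)

def top_event_probability(cuts, p):
    """Two standard approximations for the probability of the union of minimal cut sets."""
    probs = [cut_set_prob(c, p) for c in cuts]
    rare_event = sum(probs)                        # first-order (rare-event) approximation
    mcub = 1.0 - prod(1.0 - q for q in probs)      # min-cut upper bound
    return rare_event, mcub

rare, mcub = top_event_probability(min_cut_sets, basic_events)
print(f"Rare-event approximation: {rare:.3e}")
print(f"Min-cut upper bound:      {mcub:.3e}")
```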

  20. Human reliability analysis of performing tasks in plants based on fuzzy integral

    International Nuclear Information System (INIS)

    Washio, Takashi; Kitamura, Yutaka; Takahashi, Hideaki

    1991-01-01

    The effective improvement of the human working conditions in nuclear power plants might be a solution for the enhancement of the operation safety. The human reliability analysis (HRA) gives a methodological basis of the improvement based on the evaluation of human reliability under various working conditions. This study investigates some difficulties of the human reliability analysis using conventional linear models and recent fuzzy integral models, and provides some solutions to the difficulties. The following practical features of the provided methods are confirmed in comparison with the conventional methods: (1) Applicability to various types of tasks (2) Capability of evaluating complicated dependencies among working condition factors (3) A priori human reliability evaluation based on a systematic task analysis of human action processes (4) A conversion scheme to probability from indices representing human reliability. (author)
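    The paper's own fuzzy-integral models are not reproduced here; purely as a generic illustration of how a fuzzy integral can aggregate working-condition factors into a single human-reliability index, the sketch below evaluates a Choquet integral over an assumed fuzzy measure for three hypothetical factors.

```python
# Assumed working-condition factors and their scores in [0, 1] (illustrative).
scores = {"training": 0.8, "procedures": 0.6, "stress": 0.4}

# Assumed fuzzy measure on subsets of factors (monotone, mu(empty)=0, mu(all)=1).
mu = {
    frozenset(): 0.0,
    frozenset({"training"}): 0.4,
    frozenset({"procedures"}): 0.3,
    frozenset({"stress"}): 0.2,
    frozenset({"training", "procedures"}): 0.8,
    frozenset({"training", "stress"}): 0.6,
    frozenset({"procedures", "stress"}): 0.5,
    frozenset({"training", "procedures", "stress"}): 1.0,
}

def choquet(scores, mu):
    """Choquet integral: aggregate factor scores with respect to a fuzzy measure."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending by score
    total, prev = 0.0, 0.0
    for i, (name, value) in enumerate(items):
        coalition = frozenset(n for n, _ in items[i:])    # factors scoring at least `value`
        total += (value - prev) * mu[coalition]
        prev = value
    return total

print(f"Aggregated human-reliability index: {choquet(scores, mu):.3f}")
```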

  1. The reliability of integrated gasification combined cycle (IGCC) power generation units

    Energy Technology Data Exchange (ETDEWEB)

    Higman, C.; DellaVilla, S.; Steele, B. [Syngas Consultants Ltd. (United Kingdom)

    2006-07-01

    This paper presents two interlinked projects aimed at supporting the improvement of integrated gasification combined cycle (IGCC) reliability. One project comprises the extension of SPS's existing ORAP (Operational Reliability Analysis Program) reliability, availability and maintainability (RAM) tracking technology from its existing base in natural gas open and combined cycle operations into IGCC. The other project uses the extended ORAP database to evaluate performance data from existing plants. The initial work has concentrated on evaluating public domain data on the performance of gasification-based power and chemical plants. This is being followed up by interviews at some 20 plants to verify and expand the database on current performance. 23 refs., 8 figs., 2 tabs.

  2. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    Directory of Open Access Journals (Sweden)

    Won Cheol Yim

    2017-06-01

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it still has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. This freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
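    DCBLAST is distributed by its authors; the sketch below only illustrates the query-distribution idea described above by splitting a multi-FASTA query into chunks and emitting one BLAST+ command line per chunk, suitable for submission as an array of cluster jobs. The file names, chunk count, and blastn options are placeholders.

```python
from pathlib import Path

def split_fasta(path, n_chunks, outdir="chunks"):
    """Split a multi-FASTA file into up to n_chunks files by round-robin over records."""
    records, current = [], []
    for line in Path(path).read_text().splitlines():
        if line.startswith(">") and current:
            records.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        records.append("\n".join(current))

    Path(outdir).mkdir(exist_ok=True)
    chunk_files = []
    for i in range(n_chunks):
        chunk = records[i::n_chunks]          # round-robin assignment of records
        if not chunk:
            continue
        out = Path(outdir) / f"query_{i:03d}.fasta"
        out.write_text("\n".join(chunk) + "\n")
        chunk_files.append(out)
    return chunk_files

# Emit one BLAST command per chunk; these would typically become array-job tasks.
for f in split_fasta("query.fasta", n_chunks=64):
    print(f"blastn -query {f} -db nt -outfmt 6 -out {f.with_suffix('.tsv')}")
```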

  3. Experiments to Understand HPC Time to Development (Final report for Department of Energy contract DE-FG02-04ER25633) Report DOE/ER/25633-1

    Energy Technology Data Exchange (ETDEWEB)

    Basili, Victor, R.; Zelkowitz, Marvin, V.

    2007-11-14

    In order to understand how high performance computing (HPC) programs are developed, a series of experiments, using students in graduate-level HPC classes as well as various research centers, was conducted at locations across the US. In this report, we discuss this research, present some early results of those experiments, and describe a web-based Experiment Manager we are developing that allows us to run studies more easily and consistently at universities and laboratories, and thus to generate results that more accurately reflect the process of building HPC programs.

  4. Integrated system reliability analysis

    DEFF Research Database (Denmark)

    Gintautas, Tomas; Sørensen, John Dalsgaard

    Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...

  5. Reliability evaluation methodologies for ensuring container integrity of stored transuranic (TRU) waste

    International Nuclear Information System (INIS)

    Smith, K.L.

    1995-06-01

    This report provides methodologies for producing defensible estimates of expected transuranic waste storage container lifetimes at the Radioactive Waste Management Complex. These methodologies can be used to estimate transuranic waste container reliability (for integrity and degradation) and as an analytical tool to optimize waste container integrity. Container packaging and storage configurations, which directly affect waste container integrity, are also addressed. The methodologies presented provide a means for demonstrating compliance with Resource Conservation and Recovery Act waste storage requirements.

  6. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Integrated Reliability and Risk Analysis System (IRRAS) reference manual. Volume 2

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification to report generation. Version 1.0 of the IRRAS program was released in February of 1987. Since then, many user comments and enhancements have been incorporated into the program, providing a much more powerful and user-friendly system. This version has been designated IRRAS 5.0 and is the subject of this Reference Manual. Version 5.0 of IRRAS provides the same capabilities as earlier versions, adds the ability to perform location transformations and seismic analysis, and provides enhancements to the user interface as well as improved algorithm performance. Additionally, version 5.0 contains new alphanumeric fault tree and event tree features used for event tree rules, recovery rules, and end state partitioning.

  7. ENHANCING PERFORMANCE OF AN HPC CLUSTER BY ADOPTING NONDEDICATED NODES

    OpenAIRE

    Pil Seong Park

    2015-01-01

    Personal-sized HPC clusters are widely used in many small labs, because they are cost-effective and easy to build. Instead of adding costly new nodes to old clusters, we may try to make use of some servers' idle times by including them, working independently, on the same LAN, especially during the night. However, such an extension across a firewall raises not only security problems with NFS but also a load-balancing problem caused by heterogeneity. In this paper, we propose a meth...

  8. Design for High Performance, Low Power, and Reliable 3D Integrated Circuits

    CERN Document Server

    Lim, Sung Kyu

    2013-01-01

    This book describes the design of through-silicon-via (TSV) based three-dimensional integrated circuits.  It includes details of numerous “manufacturing-ready” GDSII-level layouts of TSV-based 3D ICs, developed with tools covered in the book. Readers will benefit from the sign-off level analysis of timing, power, signal integrity, and thermo-mechanical reliability for 3D IC designs.  Coverage also includes various design-for-manufacturability (DFM), design-for-reliability (DFR), and design-for-testability (DFT) techniques that are considered critical to the 3D IC design process. Describes design issues and solutions for high performance and low power 3D ICs, such as the pros/cons of regular and irregular placement of TSVs, Steiner routing, buffer insertion, low power 3D clock routing, power delivery network design and clock design for pre-bond testability. Discusses topics in design-for-electrical-reliability for 3D ICs, such as TSV-to-TSV coupling, current crowding at the wire-to-TSV junction and the e...

  9. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
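    The XOR-leading-zero idea mentioned above exploits the fact that nearby floating-point values share high-order bits. The sketch below is not the authors' code; it merely counts the identical leading bits of two consecutive doubles and brute-forces a shifting offset that maximizes that count, as a toy stand-in for the paper's offset optimization. The data values and candidate offsets are arbitrary.

```python
import struct

def bits_of(x: float) -> int:
    """Reinterpret a double as a 64-bit unsigned integer."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def xor_leading_zeros(a: float, b: float) -> int:
    """Number of identical leading bits of two doubles (64 if the bit patterns are equal)."""
    x = bits_of(a) ^ bits_of(b)
    return 64 if x == 0 else 64 - x.bit_length()

# Two consecutive "unpredictable" data points (values are illustrative).
a, b = -0.003, 0.002
print("no shift:", xor_leading_zeros(a, b))

# Brute-force search over candidate offsets for the shift that maximizes the
# XOR-leading-zero length (a toy stand-in for the paper's optimization).
candidates = [k * 0.25 for k in range(1, 200)]
best = max(candidates, key=lambda s: xor_leading_zeros(a + s, b + s))
print(f"best of the tried offsets: {best}, "
      f"leading zeros: {xor_leading_zeros(a + best, b + best)}")
```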

  10. ISC High Performance 2016 International Workshops, ExaComm, E-MuCoCoS, HPC-IODC, IXPUG, IWOPH, P^3MA, VHPC, WOPSSS

    CERN Document Server

    Mohr, Bernd; Kunkel, Julian M

    2016-01-01

    This book constitutes revised selected papers from 7 workshops that were held in conjunction with the ISC High Performance 2016 conference in Frankfurt, Germany, in June 2016. The 45 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They stem from the following workshops: Workshop on Exascale Multi/Many Core Computing Systems, E-MuCoCoS; Second International Workshop on Communication Architectures at Extreme Scale, ExaComm; HPC I/O in the Data Center Workshop, HPC-IODC; International Workshop on OpenPOWER for HPC, IWOPH; Workshop on the Application Performance on Intel Xeon Phi – Being Prepared for KNL and Beyond, IXPUG; Workshop on Performance and Scalability of Storage Systems, WOPSSS; and International Workshop on Performance Portable Programming Models for Accelerators, P3MA.

  11. Reliable electricity. The effects of system integration and cooperative measures to make it work

    Energy Technology Data Exchange (ETDEWEB)

    Hagspiel, Simeon [Koeln Univ. (Germany). Energiewirtschaftliches Inst.; Koeln Univ. (Germany). Dept. of Economics

    2017-12-15

    We investigate the effects of system integration for reliability of supply in regional electricity systems along with cooperative measures to support it. Specifically, we set up a model to contrast the benefits from integration through statistical balancing (i.e., a positive externality) with the risk of cascading outages (a negative externality). The model is calibrated with a comprehensive dataset comprising 28 European countries on a high spatial and temporal resolution. We find that positive externalities from system integration prevail, and that cooperation is key to meet reliability targets efficiently. To enable efficient solutions in a non-marketed environment, we formulate the problem as a cooperative game and study different rules to allocate the positive and negative effects to individual countries. Strikingly, we find that without a mechanism, the integrated solution is unstable. In contrast, proper transfer payments can be found to make all countries better off in full integration, and the Nucleolus is identified as a particularly promising candidate. The rule could be used as a basis for compensation payments to support the successful integration and cooperation of electricity systems.

  12. Reliable electricity. The effects of system integration and cooperative measures to make it work

    International Nuclear Information System (INIS)

    Hagspiel, Simeon; Koeln Univ.

    2017-01-01

    We investigate the effects of system integration for reliability of supply in regional electricity systems along with cooperative measures to support it. Specifically, we set up a model to contrast the benefits from integration through statistical balancing (i.e., a positive externality) with the risk of cascading outages (a negative externality). The model is calibrated with a comprehensive dataset comprising 28 European countries on a high spatial and temporal resolution. We find that positive externalities from system integration prevail, and that cooperation is key to meet reliability targets efficiently. To enable efficient solutions in a non-marketed environment, we formulate the problem as a cooperative game and study different rules to allocate the positive and negative effects to individual countries. Strikingly, we find that without a mechanism, the integrated solution is unstable. In contrast, proper transfer payments can be found to make all countries better off in full integration, and the Nucleolus is identified as a particularly promising candidate. The rule could be used as a basis for compensation payments to support the successful integration and cooperation of electricity systems.

  13. Educational program on HPC technologies based on the heterogeneous cluster HybriLIT (LIT JINR

    Directory of Open Access Journals (Sweden)

    Vladimir V. Korenkov

    2017-12-01

    The article highlights the issues of training personnel for work with high-performance computing (HPC) systems, as well as of supporting the software and information environment that is necessary for the efficient use of heterogeneous computing resources and the development of parallel and hybrid applications. The heterogeneous computing cluster HybriLIT, which is one of the components of the Multifunctional Information and Computing Complex of JINR, is used as the main platform for training and re-training specialists, as well as for training students, graduate students and young scientists. The HybriLIT cluster is a dynamic, actively developing structure incorporating the most advanced HPC computing architectures (graphics accelerators, Intel Xeon Phi coprocessors), and it also has a well-developed software and information environment, which in turn makes it possible to build educational programs at an up-to-date level and enables learners to master both modern computing platforms and modern IT technologies.

  14. Climate simulations and services on HPC, Cloud and Grid infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are executed. By using these technologies, the number, variety and complexity of experiments and resources are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the FP7 of the European Commission (grant agreement no. 312979); the European Regional Development Fund (ERDF); and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  15. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was setup at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  16. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was setup at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  17. Reliability modelling - PETROBRAS 2010 integrated gas supply chain

    Energy Technology Data Exchange (ETDEWEB)

    Faertes, Denise; Heil, Luciana; Saker, Leonardo; Vieira, Flavia; Risi, Francisco; Domingues, Joaquim; Alvarenga, Tobias; Carvalho, Eduardo; Mussel, Patricia

    2010-09-15

    The purpose of this paper is to present the innovative reliability modeling of the Petrobras 2010 integrated gas supply chain. The model represents a challenge in terms of complexity and software robustness. It was jointly developed by the PETROBRAS Gas and Power Department and Det Norske Veritas. It was carried out with the objective of evaluating the security of supply of the 2010 gas network design that was conceived to connect the Brazilian Northeast and Southeast regions. To provide best-in-class analysis, state-of-the-art software was used to quantify the availability and the efficiency of the overall network and its individual components.

  18. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through an interface based on a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was setup at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.

  19. 2014 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, Barbara [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Our commitment is to support you through delivery of an IT environment that provides mission value by transforming the way you use, protect, and access information. We approach this through technical innovation, risk management, and relationships with our workforce, Laboratories leadership, and policy makers nationwide. This second edition of our HPC Annual Report continues our commitment to communicate the details and impact of Sandia’s large-scale computing resources that support the programs associated with our diverse mission areas. A key tenet to our approach is to work with our mission partners to understand and anticipate their requirements and formulate an investment strategy that is aligned with those Laboratories priorities. In doing this, our investments include not only expanding the resources available for scientific computing and modeling and simulation, but also acquiring large-scale systems for data analytics, cloud computing, and Emulytics. We are also investigating new computer architectures in our advanced systems test bed to guide future platform designs and prepare for changes in our code development models. Our initial investments in large-scale institutional platforms that are optimized for Informatics and Emulytics work are serving a diverse customer base. We anticipate continued growth and expansion of these resources in the coming years as the use of these analytic techniques expands across our mission space. If your program could benefit from an investment in innovative systems, please work through your Program Management Unit's Mission Computing Council representatives to engage our teams.

  20. Using CyberShake Workflows to Manage Big Seismic Hazard Data on Large-Scale Open-Science HPC Resources

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2015-12-01

    The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. We will explain how we modified CyberShake software components, including GPU implementations and

  1. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.
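    As a reminder of the standard formulation being reviewed (written here in generic notation rather than copied from the paper), a typical RBDO problem reads:

```latex
\begin{aligned}
\min_{\mathbf{d}}\quad & C(\mathbf{d}) \\
\text{s.t.}\quad & P\!\left[g_i(\mathbf{d},\mathbf{X}) \le 0\right] \le P_{f_i}^{\,t}, \qquad i = 1,\dots,m, \\
& \mathbf{d}^{L} \le \mathbf{d} \le \mathbf{d}^{U},
\end{aligned}
```

    where C is the cost function, d the design variables, X the random variables, g_i the limit-state functions, and P_{f_i}^t the target failure probabilities, often expressed through target reliability indices beta_i^t = -Phi^{-1}(P_{f_i}^t). In the Bayesian extension discussed above, the distribution parameters of X are themselves treated as uncertain and updated from the available (possibly incomplete) data.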

  2. Site-specific landslide assessment in Alpine area using a reliable integrated monitoring system

    Science.gov (United States)

    Romeo, Saverio; Di Matteo, Lucio; Kieffer, Daniel Scott

    2016-04-01

    Rockfalls are one of the major causes of landslide fatalities around the world. The present work discusses the reliability of integrated monitoring of displacements of a rockfall within the Alpine region (Salzburg Land, Austria), also taking into account the effect of ongoing climate change. Because the frequency and magnitude of events that threaten human lives and infrastructure are unpredictable, it is frequently necessary to implement an efficient monitoring system. For this reason, during the last decades, integrated monitoring systems for unstable slopes were widely developed and used (e.g., extensometers, cameras, remote sensing, etc.). In this framework, remote sensing techniques, such as the GBInSAR technique (Ground-Based Interferometric Synthetic Aperture Radar), have emerged as efficient and powerful tools for deformation monitoring. GBInSAR measurements can be used to build an early warning system based on surface deformation parameters such as ground displacement or inverse velocity (for semi-empirical forecasting methods). In order to check the reliability of GBInSAR and to monitor the evolution of the landslide, it is very important to integrate different techniques. Indeed, a multi-instrumental approach is essential to investigate movements both at the surface and at depth, and the use of different monitoring techniques allows a cross analysis of the data, minimizing errors, checking data quality and improving the monitoring system. During 2013, an intense and complete monitoring campaign was conducted on the Ingelsberg landslide. Analysis of both the historical temperature series (HISTALP) recorded during the last century and those from local weather stations shows that temperature values (Autumn-Winter, Winter and Spring) have clearly increased in the Bad Hofgastein area as well as in the Alpine region. As a consequence, in the last decades the rockfall events have shifted from spring to summer due to warmer winters. It is interesting to point out that

  3. Impact of Thresholds and Load Patterns when Executing HPC Applications with Cloud Elasticity

    Directory of Open Access Journals (Sweden)

    Vinicius Facco Rodrigues

    2016-04-01

    Elasticity is one of the best-known capabilities of cloud computing and is largely deployed reactively using thresholds. In this way, maximum and minimum limits are used to drive resource allocation and deallocation actions, leading to the following problem statements: How can cloud users set the threshold values to enable elasticity in their cloud applications? And what is the impact of the application’s load pattern on elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive and PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. Here, we present an analysis of best practices and possible optimizations regarding the elasticity-HPC pair. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration in situations where earlier activation could be pertinent for reducing the application runtime.
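    As a concrete, simplified illustration of the threshold mechanism discussed above (not AutoElastic itself), the sketch below implements a reactive controller that requests one more node when the monitored average CPU load exceeds the upper threshold and releases one when it drops below the lower threshold; all threshold and node values are assumed.

```python
from dataclasses import dataclass

@dataclass
class ReactiveElasticity:
    """Toy reactive elasticity controller driven by CPU-load thresholds."""
    lower: float = 0.30   # scale in below 30% average CPU load (assumed value)
    upper: float = 0.85   # scale out above 85% average CPU load (assumed value)
    min_nodes: int = 1
    max_nodes: int = 16
    nodes: int = 2

    def step(self, avg_cpu_load: float) -> str:
        # One decision per monitoring interval, bounded by the node limits.
        if avg_cpu_load > self.upper and self.nodes < self.max_nodes:
            self.nodes += 1
            return "scale out"
        if avg_cpu_load < self.lower and self.nodes > self.min_nodes:
            self.nodes -= 1
            return "scale in"
        return "hold"

controller = ReactiveElasticity()
for load in [0.50, 0.90, 0.95, 0.70, 0.20, 0.15]:   # monitored load samples (illustrative)
    action = controller.step(load)
    print(f"load={load:.2f} -> {action:9s} nodes={controller.nodes}")
```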

  4. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Antônio Dâmaso

    2017-11-01

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually neither consider reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack also considering their reliabilities. To solve this problem, we introduce a fully automatic solution to design power consumption aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.
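    As a minimal illustration of evaluating reliability and power consumption together (this is not the EDEN toolbox or the authors' formal models), the sketch below computes, for an assumed multi-hop route, the end-to-end delivery probability without retransmissions and the expected transmission energy per delivered packet when each hop retransmits until success.

```python
# Assumed per-hop link success probabilities and per-transmission energies (mJ).
hops = [
    {"p_success": 0.95, "tx_energy_mJ": 0.30},
    {"p_success": 0.90, "tx_energy_mJ": 0.30},
    {"p_success": 0.85, "tx_energy_mJ": 0.45},
]

def end_to_end_reliability(route):
    """Probability that a packet crosses every hop at the first attempt."""
    r = 1.0
    for hop in route:
        r *= hop["p_success"]
    return r

def expected_energy_with_arq(route):
    """Expected energy per delivered packet when each hop retransmits until
    success (geometric number of attempts, mean 1/p per hop)."""
    return sum(hop["tx_energy_mJ"] / hop["p_success"] for hop in route)

print(f"end-to-end reliability (no retransmissions): {end_to_end_reliability(hops):.3f}")
print(f"expected energy with per-hop ARQ: {expected_energy_with_arq(hops):.2f} mJ")
```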

  5. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Science.gov (United States)

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually neither consider reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack also considering their reliabilities. To solve this problem, we introduce a fully automatic solution to design power consumption aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way. PMID:29113078

  6. Integrating reliability analysis and design

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1980-10-01

    This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems

  7. Synergy between the CIMENT tier-2 HPC centre and the HEP community at LPSC in Grenoble (France)

    International Nuclear Information System (INIS)

    Biscarat, C; Bzeznik, B

    2014-01-01

    Two of the most pressing questions in current research in Particle Physics are the characterisation of the newly discovered Higgs-like boson at the LHC and the search for New Phenomena beyond the Standard Model of Particle Physics. Physicists at LPSC in Grenoble are leading the search for one type of New Phenomena in ATLAS. Given the rich multitude of physics studies proceeding in parallel in ATLAS, one limiting factor in the timely analysis of data is the availability of computing resources. Another LPSC team suffers from the same limitation. This team is leading the ultimate precision measurement of the W boson mass with DØ data, which yields an indirect constraint on the Higgs boson mass which can be compared with the direct measurements of the mass of the newly discovered boson at LHC. In this paper, we describe the synergy between CIMENT, a regional multidisciplinary HPC centre, and the HEP community in Grenoble in the context of the analysis of data recorded by the ATLAS experiment at the LHC collider and the D0 experiment at the Tevatron collider. CIMENT is a federation of twelve HPC clusters, of about 90 TFlop/s, one of the most powerful HPC tier-2 centres in France. The sharing of resources between different scientific fields, like the ones discussed in this article, constitutes a great asset because the spikes in need of computing resources are uncorrelated in time between different fields.

  8. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wadhwa, Bharti [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science; Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Butt, Ali R. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in the multi-level storage hierarchy achieve up to 7X I/O performance improvement for scientific data.

  9. Enviro-HIRLAM/ HARMONIE Studies in ECMWF HPC EnviroAerosols Project

    Science.gov (United States)

    Hansen Sass, Bent; Mahura, Alexander; Nuterman, Roman; Baklanov, Alexander; Palamarchuk, Julia; Ivanov, Serguei; Pagh Nielsen, Kristian; Penenko, Alexey; Edvardsson, Nellie; Stysiak, Aleksander Andrzej; Bostanbekov, Kairat; Amstrup, Bjarne; Yang, Xiaohua; Ruban, Igor; Bergen Jensen, Marina; Penenko, Vladimir; Nurseitov, Daniyar; Zakarin, Edige

    2017-04-01

    The EnviroAerosols on ECMWF HPC project (2015-2017) "Enviro-HIRLAM/ HARMONIE model research and development for online integrated meteorology-chemistry-aerosols feedbacks and interactions in weather and atmospheric composition forecasting" is aimed at analysis of importance of the meteorology-chemistry/aerosols interactions and to provide a way for development of efficient techniques for on-line coupling of numerical weather prediction and atmospheric chemical transport via process-oriented parameterizations and feedback algorithms, which will improve both the numerical weather prediction and atmospheric composition forecasts. Two main application areas of the on-line integrated modelling are considered: (i) improved numerical weather prediction with short-term feedbacks of aerosols and chemistry on formation and development of meteorological variables, and (ii) improved atmospheric composition forecasting with on-line integrated meteorological forecast and two-way feedbacks between aerosols/chemistry and meteorology. During 2015-2016 several research projects were realized. At first, the study on "On-line Meteorology-Chemistry/Aerosols Modelling and Integration for Risk Assessment: Case Studies" focused on assessment of scenarios with accidental and continuous emissions of sulphur dioxide for case studies for Atyrau (Kazakhstan) near the northern part of the Caspian Sea and metallurgical enterprises on the Kola Peninsula (Russia), with GIS integration of modelling results into the RANDOM (Risk Assessment of Nature Detriment due to Oil spill Migration) system. At second, the studies on "The sensitivity of precipitation simulations to the soot aerosol presence" & "The precipitation forecast sensitivity to data assimilation on a very high resolution domain" focused on sensitivity and changes in precipitation life-cycle under black carbon polluted conditions over Scandinavia. At third, studies on "Aerosol effects over China investigated with a high resolution

  10. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not always be 100% at the beginning of storage, unlike operational reliability: possible initial failures exist that are normally neglected in storage reliability models. In this paper, a new integrated technique is proposed, in which a non-parametric measure based on the E-Bayesian estimates of current failure probabilities is combined with a parametric measure based on the exponential reliability function, to estimate and predict the storage reliability of products with possible initial failures. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, while the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that the reliability test data of storage products, including units not examined before and during the storage process, are available to provide more accurate estimates of both the initial failure probability and the storage failure probability. When storage reliability prediction, the main concern in this field, is to be made, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method. Furthermore, a detailed comparison between the proposed and traditional methods, examining the rationality of the assessment and prediction of storage reliability, is investigated. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying the production quality.
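    The paper's E-Bayesian estimator is not reproduced here; the sketch below only illustrates the two-stage idea with assumed inspection data: first estimate a failure probability at each testing time (a simple Beta-posterior mean is used as a stand-in), then fit the parametric model R(t) = R0 * exp(-lambda*t) to obtain the initial reliability and the storage failure rate.

```python
import numpy as np

# Assumed inspection data: time (years), units tested, units found failed.
times  = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
tested = np.array([50, 50, 40, 40, 30])
failed = np.array([1, 2, 3, 5, 6])

# Stage 1: pointwise failure-probability estimates.  A Beta(1, 1)-prior posterior
# mean is used here as a simple stand-in for the paper's E-Bayesian estimator.
p_hat = (failed + 1.0) / (tested + 2.0)
R_hat = 1.0 - p_hat

# Stage 2: fit ln R(t) = ln R0 - lambda * t by least squares.
A = np.vstack([np.ones_like(times), -times]).T
(ln_R0, lam), *_ = np.linalg.lstsq(A, np.log(R_hat), rcond=None)
R0 = np.exp(ln_R0)

print(f"initial reliability R0 = {R0:.4f}, storage failure rate lambda = {lam:.4f}/yr")
print(f"predicted reliability at 10 years: {R0 * np.exp(-lam * 10):.4f}")
```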

  11. A simple reliability block diagram method for safety integrity verification

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2007-01-01

    IEC 61508 requires safety integrity verification for safety-related systems as a necessary procedure in the safety life cycle. The average probability of failure on demand (PFDavg) must be calculated to verify the safety integrity level (SIL). Since IEC 61508-6 does not give detailed explanations of the definitions and PFDavg calculations for its examples, it is difficult for reliability or safety engineers to follow when they use the standard as guidance in practice. A method using reliability block diagrams is investigated in this study in order to provide a clear and feasible way of calculating PFDavg and to help those who take IEC 61508-6 as their guidance. The method first finds the mean down times (MDTs) of both the channel and the voted group, and then PFDavg. The calculated results for various voted groups are compared with those in IEC 61508 part 6 and Ref. [Zhang T, Long W, Sato Y. Availability of systems with self-diagnostic components-applying Markov model to IEC 61508-6. Reliab Eng Syst Saf 2003;80(2):133-41]. An interesting outcome can be realized from the comparison. Furthermore, although differences exist in the MDTs of voted groups between IEC 61508-6 and this paper, the PFDavg values of the voted groups are comparatively close. With its detailed description, the RBD method presented can be applied to quantitative SIL verification, showing similarity to the method in IEC 61508-6.
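    For orientation, the sketch below lists the kind of simplified proof-test-interval approximations of PFDavg (here without common-cause, diagnostic, or repair-time terms) against which RBD-based results are typically compared; the channel failure rate and test interval are illustrative, and the formulas are common textbook simplifications rather than the paper's method.

```python
# Simplified proof-test-interval approximations for PFDavg (no common-cause
# factor, no diagnostics, repair time neglected) -- illustrative only, not a
# substitute for the full IEC 61508-6 or RBD-based calculation in the paper.

def pfd_avg_1oo1(lambda_du: float, t1: float) -> float:
    return lambda_du * t1 / 2.0

def pfd_avg_1oo2(lambda_du: float, t1: float) -> float:
    return (lambda_du * t1) ** 2 / 3.0

def pfd_avg_2oo3(lambda_du: float, t1: float) -> float:
    return (lambda_du * t1) ** 2

lambda_du = 2.0e-6   # dangerous undetected failure rate per hour (assumed)
t1 = 8760.0          # proof test interval: one year in hours (assumed)

for name, fn in [("1oo1", pfd_avg_1oo1), ("1oo2", pfd_avg_1oo2), ("2oo3", pfd_avg_2oo3)]:
    print(f"{name}: PFDavg = {fn(lambda_du, t1):.2e}")
```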

  12. Design reliability engineering

    International Nuclear Information System (INIS)

    Buden, D.; Hunt, R.N.M.

    1989-01-01

    Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method to integrate into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance and operator training. 1 fig

  13. Optimizing CyberShake Seismic Hazard Workflows for Large HPC Resources

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.

    2014-12-01

    The CyberShake computational platform is a well-integrated collection of scientific software and middleware that calculates 3D simulation-based probabilistic seismic hazard curves and hazard maps for the Los Angeles region. Currently each CyberShake model comprises about 235 million synthetic seismograms from about 415,000 rupture variations computed at 286 sites. CyberShake integrates large-scale parallel and high-throughput serial seismological research codes into a processing framework in which early stages produce files used as inputs by later stages. Scientific workflow tools are used to manage the jobs, data, and metadata. The Southern California Earthquake Center (SCEC) developed the CyberShake platform using USC High Performance Computing and Communications systems and open-science NSF resources. CyberShake calculations were migrated to the NSF Track 1 system NCSA Blue Waters when it became operational in 2013, via an interdisciplinary team approach including domain scientists, computer scientists, and middleware developers. Due to the excellent performance of Blue Waters and CyberShake software optimizations, we reduced the makespan (a measure of wallclock time-to-solution) of a CyberShake study from 1467 to 342 hours. We will describe the technical enhancements behind this improvement, including judicious introduction of new GPU software, improved scientific software components, increased workflow-based automation, and Blue Waters-specific workflow optimizations. Our CyberShake performance improvements highlight the benefits of scientific workflow tools. The CyberShake workflow software stack includes the Pegasus Workflow Management System (Pegasus-WMS, which includes Condor DAGMan), HTCondor, and Globus GRAM, with Pegasus-mpi-cluster managing the high-throughput tasks on the HPC resources. The workflow tools handle data management, automatically transferring about 13 TB back to SCEC storage. We will present performance metrics from the most recent Cyber
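
    The workflow stack named above (Pegasus-WMS, HTCondor, Globus GRAM) is not reproduced here; purely as a generic illustration of what such tools automate, the sketch below orders a tiny CyberShake-like job graph by topological sort so that each stage is submitted only after its inputs exist. The stage names and dependencies are invented and do not describe the actual CyberShake workflow configuration.

        from graphlib import TopologicalSorter  # Python 3.9+

        # Hypothetical stage dependencies: stage -> set of stages it depends on.
        dag = {
            "velocity_mesh": set(),
            "wave_propagation": {"velocity_mesh"},
            "seismogram_synthesis": {"wave_propagation"},
            "hazard_curves": {"seismogram_synthesis"},
        }

        # A workflow manager would submit each stage once its predecessors have finished.
        for stage in TopologicalSorter(dag).static_order():
            print("submit", stage)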

  14. Assessment of Material Solutions of Multi-level Garage Structure Within Integrated Life Cycle Design Process

    Science.gov (United States)

    Wałach, Daniel; Sagan, Joanna; Gicala, Magdalena

    2017-10-01

    The paper presents an environmental and economic analysis of the material solutions for a multi-level garage. The construction project approach considered a reinforced concrete structure under conditions of use of ordinary concrete and high-performance concrete (HPC). The use of HPC allowed a significant reduction of the reinforcement steel, mainly in compression elements (columns), in the construction of the object. The analysis includes elements of the methodology of integrated life cycle design (ILCD). By means of a multi-criteria analysis based on established weights for the economic and environmental parameters, three solutions have been evaluated and compared within the material production phase (information modules A1-A3).
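
    A minimal sketch of the kind of weighted multi-criteria comparison described above, assuming made-up normalized scores (lower is better) and equal weights for three hypothetical variants; the actual ILCD indicators, weights and results of the study are not reproduced.

        # Hypothetical normalized scores for modules A1-A3 of each variant (lower = better).
        variants = {
            "ordinary concrete":   {"cost": 0.70, "environment": 0.80},
            "HPC":                 {"cost": 0.85, "environment": 0.60},
            "HPC, reduced steel":  {"cost": 0.75, "environment": 0.55},
        }
        weights = {"cost": 0.5, "environment": 0.5}   # assumed equal weighting

        scores = {name: sum(weights[c] * v[c] for c in weights) for name, v in variants.items()}
        for name, s in sorted(scores.items(), key=lambda kv: kv[1]):
            print(f"{name}: weighted score {s:.3f}")
        print("preferred variant:", min(scores, key=scores.get))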

  15. Standard high-reliability integrated circuit logic packaging. [for deep space tracking stations

    Science.gov (United States)

    Slaughter, D. W.

    1977-01-01

    A family of standard, high-reliability hardware used for packaging digital integrated circuits is described. The design transition from early prototypes to production hardware is covered and future plans are discussed. Interconnection techniques are described as well as connectors and related hardware available at both the microcircuit packaging and main-frame level. General applications information is also provided.

  16. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical connections, and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy is provided by background applications residing on alternative routes. Details are provided for the operation of RRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research.

  17. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    2003-01-01

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical connections, and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy is provided by background applications residing on alternative routes. Details are provided for the operation of RRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research.

  18. Behavior of HPC with Fly Ash after Elevated Temperature

    Directory of Open Access Journals (Sweden)

    Huai-Shuai Shang

    2013-01-01

    Full Text Available For use in fire resistance calculations, the relevant thermal properties of high-performance concrete (HPC) with fly ash were determined through an experimental study. These properties included compressive strength, cubic compressive strength, cleavage strength, flexural strength, and the ultrasonic velocity at various temperatures (20, 100, 200, 300, 400 and 500 °C) for high-performance concrete. The effect of temperature on compressive strength, cubic compressive strength, cleavage strength, flexural strength, and the ultrasonic velocity of the high-performance concrete with fly ash was discussed according to the experimental results. The change of surface characteristics with the temperature was observed. It can serve as a reference for the maintenance, design, and the life prediction of high-performance concrete engineering, such as high-rise buildings, subjected to elevated temperatures.

  19. A Bayesian reliability evaluation method with integrated accelerated degradation testing and field information

    International Nuclear Information System (INIS)

    Wang, Lizhi; Pan, Rong; Li, Xiaoyang; Jiang, Tongmin

    2013-01-01

    Accelerated degradation testing (ADT) is a common approach in reliability prediction, especially for products with high reliability. However, oftentimes the laboratory condition of ADT is different from the field condition; thus, to predict field failure, one needs to calibrate the prediction made by using ADT data. In this paper a Bayesian evaluation method is proposed to integrate the ADT data from the laboratory with the failure data from the field. Calibration factors are introduced to calibrate the difference between the lab and the field conditions so as to predict a product's actual field reliability more accurately. The information fusion and statistical inference procedure are carried out through a Bayesian approach and Markov chain Monte Carlo methods. The proposed method is demonstrated by two examples and a sensitivity analysis with respect to the prior distribution assumption
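
    A toy sketch of the calibration idea, assuming exponential lifetimes: lab (ADT-equivalent) data inform a base failure rate, field failures inform a multiplicative calibration factor, and a short Metropolis sampler draws from the joint posterior. The data, priors and likelihood are invented and much simpler than the degradation model and MCMC setup of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        lab_ttf = rng.exponential(1000.0, size=30)      # pseudo lab (ADT-equivalent) lifetimes, hours
        field_ttf = rng.exponential(1500.0, size=10)    # pseudo field lifetimes, hours

        def log_post(lam, k):
            # lam: lab failure rate; k: calibration factor (field rate = lam / k).
            if lam <= 0 or k <= 0:
                return -np.inf
            ll = len(lab_ttf) * np.log(lam) - lam * lab_ttf.sum()                 # lab likelihood
            lf = len(field_ttf) * np.log(lam / k) - (lam / k) * field_ttf.sum()   # field likelihood
            prior = -lam * 100.0 - 0.5 * np.log(k) ** 2                           # weak priors (assumed)
            return ll + lf + prior

        step = np.array([1e-4, 0.05])                   # per-parameter random-walk step sizes
        cur, samples = np.array([1e-3, 1.0]), []
        for _ in range(20000):
            prop = cur + step * rng.standard_normal(2)  # symmetric proposal -> plain Metropolis
            if np.log(rng.random()) < log_post(*prop) - log_post(*cur):
                cur = prop
            samples.append(cur)
        lam_s, k_s = np.array(samples[5000:]).T
        print(f"posterior mean lab failure rate {lam_s.mean():.2e}/h, calibration factor {k_s.mean():.2f}")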

  20. Reliability assessment of distribution system with the integration of renewable distributed generation

    International Nuclear Information System (INIS)

    Adefarati, T.; Bansal, R.C.

    2017-01-01

    Highlights: • Addresses impacts of renewable DG on the reliability of the distribution system. • Multi-objective formulation for maximizing the cost saving with integration of DG. • Uses Markov model to study the stochastic characteristics of the major components. • The investigation is done using modified RBTS bus test distribution system. • Proposed approach is useful for electric utilities to enhance the reliability. - Abstract: Recent studies have shown that renewable energy resources will contribute substantially to future energy generation owing to the rapid depletion of fossil fuels. Wind and solar energy resources are major sources of renewable energy that have the ability to reduce the energy crisis and the greenhouse gases emitted by the conventional power plants. Reliability assessment is one of the key indicators to measure the impact of the renewable distributed generation (DG) units in the distribution networks and to minimize the cost that is associated with power outages. This paper presents a comprehensive reliability assessment of the distribution system that satisfies the consumer load requirements with the penetration of wind turbine generator (WTG), electric storage system (ESS) and photovoltaic (PV). A Markov model is proposed to assess the stochastic characteristics of the major components of the renewable DG resources as well as their influence on the reliability of a conventional distribution system. The results obtained from the case studies have demonstrated the effectiveness of using WTG, ESS and PV to enhance the reliability of the conventional distribution system.
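
    As a minimal illustration of the two-state (up/down) Markov representation commonly used for such components, the sketch below computes steady-state availabilities from assumed failure and repair rates and combines them for a load point served by the grid in parallel with a WTG/PV/ESS group. All rates and the series/parallel structure are invented for illustration and are not taken from the paper.

        # Two-state Markov component: steady-state availability = mu / (lambda + mu).
        def availability(fail_rate_per_yr: float, repair_rate_per_yr: float) -> float:
            return repair_rate_per_yr / (fail_rate_per_yr + repair_rate_per_yr)

        grid = availability(fail_rate_per_yr=2.0, repair_rate_per_yr=200.0)   # assumed rates
        wtg = availability(3.0, 100.0)
        pv = availability(1.0, 150.0)
        ess = availability(0.5, 300.0)

        # Assumed structure: the local DG supply needs (WTG or PV) plus the ESS;
        # the load point is served if either the grid or the local DG supply is available.
        dg_supply = (1 - (1 - wtg) * (1 - pv)) * ess
        load_point = 1 - (1 - grid) * (1 - dg_supply)
        print(f"grid {grid:.4f}, DG supply {dg_supply:.4f}, load point {load_point:.6f}")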

  1. Thermosyphon Cooler Hybrid System for Water Savings in an Energy-Efficient HPC Data Center: Modeling and Installation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Thomas; Liu, Zan; Sickinger, David; Regimbal, Kevin; Martinez, David

    2017-02-01

    The Thermosyphon Cooler Hybrid System (TCHS) integrates the control of a dry heat rejection device, the thermosyphon cooler (TSC), with an open cooling tower. A combination of equipment and controls, this new heat rejection system embraces the 'smart use of water,' using evaporative cooling when it is most advantageous and then saving water and modulating toward increased dry sensible cooling as system operations and ambient weather conditions permit. Innovative fan control strategies ensure the most economical balance between water savings and parasitic fan energy. The unique low-pressure-drop design of the TSC allows water to be cooled directly by the TSC evaporator without risk of bursting tubes in subfreezing ambient conditions. Johnson Controls partnered with the National Renewable Energy Laboratory (NREL) and Sandia National Laboratories to deploy the TSC as a test bed at NREL's high-performance computing (HPC) data center in the first half of 2016. Located in NREL's Energy Systems Integration Facility (ESIF), this HPC data center has achieved an annualized average power usage effectiveness rating of 1.06 or better since 2012. Warm-water liquid cooling is used to capture heat generated by computer systems direct to water; that waste heat is either reused as the primary heat source in the ESIF building or rejected using evaporative cooling. This data center is the single largest source of water and power demand on the NREL campus, using about 7,600 m3 (2.0 million gal) of water during the past year with an hourly average IT load of nearly 1 MW (3.4 million Btu/h) -- so dramatically reducing water use while continuing efficient data center operations is of significant interest. Because Sandia's climate is similar to NREL's, this new heat rejection system being deployed at NREL has gained interest at Sandia. Sandia's data centers utilize an hourly average of 8.5 MW (29 million Btu/h) and are also one of the largest consumers of

  2. Optimal integrated sizing and planning of hubs with midsize/large CHP units considering reliability of supply

    International Nuclear Information System (INIS)

    Moradi, Saeed; Ghaffarpour, Reza; Ranjbar, Ali Mohammad; Mozaffari, Babak

    2017-01-01

    Highlights: • New hub planning formulation is proposed to exploit assets of midsize/large CHPs. • Linearization approaches are proposed for two-variable nonlinear CHP fuel function. • Efficient operation of addressed CHPs & hub devices at contingencies are considered. • Reliability-embedded integrated planning & sizing is formulated as one single MILP. • Noticeable results for costs & reliability-embedded planning due to mid/large CHPs. - Abstract: Use of multi-carrier energy systems and the energy hub concept has recently been a widespread trend worldwide. However, most of the related research specializes in CHP systems with constant electricity/heat ratios and linear operating characteristics. In this paper, integrated energy hub planning and sizing is developed for the energy systems with mid-scale and large-scale CHP units, by taking their wide operating range into consideration. The proposed formulation is aimed at making the best use of the beneficial degrees of freedom associated with these units for decreasing total costs and increasing reliability. High-accuracy piecewise linearization techniques with approximation errors of about 1% are introduced for the nonlinear two-dimensional CHP input-output function, making it possible to successfully integrate the CHP sizing. Efficient operation of CHP and the hub at contingencies is extracted via a new formulation, which is developed to be incorporated into the planning and sizing problem. Optimal operation, planning, sizing and contingency operation of hub components are integrated and formulated as a single comprehensive MILP problem. Results on a case study with midsize CHPs reveal a 33% reduction in total costs, and it is demonstrated that the proposed formulation removes the need for additional components/capacities for increasing reliability of supply.
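
    To make the linearization step concrete, the sketch below approximates an assumed one-dimensional quadratic CHP fuel curve by linear interpolation between breakpoints and reports the worst-case relative error, analogous to the roughly 1% approximation error quoted above. The real formulation linearizes a two-variable (power, heat) fuel function inside a MILP, which is not shown here.

        import numpy as np

        def fuel(p):                       # assumed one-dimensional fuel curve, MW -> MW of fuel
            return 0.0006 * p**2 + 2.2 * p + 12.0

        p_min, p_max, n_segments = 20.0, 200.0, 6
        bp = np.linspace(p_min, p_max, n_segments + 1)      # breakpoints for the linear pieces

        p_grid = np.linspace(p_min, p_max, 2001)
        approx = np.interp(p_grid, bp, fuel(bp))            # piecewise-linear approximation
        rel_err = np.abs(approx - fuel(p_grid)) / fuel(p_grid)
        print(f"{n_segments} segments -> max relative error {rel_err.max():.2%}")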

  3. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results
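
    The ORNL framework itself is not described in enough detail here to reproduce; as a toy illustration of simulation-driven what-if analysis for storage reliability, the sketch below Monte Carlo-samples Weibull disk lifetimes in an 8+2 erasure-coded group and estimates the probability that more failures occur during a five-year mission than the group can tolerate, ignoring rebuilds and replacement. The group size, mission length and Weibull parameters are all assumed.

        import numpy as np

        rng = np.random.default_rng(1)
        n_disks, tolerate = 10, 2          # 8+2 erasure-coded group: tolerates 2 failed disks
        mission_h = 5 * 8760               # 5-year mission, no disk replacement (pessimistic what-if)
        shape, scale_h = 1.2, 4.0e5        # assumed Weibull lifetime parameters per disk

        trials = 100_000
        lifetimes = scale_h * rng.weibull(shape, size=(trials, n_disks))
        failed = (lifetimes < mission_h).sum(axis=1)       # disks failed per trial
        p_loss = np.mean(failed > tolerate)                # loss when failures exceed the tolerance
        print(f"estimated 5-year data-loss probability (no replacement): {p_loss:.3e}")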

  4. Mechanical Integrity Issues at MCM-Cs for High Reliability Applications

    International Nuclear Information System (INIS)

    Morgenstern, H.A.; Tarbutton, T.J.; Becka, G.A.; Uribe, F.; Monroe, S.; Burchett, S.

    1998-01-01

    During the qualification of a new high reliability low-temperature cofired ceramic (LTCC) multichip module (MCM), two issues relating to the electrical and mechanical integrity of the LTCC network were encountered while performing qualification testing. One was electrical opens after aging tests that were caused by cracks in the solder joints. The other was fracturing of the LTCC networks during mechanical testing. Through failure analysis, computer modeling, bend testing, and test samples, changes were identified. Upon implementation of all these changes, the modules passed testing, and the MCM was placed into production

  5. Ensuring Structural Integrity through Reliable Residual Stress Measurement: From Crystals to Crankshafts

    International Nuclear Information System (INIS)

    Edwards, Lyndon

    2005-01-01

    Full text: The determination of accurate, reliable stresses is critical to many fields of engineering and, in particular, the structural integrity and hence, safety, of many systems. Neutron stress measurement is a non-destructive technique that uniquely provides insights into stress fields deep within components and structures. As such, it has become an increasingly important tool within the engineering community leading to improved manufacturing processes to reduce stress and distortion as well as to the definition of more precise structural integrity lifing procedures. This talk describes the current state of the art and identifies the key opportunities for improved structural integrity provided by the 2nd generation dedicated engineering stress diffractometers currently being designed and commissioned world-wide. Examples are provided covering a range of industrially relevant problems from the fields. (author)

  6. High Possibility Classrooms as a Pedagogical Framework for Technology Integration in Classrooms: An Inquiry in Two Australian Secondary Schools

    Science.gov (United States)

    Hunter, Jane

    2017-01-01

    Understanding how well teachers integrate digital technology in learning is the subject of considerable debate in education. High Possibility Classrooms (HPC) is a pedagogical framework drawn from research on exemplary teachers' knowledge of technology integration in Australian school classrooms. The framework is being used to support teachers who…

  7. Measuring Integrated Socioemotional Guidance at School: Factor Structure and Reliability of the Socioemotional Guidance Questionnaire (SEG-Q)

    Science.gov (United States)

    Jacobs, Karen; Struyf, Elke

    2013-01-01

    Socioemotional guidance of students has recently become an integral part of education; however, no instrument exists to measure integrated socioemotional guidance. This study therefore examines the factor structure and reliability of the Socioemotional Guidance Questionnaire. Psychometric properties of the Socioemotional Guidance Questionnaire and…

  8. Behaviour of slag HPC submitted to immersion-drying cycles

    Directory of Open Access Journals (Sweden)

    Rabah Chaid

    2016-04-01

    Full Text Available This article is part of a summary of the work developed in conjunction with the Laboratory of Civil Engineering and Mechanical Engineering of INSA Rennes and the Research Unit: Materials, Processes and Environment, University of Boumerdes. One of the objectives was indeed to promote, through studies of variants, the use of local cementitious additions in the formulation of high performance concretes (HPC). The binding contribution of mineral additions to the physical and mechanical properties and the durability of concrete was evaluated by an experimental methodology designed to bring out their granular and pozzolanic effects. The results show that the contribution of the cement-slag couple to the intensification of the matrix is higher than that obtained when the cement is not substituted by the addition. Therefore, a significant improvement in the performance of the concretes was observed, despite the adverse action of the immersion-drying cycles maintained for 365 days.

  9. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

    Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize preventive maintenance of passive structures such as pipes and supports, based on risk. In particular, reliability performances of components need to be determined; it is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, involving two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  10. Utilizing HPC Network Technologies in High Energy Physics Experiments

    CERN Document Server

    AUTHOR|(CDS)2088631; The ATLAS collaboration

    2017-01-01

    Because of their performance characteristics, high-performance fabrics like InfiniBand or OmniPath are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on applications in high-performance computing with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems a custom library, NetIO, is presented and compared against existing technologies. NetIO has a message queue-like interface which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable bac...
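
    NetIO's real API is not shown in this abstract; purely as an illustration of the message-queue style of interface the paper contrasts with MPI, the sketch below defines a tiny publish/receive pair over a local queue. The names and semantics are invented and do not correspond to NetIO.

        import queue
        import threading

        class MessageQueueEndpoint:
            """Toy message-queue endpoint: senders publish whole messages, receivers pop them.
            Unlike MPI-style matched send/recv, the receiver needs no knowledge of the sender."""

            def __init__(self) -> None:
                self._q: "queue.Queue[bytes]" = queue.Queue()

            def publish(self, payload: bytes) -> None:
                self._q.put(payload)

            def receive(self, timeout: float = 1.0) -> bytes:
                return self._q.get(timeout=timeout)

        endpoint = MessageQueueEndpoint()
        threading.Thread(target=lambda: endpoint.publish(b"event-fragment-42")).start()
        print(endpoint.receive())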

  11. Charliecloud: Unprivileged containers for user-defined software stacks in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Priedhorsky, Reid [Los Alamos National Laboratory; Randles, Timothy C. [Los Alamos National Laboratory

    2016-08-09

    Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.

  12. Improving the high performance concrete (HPC behaviour in high temperatures

    Directory of Open Access Journals (Sweden)

    Cattelan Antocheves De Lima, R.

    2003-12-01

    Full Text Available High performance concrete (HPC) is an interesting material that has long been attracting interest from the scientific and technical community, due to the clear advantages obtained in terms of mechanical strength and durability. Given these better characteristics, HPC, in its various forms, has been gradually replacing normal strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and low permeability typical of HPC can result in explosive spalling under certain thermal and mechanical conditions, such as when concrete is subject to rapid temperature rises during a fire. This behaviour is caused by the build-up of internal water pressure in the pore structure during heating, and by stresses originating from thermal deformation gradients. Although there are still a limited number of experimental programs in this area, some researchers have reported that the addition of polypropylene fibers to HPC is a suitable way to avoid explosive spalling under fire conditions. This change in behavior derives from the fact that polypropylene fibers melt at high temperatures and leave a pathway for heated gas to escape the concrete matrix, therefore allowing the outward migration of water vapor and resulting in the reduction of internal pore pressure. The present research investigates the behavior of high performance concrete at high temperatures, especially when polypropylene fibers are added to the mix.

    High-strength concrete (HAR) is a material of great interest to the scientific and technical community, owing to the clear advantages obtained in terms of mechanical strength and durability. Because of these characteristics, HAR, in its various forms, is in some applications gradually replacing normal-strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and the low permeability

  13. Integration of external estimated breeding values and associated reliabilities using correlations among traits and effects

    NARCIS (Netherlands)

    Vandenplas, J.; Colinet, F.G.; Glorieux, G.; Bertozzi, C.; Gengler, N.

    2015-01-01

    Based on a Bayesian view of linear mixed models, several studies showed the possibility of integrating estimated breeding values (EBV) and associated reliabilities (REL) provided by genetic evaluations performed outside a given evaluation system into this genetic evaluation. Hereafter, the term

  14. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and the recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible use of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified declaration of storage protocols required for the PanDA Pilot site movers, and others.

  15. Development and assessment of a fiber reinforced HPC container for radioactive waste

    International Nuclear Information System (INIS)

    Roulet, A.; Pineau, F.; Chanut, S.; Thibaux, Th.

    2007-01-01

    As part of its research into solutions for concrete disposal containers for long-lived radioactive waste, Andra defined requirements for high-performance concretes with enhanced porosity, diffusion, and permeability characteristics. It is the starting point for further research into severe conditions of containment and durability. To meet these objectives, Eiffage TP consequently developed a highly fibered High Performance Concrete (HPC) design mix using CEM V cement and silica fume. Mockups were then produced to characterize the performance of various container concepts built with this new concrete mix. These mockups helped to identify possible manufacturing problems, and particularly the risk of cracking due to restrained shrinkage. (authors)

  16. Pipeline integrity model-a formative approach towards reliability and life assessment

    International Nuclear Information System (INIS)

    Sayed, A.M.; Jaffery, M.A.

    2005-01-01

    Pipes form an integral part of the transmission medium in the oil and gas industry. This holds true for both the upstream and downstream segments of this global energy business. With the aging of this asset base, the operational aspects of pipelines have received immense consideration from operators and regulators. Moreover, the information age and the growth of global trade have removed barriers to better utilization of resources. This has resulted in optimized solutions becoming a priority for business and technical managers worldwide. There is a paradigm shift from the mere development of 'smart materials' to 'low life cycle cost materials'. The force inducing this change is a rational one: the recovery of development costs is no longer a problem in a global community; rather, it is the pay-off time which matters most to the materials' end users. This means that decision makers are not evaluating just the price offered but are keen to judge the entire life cycle cost of a product. The integrity of pipes is affected by factors such as corrosion, fatigue-crack growth, stress-corrosion cracking, and mechanical damage. Extensive research in the area of reliability and life assessment has been carried out. A number of models concerned with the reliability issues of pipes have been developed and are being used by a number of pipeline operators worldwide. Yet, it is emphasised that there is no substitute for sound engineering judgment and allowance for factors of safety. The ability of a laid pipe network to transport the intended fluid under pre-defined conditions for the entire envisaged project life is referred to as the reliability of the system. Reliability is built into the product through extensive benchmarking against industry standard codes. The construction of pipes for oil and gas service is regulated through the American Petroleum Institute's Specification for Line Pipe. Subsequently, specific programs have been

  17. Reliability and integrity management program for PBMR helium pressure boundary components - HTR2008-58036

    International Nuclear Information System (INIS)

    Fleming, K. N.; Gamble, R.; Gosselin, S.; Fletcher, J.; Broom, N.

    2008-01-01

    The purpose of this paper is to present the results of a study to establish strategies for the reliability and integrity management (RIM) of passive metallic components for the PBMR. The RIM strategies investigated include design elements, leak detection and testing approaches, and non-destructive examinations. Specific combinations of strategies are determined to be necessary and sufficient to achieve target reliability goals for passive components. This study recommends a basis for the RIM program for the PBMR Demonstration Power Plant (DPP) and provides guidance for the development by the American Society of Mechanical Engineers (ASME) of RIM requirements for Modular High Temperature Gas-Cooled Reactors (MHRs). (authors)

  18. Survey on Projects at DLR Simulation and Software Technology with Focus on Software Engineering and HPC

    OpenAIRE

    Schreiber, Andreas; Basermann, Achim

    2013-01-01

    We introduce the DLR institute “Simulation and Software Technology” (SC) and present current activities regarding software engineering and high performance computing (HPC) in German or international projects. Software engineering at SC focusses on data and knowledge management as well as tools for studies and experiments. We discuss how we apply software configuration management, validation and verification in our projects. Concrete research topics are traceability of (software devel...

  19. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Packet-Level Analysis

    Science.gov (United States)

    2015-09-01

    individual fragments using the hash-based method. In general, fragments appear in order and relatively close to each other in the file. A fragment ... data product derived from the data model is shown in Fig. 5, a Google Earth Keyhole Markup Language (KML) file. This product includes aggregate ... Abbreviations: BLOb, binary large object; FPGA, field-programmable gate array; HPC, high-performance computing; IP, Internet Protocol; KML, Keyhole Markup Language

  20. ICAROUS - Integrated Configurable Algorithms for Reliable Operations Of Unmanned Systems

    Science.gov (United States)

    Consiglio, María; Muñoz, César; Hagen, George; Narkawicz, Anthony; Balachandran, Swee

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This paper describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and contingency control functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.

  1. Architectural improvements and 28 nm FPGA implementation of the APEnet+ 3D Torus network for hybrid HPC systems

    International Nuclear Information System (INIS)

    Ammendola, Roberto; Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Paolucci, Pier Stanislao; Lonardo, Alessandro; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero

    2014-01-01

    Modern Graphics Processing Units (GPUs) are now considered accelerators for general purpose computation. A tight interaction between the GPU and the interconnection network is the strategy to express the full potential on capability computing of a multi-GPU system on large HPC clusters; that is the reason why an efficient and scalable interconnect is a key technology to finally deliver GPUs for scientific HPC. In this paper we show the latest architectural and performance improvement of the APEnet+ network fabric, a FPGA-based PCIe board with 6 fully bidirectional off-board links with 34 Gbps of raw bandwidth per direction, and X8 Gen2 bandwidth towards the host PC. The board implements a Remote Direct Memory Access (RDMA) protocol that leverages upon peer-to-peer (P2P) capabilities of Fermi- and Kepler-class NVIDIA GPUs to obtain real zero-copy, low-latency GPU-to-GPU transfers. Finally, we report on the development activities for 2013 focusing on the adoption of the latest generation 28 nm FPGAs and the preliminary tests performed on this new platform.

  2. Architectural improvements and 28 nm FPGA implementation of the APEnet+ 3D Torus network for hybrid HPC systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, Roberto [INFN Sezione Roma Tor Vergata (Italy); Biagioni, Andrea; Frezza, Ottorino; Cicero, Francesca Lo; Paolucci, Pier Stanislao; Lonardo, Alessandro; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero [INFN Sezione Roma (Italy)

    2014-06-11

    Modern Graphics Processing Units (GPUs) are now considered accelerators for general purpose computation. A tight interaction between the GPU and the interconnection network is the strategy to express the full potential on capability computing of a multi-GPU system on large HPC clusters; that is the reason why an efficient and scalable interconnect is a key technology to finally deliver GPUs for scientific HPC. In this paper we show the latest architectural and performance improvement of the APEnet+ network fabric, a FPGA-based PCIe board with 6 fully bidirectional off-board links with 34 Gbps of raw bandwidth per direction, and X8 Gen2 bandwidth towards the host PC. The board implements a Remote Direct Memory Access (RDMA) protocol that leverages upon peer-to-peer (P2P) capabilities of Fermi- and Kepler-class NVIDIA GPUs to obtain real zero-copy, low-latency GPU-to-GPU transfers. Finally, we report on the development activities for 2013 focusing on the adoption of the latest generation 28 nm FPGAs and the preliminary tests performed on this new platform.

  3. Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Canon, Shane

    2011-10-12

    DOE JGI's Zhong Wang, chair of the High-performance Computing session, gives a brief introduction before Berkeley Lab's Shane Canon talks about "Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  4. Final Report for File System Support for Burst Buffers on HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Yu, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-11-27

    Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV: a Key-Value Store for Metadata Management of Distributed Burst Buffers, a user-level file system with multiple backends, and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.

  5. Integration of NDE Reliability and Fracture Mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Becker, F. L.; Doctor, S. R.; Heasler, P. G.; Morris, C. J.; Pitman, S. G.; Selby, G. P.; Simonen, F. A.

    1981-03-01

    The Pacific Northwest Laboratory is conducting a four-phase program for measuring and evaluating the effectiveness and reliability of in-service inspection (ISI) performed on the primary system piping welds of commercial light water reactors (LWRs). Phase I of the program is complete. A survey was made of the state of practice for ultrasonic ISI of LWR primary system piping welds. Fracture mechanics calculations were made to establish required nondestructive testing sensitivities. In general, it was found that fatigue flaws less than 25% of wall thickness would not grow to failure within an inspection interval of 10 years. However, in some cases failure could occur considerably faster. Statistical methods for predicting and measuring the effectiveness and reliability of ISI were developed and will be applied in the "Round Robin Inspections" of Phase II. Methods were also developed for the production of flaws typical of those found in service. Samples fabricated by these methods will be used in Phase II to test inspection effectiveness and reliability. Measurements were made of the influence of flaw characteristics (i.e., roughness, tightness, and orientation) on inspection reliability. These measurements, as well as the predictions of a statistical model for inspection reliability, indicate that current reporting and recording sensitivities are inadequate.

  6. Integrated Reliability and Risk Analysis System (IRRAS) Version 2.0 user's guide

    International Nuclear Information System (INIS)

    Russell, K.D.; Sattison, M.B.; Rasmuson, D.M.

    1990-06-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Also provided in the system is an integrated full-screen editor for use when interfacing with remote mainframe computer systems. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 2.0 and is the subject of this user's guide. Version 2.0 of IRRAS provides all of the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance. 9 refs., 292 figs., 4 tabs

  7. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    Science.gov (United States)

    Huang, Qian

    2014-09-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. In order to investigate physical properties of minerals at extreme conditions in computational mineral physics, parallel computing technology is used to speed up the performance by utilizing multiple computer resources to process a computational task simultaneously, thereby greatly reducing computation time. Traditionally, parallel computing has been addressed by using High Performance Computing (HPC) solutions and installed facilities such as clusters and super computers. Today, it has been seen that there is a tremendous growth in cloud computing. Infrastructure as a Service (IaaS), the on-demand and pay-as-you-go model, creates a flexible and cost-effective means of accessing computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need to improve performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application of it is developed. In this paper, an overall description of this SaaS application is presented. This contribution can promote cloud application development in computational mineral physics, and cross-disciplinary studies.

  8. Technology success: Integration of power plant reliability and effective maintenance

    International Nuclear Information System (INIS)

    Ferguson, K.

    2008-01-01

    The nuclear power generation sector has a tradition of utilizing technology as a key attribute for advancement. Companies that own, manage, and operate nuclear power plants can be expected to continue to rely on technology as a vital element of success. Inherent with the operations of the nuclear power industry in many parts of the world is the close connection between efficiency of power plant operations and successful business survival. The relationship among power plant availability, reliability of systems and components, and viability of the enterprise is more evident than ever. Technology decisions need to be made that reflect business strategies, work processes, as well as needs of stakeholders and authorities. Such rigor is needed to address overarching concerns such as power plant life extension and license renewal, new plant orders, outage management, plant safety, inventory management, etc. Particular to power plant reliability, the prudent leveraging of technology as a key to future success is vital. A dominant concern is effective asset management as physical plant assets age. Many plants are in, or are entering, a situation in which systems and component design life and margins are converging such that failure threats can come into play with increasing frequency. Wisely selected technologies can be vital to the identification of emerging threats to reliable performance of key plant features and to initiating effective maintenance actions and investments that can sustain or enhance current reliability in a cost effective manner. This attention to detail is vital to investment in new plants as well. This paper and presentation will address (1) specific technology success in place at power plants, including nuclear, that integrates attention to attaining high plant reliability and effective maintenance actions as well as (2) complementary actions that maximize technology success. In addition, the range of benefits that accrue as a result of

  9. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in analysis of very
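
    As a minimal example of the Monte Carlo approach mentioned above, the sketch below estimates a failure probability P(g(X) < 0) for an assumed limit state g = R - S with normally distributed resistance and load, and compares it with the exact result available for this linear case; the distributions are illustrative only and are not taken from the record.

        import math
        import numpy as np

        rng = np.random.default_rng(42)
        n = 1_000_000
        resistance = rng.normal(500.0, 50.0, n)     # assumed strength, kN
        load = rng.normal(300.0, 60.0, n)           # assumed load effect, kN

        g = resistance - load                        # limit-state function: failure when g < 0
        pf_mc = np.mean(g < 0.0)

        beta = 200.0 / math.hypot(50.0, 60.0)        # exact reliability index for this linear case
        pf_exact = 0.5 * math.erfc(beta / math.sqrt(2.0))
        print(f"Monte Carlo Pf ~ {pf_mc:.4e}, exact Pf = {pf_exact:.4e}")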

  10. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. The treatment draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It provides a history of human reliability analysis, and includes examples of the application of the systems approach

  11. Integrated Reliability-Based Optimal Design of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1987-01-01

    In conventional optimal design of structural systems the weight or the initial cost of the structure is usually used as the objective function. Further, the constraints require that the stresses and/or strains at some critical points have to be less than some given values. Finally, all variables......-based optimal design is discussed. Next, an optimal inspection and repair strategy for existing structural systems is presented. An optimization problem is formulated, where the objective is to minimize the expected total future cost of inspection and repair subject to the constraint that the reliability...... value. The reliability can be measured from an element and/or a systems point of view. A number of methods to solve reliability-based optimization problems have been suggested, see e.g. Frangopol [1], Murotsu et al. [2], Thoft-Christensen & Sørensen [3] and Sørensen [4]. For structures where...

  12. An integrated methodology for the dynamic performance and reliability evaluation of fault-tolerant systems

    International Nuclear Information System (INIS)

    Dominguez-Garcia, Alejandro D.; Kassakian, John G.; Schindall, Joel E.; Zinchuk, Jeffrey J.

    2008-01-01

    We propose an integrated methodology for the reliability and dynamic performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers to design the control system, but also incorporates artifacts to model the failure behavior of each component. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. This methodology not only enables an integrated framework for evaluating dynamic performance and reliability of fault-tolerant systems, but also enables a method for guiding the system design process, and further optimization. To illustrate the methodology, we present a case-study of a lateral-directional flight control system for a fighter aircraft
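
    A toy version of the configuration-level analysis described above: three configurations (nominal, one redundant actuator failed, both failed), an assumed continuous-time Markov generator built from a per-actuator failure rate, and a per-configuration flag recording whether the dynamic-performance checks are assumed to pass. The rates and flags are invented; the flight-control case study is not reproduced.

        import numpy as np
        from scipy.linalg import expm

        # Configurations: 0 = nominal, 1 = one redundant actuator failed, 2 = both failed.
        lam = 1e-4                                   # assumed per-actuator failure rate, 1/h
        Q = np.array([[-2 * lam, 2 * lam, 0.0],      # continuous-time Markov generator
                      [0.0,      -lam,    lam],
                      [0.0,       0.0,    0.0]])
        meets_requirements = np.array([1.0, 1.0, 0.0])   # assumed outcome of performance checks

        p0 = np.array([1.0, 0.0, 0.0])
        for hours in (1_000, 10_000, 50_000):
            p = p0 @ expm(Q * hours)                 # configuration probabilities at time t
            print(f"t={hours:>6} h  P(config)={np.round(p, 5)}  "
                  f"P(performance requirements met)={p @ meets_requirements:.5f}")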

  13. Integrated Reliability and Risk Analysis System (IRRAS), Version 2.5: Reference manual

    International Nuclear Information System (INIS)

    Russell, K.D.; McKay, M.K.; Sattison, M.B.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.

    1991-03-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification. Version 1.0 of the IRRAS program was released in February of 1987. Since that time, many user comments and enhancements have been incorporated into the program providing a much more powerful and user-friendly system. This version has been designated IRRAS 2.5 and is the subject of this Reference Manual. Version 2.5 of IRRAS provides the same capabilities as Version 1.0 and adds a relational data base facility for managing the data, improved functionality, and improved algorithm performance. 7 refs., 348 figs

  14. HPC Co-operation between industry and university

    International Nuclear Information System (INIS)

    Ruhle, R.

    2003-01-01

    The full text of publication follows. Some years ago industry and universities were using the same kind of high performance computers, so it seemed appropriate to run the systems in common. The synergies achieved are larger systems with better capabilities, shared skills in operating and using the system, and lower operating costs because of the larger scale of operations. An example of a business model which allows that kind of co-operation is demonstrated. Recently, more and more simulations, especially in the automotive industry, are using PC clusters. A small number of PCs are used for one simulation, but the cluster is used for a large number of simulations as a throughput device. These devices are easily installed at the department level, and it is difficult to achieve better cost at a central site, mainly because of the cost of the network. This is in contrast to the scientific need, which still requires capability computing. In the presentation, strategies are discussed for which cooperation potential in HPC (high performance computing) still exists. These are: to install heterogeneous computer farms, which allow the best computer to be used for each application; to improve the quality of large-scale simulation models used in design calculations; or to form expert teams from industry and university to solve difficult problems in industrial applications. Some examples of this co-operation are shown

  15. Human Reliability Program Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  16. Improved structural integrity through advances in reliable residual stress measurement: the impact of ENGIN-X

    Science.gov (United States)

    Edwards, L.; Santisteban, J. R.

    The determination of accurate reliable residual stresses is critical to many fields of structural integrity. Neutron stress measurement is a non-destructive technique that uniquely provides insights into stress fields deep within engineering components and structures. As such, it has become an increasingly important tool within engineering, leading to improved manufacturing processes to reduce stress and distortion as well as to the definition of more precise lifing procedures. This paper describes the likely impact of the next generation of dedicated engineering stress diffractometers currently being constructed and the utility of the technique using examples of residual stresses both beneficial and detrimental to structural integrity.

  17. Approach for an integral power transformer reliability model

    NARCIS (Netherlands)

    Schijndel, van A.; Wouters, P.A.A.F.; Steennis, E.F.; Wetzer, J.M.

    2012-01-01

    In electrical power transmission and distribution networks power transformers represent a crucial group of assets both in terms of reliability and investments. In order to safeguard the required quality at acceptable costs, decisions must be based on a reliable forecast of future behaviour. The aim

  18. Resilience Design Patterns: A Structured Approach to Resilience at Extreme Scale

    International Nuclear Information System (INIS)

    Engelmann, Christian; Hukerikar, Saurabh

    2017-01-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space remains fragmented. There are no formal methods and metrics to integrate the various HPC resilience techniques into composite solutions, nor are there methods to holistically evaluate the adequacy and efficacy of such solutions in terms of their protection coverage, and their performance and power efficiency characteristics. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this paper, we develop a structured approach to the design, evaluation and optimization of HPC resilience using the concept of design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the problems caused by various types of faults, errors and failures in HPC systems and the techniques used to deal with these events. Each well-known solution that addresses a specific HPC resilience challenge is described in the form of a pattern. We develop a complete catalog of such resilience design patterns, which may be used by system architects, system software and tools developers, application programmers, as well as users and operators as essential building blocks when designing and deploying resilience solutions. We also develop a design framework that enhances a designer's understanding of the opportunities for integrating multiple patterns across layers of the system stack and the important constraints during implementation of the individual patterns. It is also useful for defining mechanisms and interfaces to coordinate flexible fault management across

  19. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    Directory of Open Access Journals (Sweden)

    Chie Takahashi

    2011-10-01

    Full Text Available Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the ‘raw’ visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change in hand opening caused by a given change in object size. Here, we examine whether the brain appropriately adjusts the weights given to visual and haptic size signals when tool geometry changes. We first estimated each cue's reliability by measuring size-discrimination thresholds in vision-alone and haptics-alone conditions. We varied haptic reliability using tools with different object-size:hand-opening ratios (1:1, 0.7:1, and 1.4:1). We then measured the weights given to vision and haptics with each tool, using a cue-conflict paradigm. The weight given to haptics varied with tool type in a manner that was well predicted by the single-cue reliabilities (MLE model; Ernst and Banks, 2002). This suggests that the process of visual-haptic integration appropriately accounts for variations in haptic reliability introduced by different tool geometries.
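
    Under the MLE model cited above (Ernst and Banks, 2002), each cue's weight is its normalized reliability, where reliability is the inverse of the cue's variance. The following is a minimal sketch of that prediction; the sigma values are illustrative stand-ins for the measured single-cue discrimination thresholds, not data from the study.

    ```python
    import numpy as np

    def mle_weights(sigma_vision, sigma_haptic):
        """Reliability = 1/variance; weights are normalized reliabilities."""
        r_v, r_h = 1.0 / sigma_vision**2, 1.0 / sigma_haptic**2
        return r_v / (r_v + r_h), r_h / (r_v + r_h)

    def combined_estimate(s_vision, s_haptic, sigma_vision, sigma_haptic):
        # Fused size estimate and its (smaller) standard deviation.
        w_v, w_h = mle_weights(sigma_vision, sigma_haptic)
        estimate = w_v * s_vision + w_h * s_haptic
        sigma_c = np.sqrt(1.0 / (1.0 / sigma_vision**2 + 1.0 / sigma_haptic**2))
        return estimate, sigma_c

    # Illustrative values: a tool that changes the object-size:hand-opening
    # ratio changes the haptic sigma, so the predicted weights shift.
    for sigma_h in (4.0, 6.0):  # hypothetical haptic sigmas for two tools
        w_v, w_h = mle_weights(sigma_vision=5.0, sigma_haptic=sigma_h)
        print(f"sigma_h={sigma_h}: w_vision={w_v:.2f}, w_haptic={w_h:.2f}")

    est, sigma_c = combined_estimate(s_vision=50.0, s_haptic=53.0,
                                     sigma_vision=5.0, sigma_haptic=4.0)
    print(f"fused size estimate: {est:.1f} mm (sigma {sigma_c:.1f} mm)")
    ```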

  20. Freva - Freie Univ Evaluation System Framework for Scientific HPC Infrastructures in Earth System Modeling

    Science.gov (United States)

    Kadow, C.; Illing, S.; Schartner, T.; Grieger, J.; Kirchner, I.; Rust, H.; Cubasch, U.; Ulbrich, U.

    2017-12-01

    The Freie Univ Evaluation System Framework (Freva - freva.met.fu-berlin.de) is a software infrastructure for standardized data and tool solutions in Earth system science (e.g. www-miklip.dkrz.de, cmip-eval.dkrz.de). Freva runs on high performance computers to handle customizable evaluation systems of research projects, institutes or universities. It combines different software technologies into one common hybrid infrastructure, including all features present in the shell and web environment. The database interface satisfies the international standards provided by the Earth System Grid Federation (ESGF). Freva indexes different data projects into one common search environment by storing the meta data information of the self-describing model, reanalysis and observational data sets in a database. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers and their plugins in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. The integrated web-shell (shellinabox) adds a degree of freedom in the choice of the working environment and can be used as a gateway to the research project's HPC system. Plugins are able to integrate their results, e.g. post-processed data, into the user's database. This allows, for example, post-processing plugins to feed statistical analysis plugins, which fosters an active exchange between plugin developers of a research project. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a database. Configurations and results of the tools can be shared among scientists via the shell or web system. Furthermore, if configurations match

  1. Strengthening LLNL Missions through Laboratory Directed Research and Development in High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Willis, D. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-01

    High performance computing (HPC) has been a defining strength of Lawrence Livermore National Laboratory (LLNL) since its founding. Livermore scientists have designed and used some of the world’s most powerful computers to drive breakthroughs in nearly every mission area. Today, the Laboratory is recognized as a world leader in the application of HPC to complex science, technology, and engineering challenges. Most importantly, HPC has been integral to the National Nuclear Security Administration’s (NNSA’s) Stockpile Stewardship Program—designed to ensure the safety, security, and reliability of our nuclear deterrent without nuclear testing. A critical factor behind Lawrence Livermore’s preeminence in HPC is the ongoing investments made by the Laboratory Directed Research and Development (LDRD) Program in cutting-edge concepts to enable efficient utilization of these powerful machines. Congress established the LDRD Program in 1991 to maintain the technical vitality of the Department of Energy (DOE) national laboratories. Since then, LDRD has been, and continues to be, an essential tool for exploring anticipated needs that lie beyond the planning horizon of our programs and for attracting the next generation of talented visionaries. Through LDRD, Livermore researchers can examine future challenges, propose and explore innovative solutions, and deliver creative approaches to support our missions. The present scientific and technical strengths of the Laboratory are, in large part, a product of past LDRD investments in HPC. Here, we provide seven examples of LDRD projects from the past decade that have played a critical role in building LLNL’s HPC, computer science, mathematics, and data science research capabilities, and describe how they have impacted LLNL’s mission.

  2. Hippocampal-medial prefrontal circuit supports memory updating during learning and post-encoding rest

    Science.gov (United States)

    Schlichting, Margaret L.; Preston, Alison R.

    2015-01-01

    Learning occurs in the context of existing memories. Encountering new information that relates to prior knowledge may trigger integration, whereby established memories are updated to incorporate new content. Here, we provide a critical test of recent theories suggesting hippocampal (HPC) and medial prefrontal (MPFC) involvement in integration, both during and immediately following encoding. Human participants with established memories for a set of initial (AB) associations underwent fMRI scanning during passive rest and encoding of new related (BC) and unrelated (XY) pairs. We show that HPC-MPFC functional coupling during learning was more predictive of trial-by-trial memory for associations related to prior knowledge relative to unrelated associations. Moreover, the degree to which HPC-MPFC functional coupling was enhanced following overlapping encoding was related to memory integration behavior across participants. We observed a dissociation between anterior and posterior MPFC, with integration signatures during post-encoding rest specifically in the posterior subregion. These results highlight the persistence of integration signatures into post-encoding periods, indicating continued processing of interrelated memories during rest. We also interrogated the coherence of white matter tracts to assess the hypothesis that integration behavior would be related to the integrity of the underlying anatomical pathways. Consistent with our predictions, more coherent HPC-MPFC white matter structure was associated with better performance across participants. This HPC-MPFC circuit also interacted with content-sensitive visual cortex during learning and rest, consistent with reinstatement of prior knowledge to enable updating. These results show that the HPC-MPFC circuit supports on- and offline integration of new content into memory. PMID:26608407

  3. Delivering on Industry Equipment Reliability Goals By Leveraging an Integration Platform and Decision Support Environment

    International Nuclear Information System (INIS)

    Coveney, Maureen K.; Bailey, W. Henry; Parkinson, William

    2004-01-01

    Utilities have invested in many costly enterprise systems - computerized maintenance management systems, document management systems, enterprise grade portals, to name but a few - and often very specialized systems, like data historians, high-end diagnostic systems, and other focused and point solutions. Recent industry reports indicate that the average nuclear power plant uses roughly 1900 systems to perform daily work, of which some 250 might facilitate the equipment reliability decision-making process. The time has come to leverage the investment in these systems by providing a common platform for integration and decision-making that will further the collective industry aim of enhancing the reliability of our nuclear generation assets to maintain high plant availability and to deliver on plant life extension goals without requiring additional large-scale investment in IT infrastructure. (authors)

  4. Reliability and validity of the Japanese version of the Community Integration Measure for community-dwelling people with schizophrenia

    OpenAIRE

    Shioda, Ai; Tadaka, Etsuko; Okochi, Ayako

    2017-01-01

    Background Community integration is an essential right for people with schizophrenia that affects their well-being and quality of life, but no valid instrument exists to measure it in Japan. The aim of the present study is to develop and evaluate the reliability and validity of the Japanese version of the Community Integration Measure (CIM) for people with schizophrenia. Methods The Japanese version of the CIM was developed as a self-administered questionnaire based on the original version of...

  5. Electronics reliability calculation and design

    CERN Document Server

    Dummer, Geoffrey W A; Hiller, N

    1966-01-01

    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  6. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for the implementation of importance sampling are suggested. (author)
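
    The report's programs are not shown in this record. The sketch below only illustrates the importance-sampling idea it refers to: a small failure probability is estimated by sampling from a density shifted into the failure region and reweighting by the likelihood ratio. The limit state and distributions are assumptions chosen for demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def failed(x):
        # Illustrative limit state: failure when a standard normal variable exceeds 4.
        return x > 4.0

    N = 100_000

    # Crude Monte Carlo: almost no samples fall in the failure region.
    x = rng.standard_normal(N)
    p_crude = failed(x).mean()

    # Importance sampling: sample from N(4, 1) and reweight by f(x)/g(x).
    shift = 4.0
    y = rng.standard_normal(N) + shift
    weights = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - shift)**2)
    p_is = (failed(y) * weights).mean()

    print(f"crude MC estimate:    {p_crude:.2e}")
    print(f"importance sampling:  {p_is:.2e}")
    print(f"exact 1 - Phi(4):     {3.167e-05:.2e}")  # reference value
    ```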

  7. Multidisciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
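
    NESSUS's fast probability integration routines are not reproduced here. As a rough first-order illustration of the kind of computation involved, the sketch below evaluates a reliability index and failure probability for a simple linear limit state spanning two disciplines; the limit-state coefficients and the variable moments are assumed values, not taken from the study.

    ```python
    import math

    def failure_probability_fosm(mu, sigma, g):
        """First-order second-moment estimate for a linear limit state g(X) = a0 + a.X.

        mu, sigma: means and standard deviations of independent random variables.
        g: (a0, a) coefficients of the limit state; failure when g(X) < 0.
        """
        a0, a = g
        mu_g = a0 + sum(ai * mi for ai, mi in zip(a, mu))
        sigma_g = math.sqrt(sum((ai * si) ** 2 for ai, si in zip(a, sigma)))
        beta = mu_g / sigma_g                      # reliability (safety) index
        pf = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta)
        return beta, pf

    # Illustrative two-discipline limit state: structural capacity R minus a
    # thermally induced load S, g = R - S (all numbers are assumptions).
    beta, pf = failure_probability_fosm(mu=[500.0, 350.0], sigma=[40.0, 60.0],
                                        g=(0.0, [1.0, -1.0]))
    print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
    ```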

  8. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary between the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic
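
    As a hedged illustration of the two-phase idea described above, the sketch below takes assumed subsystem failure probabilities (standing in for the output of a low-level analysis) and combines them through a small fault tree (the high-level phase). The tree structure and numbers are illustrative and are not the DSS model.

    ```python
    def and_gate(probs):
        # All inputs must fail (independence assumed).
        p = 1.0
        for q in probs:
            p *= q
        return p

    def or_gate(probs):
        # The gate fails if any input fails (independence assumed).
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p

    # Low-level phase output (assumed numbers): per-demand failure
    # probabilities of the subsystems.
    p_sensor_a, p_sensor_b = 1e-3, 1e-3
    p_logic_software = 5e-4
    p_actuator = 2e-4

    # High-level phase: fault tree combining the subsystem results.
    p_sensing = and_gate([p_sensor_a, p_sensor_b])   # redundant sensors
    p_top = or_gate([p_sensing, p_logic_software, p_actuator])

    print(f"system failure probability per demand: {p_top:.2e}")
    ```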

  9. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary between the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the dynamic safety system (DSS) shows that the estimated reliability of the system is quite reasonable and realistic. (author)

  10. Usage of OpenStack Virtual Machine and MATLAB HPC Add-on leads to faster turnaround

    KAUST Repository

    Van Waveren, Matthijs

    2017-03-16

    We need to run hundreds of MATLAB® simulations while changing the parameters between each simulation. These simulations need to be run sequentially, and the parameters are defined manually from one simulation to the next. This makes this type of workload unsuitable for a shared cluster. For this reason we are using a cluster running in an OpenStack® Virtual Machine and are using the MATLAB HPC Add-on for submitting jobs to the cluster. As a result we are now able to have a turnaround time for the simulations of the order of a few hours, instead of the 24 hours needed on a local workstation.

  11. Final Technical Report: Integrated Distribution-Transmission Analysis for Very High Penetration Solar PV

    Energy Technology Data Exchange (ETDEWEB)

    Palmintier, Bryan [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Hale, Elaine [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Hansen, Timothy M. [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Jones, Wesley [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Biagioni, David [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Baker, Kyri [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Wu, Hongyu [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Giraldez, Julieta [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Sorensen, Harry [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Lunacek, Monte [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Merket, Noel [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Jorgenson, Jennie [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States)); Hodge, Bri-Mathias [NREL (National Renewable Energy Laboratory (NREL), Golden, CO (United States))

    2016-01-29

    Transmission and distribution simulations have historically been conducted separately, echoing their division in grid operations and planning while avoiding inherent computational challenges. Today, however, rapid growth in distributed energy resources (DERs)--including distributed generation from solar photovoltaics (DGPV)--requires understanding the unprecedented interactions between distribution and transmission. To capture these interactions, especially for high-penetration DGPV scenarios, this research project developed a first-of-its-kind, high performance computer (HPC) based, integrated transmission-distribution tool, the Integrated Grid Modeling System (IGMS). The tool was then used in initial explorations of system-wide operational interactions of high-penetration DGPV.

  12. Construction of the energy matrix for complex atoms. Part VIII: Hyperfine structure HPC calculations for terbium atom

    Science.gov (United States)

    Elantkowska, Magdalena; Ruczkowski, Jarosław; Sikorski, Andrzej; Dembczyński, Jerzy

    2017-11-01

    A parametric analysis of the hyperfine structure (hfs) for the even-parity configurations of atomic terbium (Tb I) is presented in this work. We introduce the complete set of 4fN-core states in our high-performance computing (HPC) calculations. For the calculation of the huge hyperfine structure matrix, which requires approximately 5000 hours when run on a single CPU, we propose methods utilizing a personal computer cluster or, alternatively, a cluster of Microsoft Azure virtual machines (VMs). These methods give a factor-of-12 performance boost, enabling the calculations to complete in an acceptable time.

  13. A "Cyber Wind Facility" for HPC Wind Turbine Field Experiments

    Science.gov (United States)

    Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue

    2013-03-01

    The Penn State "Cyber Wind Facility" (CWF) is a high-fidelity multi-scale high performance computing (HPC) environment in which "cyber field experiments" are designed and "cyber data" collected from wind turbines operating within the atmospheric boundary layer (ABL) environment. Conceptually the "facility" is akin to a high-tech wind tunnel with controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field and forced by true atmospheric turbulence with controlled stability state. The CWF is created from state-of-the-art high-accuracy technology geometry and grid design and numerical methods, and with high-resolution simulation strategies that blend unsteady RANS near the surface with high fidelity large-eddy simulation (LES) in separated boundary layer, blade and rotor wake regions, embedded within high-resolution LES of the ABL. CWF experiments complement physical field facility experiments that can capture wider ranges of meteorological events, but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.

  14. Solar Energy Grid Integration Systems (SEGIS): adding functionality while maintaining reliability and economics

    Science.gov (United States)

    Bower, Ward

    2011-09-01

    An overview is provided of the activities and progress made during the US DOE Solar Energy Grid Integration Systems (SEGIS) solicitation, which sought to add functionality while maintaining reliability and economics. The SEGIS R&D opened pathways for interconnecting PV systems to intelligent utility grids and micro-grids of the future. The new capabilities are complemented by "value added" features. The new hardware designs resulted in smaller, less material-intensive products that are being viewed by utilities as enabling dispatchable generation and not just unpredictable negative loads. The technical solutions enable "advanced integrated system" concepts and "smart grid" processes to move forward in a faster and more focused manner. The advanced integrated inverters/controllers can now incorporate energy management functionality, intelligent electrical grid support features and a multiplicity of communication technologies. Portals for energy flow and two-way communications have been implemented. SEGIS hardware was developed for the utility grid of today, which was designed for one-way power flow, for intermediate grid scenarios, AND for the grid of tomorrow, which will seamlessly accommodate managed two-way power flows as required by large-scale deployment of solar and other distributed generation. The SEGIS hardware and control developed for today meets existing standards and codes AND provides for future connections to a "smart grid" mode that enables utility control and optimized performance.

  15. Use of reliability in the LMFBR industry

    International Nuclear Information System (INIS)

    Penland, J.R.; Smith, A.M.; Goeser, D.K.

    1977-01-01

    The mission of a Reliability Program for an LMFBR should be to enhance the design and operational characteristics relative to safety and to plant availability. Successful accomplishment of this mission requires proper integration of several reliability engineering tasks--analysis, testing, parts controls and program controls. Such integration requires, in turn, that the program be structured, planned and managed. This paper describes the technical integration necessary and the management activities required to achieve mission success for LMFBRs

  16. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Quality Assurance Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; R. Nims; K. J. Kvarfordt; C. Wharton

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Version 6 and 7, what constitutes its parts, and limitations of those processes.

  17. Design for reliability: NASA reliability preferred practices for design and test

    Science.gov (United States)

    Lalli, Vincent R.

    1994-01-01

    This tutorial summarizes reliability experience from both NASA and industry and reflects engineering practices that support current and future civil space programs. These practices were collected from various NASA field centers and were reviewed by a committee of senior technical representatives from the participating centers (members are listed at the end). The material for this tutorial was taken from the publication issued by the NASA Reliability and Maintainability Steering Committee (NASA Reliability Preferred Practices for Design and Test. NASA TM-4322, 1991). Reliability must be an integral part of the systems engineering process. Although both disciplines must be weighed equally with other technical and programmatic demands, the application of sound reliability principles will be the key to the effectiveness and affordability of America's space program. Our space programs have shown that reliability efforts must focus on the design characteristics that affect the frequency of failure. Herein, we emphasize that these identified design characteristics must be controlled by applying conservative engineering principles.

  18. Network Traffic Analysis With Query Driven Visualization - SC 2005 HPC Analytics Results

    Energy Technology Data Exchange (ETDEWEB)

    Stockinger, Kurt; Wu, Kesheng; Campbell, Scott; Lau, Stephen; Fisk, Mike; Gavrilov, Eugene; Kent, Alex; Davis, Christopher E.; Olinger,Rick; Young, Rob; Prewett, Jim; Weber, Paul; Caudell, Thomas P.; Bethel,E. Wes; Smith, Steve

    2005-09-01

    Our analytics challenge is to identify, characterize, and visualize anomalous subsets of large collections of network connection data. We use a combination of HPC resources, advanced algorithms, and visualization techniques. To effectively and efficiently identify the salient portions of the data, we rely on a multi-stage workflow that includes data acquisition, summarization (feature extraction), novelty detection, and classification. Once these subsets of interest have been identified and automatically characterized, we use a state-of-the-art high-dimensional query system to extract data subsets for interactive visualization. Our approach is equally useful for other large-data analysis problems where it is more practical to identify interesting subsets of the data for visualization than to render all data elements. By reducing the size of the rendering workload, we enable highly interactive and useful visualizations. As a result of this work we were able to analyze six months' worth of data interactively with response times two orders of magnitude shorter than with conventional methods.
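
    The high-dimensional query system used in the paper is not reproduced here. The pandas sketch below only illustrates the query-driven principle that a small, analyst-defined subset of connection records is extracted for rendering instead of all rows; the column names, thresholds, and synthetic data are assumptions.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 1_000_000

    # Stand-in for summarized network connection records.
    conn = pd.DataFrame({
        "dst_port": rng.integers(0, 65536, n),
        "duration": rng.exponential(2.0, n),
        "bytes":    rng.lognormal(8.0, 2.0, n),
        "anomaly":  rng.random(n),   # score from an upstream novelty detector
    })

    # Query-driven selection: only records matching the analyst's predicate
    # are handed to the interactive renderer, not all n rows.
    subset = conn.query("anomaly > 0.999 and dst_port < 1024 and bytes > 1000000")
    print(f"rendering {len(subset)} of {n} records")
    ```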

  19. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0, technical reference manual

    International Nuclear Information System (INIS)

    Russell, K.D.; Atwood, C.L.; Galyean, W.J.; Sattison, M.B.; Rasmuson, D.M.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume provides information on the principles used in the construction and operation of Version 5.0 of the Integrated Reliability and Risk Analysis System (IRRAS) and the System Analysis and Risk Assessment (SARA) system. It summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms that these programs use to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that are appropriate under various assumptions concerning repairability and mission time. It defines the measures of basic event importance that these programs can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by these programs to generate random basic event probabilities from various distributions. Further references are given, and a detailed example of the reduction and quantification of a simple fault tree is provided in an appendix
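
    As a hedged illustration of the quantification steps summarized above, the sketch below evaluates a top-event probability from assumed minimal cut sets using the rare-event approximation and the min-cut upper bound, and propagates basic-event uncertainty with simple Monte Carlo sampling. The event names, probabilities, and distributions are illustrative and do not come from SAPHIRE.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Minimal cut sets over basic events (illustrative).
    cut_sets = [("PUMP_A", "PUMP_B"), ("VALVE_C",), ("PUMP_A", "DG_FAIL")]

    def cut_set_prob(cs, p):
        prob = 1.0
        for event in cs:
            prob *= p[event]
        return prob

    def top_event(p):
        q = [cut_set_prob(cs, p) for cs in cut_sets]
        rare_event = sum(q)                               # first-order approximation
        mcub = 1.0 - np.prod([1.0 - qi for qi in q])      # min-cut upper bound
        return rare_event, mcub

    point = {"PUMP_A": 1e-2, "PUMP_B": 1e-2, "VALVE_C": 1e-4, "DG_FAIL": 5e-2}
    print("point estimate (rare event, MCUB):", top_event(point))

    # Simple Monte Carlo uncertainty propagation: sample each basic-event
    # probability from an assumed lognormal and collect the top-event results.
    samples = []
    for _ in range(10_000):
        p = {e: min(1.0, rng.lognormal(np.log(v), 0.5)) for e, v in point.items()}
        samples.append(top_event(p)[1])
    print("MCUB mean and 95th percentile:",
          np.mean(samples), np.percentile(samples, 95))
    ```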

  20. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use numerical simulation of charged particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though it should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation in order to anticipate the future memory access pattern. Access pattern forecasts can then be used to formulate optimization decisions during application execution, improving the performance of the application at a future time step based on the observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units to deliver a good aggregate performance. We used these optimization techniques and an anticipation strategy to design a cache-aware, memory-efficient parallel algorithm to address the irregularities in the parallel implementation of charged particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our approach of using an anticipation strategy is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.
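
    The dissertation's learning models are not reproduced here. The rough sketch below only illustrates the anticipation idea: a simple per-bucket linear trend is fitted to recent access-count histograms, and the forecast is used to decide whether to reorder data before the next time step. The data, predictor, and threshold are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def forecast_next(history):
        """Per-bucket linear extrapolation of access counts from recent steps."""
        t = np.arange(history.shape[0])
        coeffs = np.polyfit(t, history, deg=1)       # slope and intercept per bucket
        return coeffs[0] * (t[-1] + 1) + coeffs[1]

    def should_reorder(predicted_counts, imbalance_threshold=2.0):
        """Reorder (e.g. re-sort particles by bucket) if the predicted load is skewed."""
        return predicted_counts.max() > imbalance_threshold * predicted_counts.mean()

    # Synthetic access histogram: counts of memory accesses per bucket over time,
    # with a hot spot that grows as the simulated beam evolves.
    steps, buckets = 8, 16
    history = rng.poisson(50, size=(steps, buckets)).astype(float)
    history[:, 3] += np.linspace(0, 400, steps)

    pred = forecast_next(history)
    print("reorder before next step:", should_reorder(pred))
    ```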

  1. Integration of human reliability analysis into the probabilistic risk assessment process: Phase 1

    International Nuclear Information System (INIS)

    Bell, B.J.; Vickroy, S.C.

    1984-10-01

    A research program was initiated to develop a testable set of analytical procedures for integrating human reliability analysis (HRA) into the probabilistic risk assessment (PRA) process to more adequately assess the overall impact of human performance on risk. In this three-phase program, stand-alone HRA/PRA analytic procedures will be developed and field evaluated to provide improved methods, techniques, and models for applying quantitative and qualitative human error data which systematically integrate HRA principles, techniques, and analyses throughout the entire PRA process. Phase 1 of the program involved analysis of state-of-the-art PRAs to define the structures and processes currently in use in the industry. Phase 2 research will involve developing a new or revised PRA methodology which will enable more efficient regulation of the industry using quantitative or qualitative results of the PRA. Finally, Phase 3 will be to field test those procedures to assure that the results generated by the new methodologies will be usable and acceptable to the NRC. This paper briefly describes the first phase of the program and outlines the second

  2. Spark and HPC for High Energy Physics Data Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sehrish, Saba; Kowalkowski, Jim; Paterno, Marc

    2017-05-01

    A full High Energy Physics (HEP) data analysis is divided into multiple data reduction phases. Processing within these phases is extremely time-consuming; therefore, intermediate results are stored in files held in mass storage systems and referenced as part of large datasets. This processing model limits what can be done with interactive data analytics. Growth in the size and complexity of experimental datasets, along with emerging big data tools, is beginning to cause changes to the traditional ways of doing data analyses. Use of big data tools for HEP analysis looks promising, mainly because extremely large HEP datasets can be represented and held in memory across a system, and accessed interactively by encoding an analysis using high-level programming abstractions. The mainstream tools, however, are not designed for scientific computing or for exploiting the available HPC platform features. We use an example from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in Geneva, Switzerland. The LHC is the highest-energy particle collider in the world. Our use case focuses on searching for new types of elementary particles explaining Dark Matter in the universe. We use HDF5 as our input data format, and Spark to implement the use case. We show the benefits and limitations of using Spark with HDF5 on Edison at NERSC.
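
    The paper's analysis code is not reproduced here. As a hedged sketch of the general pattern it describes, the following distributes row ranges of an HDF5 dataset across Spark executors and applies a selection in parallel; the file path, dataset name, and the cut are illustrative assumptions, not the CMS use case.

    ```python
    import h5py
    from pyspark.sql import SparkSession

    H5_PATH = "events.h5"      # hypothetical input file (shared filesystem assumed)
    DATASET = "missing_et"     # hypothetical per-event quantity

    spark = SparkSession.builder.appName("hdf5-spark-sketch").getOrCreate()
    sc = spark.sparkContext

    with h5py.File(H5_PATH, "r") as f:
        n_events = f[DATASET].shape[0]

    chunk = 100_000
    ranges = [(s, min(s + chunk, n_events)) for s in range(0, n_events, chunk)]

    def load_chunk(bounds):
        start, stop = bounds
        # Each executor opens the shared file and reads only its own slice.
        with h5py.File(H5_PATH, "r") as f:
            return f[DATASET][start:stop].tolist()

    candidates = (sc.parallelize(ranges, len(ranges))
                    .flatMap(load_chunk)
                    .filter(lambda met: met > 200.0))  # illustrative selection cut

    print("events passing the cut:", candidates.count())
    spark.stop()
    ```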

  3. Adoption of High Performance Computational (HPC) Modeling Software for Widespread Use in the Manufacture of Welded Structures

    Energy Technology Data Exchange (ETDEWEB)

    Brust, Frederick W. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Punch, Edward F. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Twombly, Elizabeth Kurth [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kalyanam, Suresh [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Kennedy, James [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Hattery, Garty R. [Engineering Mechanics Corporation of Columbus, Columbus, OH (United States); Dodds, Robert H. [Professional Consulting Services, Inc., Lisle, IL (United States); Mach, Justin C [Caterpillar, Peoria, IL (United States); Chalker, Alan [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Nicklas, Jeremy [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Gohar, Basil M [Ohio Supercomputer Center (OSC), Columbus, OH (United States); Hudak, David [Ohio Supercomputer Center (OSC), Columbus, OH (United States)

    2016-12-30

    . Through VFT®, manufacturing companies can avoid costly design changes after fabrication. This leads to the concept of joint design/fabrication where these important disciplines are intimately linked to minimize fabrication costs. Finally service performance (such as fatigue, corrosion, and fracture/damage) can be improved using this product. Emc2’s DOE SBIR Phase II effort successfully adapted VFT® to perform efficiently in an HPC environment independent of commercial software on a platform to permit easy and cost effective access to the code. This provides the key for SMEs to access this sophisticated and proven methodology that is quick, accurate, cost effective and available “on-demand” to address weld-simulation and fabrication problems prior to manufacture. In addition, other organizations, such as Government agencies and large companies, may have a need for spot use of such a tool. The open source code, WARP3D, a high performance finite element code used in fracture and damage assessment of structures, was significantly modified so computational weld problems can be solved efficiently on multiple processors and threads with VFT®. The thermal solver for VFT®, based on a series of closed form solution approximations, was extensively enhanced for solution on multiple processors greatly increasing overall speed. In addition, the graphical user interface (GUI) was re-written to permit SMEs access to an HPC environment at the Ohio Super Computer Center (OSC) to integrate these solutions with WARP3D. The GUI is used to define all weld pass descriptions, number of passes, material properties, consumable properties, weld speed, etc. for the structure to be modeled. The GUI was enhanced to make it more user-friendly so that non-experts can perform weld modeling. Finally, an extensive outreach program to market this capability to fabrication companies was performed. This access will permit SMEs to perform weld modeling to improve their competitiveness at a

  4. HPC Colony II: FAST_OS II: Operating Systems and Runtime Systems at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, Jose [IBM, Armonk, NY (United States)

    2013-11-13

    HPC Colony II has been a 36-month project focused on providing portable performance for leadership class machines—a task made difficult by the emerging variety of more complex computer architectures. The project attempts to move the burden of portable performance to adaptive system software, thereby allowing domain scientists to concentrate on their field rather than the fine details of a new leadership class machine. To accomplish our goals, we focused on adding intelligence into the system software stack. Our revised components include: new techniques to address OS jitter; new techniques to dynamically address load imbalances; new techniques to map resources according to architectural subtleties and application dynamic behavior; new techniques to dramatically improve the performance of checkpoint-restart; and new techniques to address membership service issues at scale.

  5. Strategy for establishing integrated I and C reliability of operating nuclear power plants in Korea

    International Nuclear Information System (INIS)

    Kang, H. T.; Chung, H. Y.; Lee, Y. H.

    2008-01-01

    Korea Hydro and Nuclear Power Co. (KHNP) is in the process of developing an integrated I and C reliability strategy for managing I and C obsolescence and phasing in new technology that both meets the needs of the fleet and captures the benefits of applying proven solutions to multiple plants, with reduced incremental costs. In view of this, we are developing I and C component management which covers major failure modes, symptoms of performance degradation, condition-based or time-based preventive maintenance (PM), monitoring, and failure finding and correction based on equipment reliability (ER). Furthermore, for I and C system replacement management, we are carrying out a 3-year fundamental design of I and C systems upgrades and developing the long-term implementation plan for the major I and C systems to improve plant operations, eliminate operator challenges, reduce maintenance costs, and cope with the challenges of component obsolescence. For accomplishing the I and C digital upgrade in the near future, we chose as the demonstration plant Younggwang (YGN) units 3 and 4, which are Korean Standard Nuclear Power Plants (KSNP). In this paper, we establish the long-term reliability strategy for I and C based on ER for component replacement and, furthermore, on I and C digital upgrades for system replacement. (authors)

  6. Multi-Disciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  7. User's and Programmer's Guide for HPC Platforms in CIEMAT; Guia de Utilizacion y programacion de las Plataformas de Calculo del CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Munoz Roldan, A

    2003-07-01

    This Technical Report presents a description of the High Performance Computing platforms available to researchers in CIEMAT and dedicated mainly to scientific computing. It targets users and programmers and aims to help in the process of developing new code and porting code across platforms. A brief review is also presented of the historical evolution of the field of HPC, i.e., the programming paradigms and underlying architectures. (Author) 32 refs.

  8. 24. MPA-seminar: safety and reliability of plant technology with special emphasis on integrity and life management. Vol. 2. Papers 28-63

    International Nuclear Information System (INIS)

    1999-01-01

    The second volume is dedicated to the safety and reliability of plant technology with special emphasis on the integrity and life management. The following topics are discussed: 1. Integrity of vessels, pipes and components. 2. Fracture mechanics. 3. Measures for the extension of service life, and 4. Online Monitoring. All 30 contributions are separately analyzed for this database. (orig.)

  9. Probabilistic safety assessment of Tehran Research Reactor using systems analysis programs for hands-on integrated reliability evaluations

    International Nuclear Information System (INIS)

    Hosseini, M.H.; Nematollahi, M.R.; Sepanloo, K.

    2004-01-01

    Probabilistic safety assessment is found to be a practical tool for research reactor safety due to the intense involvement of human interactions in an experimental facility. In this document the application of probabilistic safety assessment to the Tehran Research Reactor is presented. The level 1 probabilistic safety assessment application involved: familiarization with the plant, selection of accident initiators, mitigating functions and system definitions, event tree construction and quantification, fault tree construction and quantification, human reliability, component failure database development, and dependent failure analysis. Each of the steps of the analysis given above is discussed with highlights from the selected results. Quantification of the constructed models is done using the Systems Analysis Programs for Hands-on Integrated Reliability Evaluations software

  10. Current Capabilities at SNL for the Integration of Small Modular Reactors onto Smart Microgrids Using Sandia's Smart Microgrid Technology High Performance Computing and Advanced Manufacturing.

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, Salvador B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    Smart grids are a crucial component for enabling the nation’s future energy needs, as part of a modernization effort led by the Department of Energy. Smart grids and smart microgrids are being considered in niche applications, and as part of a comprehensive energy strategy to help manage the nation’s growing energy demands, for critical infrastructures, military installations, small rural communities, and large populations with limited water supplies. As part of a far-reaching strategic initiative, Sandia National Laboratories (SNL) presents herein a unique, three-pronged approach to integrate small modular reactors (SMRs) into microgrids, with the goal of providing economically-competitive, reliable, and secure energy to meet the nation’s needs. SNL’s triad methodology involves an innovative blend of smart microgrid technology, high performance computing (HPC), and advanced manufacturing (AM). In this report, Sandia’s current capabilities in those areas are summarized, as well as paths forward that will enable DOE to achieve its energy goals. In the area of smart grid/microgrid technology, Sandia’s current computational capabilities can model the entire grid, including temporal aspects and cyber security issues. Our tools include system development, integration, testing and evaluation, monitoring, and sustainment.

  11. The large-scale integration of wind generation: Impacts on price, reliability and dispatchable conventional suppliers

    International Nuclear Information System (INIS)

    MacCormack, John; Hollis, Aidan; Zareipour, Hamidreza; Rosehart, William

    2010-01-01

    This work examines the effects of large-scale integration of wind-powered electricity generation in a deregulated energy-only market on loads (in terms of electricity prices and supply reliability) and dispatchable conventional power suppliers. Hourly models of wind generation time series, load and resultant residual demand are created. From these, a non-chronological residual demand duration curve is developed that is combined with a probabilistic model of dispatchable conventional generator availability, a model of an energy-only market with a price cap, and a model of generator costs and dispatch behavior. A number of simulations are performed to evaluate the effect on electricity prices, overall reliability of supply, the ability of a dominant supplier acting strategically to profitably withhold supplies, and the fixed cost recovery of dispatchable conventional power suppliers at different levels of wind generation penetration. Medium- and long-term responses of the market and/or regulator are discussed.
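
    A minimal sketch of the residual-demand construction described above follows; synthetic hourly load and wind series stand in for the data used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    hours = 8760

    # Synthetic hourly load and wind generation time series (illustrative only).
    t = np.arange(hours)
    load = 9000 + 1500 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 300, hours)
    wind_capacity = 3000.0
    wind = wind_capacity * np.clip(rng.beta(2, 5, hours), 0, 1)

    # Residual demand: what dispatchable conventional units must still serve.
    residual = load - wind

    # Non-chronological residual demand duration curve: sort descending.
    duration_curve = np.sort(residual)[::-1]

    print(f"peak residual demand:  {duration_curve[0]:.0f} MW")
    print(f"hours above 10000 MW:  {(duration_curve > 10_000).sum()}")
    ```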

  12. An integral equation approach to the interval reliability of systems modelled by finite semi-Markov processes

    International Nuclear Information System (INIS)

    Csenki, A.

    1995-01-01

    The interval reliability for a repairable system which alternates between working and repair periods is defined as the probability of the system being functional throughout a given time interval. In this paper, a set of integral equations is derived for this dependability measure, under the assumption that the system is modelled by an irreducible finite semi-Markov process. The result is applied to the semi-Markov model of a two-unit system with sequential preventive maintenance. The method used for the numerical solution of the resulting system of integral equations is a two-point trapezoidal rule. The implementation environment is the matrix computation package MATLAB on the Apple Macintosh SE/30. The numerical results are discussed and compared with those from simulation
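
    The semi-Markov kernels for the two-unit system are not given in this record. The sketch below only illustrates the named numerical scheme, solving a generic Volterra integral equation of the second kind, x(t) = f(t) + int_0^t K(t, s) x(s) ds, with the trapezoidal rule on a uniform grid; the kernel and forcing term are illustrative and chosen so the exact solution is known.

    ```python
    import numpy as np

    def solve_volterra(f, K, t_max=5.0, n=500):
        """Trapezoidal-rule solution of x(t) = f(t) + int_0^t K(t, s) x(s) ds."""
        t = np.linspace(0.0, t_max, n + 1)
        h = t[1] - t[0]
        x = np.zeros(n + 1)
        x[0] = f(t[0])
        for i in range(1, n + 1):
            acc = 0.5 * K(t[i], t[0]) * x[0]
            acc += sum(K(t[i], t[j]) * x[j] for j in range(1, i))
            # Solve for x_i, which appears on both sides via the end-point weight.
            x[i] = (f(t[i]) + h * acc) / (1.0 - 0.5 * h * K(t[i], t[i]))
        return t, x

    # Illustrative problem with a known answer: f(t) = exp(-t), K(t, s) = exp(-(t - s))
    # gives x(t) = 1 exactly, so the maximum error checks the scheme.
    t, x = solve_volterra(f=lambda t: np.exp(-t),
                          K=lambda t, s: np.exp(-(t - s)))
    print("max error vs exact solution x(t) = 1:", np.abs(x - 1.0).max())
    ```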

  13. Safety, reliability, risk management and human factors: an integrated engineering approach applied to nuclear facilities

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Silva, Eliane Magalhaes Pereira da; Costa, Antonio Carlos Lopes da; Reis, Sergio Carneiro dos

    2009-01-01

    Nuclear energy has an important engineering legacy to share with conventional industry. Much of the development of the tools related to safety, reliability, risk management, and human factors is associated with nuclear plant processes, mainly because of the public concern about nuclear power generation. Despite the close association between these subjects, there are some important differences in approach. The reliability engineering approach uses several techniques to minimize the component failures that cause the failure of complex systems. These techniques include, for instance, redundancy, diversity, standby sparing, safety factors, and reliability-centered maintenance. On the other hand, system safety is primarily concerned with hazard management, that is, the identification, evaluation and control of hazards. Rather than just looking at failure rates or engineering strengths, system safety examines the interactions among system components. The events that cause accidents may be complex combinations of component failures, faulty maintenance, design errors, human actions, or actuation of instrumentation and control. System safety therefore deals with a broader spectrum of risk management, including ergonomics, legal requirements, quality control, public acceptance, political considerations, and many other non-technical influences. Treating these subjects individually can compromise the completeness of the analysis and the measures associated with both risk reduction and the improvement of safety and reliability. By analyzing together the engineering systems and controls of a nuclear facility, their management systems and operational procedures, and the human factors engineering, many benefits can be realized. This paper proposes an integration of these issues based on the application of systems theory. (author)

  14. Safety, reliability, risk management and human factors: an integrated engineering approach applied to nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Silva, Eliane Magalhaes Pereira da; Costa, Antonio Carlos Lopes da; Reis, Sergio Carneiro dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: vasconv@cdtn.br, e-mail: silvaem@cdtn.br, e-mail: aclc@cdtn.br, e-mail: reissc@cdtn.br

    2009-07-01

    Nuclear energy has an important engineering legacy to share with conventional industry. Much of the development of the tools related to safety, reliability, risk management, and human factors is associated with nuclear plant processes, mainly because of the public concern about nuclear power generation. Despite the close association between these subjects, there are some important differences in approach. The reliability engineering approach uses several techniques to minimize the component failures that cause the failure of complex systems. These techniques include, for instance, redundancy, diversity, standby sparing, safety factors, and reliability-centered maintenance. On the other hand, system safety is primarily concerned with hazard management, that is, the identification, evaluation and control of hazards. Rather than just looking at failure rates or engineering strengths, system safety examines the interactions among system components. The events that cause accidents may be complex combinations of component failures, faulty maintenance, design errors, human actions, or actuation of instrumentation and control. System safety therefore deals with a broader spectrum of risk management, including ergonomics, legal requirements, quality control, public acceptance, political considerations, and many other non-technical influences. Treating these subjects individually can compromise the completeness of the analysis and the measures associated with both risk reduction and the improvement of safety and reliability. By analyzing together the engineering systems and controls of a nuclear facility, their management systems and operational procedures, and the human factors engineering, many benefits can be realized. This paper proposes an integration of these issues based on the application of systems theory. (author)

  15. HPC - Platforms Penta Chart

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Angelina Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-08

    Strategy, planning, acquiring: very large scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by 3 years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL, including connection to scalable storage via large-scale storage networking and assurance of correct and secure operations. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.

  16. Study on seismic reliability for foundation grounds and surrounding slopes of nuclear power plants. Proposal of evaluation methodology and integration of seismic reliability evaluation system

    International Nuclear Information System (INIS)

    Ohtori, Yasuki; Kanatani, Mamoru

    2006-01-01

    This paper proposes a methodology for evaluating the annual probability of failure of soil structures subjected to earthquakes and integrates an analysis system for the seismic reliability of soil structures. The method is based on margin analysis, which evaluates the ground motion level at which the structure is damaged. First, a ground motion index that is strongly correlated with damage to, or response of, the specific structure is selected. The ultimate strength in terms of the selected ground motion index is then evaluated. Next, variation of soil properties is taken into account for the evaluation of the seismic stability of structures. The variation of the safety factor (SF) is evaluated and then converted into the variation of the specific ground motion index. Finally, the fragility curve is developed and the annual probability of failure is evaluated by combining it with the seismic hazard curve. The system facilitates the assessment of seismic reliability. A generator of random numbers, a dynamic analysis program and a stability analysis program are incorporated into one package. Once we define a structural model, the distribution of the soil properties, input ground motions and so forth, a list of safety factors for each sliding line is obtained. Monte Carlo simulation (MCS), Latin hypercube sampling (LHS), the point estimation method (PEM) and the first-order second-moment (FOSM) method implemented in this system are also introduced. As numerical examples, a ground foundation and a surrounding slope are assessed using the proposed method and the integrated system. (author)
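
    As a hedged sketch of the final combination step described above, the code below convolves an assumed lognormal fragility curve with an assumed seismic hazard curve to obtain an annual probability of failure; none of the parameter values come from the paper.

    ```python
    import numpy as np
    from math import log, sqrt, erf

    def fragility(a, median=0.6, beta=0.4):
        # Lognormal fragility curve: P(failure | peak ground acceleration a), in g.
        return 0.5 * (1.0 + erf(log(a / median) / (beta * sqrt(2.0))))

    def hazard(a):
        # Illustrative hazard curve: annual frequency of exceeding acceleration a (g).
        return 1e-3 * (a / 0.1) ** -2.5

    # Discretize acceleration and convolve: P(failure | a) times the annual
    # frequency of ground motions falling in each acceleration bin.
    a = np.linspace(0.05, 2.0, 400)
    freq_in_bin = hazard(a[:-1]) - hazard(a[1:])
    a_mid = 0.5 * (a[:-1] + a[1:])
    annual_pf = np.sum(np.array([fragility(x) for x in a_mid]) * freq_in_bin)
    print(f"annual probability of failure: {annual_pf:.2e}")
    ```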

  17. Modeling the Performance of Fast Mulipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and address these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player in exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.

  18. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    Science.gov (United States)

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in the accuracy and coverage of each source. Previous integration approaches calculate the reliabilities of protein interaction information sources based on congruity with a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities that does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to the evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show that 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508
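    As a generic illustration of how per-source reliabilities can be turned into a confidence score for a candidate interaction (the specific combination schemes compared in the paper are not reproduced here), a noisy-OR combination assumes the sources err independently:

```python
from functools import reduce

def noisy_or(reliabilities):
    """Combine reliabilities of independent sources that all report the same interaction.
    Each value is the estimated probability that the corresponding source is correct."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

# Hypothetical reliabilities of three sources reporting the same protein pair
print(noisy_or([0.6, 0.5, 0.3]))   # -> 0.86
```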

  19. Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Christopher H.; Long, Hai; Sides, Scott; Vaidhynathan, Deepthi; Jones, Wesley

    2015-10-15

    Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion of this study is that the additional cores are of little value to individual task performance: limitations to application parallelism, or resource contention among concurrently running but independent tasks, limit effective utilization of the added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance for procurement of future HPC systems at NREL. First, raw core count must be balanced against available resources, particularly memory bandwidth; balance of system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might come from enabling multiple concurrent jobs per node: given the right type and size of workload, more may be achieved by doing many slow things at once than by doing fast things in sequence.

  20. Integration of human reliability analysis into the probabilistic risk assessment process: phase 1

    International Nuclear Information System (INIS)

    Bell, B.J.; Vickroy, S.C.

    1985-01-01

    The US Nuclear Regulatory Commission and Pacific Northwest Laboratory initiated a research program in 1984 to develop a testable set of analytical procedures for integrating human reliability analysis (HRA) into the probabilistic risk assessment (PRA) process, in order to more adequately assess the overall impact of human performance on risk. In this three-phase program, stand-alone HRA/PRA analytic procedures will be developed and field evaluated to provide improved methods, techniques, and models for applying quantitative and qualitative human error data and for systematically integrating HRA principles, techniques, and analyses throughout the entire PRA process. Phase 1 of the program involved analysis of state-of-the-art PRAs to define the structures and processes currently in use in the industry. Phase 2 research will involve developing a new or revised PRA methodology which will enable more efficient regulation of the industry using the quantitative or qualitative results of the PRA. Finally, Phase 3 will field test those procedures to assure that the results generated by the new methodologies will be usable and acceptable to the NRC. This paper briefly describes the first phase of the program and outlines the second.

  1. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second-fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage, since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  2. A graphical user interface for real-time analysis of XPCS using HPC

    Energy Technology Data Exchange (ETDEWEB)

    Sikorski, M., E-mail: sikorski@aps.anl.gov [Argonne National Laboratory, Advanced Photon Source, 9700 S Cass Ave, Argonne, IL 60439 (United States); Jiang, Z. [Argonne National Laboratory, Advanced Photon Source, 9700 S Cass Ave, Argonne, IL 60439 (United States); Sprung, M. [HASYLAB at DESY, Notkestr. 85, D-22607 Hamburg (Germany); Narayanan, S.; Sandy, A.R.; Tieman, B. [Argonne National Laboratory, Advanced Photon Source, 9700 S Cass Ave, Argonne, IL 60439 (United States)

    2011-09-01

    With the development of third generation synchrotron radiation sources, X-ray photon correlation spectroscopy has emerged as a powerful technique for characterizing equilibrium and non-equilibrium dynamics in complex materials at nanometer length scales over a wide range of time-scales (0.001-1000 s). Moreover, the development of powerful new direct detection CCD cameras has allowed investigation of faster dynamical processes. A consequence of these technical improvements is the need to reduce a very large amount of area detector data within a short time. This problem can be solved by utilizing a large number of processors (32-64) in the cluster architecture to improve the efficiency of the calculations by 1-2 orders of magnitude (Tieman et al., this issue). However, to make such a data analysis system operational, powerful and user-friendly control software needs to be developed. As a part of the effort to maintain a high data acquisition and reduction rate, we have developed Matlab-based software that acts as an interface between the user and the high performance computing (HPC) cluster.
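    The quantity being reduced on the cluster in XPCS is the normalized intensity autocorrelation g2(q, τ) = ⟨I(t)I(t+τ)⟩ / ⟨I⟩². A minimal, single-machine sketch of that reduction, using Python multiprocessing over q-bins as a stand-in for the 32-64 processor cluster and randomly generated intensities instead of detector frames, is shown below.

```python
import numpy as np
from multiprocessing import Pool

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2
    for a single q-bin (1-D time series of integrated intensities)."""
    mean_sq = intensity.mean() ** 2
    return np.array([np.mean(intensity[:-lag] * intensity[lag:]) / mean_sq
                     for lag in range(1, max_lag + 1)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 8 q-bins, 10,000 frames each (Poisson photon counts)
    series = [rng.poisson(5.0, size=10_000).astype(float) for _ in range(8)]
    with Pool(processes=4) as pool:          # stands in for the HPC cluster workers
        results = pool.starmap(g2, [(s, 100) for s in series])
    print(results[0][:5])                    # g2 at the first five lag times of q-bin 0
```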

  3. A graphical user interface for real-time analysis of XPCS using HPC

    International Nuclear Information System (INIS)

    Sikorski, M.; Jiang, Z.; Sprung, M.; Narayanan, S.; Sandy, A.R.; Tieman, B.

    2011-01-01

    With the development of third generation synchrotron radiation sources, X-ray photon correlation spectroscopy has emerged as a powerful technique for characterizing equilibrium and non-equilibrium dynamics in complex materials at nanometer length scales over a wide range of time-scales (0.001-1000 s). Moreover, the development of powerful new direct detection CCD cameras has allowed investigation of faster dynamical processes. A consequence of these technical improvements is the need to reduce a very large amount of area detector data within a short time. This problem can be solved by utilizing a large number of processors (32-64) in the cluster architecture to improve the efficiency of the calculations by 1-2 orders of magnitude (Tieman et al., this issue). However, to make such a data analysis system operational, powerful and user-friendly control software needs to be developed. As a part of the effort to maintain a high data acquisition and reduction rate, we have developed Matlab-based software that acts as an interface between the user and the high performance computing (HPC) cluster.

  4. New Methods for Building-In and Improvement of Integrated Circuit Reliability

    NARCIS (Netherlands)

    van der Pol, J.A.; van der Pol, Jacob Antonius

    2000-01-01

    Over the past 30 years the reliability of semiconductor products has improved by a factor of 100, while at the same time the complexity of the circuits has increased by a factor of 10^5. This 7-decade reliability improvement has been realised by implementing a sophisticated reliability assurance system

  5. Coordinated Energy Management in Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Indrani Paul

    2014-01-01

    This paper examines energy management in a heterogeneous processor consisting of an integrated CPU-GPU for high-performance computing (HPC) applications. Energy management for HPC applications is challenged by their uncompromising performance requirements and complicated by the need to coordinate energy management across distinct core types, a new and less understood problem. We examine the intra-node CPU-GPU frequency sensitivity of HPC applications on tightly coupled CPU-GPU architectures as the first step in understanding power and performance optimization for a heterogeneous multi-node HPC system. The insights from this analysis form the basis of a coordinated energy management scheme, called DynaCo, for integrated CPU-GPU architectures. We implement DynaCo on a modern heterogeneous processor and compare its performance to a state-of-the-art power- and performance-management algorithm. DynaCo improves the measured average energy-delay-squared (ED²) product by up to 30% with less than 2% average performance loss across several exascale and other HPC workloads.
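    The energy-delay-squared (ED²) product used as the figure of merit above weights runtime more heavily than energy, so an energy saving only counts if it costs very little performance. A small helper makes the comparison explicit; the numbers are made up for illustration.

```python
def ed2(energy_joules, delay_seconds):
    """Energy-delay-squared product: lower is better."""
    return energy_joules * delay_seconds ** 2

baseline = ed2(energy_joules=1200.0, delay_seconds=10.0)   # hypothetical uncoordinated run
managed  = ed2(energy_joules=1000.0, delay_seconds=10.1)   # hypothetical coordinated run
print(f"ED2 improvement: {100 * (baseline - managed) / baseline:.1f}%")   # ~15%
```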

  6. Hawaii Electric System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  7. Development of a methodology for conducting an integrated HRA/PRA. Task 1: An assessment of human reliability influences during LP&S conditions in PWRs

    Energy Technology Data Exchange (ETDEWEB)

    Luckas, W.J.; Barriere, M.T.; Brown, W.S. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [Wreathall (John) and Co., Dublin, OH (United States); Cooper, S.E. [Science Applications International Corp., McLean, VA (United States)

    1993-06-01

    During Low Power and Shutdown (LP&S) conditions in a nuclear power plant (i.e., when the reactor is subcritical or at less than 10-15% power), human interactions with the plant's systems will be more frequent and more direct. Control is typically not mediated by automation, and there are fewer protective systems available. Therefore, an assessment of LP&S related risk should include a greater emphasis on human reliability than such an assessment made for power operation conditions. In order to properly account for the increase in human interaction and thus be able to perform a probabilistic risk assessment (PRA) applicable to operations during LP&S, it is important that a comprehensive human reliability assessment (HRA) methodology be developed and integrated into the LP&S PRA. The tasks comprising the comprehensive HRA methodology development are as follows: (1) identification of the human reliability related influences and associated human actions during LP&S, (2) identification of potentially important LP&S related human actions and appropriate HRA framework and quantification methods, and (3) incorporation and coordination of methodology development with other integrated PRA/HRA efforts. This paper describes the first task, i.e., the assessment of human reliability influences and any associated human actions during LP&S conditions for a pressurized water reactor (PWR).

  8. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    Science.gov (United States)

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index and integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, the paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.

  9. Reliability evaluation of smart distribution grids

    OpenAIRE

    Kazemi, Shahram

    2011-01-01

    The term "Smart Grid" generally refers to a power grid equipped with the advanced technologies dedicated for purposes such as reliability improvement, ease of control and management, integrating of distributed energy resources and electricity market operations. Improving the reliability of electric power delivered to the end users is one of the main targets of employing smart grid technologies. The smart grid investments targeted for reliability improvement can be directed toward the generati...

  10. Land Use Management in the Panama Canal Watershed to Maximize Hydrologic Ecosystem Services Benefits: Explicit Simulation of Preferential Flow Paths in an HPC Environment

    Science.gov (United States)

    Regina, J. A.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Cheng, Y.; Zhu, J.

    2017-12-01

    Preferential flow paths (PFP) resulting from biotic and abiotic factors contribute significantly to the generation of runoff in moist lowland tropical watersheds. Flow through PFPs represents the dominant mechanism by which land use choices affect hydrological behavior, and the relative influence of PFPs varies depending upon land-use management practices. Assessing the possible effects of land-use and landcover change on flows, and on other ecosystem services, in the humid tropics therefore partially depends on adequate simulation of PFPs across different land uses. Currently, 5% of global trade passes through the Panama Canal, which is supplied with fresh water from the Panama Canal Watershed. A third set of locks, recently constructed, is expected to double the capacity of the Canal. We incorporated explicit simulation of PFPs into the ADHydro HPC distributed hydrological model to simulate the effects of land-use and landcover change, driven by land management incentives, on water resources availability in the Panama Canal Watershed. These simulations help to test hypotheses related to the effectiveness of various proposed payments-for-ecosystem-services schemes. This presentation will focus on hydrological model formulation and performance in an HPC environment.

  11. La confiabilidad integral del activo. // The reliability of a physical asset.

    Directory of Open Access Journals (Sweden)

    L. F. Sexto Cabrera

    2008-01-01

    This article discusses several aspects that influence the reliability of a physical asset. Different classifications of failures are proposed, and some elements of human reliability analysis and the types of human error are presented. By way of introduction, tolerated chronic defects and the Juran trilogy model are discussed, followed by reflections on the gradual deterioration processes that any asset may undergo. Finally, the costs of reliability are discussed. Key words: reliability, failure, failure mode, chronic defects, human reliability, reliability costs.

  12. Integrated Life Cycle Management: A Strategy for Plants to Extend Operating Lifetimes Safely with High Operational Reliability

    International Nuclear Information System (INIS)

    Esselman, Thomas; Bruck, Paul; Mengers, Charles

    2012-01-01

    Nuclear plant operators are studying the possibility of extending the operating lifetimes of their existing generating facilities to 60 years and beyond. Many nuclear plants have been granted licenses to operate their facilities beyond the original 40-year term; however, in order to optimize long-term operating strategies, plant decision-makers need a consistent approach for evaluating their options. This paper proposes a standard methodology to support effective decision-making for the long-term management of selected station assets. The methods detailed are intended to be used by nuclear plant site management, equipment reliability personnel, long-term planners, capital asset planners, license renewal staff, and others who need to consider operation between the current time and the end of operation. This methodology, named Integrated Life Cycle Management (ILCM), will provide a technical basis to assist decision-makers regarding the timing of the large capital investments required to reach the end of operation safely and with high plant reliability. ILCM seeks to identify end-of-life-cycle failure probabilities for individual large capital plant assets and the attendant costs associated with their refurbishment or replacement. It will provide a standard basis for the evaluation of replacement and refurbishment options for these components. ILCM will also develop methods to integrate the individual assets over the entire plant, thus assisting nuclear plant decision-makers in their facility long-term operating strategies. (author)

  13. Reliability and validity of the Japanese version of the Community Integration Measure for community-dwelling people with schizophrenia.

    Science.gov (United States)

    Shioda, Ai; Tadaka, Etsuko; Okochi, Ayako

    2017-01-01

    Community integration is an essential right for people with schizophrenia that affects their well-being and quality of life, but no valid instrument exists to measure it in Japan. The aim of the present study is to develop and evaluate the reliability and validity of the Japanese version of the Community Integration Measure (CIM) for people with schizophrenia. The Japanese version of the CIM was developed as a self-administered questionnaire based on the original version of the CIM developed by McColl et al., and this study had a cross-sectional design. Construct validity was determined using a confirmatory factor analysis (CFA) and data from 291 community-dwelling people with schizophrenia in Japan. Internal consistency was calculated using Cronbach's alpha. The Lubben Social Network Scale (LSNS-6), the Rosenberg Self-Esteem Scale (RSE) and the UCLA Loneliness Scale, version 3 (UCLALS) were administered to assess the criterion-related validity of the Japanese version of the CIM. The participants were 263 people with schizophrenia who provided valid responses. Cronbach's alpha was 0.87, and CFA identified one domain with ten items that demonstrated the following values: goodness of fit index = 0.924, adjusted goodness of fit index = 0.881, comparative fit index = 0.925, and root mean square error of approximation = 0.085. The correlation coefficients were 0.43 (p …). The Japanese version of the CIM showed reliability and validity for assessing community integration for people with schizophrenia in Japan.
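    For reference, the internal-consistency statistic reported above (Cronbach's alpha) is computed from the respondent-by-item score matrix as below; the data here are randomly generated stand-ins, not the study's responses.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(size=(263, 1))                  # shared underlying trait
items = latent + 0.8 * rng.normal(size=(263, 10))   # 10 positively correlated items
print(round(cronbach_alpha(items), 2))
```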

  14. Energy Systems Integration Facility Videos | Energy Systems Integration Facility | NREL

    Science.gov (United States)

    Videos from NREL's Energy Systems Integration Facility (ESIF): NREL + SolarCity: Maximizing Solar Power on Electrical Grids; Redefining What's Possible for Renewable Energy: Grid Integration; Robot-Powered Reliability Testing at NREL's ESIF; Microgrid …

  15. Hawaii electric system reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  16. A Novel Evaluation Method for Building Construction Project Based on Integrated Information Entropy with Reliability Theory

    Directory of Open Access Journals (Sweden)

    Xiao-ping Bai

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index and integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, the paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.

  17. Fault tolerance and reliability in integrated ship control

    DEFF Research Database (Denmark)

    Nielsen, Jens Frederik Dalsgaard; Izadi-Zamanabadi, Roozbeh; Schiøler, Henrik

    2002-01-01

    Various strategies for achieving fault tolerance in large scale control systems are discussed. The positive and negative impacts of distribution through network communication are presented. The ATOMOS framework for standardized reliable marine automation is presented along with the corresponding...

  18. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

    The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to estimate the reliability of new products separately in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits that separately include a “reliability improvement factor” and an “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristic of the BMUA that information generated throughout the life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown.
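    The general mechanism of carrying reliability information forward from one life cycle stage to the next can be sketched with a conjugate Beta-Binomial update. The sketch below is generic and does not reproduce the paper's BMUA; in particular, the "reliability improvement factor" and "information fusion factor" are not modeled, and the stage data are made up.

```python
from scipy.stats import beta

# Prior belief about per-demand reliability carried over from the design stage (illustrative)
a, b = 8.0, 2.0                        # Beta(a, b) prior with mean 0.8

stages = [                             # (stage name, successes, failures) -- made-up data
    ("prototype testing", 45, 5),
    ("field trial",       190, 10),
]

for name, successes, failures in stages:
    a, b = a + successes, b + failures             # conjugate Beta-Binomial update
    posterior = beta(a, b)
    low, high = posterior.interval(0.90)
    print(f"{name}: posterior mean {posterior.mean():.3f}, "
          f"90% interval ({low:.3f}, {high:.3f})")
```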

  19. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Eurocode describes the 'index of reliability' as a measure of structural reliability related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to use reliability concepts explicitly in the design of structures, but the problems of structural engineering are better understood through them. Some of the main methods for the estimation of the probability of failure are exact analytical integration, numerical integration, approximate analytical methods, and simulation methods. Monte Carlo Simulation is used in this paper because it offers a very good tool for the estimation of probability with multivariate functions: complicated probability and statistics problems are solved through computer-aided simulation of a large number of trials. The procedure of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes are demonstrated in this paper.
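    A minimal Monte Carlo version of this calculation, for a generic limit-state function g = R - S with illustrative distributions rather than the bridge-pier model of the paper, looks like this:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000

# Illustrative resistance R and load effect S (placeholder distributions, arbitrary units)
R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)
S = rng.normal(loc=350.0, scale=50.0, size=n)

pf = np.mean(R - S <= 0.0)          # probability of failure, limit state g = R - S
beta_index = -norm.ppf(pf)          # reliability index corresponding to pf
print(f"Pf ~ {pf:.2e}, reliability index beta ~ {beta_index:.2f}")
```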

  20. Integrated Reliability Estimation of a Nuclear Maintenance Robot including a Software

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kim, Jae Hee; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    Conventional reliability estimation techniques such as Fault Tree Analysis (FTA), Reliability Block Diagrams (RBD), Markov models, and Event Tree Analysis (ETA) have been widely used and accepted in several industries. However, they have limitations when applied to complicated robot systems that include software, such as intelligent reactor inspection robots. In practice, therefore, expert judgment plays an important role in estimating the reliability of a complicated system, because experts can deal with diverse evidence related to reliability and perform inference based on it. The method proposed in this paper combines qualitative and quantitative evidence and performs inference as experts do. Furthermore, unlike human experts, it does so in a formal and quantitative way, by exploiting Bayesian networks (BNs).

  1. CADRIGS--computer aided design reliability interactive graphics system

    International Nuclear Information System (INIS)

    Kwik, R.J.; Polizzi, L.M.; Sticco, S.; Gerrard, P.B.; Yeater, M.L.; Hockenbury, R.W.; Phillips, M.A.

    1982-01-01

    An integrated reliability analysis program combining graphic representation of fault trees, automated database loading and referencing, and automated construction of reliability code input files was developed. The functional specifications for CADRIGS, the computer aided design reliability interactive graphics system, are presented. Previously developed fault tree segments used in auxiliary feedwater system safety analysis were constructed in CADRIGS and, when combined, yielded results identical to those resulting from manual input to the same reliability codes.

  2. Safety and reliability of automatization software

    Energy Technology Data Exchange (ETDEWEB)

    Kapp, K; Daum, R [Karlsruhe Univ. (TH) (Germany, F.R.). Lehrstuhl fuer Angewandte Informatik, Transport- und Verkehrssysteme

    1979-02-01

    Automated technical systems have to meet very high requirements concerning safety, security and reliability. Today, modern computers, especially microcomputers, are used as integral parts of those systems; in consequence, computer programs must work in a safe and reliable manner. Methods are discussed which allow the construction of safe and reliable software for automatic systems, such as reactor protection systems, and which allow proof that the safety requirements are met. As a result it is shown that only the method of total software diversification can satisfy all safety requirements at tolerable cost. In order to achieve a high degree of reliability, structured and modular programming in conjunction with high-level programming languages is recommended.

  3. Reliability analysis of reactor pressure vessel integrity

    International Nuclear Information System (INIS)

    Zheng Liangang; Lu Yongbo

    2012-01-01

    This paper performs a reliability analysis of a reactor pressure vessel (RPV) with ANSYS. The analysis methods include the direct Monte Carlo Simulation method, Latin Hypercube Sampling, central composite design, and Box-Behnken matrix design. The RPV integrity reliability under the given input conditions is presented. The results show that the factors affecting the reliability of the RPV base material are, in descending order, internal pressure, allowable basic stress, and the elasticity modulus of the base material, while the factors affecting bolt reliability are, in descending order, the allowable basic stress of the bolt material, bolt preload, and internal pressure. (authors)

  4. Application of high efficiency and reliable 3D-designed integral shrouded blades to nuclear turbines

    International Nuclear Information System (INIS)

    Watanabe, Eiichiro; Ohyama, Hiroharu; Tashiro, Hikaru; Sugitani, Toshiro; Kurosawa, Masaru

    1998-01-01

    Mitsubishi Heavy Industries, Ltd. has recently developed new blades for nuclear turbines in order to achieve higher efficiency and higher reliability. The 3D aerodynamic design of the 41-inch and 46-inch blades, their one-piece structural design (integral shrouded blades: ISB), and the verification test results obtained using a model steam turbine are described in this paper. The predicted efficiency and lower vibratory stress have been verified. Based on these 60 Hz ISB, a 50 Hz ISB series is under development using the law of similarity, without changing the thermodynamic performance or mechanical stress levels. Our 3D-designed reaction blades, which are used for the high pressure and low pressure upstream stages, are also briefly mentioned. (author)

  5. FOSS GIS on the GFZ HPC cluster: Towards a service-oriented Scientific Geocomputation Environment

    Science.gov (United States)

    Loewe, P.; Klump, J.; Thaler, J.

    2012-12-01

    High performance compute clusters can be used as geocomputation workbenches. Their wealth of resources enables us to take on geocomputation tasks which exceed the limitations of smaller systems. These general capabilities can be harnessed via tools such as Geographic Information Systems (GIS), provided they are able to utilize the available cluster configuration/architecture and provide a sufficient degree of user friendliness to allow for wide application. While server-level computing is clearly not sufficient for the growing number of data- or computation-intensive tasks undertaken, these tasks do not come close to the requirements needed for access to "top shelf" national cluster facilities, so until recently this kind of geocomputation research was effectively barred by lack of access to adequate resources. In this paper we report on the experiences gained by providing GRASS GIS as a software service on an HPC compute cluster at the German Research Centre for Geosciences using Platform Computing's Load Sharing Facility (LSF). GRASS GIS is the oldest and largest Free and Open Source (FOSS) GIS project. During ramp-up in 2011, multiple versions of GRASS GIS (v 6.4.2, 6.5 and 7.0) were installed on the HPC compute cluster, which currently consists of 234 nodes with 480 CPUs providing 3084 cores. Nineteen different processing queues with varying hardware capabilities and priorities are provided, allowing for fine-grained scheduling and load balancing. After successful initial testing, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues. The mechanisms are based on earlier work by NETELER et al. (2008) and allow all 3084 cores to be used for GRASS-based geocomputation work, although in practice applications are limited to the resources assigned to their respective queues. Applications of the new GIS functionality so far comprise hydrological analysis, remote sensing and the generation of maps of simulated tsunamis.
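    A sketch of the kind of deployment mechanism described, wrapping a scripted GRASS GIS task and handing it to an LSF processing queue, is shown below. The queue name, paths, and GRASS location/mapset are purely illustrative, and the `grass --exec` batch interface assumed here is the GRASS 7 one; the exact invocation used on the GFZ cluster may differ.

```python
import subprocess
from pathlib import Path

def submit_grass_job(script: Path, location_mapset: str,
                     queue: str = "geo_short", cores: int = 1) -> str:
    """Wrap a GRASS GIS batch script in an LSF bsub submission (illustrative sketch)."""
    grass_cmd = f"grass --text {location_mapset} --exec {script}"
    bsub = [
        "bsub",
        "-q", queue,                  # hypothetical queue name
        "-n", str(cores),             # number of job slots
        "-J", script.stem,            # job name
        "-o", f"{script.stem}.%J.out",
        grass_cmd,
    ]
    result = subprocess.run(bsub, capture_output=True, text=True, check=True)
    return result.stdout.strip()      # e.g. "Job <12345> is submitted to queue <geo_short>."

# Example (paths are hypothetical):
# print(submit_grass_job(Path("tsunami_maps.sh"),
#                        location_mapset="/data/grassdata/indian_ocean/PERMANENT"))
```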

  6. Photovoltaic and Wind Turbine Integration Applying Cuckoo Search for Probabilistic Reliable Optimal Placement

    Directory of Open Access Journals (Sweden)

    R. A. Swief

    2018-01-01

    This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Reliability indices such as Energy Not Supplied, the System Average Interruption Frequency Index, and the System Average Interruption Duration Index are the main indicators of reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place protection devices, install distributed generators, and determine the size of the distributed generators in radial feeders for reliability improvement. Distributed generation affects reliability, system power losses, and the voltage profile. The volatile behaviour of both photovoltaic cells and wind turbine farms affects the selection and placement of protection devices and the allocation of distributed generators. To improve reliability, reconfiguration takes place before installing both the protection devices and the distributed generators. Assessment of consumer power system reliability is a vital part of distribution system operation and development. The distribution system reliability calculation relies on probabilistic reliability indices, which can predict the interruption profile of a distribution system based on the volatile behaviour of the added generators and of the load. The validity of the proposed algorithm has been tested using a standard IEEE 69-bus system.
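    The reliability indices named above are simple aggregates over interruption events, and they are what the optimizer is trying to improve. A short helper computing them from outage records clarifies their definitions; the outage log and customer count below are made up.

```python
def reliability_indices(outages, customers_served):
    """SAIFI, SAIDI (hours) and ENS (kWh) from a list of
    (customers_interrupted, duration_hours, interrupted_load_kw) records."""
    saifi = sum(c for c, _, _ in outages) / customers_served
    saidi = sum(c * d for c, d, _ in outages) / customers_served
    ens = sum(d * p for _, d, p in outages)
    return saifi, saidi, ens

# Hypothetical yearly outage log for a feeder serving 1200 customers
outages = [(300, 2.0, 450.0), (120, 0.5, 180.0), (1200, 1.5, 1600.0)]
saifi, saidi, ens = reliability_indices(outages, customers_served=1200)
print(f"SAIFI={saifi:.2f} interruptions/customer, "
      f"SAIDI={saidi:.2f} h/customer, ENS={ens:.0f} kWh")
```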

  7. Challenges Regarding IP Core Functional Reliability

    Science.gov (United States)

    Berg, Melanie D.; LaBel, Kenneth A.

    2017-01-01

    For many years, intellectual property (IP) cores have been incorporated into field programmable gate array (FPGA) and application specific integrated circuit (ASIC) design flows. However, the use of large, complex IP cores was limited in products that required a high level of reliability. This is no longer the case: IP core insertion has become mainstream, including in highly reliable products. Due to limited visibility and control, challenges exist when using IP cores that can compromise product reliability. We discuss these challenges and suggest potential solutions for IP insertion in critical applications.

  8. User's and Programmer's Guide for HPC Platforms in CIEMAT; Guia de Utilizacion y programacion de las Plataformas de Calculo del CIEMAT

    Energy Technology Data Exchange (ETDEWEB)

    Munoz Roldan, A.

    2003-07-01

    This Technical Report presents a description of the High Performance Computing platforms available to researchers in CIEMAT and dedicated mainly to scientific computing. It targets users and programmers and aims to help with the processes of developing new code and porting code across platforms. A brief review is also presented of the historical evolution of the HPC field, i.e., the programming paradigms and underlying architectures. (Author) 32 refs.

  9. Integration of external estimated breeding values and associated reliabilities using correlations among traits and effects.

    Science.gov (United States)

    Vandenplas, J; Colinet, F G; Glorieux, G; Bertozzi, C; Gengler, N

    2015-12-01

    Based on a Bayesian view of linear mixed models, several studies have shown the possibility of integrating estimated breeding values (EBV) and associated reliabilities (REL) provided by genetic evaluations performed outside a given evaluation system into that genetic evaluation. Hereafter, the term "internal" refers to this given genetic evaluation system, and the term "external" refers to all other genetic evaluations performed outside the internal evaluation system. Bayesian approaches integrate external information (i.e., external EBV and associated REL) by altering both the mean and (co)variance of the prior distributions of the additive genetic effects based on the knowledge of this external information. Extensions of the Bayesian approaches to multivariate settings are interesting because external information expressed on other scales, measurement units, or trait definitions, or associated with different heritabilities and genetic parameters than the internal traits, could be integrated into a multivariate genetic evaluation without the need to convert the external information to the internal traits. Therefore, the aim of this study was to test the integration of external EBV and associated REL, expressed on a 305-d basis and genetically correlated with a trait of interest, into a multivariate genetic evaluation using a random regression test-day model for the trait of interest. The approach we used was a multivariate Bayesian approach. Results showed that the integration of external information led to a genetic evaluation for the trait of interest that was, at least for animals associated with external information, as accurate as a bivariate evaluation including all available phenotypic information. In conclusion, multivariate Bayesian approaches have the potential to integrate external information correlated with the internal phenotypic traits, and potentially with the different random regressions, into a multivariate genetic evaluation. This allows the use of different

  10. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques were developed in response to the needs of diverse engineering disciplines, although much work on reliability was done long before the word was used in its current sense. The military, space and nuclear industries were the first to become involved in the topic, but this quiet revolution in product reliability has not remained confined to those environments; it has extended to industry as a whole. Mass production, characteristic of modern industry, led four decades ago to a decline in the reliability of its products, partly because of the scale of production itself and partly because of newly introduced and not yet mature industrial techniques. Industry had to adapt to these two new requirements, creating products of medium complexity while assuring a level of reliability appropriate to production costs and controls: reliability became an integral part of the manufactured product. With this philosophy, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, giving a unifying scientific basis for the entire subject. It consists of eight chapters plus numerous statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and has a potentially diverse audience, serving as a textbook for courses from academic to industrial settings. (author)

  11. Reliability of the Test of Integrated Language and Literacy Skills (TILLS)

    Science.gov (United States)

    Mailend, Marja-Liisa; Plante, Elena; Anderson, Michele A.; Applegate, E. Brooks; Nelson, Nickola W.

    2016-01-01

    Background: As new standardized tests become commercially available, it is critical that clinicians have access to the information about a test's psychometric properties, including aspects of reliability. Aims: The purpose of the three studies reported in this article was to investigate the reliability of a new test, the Test of Integrated…

  12. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    Directory of Open Access Journals (Sweden)

    Piero Colli Franzone

    2018-04-01

    We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks.

  13. Engraftment Outcomes after HPC Co-Culture with Mesenchymal Stromal Cells and Osteoblasts

    Directory of Open Access Journals (Sweden)

    Matthew M. Cook

    2013-09-01

    Haematopoietic stem cell (HSC) transplantation is an established cell-based therapy for a number of haematological diseases. To enhance this therapy, there is considerable interest in expanding HSCs in artificial niches prior to transplantation. This study compared murine HSC expansion supported through co-culture on monolayers of either undifferentiated mesenchymal stromal cells (MSCs) or osteoblasts. Sorted Lineage− Sca-1+ c-kit+ (LSK) haematopoietic stem/progenitor cells (HPC) demonstrated proliferative capacity on both stromal monolayers, with the greatest expansion of LSK shown in cultures supported by osteoblast monolayers. After transplantation, both types of bulk-expanded cultures were capable of engrafting and repopulating lethally irradiated primary and secondary murine recipients. LSKs co-cultured on MSCs showed comparable, but not superior, reconstitution ability to that of freshly isolated LSKs. Surprisingly, however, osteoblast co-cultured LSKs showed significantly poorer haematopoietic reconstitution compared to LSKs co-cultured on MSCs, likely due to a delay in short-term reconstitution. We demonstrated that stromal monolayers can be used to maintain, but not expand, functional HSCs without a need for additional haematopoietic growth factors. We also demonstrated that despite apparently superior in vitro performance, co-injection of bulk cultures of osteoblasts and LSKs in vivo was detrimental to recipient survival and should be avoided in translation to clinical practice.

  14. Systematic approach to integration of a human reliability analysis into an NPP probabilistic risk assessment

    International Nuclear Information System (INIS)

    Fragola, J.R.

    1984-01-01

    This chapter describes the human reliability analysis tasks which were employed in the evaluation of the overall probability of an internal flood sequence and its consequences in terms of disabling vulnerable, risk-significant equipment. Topics considered include the problem familiarization process, the identification and classification of key human interactions, a human interaction review of potential initiators, a maintenance and operations review, human interaction identification, quantification model selection, the definition of operator-induced sequences, the quantification of specific human interactions, skill- and rule-based interactions, knowledge-based interactions, and the incorporation of human interaction-related events into the event tree structure. It is concluded that an integrated approach to the analysis of human interaction within the context of a Probabilistic Risk Assessment (PRA) is feasible.

  15. Application to nuclear turbines of high-efficiency and reliable 3D-designed integral shrouded blades

    International Nuclear Information System (INIS)

    Watanabe, Eiichiro; Ohyama, Hiroharu; Tashiro, Hikaru; Sugitani, Toshio; Kurosawa, Masaru

    1999-01-01

    Mitsubishi Heavy Industries, Ltd. (MHI) has recently developed new blades for nuclear turbines, in order to achieve higher efficiency and higher reliability. The three-dimensional aerodynamic design of the 41-inch and 46-inch blades, their one-piece structural design (integral shrouded blades: ISB), and the verification test results obtained using a model steam turbine are described in this paper. The predicted efficiency and lower vibratory stress have been verified. On the basis of these 60 Hz ISB, a 50 Hz ISB series is under development using the law of similarity, without changing the thermodynamic performance or mechanical stress levels. Our 3D-designed reaction blades, which are used for the high pressure and low pressure upstream stages, are also briefly mentioned. (author)

  16. Systems Integration Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-06-01

    This fact sheet is an overview of the Systems Integration subprogram at the U.S. Department of Energy SunShot Initiative. The Systems Integration subprogram enables the widespread deployment of safe, reliable, and cost-effective solar energy technologies by addressing the associated technical and non-technical challenges. These include timely and cost-effective interconnection procedures, optimal system planning, accurate prediction of solar resources, monitoring and control of solar power, maintaining grid reliability and stability, and many more. To address the challenges associated with interconnecting and integrating hundreds of gigawatts of solar power onto the electricity grid, the Systems Integration program funds research, development, and demonstration projects in four broad, interrelated focus areas: grid performance and reliability, dispatchability, power electronics, and communications.

  17. Computer-aided reliability and risk assessment

    International Nuclear Information System (INIS)

    Leicht, R.; Wingender, H.J.

    1989-01-01

    Activities in the fields of reliability and risk analyses have led to the development of particular software tools which now are combined in the PC-based integrated CARARA system. The options available in this system cover a wide range of reliability-oriented tasks, like organizing raw failure data in the component/event data bank FDB, performing statistical analysis of those data with the program FDA, managing the resulting parameters in the reliability data bank RDB, and performing fault tree analysis with the fault tree code FTL or evaluating the risk of toxic or radioactive material release with the STAR code. (orig.)

  18. A New Biobjective Model to Optimize Integrated Redundancy Allocation and Reliability-Centered Maintenance Problems in a System Using Metaheuristics

    Directory of Open Access Journals (Sweden)

    Shima MohammadZadeh Dogahe

    2015-01-01

    A novel integrated model is proposed to optimize the redundancy allocation problem (RAP) and reliability-centered maintenance (RCM) simultaneously. A system of both repairable and nonrepairable components is considered; in this system, electronic components are nonrepairable while mechanical components are mostly repairable. For the nonrepairable components, a redundancy allocation problem is solved to determine the optimal redundancy strategy and the number of redundant components to be implemented in each subsystem. In addition, a maintenance scheduling problem is considered for the repairable components in order to identify the best maintenance policy and optimize system reliability. Both active and cold standby redundancy strategies are taken into account for the electronic components, and the net present value of the secondary costs, including operational and maintenance costs, is calculated. The problem is formulated as a biobjective mathematical programming model aiming to reach a tradeoff between system reliability and cost. Three metaheuristic algorithms are employed to solve the proposed model: the Nondominated Sorting Genetic Algorithm (NSGA-II), Multiobjective Particle Swarm Optimization (MOPSO), and the Multiobjective Firefly Algorithm (MOFA). Several test problems are solved using these algorithms to test the efficiency and effectiveness of the solution approaches, and the obtained results are analyzed.
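    The reliability objective being traded off against cost in an RAP is, for active redundancy, the reliability of a series system of parallel-redundant subsystems, which reduces to the expression sketched below; the component reliabilities and redundancy levels are illustrative.

```python
def system_reliability(subsystems):
    """Series system of subsystems with active redundancy:
    R_sys = product over j of [1 - (1 - r_j) ** n_j],
    where r_j is the component reliability and n_j the number of parallel components."""
    r_sys = 1.0
    for r, n in subsystems:
        r_sys *= 1.0 - (1.0 - r) ** n
    return r_sys

# Hypothetical design: (component reliability, redundancy level) for each subsystem
design = [(0.90, 2), (0.85, 3), (0.95, 1)]
print(round(system_reliability(design), 4))   # -> 0.9373
```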

  19. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights:
    • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs.
    • The integrated fault coverage considers the process of a fault-tolerant technique from detection through the fail-safe generation process.
    • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated.
    • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis.
    • The reliability model makes it possible to confirm changes in unavailability according to variations of diverse factors.
    Abstract: With the improvement of digital technologies, digital protection systems (DPS) incorporate multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital reliability attribute of an FTT; however, fault detection coverage alone is insufficient to reflect the effects of the various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection through the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to identify the important variables that affect the integrated fault coverage and the unavailability.
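    A minimal, generic illustration of how a fault-coverage figure feeds into the unavailability of a repairable digital component is given below. It uses a textbook approximation (faults detected by the fault-tolerant techniques are repaired after the MTTR, undetected faults persist on average half a surveillance-test interval) rather than the integrated model developed in the paper, and all numbers are illustrative.

```python
def unavailability(failure_rate_per_h, coverage, mttr_h, test_interval_h):
    """Approximate steady-state unavailability of a repairable component.
    Detected faults (fraction `coverage`) are repaired after MTTR; undetected
    faults are found only at periodic tests, i.e. after test_interval / 2 on average."""
    detected = coverage * failure_rate_per_h * mttr_h
    undetected = (1.0 - coverage) * failure_rate_per_h * test_interval_h / 2.0
    return detected + undetected

# Illustrative numbers: failure rate 1e-5 /h, 8 h repair, monthly surveillance test
for c in (0.90, 0.99, 0.999):
    print(f"coverage={c}: unavailability ~ {unavailability(1e-5, c, 8.0, 730.0):.2e}")
```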

  20. Reliability and protection against failure in computer systems

    International Nuclear Information System (INIS)

    Daniels, B.K.

    1979-01-01

    Computers are being increasingly integrated into the control and safety systems of large and potentially hazardous industrial processes. This development introduces problems which are particular to computer systems and opens the way to new techniques of solving conventional reliability and availability problems. References to the developing fields of software reliability, human factors and software design are given, and these subjects are related, where possible, to the quantified assessment of reliability. Original material is presented in the areas of reliability growth and computer hardware failure data. The report draws on the experience of the National Centre of Systems Reliability in assessing the capability and reliability of computer systems both within the nuclear industry, and from the work carried out in other industries by the Systems Reliability Service. (author)

  1. Interpretive reliability of two common MMPI-2 profiles

    Directory of Open Access Journals (Sweden)

    Mark A. Deskovitz

    2016-12-01

    Users of multi-scale tests like the MMPI-2 tend not to interpret scales one at a time in a way that would correspond to standard scale-level reliability information. Instead, clinicians integrate inferences from a multitude of scales simultaneously, producing a descriptive narrative that is thought to characterize the examinee. This study was an attempt to measure the reliability of such integrated interpretations using a q-sort research methodology. Participants were 20 MMPI-2 users who responded to E-mail solicitations on professional listservs and in personal emails. Each participant interpreted one of two common MMPI-2 profiles using a q-set of 100 statements designed for MMPI-2 interpretation. To measure the “interpretive reliability” of the MMPI-2 profile interpretations, q-sort descriptions were intercorrelated. Mean pairwise interpretive reliability was .39, lower than expected, and there was no significant difference in reliability between profiles. There was also not a significant difference between within-profile and cross-profile correlations. Q-set item analysis was conducted to determine which individual statements had the most impact on interpretive reliability. Although sampling in this study was limited, implications for the field reliability of MMPI-2 interpretation are sobering.

  2. 24. MPA-seminar: safety and reliability of plant technology with special emphasis on integrity and life management. Vol. 1. Papers 1-27

    International Nuclear Information System (INIS)

    1999-01-01

    The first volume is dedicated to the safety and reliability of plant technology with special emphasis on the integrity and life management. The main topic in the volume is the contribution of nondestructive testing to the reactor safety from an international point of view. All 20 papers are separately analyzed for this database. (orig.)

  3. MRI inter-reader and intra-reader reliabilities for assessing injury morphology and posterior ligamentous complex integrity of the spine according to the thoracolumbar injury classification system and severity score

    International Nuclear Information System (INIS)

    Lee, Guen Young; Lee, Joon Woo; Choi, Seung Woo; Lim, Hyun Jin; Sun, Hye Young; Kang, Yu Suhn; Kang, Heung Sik; Chai, Jee Won; Kim, Su Jin

    2015-01-01

    To evaluate spine magnetic resonance imaging (MRI) inter-reader and intra-reader reliabilities using the thoracolumbar injury classification system and severity score (TLICS) and to analyze the effects of reader experience on reliability and the possible reasons for discordant interpretations. Six radiologists (two senior, two junior radiologists, and two residents) independently scored 100 MRI examinations of thoracolumbar spine injuries to assess injury morphology and posterior ligamentous complex (PLC) integrity according to the TLICS. Inter-reader and intra-reader agreements were determined and analyzed according to the number of years of radiologist experience. Inter-reader agreement between the six readers was moderate (k = 0.538 for the first and 0.537 for the second review) for injury morphology and fair to moderate (k = 0.440 for the first and 0.389 for the second review) for PLC integrity. No significant difference in inter-reader agreement was observed according to the number of years of radiologist experience. Intra-reader agreements showed a wide range (k = 0.538-0.822 for injury morphology and 0.423-0.616 for PLC integrity). Agreement was achieved in 44 for the first and 45 for the second review about injury morphology, as well as in 41 for the first and 38 for the second review of PLC integrity. A positive correlation was detected between injury morphology score and PLC integrity. The reliability of MRI for assessing thoracolumbar spinal injuries according to the TLICS was moderate for injury morphology and fair to moderate for PLC integrity, which may not be influenced by radiologists' experience.
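
    Agreement in this record is reported as kappa. A minimal sketch of a two-reader Cohen's kappa computation on categorical morphology scores is shown below; the rating vectors are invented placeholders, not the study's data.

```python
# Minimal sketch: Cohen's kappa for two readers assigning categorical
# injury-morphology scores. The example ratings are invented.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    categories = set(r1) | set(r2)
    observed = sum(a == b for a, b in zip(r1, r2)) / n            # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[c] * c2[c] for c in categories) / (n * n)   # chance agreement
    return (observed - expected) / (1.0 - expected)

reader_a = [1, 2, 2, 3, 1, 2, 4, 2, 3, 1]
reader_b = [1, 2, 3, 3, 1, 2, 4, 1, 3, 1]
print(f"kappa = {cohens_kappa(reader_a, reader_b):.3f}")
```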

  4. DIRAC: reliable data management for LHCb

    International Nuclear Information System (INIS)

    Smith, A C; Tsaregorodtsev, A

    2008-01-01

    DIRAC, LHCb's Grid Workload and Data Management System, utilizes WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb's Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for these files in replica and bookkeeping catalogues, allowing dataset selection and localization. The DMS controls the movement of files in a redundant fashion whilst providing utilities for accessing all metadata. To do these tasks effectively, the DMS requires complete self-integrity between its components and external physical storage. The DMS provides highly redundant management of all LHCb data to leverage available storage resources and to manage transient errors in underlying services. It provides data driven and reliable distribution of files as well as reliable job output upload, utilizing VO Boxes at LHCb Tier1 sites to prevent data loss. This paper presents several examples of mechanisms implemented in the DMS to increase reliability, availability and integrity, highlighting successful design choices and limitations discovered.

  5. Reliability-Based and Cost-Oriented Product Optimization Integrating Fuzzy Reasoning Petri Nets, Interval Expert Evaluation and Cultural-Based DMOPSO Using Crowding Distance Sorting

    Directory of Open Access Journals (Sweden)

    Zhaoxi Hong

    2017-08-01

    Full Text Available In reliability-based and cost-oriented product optimization, the target product reliability is apportioned to subsystems or components to achieve the maximum reliability and minimum cost. Main challenges to conducting such optimization design lie in how to simultaneously consider subsystem division, uncertain evaluation provided by experts for essential factors, and dynamic propagation of product failure. To overcome these problems, a reliability-based and cost-oriented product optimization method integrating fuzzy reasoning Petri net (FRPN, interval expert evaluation and cultural-based dynamic multi-objective particle swarm optimization (DMOPSO using crowding distance sorting is proposed in this paper. Subsystem division is performed based on failure decoupling, and then subsystem weights are calculated with FRPN reflecting dynamic and uncertain failure propagation, as well as interval expert evaluation considering six essential factors. A mathematical model of reliability-based and cost-oriented product optimization is established, and the cultural-based DMOPSO with crowding distance sorting is utilized to obtain the optimized design scheme. The efficiency and effectiveness of the proposed method are demonstrated by the numerical example of the optimization design for a computer numerically controlled (CNC machine tool.

  6. A critical review of frameworks used for evaluating reliability and relevance of (eco)toxicity data: Perspectives for an integrated eco-human decision-making framework.

    Science.gov (United States)

    Roth, N; Ciffroy, P

    2016-10-01

    Considerable efforts have been invested so far to evaluate and rank the quality and relevance of (eco)toxicity data for their use in regulatory risk assessment to assess chemical hazards. Many frameworks have been developed to improve robustness and transparency in the evaluation of reliability and relevance of individual tests, but these frameworks typically focus on either environmental risk assessment (ERA) or human health risk assessment (HHRA), and there is little cross talk between them. There is a need to develop a common approach that would support a more consistent, transparent and robust evaluation and weighting of the evidence across ERA and HHRA. This paper explores the applicability of existing Data Quality Assessment (DQA) frameworks for integrating environmental toxicity hazard data into human health assessments and vice versa. We performed a comparative analysis of the strengths and weaknesses of eleven frameworks for evaluating reliability and/or relevance of toxicity and ecotoxicity hazard data. We found that a frequent shortcoming is the lack of a clear separation between reliability and relevance criteria. A further gaps and needs analysis revealed that none of the reviewed frameworks satisfy the needs of a common eco-human DQA system. Based on our analysis, some key characteristics, perspectives and recommendations are identified and discussed for building a common DQA system as part of a future integrated eco-human decision-making framework. This work lays the basis for developing a common DQA system to support the further development and promotion of Integrated Risk Assessment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Wind integration in Alberta

    International Nuclear Information System (INIS)

    Frost, W.

    2007-01-01

    This presentation described the role of the Alberta Electric System Operator (AESO) for Alberta's interconnected electric system with particular reference to wind integration in Alberta. The challenges of wind integration were discussed along with the requirements for implementing the market and operational framework. The AESO is an independent system operator that directs the reliable operation of Alberta's power grid; develops and operates Alberta's real-time wholesale energy market to promote open competition; plans and develops the province's transmission system to ensure reliability; and provides transmission system access for both generation and load customers. Alberta has over 280 power generating stations, with a total generating capacity of 11,742 MW, of which 443 MW is wind generated. Since 2004, the AESO has been working with industry on wind integration issues, such as operating limits, the need for mitigation measures and market rules. In April 2006, the AESO implemented a temporary 900 MW reliability threshold to ensure reliability. In 2006, a Wind Forecasting Working Group was created in collaboration with industry and the Canadian Wind Energy Association in an effort to integrate as much wind as is feasible without compromising the system reliability or the competitive operation of the market. The challenges facing wind integration include reliability issues; predictability of wind power; the need for dispatchable generation; transmission upgrades; and, defining a market and operational framework for the large wind potential in Alberta. It was noted that 1400 MW of installed wind energy capacity can be accommodated in Alberta with approved transmission upgrades. figs

  8. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karthik, Rajasekar [ORNL

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment For High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard practices that have been adopted in this architecture.

  9. New advances in human reliability using the EPRI HRA calculator

    International Nuclear Information System (INIS)

    Julius, J. A.; Grobbelaar, J. F.

    2006-01-01

    This paper describes new advances in human reliability associated with the integration of HRA methods, lessons learned during the first few years of operation of the EPRI HRA / PRA Tools Users Group, and application of human reliability techniques in areas beyond the more traditional Level 1 internal events PRA. This paper is organized as follows. 1. EPRI HRA Users Group Overview (mission, membership, activities, approach) 2. HRA Methods Currently Used (selection, integration, and addressing dependencies) 3. New Advances in HRA Methods 4. Conclusions. (authors)

  10. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  11. Quality and reliability management and its applications

    CERN Document Server

    2016-01-01

    Integrating development processes, policies, and reliability predictions from the beginning of the product development lifecycle to ensure high levels of product performance and safety, this book helps companies overcome the challenges posed by increasingly complex systems in today’s competitive marketplace.   Examining both research on and practical aspects of product quality and reliability management with an emphasis on applications, the book features contributions written by active researchers and/or experienced practitioners in the field, so as to effectively bridge the gap between theory and practice and address new research challenges in reliability and quality management in practice.    Postgraduates, researchers and practitioners in the areas of reliability engineering and management, amongst others, will find the book to offer a state-of-the-art survey of quality and reliability management and practices.

  12. System Reliability Analysis Considering Correlation of Performances

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Saekyeol; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Lim, Woochul [Mando Corporation, Seongnam (Korea, Republic of)

    2017-04-15

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.
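
    The record describes building a joint PDF of the performances with a copula and integrating it to obtain system reliability. A minimal sketch using a Gaussian copula and Monte Carlo integration is given below; the marginal distributions, correlation value, and performance limits are illustrative assumptions, not the paper's examples.

```python
# Minimal sketch (illustrative, not the paper's example): system reliability of
# two correlated performances via a Gaussian copula, compared with the result
# obtained by treating the performances as independent.
import numpy as np
from scipy import stats

rho = 0.7                                   # assumed correlation between performances
n = 200_000
rng = np.random.default_rng(1)

# Sample the Gaussian copula: correlated standard normals -> uniforms
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = stats.norm.cdf(z)

# Map the uniforms through assumed marginals of the two performances
g1 = stats.norm(loc=3.0, scale=1.0).ppf(u[:, 0])      # performance 1
g2 = stats.lognorm(s=0.25, scale=2.0).ppf(u[:, 1])    # performance 2

# The system is safe only if both performances exceed their limits (series system)
safe = (g1 > 0.0) & (g2 > 1.0)
r_system = safe.mean()

# Independence assumption multiplies the marginal reliabilities
r_indep = (g1 > 0.0).mean() * (g2 > 1.0).mean()
print(f"system reliability with copula:    {r_system:.4f}")
print(f"product of marginal reliabilities: {r_indep:.4f}")
```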

  13. System Reliability Analysis Considering Correlation of Performances

    International Nuclear Information System (INIS)

    Kim, Saekyeol; Lee, Tae Hee; Lim, Woochul

    2017-01-01

    Reliability analysis of a mechanical system has been developed in order to consider the uncertainties in the product design that may occur from the tolerance of design variables, uncertainties of noise, environmental factors, and material properties. In most of the previous studies, the reliability was calculated independently for each performance of the system. However, the conventional methods cannot consider the correlation between the performances of the system that may lead to a difference between the reliability of the entire system and the reliability of the individual performance. In this paper, the joint probability density function (PDF) of the performances is modeled using a copula which takes into account the correlation between performances of the system. The system reliability is proposed as the integral of joint PDF of performances and is compared with the individual reliability of each performance by mathematical examples and two-bar truss example.

  14. Extrapolation Method for System Reliability Assessment

    DEFF Research Database (Denmark)

    Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro

    2012-01-01

    The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals. It is shown that the proposed scheme is efficient and adds to generality for this class of approximations for probability integrals.
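
    A rough sketch of the underlying idea (crude Monte Carlo estimates of the failure probability for a sequence of relaxed, scaled events, followed by extrapolation back to the original event) is given below. The limit-state function, the scaling parameterization, and the fitted functional form are simplified assumptions and are not the scheme derived in the paper.

```python
# Rough sketch of the extrapolation idea (simplified; not the scheme derived in
# the paper): estimate p(lambda) = P( M <= (1 - lambda) * mean(M) ) by crude
# Monte Carlo for moderate lambda, fit a smooth model to log p(lambda), and
# extrapolate to lambda = 1, which recovers the original failure event M <= 0.
import numpy as np

rng = np.random.default_rng(2)

def margin(x):
    # Assumed limit-state margin M = g(X); failure when M <= 0.
    return 6.0 + x[:, 0] - 0.5 * x[:, 1] ** 2

m = margin(rng.normal(size=(500_000, 2)))
mu_m = m.mean()

lambdas = np.linspace(0.4, 0.8, 9)   # relaxed events, easily estimable by crude MC
p_hat = np.array([np.mean(m <= (1.0 - lam) * mu_m) for lam in lambdas])

# Simplified extrapolation: quadratic fit of log p(lambda), evaluated at lambda = 1.
coeffs = np.polyfit(lambdas, np.log(p_hat), deg=2)
p_extrapolated = np.exp(np.polyval(coeffs, 1.0))

p_direct = np.mean(m <= 0.0)         # reference: direct crude MC on the original event
print(f"extrapolated p_f: {p_extrapolated:.2e}")
print(f"direct MC p_f:    {p_direct:.2e}")
```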

  15. Visual-Haptic Integration: Cue Weights are Varied Appropriately, to Account for Changes in Haptic Reliability Introduced by Using a Tool

    OpenAIRE

    Chie Takahashi; Simon J Watt

    2011-01-01

    Tools such as pliers systematically change the relationship between an object's size and the hand opening required to grasp it. Previous work suggests the brain takes this into account, integrating visual and haptic size information that refers to the same object, independent of the similarity of the ‘raw’ visual and haptic signals (Takahashi et al., VSS 2009). Variations in tool geometry also affect the reliability (precision) of haptic size estimates, however, because they alter the change ...

  16. Development of a Reliability Program approach to assuring operational nuclear safety

    International Nuclear Information System (INIS)

    Mueller, C.J.; Bezella, W.A.

    1985-01-01

    A Reliability Program (RP) model based on proven reliability techniques used in other high technology industries is being formulated for potential application in the nuclear power industry. Research findings are discussed. The reliability methods employed under NASA and military direction, commercial airline and related FAA programs were surveyed with several reliability concepts (e.g., quantitative reliability goals, reliability centered maintenance) appearing to be directly transferable. Other tasks in the RP development effort involved the benchmarking and evaluation of the existing nuclear regulations and practices relevant to safety/reliability integration. A review of current risk-dominant issues was also conducted using results from existing probabilistic risk assessment studies. The ongoing RP development tasks have concentrated on defining a RP for the operating phase of a nuclear plant's lifecycle. The RP approach incorporates safety systems risk/reliability analysis and performance monitoring activities with dedicated tasks that integrate these activities with operating, surveillance, and maintenance of the plant. The detection, root-cause evaluation and before-the-fact correction of incipient or actual systems failures as a mechanism for maintaining plant safety is a major objective of the RP

  17. Photovoltaic and Wind Turbine Integration Applying Cuckoo Search for Probabilistic Reliable Optimal Placement

    OpenAIRE

    R. A. Swief; T. S. Abdel-Salam; Noha H. El-Amary

    2018-01-01

    This paper presents an efficient Cuckoo Search Optimization technique to improve the reliability of electrical power systems. Various reliability objective indices, such as Energy Not Supplied, System Average Interruption Frequency Index, and System Average Interruption Duration Index, are the main indices indicating reliability. The Cuckoo Search Optimization (CSO) technique is applied to optimally place the protection devices, install the distributed generators, and to determine the size of ...
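
    The indices named in this record have standard definitions; a minimal sketch of how ENS, SAIFI, and SAIDI are computed from per-outage data is shown below. The outage records and customer count are invented for illustration.

```python
# Minimal sketch: standard distribution reliability indices computed from
# per-outage records. The outage data below are invented for illustration.
outages = [
    # (customers interrupted, duration in hours, average load in kW not served)
    (1200, 1.5, 800.0),
    (300, 4.0, 150.0),
    (2500, 0.5, 1900.0),
]
total_customers_served = 10_000

saifi = sum(n for n, _, _ in outages) / total_customers_served      # interruptions per customer
saidi = sum(n * d for n, d, _ in outages) / total_customers_served  # interruption hours per customer
ens = sum(d * p for _, d, p in outages)                             # energy not supplied [kWh]

print(f"SAIFI = {saifi:.2f} interruptions/customer/yr")
print(f"SAIDI = {saidi:.2f} h/customer/yr")
print(f"ENS   = {ens:.0f} kWh/yr")
```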

  18. Reliability engineering for nuclear and other high technology systems

    International Nuclear Information System (INIS)

    Lakner, A.A.; Anderson, R.T.

    1985-01-01

    This book is written for the reliability instructor, program manager, system engineer, design engineer, reliability engineer, nuclear regulator, probability risk assessment (PRA) analyst, general manager and others who are involved in system hardware acquisition, design and operation and are concerned with plant safety and operational cost-effectiveness. It provides criteria, guidelines and comprehensive engineering data affecting reliability; it covers the key aspects of system reliability as it relates to conceptual planning, cost tradeoff decisions, specification, contractor selection, design, test and plant acceptance and operation. It treats reliability as an integrated methodology, explicitly describing life cycle management techniques as well as the basic elements of a total hardware development program, including: reliability parameters and design improvement attributes, reliability testing, reliability engineering and control. It describes how these elements can be defined during procurement, and implemented during design and development to yield reliable equipment. (author)

  19. Equipment Reliability Process in Krsko NPP

    International Nuclear Information System (INIS)

    Gluhak, M.

    2016-01-01

    To ensure long-term safe and reliable plant operation, equipment operability and availability must also be ensured by a group of processes established within the nuclear power plant. The equipment reliability process represents the integration and coordination of important equipment reliability activities into one process, which enables equipment performance and condition monitoring, preventive maintenance development, implementation and optimization, continuous improvement of the processes, and long-term planning. The initiative for introducing a systematic approach to assuring equipment reliability came from the US nuclear industry, guided by INPO (Institute of Nuclear Power Operations) and with the participation of several US nuclear utilities. As a result of the initiative, the first edition of the INPO document AP-913, 'Equipment Reliability Process Description', was issued and became a basic document for implementation of the equipment reliability process across the whole nuclear industry. The scope of the equipment reliability process in Krsko NPP consists of the following programs: equipment criticality classification, preventive maintenance program, corrective action program, system health reports and long-term investment plan. By implementation, supervision and continuous improvement of those programs, guided by more than thirty years of operating experience, Krsko NPP will continue on a track of safe and reliable operation until the end of its prolonged lifetime. (author).

  20. Experimental research of fuel element reliability

    International Nuclear Information System (INIS)

    Cech, B.; Novak, J.; Chamrad, B.

    1980-01-01

    The rate and extent of damage to the integrity of the can, which retains the fission products, is the basic criterion of reliability. The extent of damage is measurable by the fission product leakage into the reactor coolant circuit. An analysis is made of the causes of fuel element can damage and a model is proposed for testing fuel element reliability. Special experiments should be carried out to assess partial processes, such as heat transfer and fuel element surface temperature, fission gas liberation and pressure changes inside the element, corrosion weakening of the can wall, and can deformation as a result of mechanical interactions. The irradiation probe for reliability testing of fuel elements is described. (M.S.)

  1. Latency Analysis of Systems with Multiple Interfaces for Ultra-Reliable M2M Communication

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Popovski, Petar

    2016-01-01

    One of the ways to satisfy the requirements of ultra-reliable low latency communication for mission critical Machine-type Communications (MTC) applications is to integrate multiple communication interfaces. In order to estimate the performance in terms of latency and reliability of such an integr...
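
    The record is truncated, but the basic benefit of integrating multiple interfaces can be illustrated with a small sketch: if each interface independently delivers a packet within the deadline with some probability, duplicating the transmission over several interfaces raises the probability that at least one delivery meets the deadline. The per-interface latency and loss models below are assumptions for illustration only, not the paper's traffic models.

```python
# Minimal sketch (illustrative assumptions, not the paper's model): probability
# of meeting a latency deadline when a packet is duplicated over two independent
# interfaces, versus using a single interface.
import numpy as np

rng = np.random.default_rng(3)
deadline_ms = 50.0
n = 200_000

def sample_latency(mean_ms, loss_prob, size):
    # Assumed per-interface latency model; a lost packet counts as infinite latency.
    lat = rng.exponential(scale=mean_ms, size=size)
    lost = rng.random(size) < loss_prob
    return np.where(lost, np.inf, lat)

cellular = sample_latency(mean_ms=20.0, loss_prob=0.01, size=n)
wifi = sample_latency(mean_ms=10.0, loss_prob=0.05, size=n)

single = np.mean(cellular <= deadline_ms)
combined = np.mean(np.minimum(cellular, wifi) <= deadline_ms)  # first arrival counts

print(f"P(on time), cellular only:   {single:.4f}")
print(f"P(on time), both interfaces: {combined:.4f}")
```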

  2. CERTS: Consortium for Electric Reliability Technology Solutions - Research Highlights

    Energy Technology Data Exchange (ETDEWEB)

    Eto, Joseph

    2003-07-30

    Historically, the U.S. electric power industry was vertically integrated, and utilities were responsible for system planning, operations, and reliability management. As the nation moves to a competitive market structure, these functions have been disaggregated, and no single entity is responsible for reliability management. As a result, new tools, technologies, systems, and management processes are needed to manage the reliability of the electricity grid. However, a number of simultaneous trends prevent electricity market participants from pursuing development of these reliability tools: utilities are preoccupied with restructuring their businesses, research funding has declined, and the formation of Independent System Operators (ISOs) and Regional Transmission Organizations (RTOs) to operate the grid means that control of transmission assets is separate from ownership of these assets; at the same time, business uncertainty, and changing regulatory policies have created a climate in which needed investment for transmission infrastructure and tools for reliability management has dried up. To address the resulting emerging gaps in reliability R&D, CERTS has undertaken much-needed public interest research on reliability technologies for the electricity grid. CERTS' vision is to: (1) Transform the electricity grid into an intelligent network that can sense and respond automatically to changing flows of power and emerging problems; (2) Enhance reliability management through market mechanisms, including transparency of real-time information on the status of the grid; (3) Empower customers to manage their energy use and reliability needs in response to real-time market price signals; and (4) Seamlessly integrate distributed technologies--including those for generation, storage, controls, and communications--to support the reliability needs of both the grid and individual customers.

  3. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  4. Solid State Lighting Reliability Components to Systems

    CERN Document Server

    Fan, XJ

    2013-01-01

    Solid State Lighting Reliability: Components to Systems begins with an explanation of the major benefits of solid state lighting (SSL) when compared to conventional lighting systems including but not limited to long useful lifetimes of 50,000 (or more) hours and high efficacy. When designing effective devices that take advantage of SSL capabilities the reliability of internal components (optics, drive electronics, controls, thermal design) take on critical importance. As such a detailed discussion of reliability from performance at the device level to sub components is included as well as the integrated systems of SSL modules, lamps and luminaires including various failure modes, reliability testing and reliability performance. This book also: Covers the essential reliability theories and practices for current and future development of Solid State Lighting components and systems Provides a systematic overview for not only the state-of-the-art, but also future roadmap and perspectives of Solid State Lighting r...

  5. Integrating software reliability concepts into risk and reliability modeling of digital instrumentation and control systems used in nuclear power plants

    International Nuclear Information System (INIS)

    Arndt, S. A.

    2006-01-01

    As software-based digital systems are becoming more and more common in all aspects of industrial process control, including the nuclear power industry, it is vital that the current state of the art in quality, reliability, and safety analysis be advanced to support the quantitative review of these systems. Several research groups throughout the world are working on the development and assessment of software-based digital system reliability methods and their applications in the nuclear power, aerospace, transportation, and defense industries. However, these groups are hampered by the fact that software experts and probabilistic safety assessment experts view reliability engineering very differently. This paper discusses the characteristics of a common vocabulary and modeling framework. (authors)

  6. Reliability analysis of wind embedded power generation system for ...

    African Journals Online (AJOL)

    This paper presents a method for Reliability Analysis of wind energy embedded in power generation system for Indian scenario. This is done by evaluating the reliability index, loss of load expectation, for the power generation system with and without integration of wind energy sources in the overall electric power system.

  7. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) Version 5.0. Fault tree, event tree, and piping ampersand instrumentation diagram (FEP) editors reference manual: Volume 7

    International Nuclear Information System (INIS)

    McKay, M.K.; Skinner, N.L.; Wood, S.T.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Fault Tree, Event Tree, and Piping and Instrumentation Diagram (FEP) editors allow the user to graphically build and edit fault trees, event trees, and piping and instrumentation diagrams (P&IDs). The software is designed to enable the independent use of the graphical-based editors found in the Integrated Reliability and Risk Assessment System (IRRAS). FEP is comprised of three separate editors (Fault Tree, Event Tree, and Piping and Instrumentation Diagram) and a utility module. This reference manual provides a screen-by-screen guide of the entire FEP system.

  8. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless function of large scale integration (LSI) and very large scale integration (VLSI) circuits. The article presents a comparative analysis of the factors which determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless function of LSI and VLSI. The main part describes a proposed algorithm and a program for analyzing the fault rate in LSI and VLSI circuits.

  9. HP-SEE User Forum 2012

    CERN Document Server

    Karaivanova, Aneta; Oulas, Anastasis; Liabotis, Ioannis; Stojiljkovic, Danica; Prnjat, Ognjen

    2014-01-01

    This book is a collection of carefully reviewed papers presented during the HP-SEE User Forum, the meeting of the High-Performance Computing Infrastructure for South East Europe’s (HP-SEE) Research Communities, held in October 17-19, 2012, in Belgrade, Serbia. HP-SEE aims at supporting and integrating regional HPC infrastructures; implementing solutions for HPC in the region; and making HPC resources available to research communities in SEE, region, which are working in a number of scientific fields with specific needs for massively parallel execution on powerful computing resources. HP-SEE brings together research communities and HPC operators from 14 different countries and enables them to share HPC facilities, software, tools, data and research results, thus fostering collaboration and strengthening the regional and national human network; the project specifically supports research groups in the areas of computational physics, computational chemistry and the life sciences. The contributions presented i...

  10. Design for ASIC reliability for low-temperature applications

    Science.gov (United States)

    Chen, Yuan; Mojaradi, Mohammad; Westergard, Lynett; Billman, Curtis; Cozy, Scott; Burke, Gary; Kolawa, Elizabeth

    2005-01-01

    In this paper, we present a methodology to design for reliability for low temperature applications without requiring process improvement. The developed hot carrier aging lifetime projection model takes into account both the transistor substrate current profile and temperature profile to determine the minimum transistor size needed in order to meet reliability requirements. The methodology is applicable for automotive, military, and space applications, where there can be varying temperature ranges. A case study utilizing this methodology is given to design for reliability into a custom application-specific integrated circuit (ASIC) for a Mars exploration mission.

  11. Reliability assessment using Bayesian networks. Case study on quantitative reliability estimation of a software-based motor protection relay

    International Nuclear Information System (INIS)

    Helminen, A.; Pulkkinen, U.

    2003-06-01

    In this report a quantitative reliability assessment of motor protection relay SPAM 150 C has been carried out. The assessment focuses to the methodological analysis of the quantitative reliability assessment using the software-based motor protection relay as a case study. The assessment method is based on Bayesian networks and tries to take the full advantage of the previous work done in a project called Programmable Automation System Safety Integrity assessment (PASSI). From the results and experiences achieved during the work it is justified to claim that the assessment method presented in the work enables a flexible use of qualitative and quantitative elements of reliability related evidence in a single reliability assessment. At the same time the assessment method is a concurrent way of reasoning one's beliefs and references about the reliability of the system. Full advantage of the assessment method is taken when using the method as a way to cultivate the information related to the reliability of software-based systems. The method can also be used as a communicational instrument in a licensing process of software-based systems. (orig.)
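
    The record does not spell out the network structure, but the flavor of combining prior beliefs with quantitative test evidence can be illustrated with a minimal conjugate Bayesian sketch: a Beta prior on the per-demand failure probability updated with observed test demands. The prior parameters and test counts are invented, and a real assessment of this kind would use a full Bayesian network rather than this single node.

```python
# Minimal sketch: Bayesian updating of a per-demand failure probability with a
# conjugate Beta prior. Prior parameters and test results are invented; the
# referenced assessment uses a full Bayesian network, not this single node.
from scipy import stats

# Prior belief about the failure probability per demand (assumption).
a_prior, b_prior = 1.0, 1000.0

# Evidence from statistical testing: demands observed, failures seen (assumption).
demands, failures = 5000, 0

a_post = a_prior + failures
b_post = b_prior + demands - failures
posterior = stats.beta(a_post, b_post)

print(f"posterior mean failure prob: {posterior.mean():.2e}")
print(f"95% upper bound:             {posterior.ppf(0.95):.2e}")
```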

  12. Sequential decision reliability concept and failure rate assessment

    International Nuclear Information System (INIS)

    Ciftcioglu, O.

    1990-11-01

    Conventionally, a reliability concept is considered both for each basic unit and for their integration in a complicated large-scale system such as a nuclear power plant (NPP). Basically, as the plant's operational status is determined by the information obtained from various sensors, the plant's reliability and the risk assessment are closely related to the reliability of the sensory information and hence of the sensor components. However, considering the relevant information-processing systems, e.g. fault detection processors, there exists a further question about the reliability of such systems, specifically the reliability of the systems' decision-based outcomes by means of which further actions are performed. To this end, a general sequential decision reliability concept and a failure rate assessment methodology are introduced. The implications of the methodology are investigated and the importance of the decision reliability concept in system operation is demonstrated by means of real-time sensory signals from the Borssele NPP in the Netherlands. (author). 21 refs.; 8 figs

  13. High-reliability computing for the smarter planet

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Graham, Paul; Manuzzato, Andrea; Dehon, Andre

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is necessary

  14. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  15. Perceptual attraction in tool use: evidence for a reliability-based weighting mechanism.

    Science.gov (United States)

    Debats, Nienke B; Ernst, Marc O; Heuer, Herbert

    2017-04-01

    Humans are well able to operate tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information of single objects or events, known as optimal multisensory integration. That is, 1) sensory information about the hand and the tool are weighted according to their relative reliability (i.e., inverse variances), and 2) the unisensory reliabilities sum up in the integrated estimate. We assessed whether perceptual attraction is consistent with optimal multisensory integration model predictions. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The biased position judgments' variances were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied. NEW & NOTEWORTHY Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account for this phenomenon, thereby showing that the process behind it is similar to optimal integration of sensory information relating to single objects. Copyright © 2017 the American Physiological Society.
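
    The two integration principles stated in the abstract have a standard inverse-variance form; a minimal sketch is shown below. The position estimates and variances assigned to the hand and cursor cues are invented for illustration.

```python
# Minimal sketch of reliability-based (inverse-variance) cue weighting:
# 1) each cue is weighted by its relative reliability, and
# 2) reliabilities add, so the integrated variance is smaller than either cue's.
# The position estimates and variances below are invented.

hand_pos, hand_var = 10.0, 4.0      # proprioceptive estimate of hand position
cursor_pos, cursor_var = 14.0, 1.0  # visual estimate of cursor position

w_hand = (1.0 / hand_var) / (1.0 / hand_var + 1.0 / cursor_var)
w_cursor = 1.0 - w_hand

integrated_pos = w_hand * hand_pos + w_cursor * cursor_pos
integrated_var = 1.0 / (1.0 / hand_var + 1.0 / cursor_var)

print(f"weights: hand={w_hand:.2f}, cursor={w_cursor:.2f}")
print(f"integrated estimate: {integrated_pos:.2f}, variance: {integrated_var:.2f}")
```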

  16. Feasibility study for the European Reliability Data System (ERDS)

    International Nuclear Information System (INIS)

    Mancini, G.

    1980-01-01

    In the framework of the Reactor Safety Programme of the Commission of the European Communities, the JRC - Ispra Establishment has performed a feasibility study for an integrated European Reliability Data System, the aim of which is the collection and organization of information related to the operation of LWRs with regard to component and systems behaviour, abnormal occurrences, outages, etc. The Component Event Data Bank (CEDB), Abnormal Occurrences Reporting System, Generic Reliability Parameter Data Bank, Operating Unit Status Reports and the main activities carried out during the last two years are described. The most important achievements are briefly reported, such as: Reference Classification for Systems, Components and Failure Events, Informatic Structure of the Pilot Experiment of the CEDB, Information Retrieval System for Abnormal Occurrences Reports, Data Bank on Component Reliability Parameters, System on the Exchange of Operation Experience of LWRs, Statistical Data Treatment. Finally, the general conclusions of the feasibility study are summarized: the possibility and the usefulness of creating an integrated European Reliability Data System are outlined. (author)

  17. Reliability issues : a Canadian perspective

    International Nuclear Information System (INIS)

    Konow, H.

    2004-01-01

    A Canadian perspective on power reliability issues was presented. Reliability depends on adequacy of supply and a framework for standards. The challenges facing the electric power industry include new demand, plant replacement and exports. It is expected that demand will be 670 TWh by 2020, with 205 TWh coming from new plants. Canada will require an investment of $150 billion to meet this demand and the need is comparable in the United States. As trade grows, the challenge becomes a continental issue and investment in the bi-national transmission grid will be essential. The 5 point plan of the Canadian Electricity Association is to: (1) establish an investment climate to ensure future electricity supply, (2) move government and industry towards smart and effective regulation, (3) work to ensure a sustainable future for the next generation, (4) foster innovation and accelerate skills development, and (5) build on the strengths of an integrated North American system to maximize opportunity for Canadians. The CEA's 7 measures that enhance North American reliability were listed with emphasis on its support for a self-governing international organization for developing and enforcing mandatory reliability standards. CEA also supports the creation of a binational Electric Reliability Organization (ERO) to identify and solve reliability issues in the context of a bi-national grid. tabs., figs

  18. Assessing the Impact of Imperfect Diagnosis on Service Reliability

    DEFF Research Database (Denmark)

    Grønbæk, Lars Jesper; Schwefel, Hans-Peter; Kjærgaard, Jens Kristian

    2010-01-01

    Representative diagnosis performance metrics have been defined and their closed-form solutions obtained for the Markov model. These equations enable model parameterization from traces of implemented diagnosis components. The diagnosis model has been integrated in a reliability model assessing the impact of the diagnosis functions for the studied reliability problem. In a simulation study we finally analyze trade-off properties of diagnosis heuristics from the literature, map them to the analytic Markov model, and investigate its suitability for service reliability optimization.

  19. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE), Version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Hoffman, C.L.

    1995-10-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Graphical Evaluation Module (GEM) is a special application tool designed for evaluation of operational occurrences using the Accident Sequence Precursor (ASP) program methods. GEM provides the capability for an analyst to quickly and easily perform conditional core damage probability (CCDP) calculations. The analyst can then use the CCDP calculations to determine if the occurrence of an initiating event or a condition adversely impacts safety. It uses models and data developed in SAPHIRE specifically for the ASP program. GEM requires more data than that normally provided in SAPHIRE and will not perform properly with other models or databases. This is the first release of GEM and the developers of GEM welcome user comments and feedback that will generate ideas for improvements to future versions. GEM is designated as version 5.0 to track the GEM codes along with the other SAPHIRE codes, as GEM relies on the same shared database structure.

  20. System reliability with correlated components: Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  1. System reliability with correlated components : Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, T.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  2. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    International Nuclear Information System (INIS)

    Boring, Ronald; Mandelli, Diego; Rasmussen, Martin; Ulrich, Thomas; Groth, Katrina; Smith, Curtis

    2016-01-01

    This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as framework that ties together different HRA methods to model dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals: • Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients • Consideration of a PRA context • Incorporation of a solid psychological basis for operator performance • Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  3. Integration of Human Reliability Analysis Models into the Simulation-Based Framework for the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rasmussen, Martin [Norwegian Univ. of Science and Technology, Trondheim (Norway). Social Research; Herberger, Sarah [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ulrich, Thomas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-06-01

    This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as framework that ties together different HRA methods to model dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals: • Integration with a high fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients • Consideration of a PRA context • Incorporation of a solid psychological basis for operator performance • Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.

  4. Some aspects of the interaction between systems- and structural reliability

    International Nuclear Information System (INIS)

    Schueller, G.K.; Schmitt, W.

    1979-01-01

    The purpose of this paper is to study the interaction between systems- and structural reliability analysis with reference to the design of structural components of LWR. Presently the evaluation of systems reliability is carried out apart from structural reliability analysis. Moreover, two basically different methodologies are used for analysis. While in systems analysis the simplified binary approach is still generally accepted, in structural reliability one has to resort to more sophisticated procedures to obtain realistic results. The interactive effect may be illustrated as follows: For example, the integrity of the primary circuit interacts with the integrity of the containment structure. This means that the probability of occurrence of the pipe rupture which may cause a LOCA and consequently leads to a build-up of temperature and pressure within the containment affects directly its structural reliability. The piping system, particularly the primary piping, in turn interacts with the protective system, which is part of the safety system. This piping structure is also subjected to various operational loading conditions. In a numerical example dealing with leakage probabilities of pipes it is shown how methods of structural reliability may be used to gain more insight in the estimation of failure rates of system components. (orig.)

  5. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
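
    A very small sketch of the single-loop idea (sampling the epistemically uncertain distribution parameters together with the aleatory variables in one Monte Carlo loop, rather than nesting two loops) is shown below. The limit state and the distributions placed on the inputs and on the uncertain parameter are illustrative assumptions, not the paper's examples.

```python
# Minimal single-loop Monte Carlo sketch: epistemic uncertainty about a
# distribution parameter is sampled jointly with the aleatory variables.
# The limit state and all distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

# Epistemic: the mean of X1 is only known imprecisely (assumed distribution).
mu1 = rng.normal(loc=5.0, scale=0.3, size=n)

# Aleatory: natural variability of the inputs, conditional on the sampled parameter.
x1 = rng.normal(loc=mu1, scale=1.0)
x2 = rng.lognormal(mean=0.0, sigma=0.2, size=n)

# Assumed limit state: failure when g <= 0.
g = x1 - 3.0 * x2
p_failure = np.mean(g <= 0.0)
print(f"failure probability (aleatory + epistemic): {p_failure:.3e}")
```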

  6. A reliability program approach to operational safety

    International Nuclear Information System (INIS)

    Mueller, C.J.; Bezella, W.A.

    1985-01-01

    A Reliability Program (RP) model based on proven reliability techniques is being formulated for potential application in the nuclear power industry. Methods employed under NASA and military direction, commercial airline and related FAA programs were surveyed and a review of current nuclear risk-dominant issues conducted. The need for a reliability approach to address dependent system failures, operating and emergency procedures and human performance, and develop a plant-specific performance data base for safety decision making is demonstrated. Current research has concentrated on developing a Reliability Program approach for the operating phase of a nuclear plant's lifecycle. The approach incorporates performance monitoring and evaluation activities with dedicated tasks that integrate these activities with operation, surveillance, and maintenance of the plant. The detection, root-cause evaluation and before-the-fact correction of incipient or actual systems failures as a mechanism for maintaining plant safety is a major objective of the Reliability Program. (orig./HP)

  7. Reliability Evaluation of a Single-phase H-bridge Inverter with Integrated Active Power Decoupling

    DEFF Research Database (Denmark)

    Tang, Junchaojie; Wang, Haoran; Ma, Siyuan

    2016-01-01

    Various power decoupling methods have been proposed recently to replace the DC-link Electrolytic Capacitors (E-caps) in single-phase conversion systems, in order to extend the lifetime and improve the reliability of the DC-link. However, it is still an open question whether the converter-level reliability becomes better or not, since additional components are introduced and the loading of the existing components may be changed. This paper aims to study the converter-level reliability of a single-phase full-bridge inverter with two kinds of active power decoupling module and to compare it with the traditional passive DC-link solution. The converter-level reliability is obtained by component-level electro-thermal stress modeling, a lifetime model, the Weibull distribution, and the Reliability Block Diagram (RBD) method. The results are demonstrated by a 2 kW single-phase inverter application.
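
    A minimal sketch of the reliability roll-up named in the record (Weibull lifetime models combined through a series Reliability Block Diagram) is given below; the component names and Weibull parameters are placeholders, not data from the study.

    # Sketch: converter-level reliability from component Weibull lifetimes
    # combined in a series Reliability Block Diagram (all parameters assumed).
    import numpy as np

    def weibull_reliability(t_hours, eta, beta):
        """R(t) = exp(-(t/eta)^beta) for a Weibull lifetime model."""
        return np.exp(-((t_hours / eta) ** beta))

    components = {                # characteristic life eta [h], shape beta
        "IGBT_module":    (2.0e5, 2.0),
        "dc_link_cap":    (1.2e5, 1.5),
        "decoupling_cap": (1.5e5, 1.6),
        "gate_driver":    (3.0e5, 1.8),
    }

    t = 10 * 8760  # 10 years of continuous operation, in hours
    r_components = {name: weibull_reliability(t, eta, beta)
                    for name, (eta, beta) in components.items()}

    # Series RBD: the converter works only if every block works.
    r_converter = np.prod(list(r_components.values()))
    print(r_components)
    print(f"Converter reliability at 10 years (series RBD): {r_converter:.4f}")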

  8. Evaluation of ECT reliability for axial ODSCC in steam generator tubes

    International Nuclear Information System (INIS)

    Lee, Jae Bong; Park, Jai Hak; Kim, Hong Deok; Chung, Han Sub

    2010-01-01

    The integrity of steam generator tubes is usually evaluated based on eddy current test (ECT) results. Because the detection capability of ECT is not perfect, not all of the physical flaws that actually exist in steam generator tubes can be detected by ECT inspection. It is therefore very important to analyze ECT reliability in the integrity assessment of steam generators. The reliability of an ECT inspection system is divided into the reliability of the inspection technique and the reliability of the analyst, and the reliability of ECT results is divided into the reliability of sizing and the reliability of detection. The reliability of ECT sizing is often characterized as a linear regression model relating true flaw size data to measured flaw size data. The reliability of detection is characterized in terms of the probability of detection (POD), which is expressed as a function of flaw size. In this paper the reliability of an ECT inspection system is analyzed quantitatively. The POD of the ECT inspection system for axial outside diameter stress corrosion cracks (ODSCC) in steam generator tubes is evaluated. Using a log-logistic regression model, POD is evaluated from hit (detection) and miss (no detection) binary data obtained from destructive and non-destructive inspections of cracked tubes. Crack length and crack depth are considered as variables in a multivariate log-logistic regression, and their effects on detection capability are assessed using a two-dimensional POD (2-D POD) surface. The reliability of detection is also analyzed using the POD of the inspection technique (POD_T) and the POD of the analyst (POD_A).
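
    A hedged sketch of a log-logistic POD fit from hit/miss data is shown below, using logistic regression on the logarithm of crack length applied to synthetic inspection outcomes (the data and parameters are invented for illustration). Adding crack depth as a second regressor would give the two-dimensional POD surface mentioned in the record.

    # Sketch of a log-logistic POD fit from hit/miss data (synthetic example,
    # not the paper's steam generator data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    # Synthetic inspection outcomes: longer cracks are detected more often.
    length_mm = rng.uniform(0.5, 20.0, size=400)
    true_pod = 1.0 / (1.0 + np.exp(-(np.log(length_mm) - np.log(4.0)) / 0.35))
    hit = rng.uniform(size=length_mm.size) < true_pod   # True = detected

    # Logistic regression on log(length) gives a log-logistic POD(a) curve.
    model = LogisticRegression().fit(np.log(length_mm).reshape(-1, 1), hit.astype(int))

    a = np.array([1.0, 2.0, 4.0, 8.0, 16.0])             # crack lengths of interest
    pod = model.predict_proba(np.log(a).reshape(-1, 1))[:, 1]
    for ai, pi in zip(a, pod):
        print(f"POD({ai:4.1f} mm) ~ {pi:.2f}")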

  9. The reliability of the software of the digital control system Nuclear Advantage

    International Nuclear Information System (INIS)

    Graae, T.; Engdahl, L.

    1996-01-01

    The ABB nuclear power control system Nuclear Advantage is a truly integrated control system. The integration of process control and safety control aims at achieving a common operator interface in order to simplify and thus improve control room ergonomics. The challenge is to design an integrated control system and at the same time ensure the functional separation between the independent safety subsystems as well as between the safety and the conventional sections. Software reliability is discussed and illustrated by statistical test results. It has proved to be a hundred times better than the reliability of the high-quality hardware. (orig.) [de

  10. Ubiquitous Integrity via Network Integration and Parallelism—Sustaining Pedestrian/Bike Urbanism

    Directory of Open Access Journals (Sweden)

    Li-Yen Hsu

    2013-08-01

    Full Text Available Nowadays, due to concern regarding environmental issues, establishing pedestrian/bike-friendly urbanism is widely encouraged. To promote safety-assured mobile communication environments, efficient and reliable maintenance and information integrity need to be designed in, especially in places highly prone to interference. For busy traffic areas, regular degree-3 dedicated short range communication (DSRC) networks offer safety and information features with availability, reliability, and maintainability along multi-lane paths. For sparsely populated areas, probes of wireless sensors are rational, especially if sensor nodes can be organized to enhance security, reliability, and flexibility. Applying alternative network topologies, such as spider-webs, generalized honeycomb tori, and cube-connected cycles, is proposed for comparison and analysis in DSRC and cellular communications to enhance communication integrity.

  11. Developing Reliable Life Support for Mars

    Science.gov (United States)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and
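
    The dependence of the spares count on the assumed failure rate can be sketched with a Poisson model for random failures, as below; the failure rates, mission length and confidence goal are illustrative assumptions. Note how an underestimated failure rate leads directly to an insufficient number of spares.

    # Sketch: number of spares needed to meet a reliability goal under a
    # Poisson failure model (failure rates and mission length are assumed).
    from scipy.stats import poisson

    def spares_needed(failure_rate_per_hour, mission_hours, goal):
        """Smallest n such that P(failures <= n) >= goal."""
        lam = failure_rate_per_hour * mission_hours
        n = 0
        while poisson.cdf(n, lam) < goal:
            n += 1
        return n

    mission_hours = 2.5 * 8760          # roughly a 2.5-year Mars mission (assumed)
    goal = 0.99
    for rate in (1e-5, 5e-5, 2e-4):     # per-hour failure rates (assumed)
        n = spares_needed(rate, mission_hours, goal)
        print(f"lambda = {rate:.0e}/h -> {n} spares for {goal:.0%} confidence")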

  12. A reliability analysis tool for SpaceWire network

    Science.gov (United States)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft; therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of a SpaceWire network. According to the functional division of the distributed network, a reliability analysis method based on tasks is proposed: the reliability analysis of each task leads to a system reliability matrix, and the reliability of the network system can be deduced by integrating all the reliability indexes in this matrix. With this method, we develop a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. By using this tool, we analyze several cases on typical architectures. The analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will have a direct influence on both task division and topology selection in the phase of SpaceWire network system design.
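
    A hedged sketch of the task-based calculation described above: each task uses a path whose components are in series, and redundant paths for the same task combine in parallel. The component names and reliabilities are placeholders, not values from the tool.

    # Sketch of task-based network reliability: a path is a series of components,
    # and redundant routes between the same end nodes combine in parallel.
    # Component reliabilities below are assumed placeholders.
    import numpy as np

    component_r = {"nodeA": 0.999, "router1": 0.995, "router2": 0.995,
                   "nodeB": 0.999, "linkA1": 0.998, "link1B": 0.998,
                   "linkA2": 0.998, "link2B": 0.998}

    def series_reliability(parts):
        return np.prod([component_r[p] for p in parts])

    # Basic architecture: a single route from nodeA to nodeB.
    r_basic = series_reliability(["nodeA", "linkA1", "router1", "link1B", "nodeB"])

    # Redundant architecture: shared end nodes in series with two parallel routes.
    r_end = component_r["nodeA"] * component_r["nodeB"]
    routes = [["linkA1", "router1", "link1B"], ["linkA2", "router2", "link2B"]]
    r_parallel = 1.0 - np.prod([1.0 - series_reliability(r) for r in routes])
    r_redundant = r_end * r_parallel

    print(f"Basic architecture:     R = {r_basic:.5f}")
    print(f"Redundant architecture: R = {r_redundant:.5f}")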

  13. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    Science.gov (United States)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the

  14. Field Programmable Gate Array Reliability Analysis Guidelines for Launch Vehicle Reliability Block Diagrams

    Science.gov (United States)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    Field Programmable Gate Arrays (FPGAs) integrated circuits (IC) are one of the key electronic components in today's sophisticated launch and space vehicle complex avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering costs (NRE) and short design cycle. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
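
    A minimal sketch of the roll-up implied by these guidelines is given below: assumed hardware, HDL/design and radiation-induced contributions are summed into a total failure rate and converted to a mission reliability with an exponential model. All numbers are placeholders, not values from the paper.

    # Sketch: rolling up assumed FPGA failure-rate contributions (hardware,
    # HDL/design, radiation-induced) into a total rate and mission reliability.
    import math

    contributions_fit = {        # FIT = failures per 1e9 device-hours (assumed values)
        "hardware_physical": 120.0,
        "hdl_design_errors":  40.0,
        "radiation_seu_sefi": 250.0,
    }

    lambda_total = sum(contributions_fit.values()) * 1e-9   # failures per hour
    mission_hours = 500.0                                   # assumed mission duration
    reliability = math.exp(-lambda_total * mission_hours)   # exponential model

    print(f"Total failure rate: {lambda_total:.3e} /h")
    print(f"Mission reliability over {mission_hours:.0f} h: {reliability:.6f}")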

  15. Procedures for controlling the risks of reliability, safety, and availability of technical systems

    International Nuclear Information System (INIS)

    1987-01-01

    The reference book covers four sections. Apart from the fundamental aspects of the reliability problem, of risk and safety and the relevant criteria with regard to reliability, the material presented explains reliability in terms of maintenance, logistics and availability, and presents procedures for reliability assessment and determination of factors influencing the reliability, together with suggestions for systems technical integration. The reliability assessment consists of diagnostic and prognostic analyses. The section on factors influencing reliability discusses aspects of organisational structures, programme planning and control, and critical activities. (DG) [de

  16. Equipment Reliability Program in NPP Krsko

    International Nuclear Information System (INIS)

    Skaler, F.; Djetelic, N.

    2006-01-01

    Operation that is safe, reliable, effective and acceptable to the public is the common message in the mission statements of commercial nuclear power plants (NPPs). To fulfill these goals, the nuclear industry, among other areas, has to focus on: 1) Human Performance (HU) and 2) Equipment Reliability (EQ). The performance objective of HU is as follows: the behaviors of all personnel result in safe and reliable station operation. While unwanted human behaviors in operations mostly result directly in an event, behavioral flaws in the areas of maintenance or engineering usually cause decreased equipment reliability. Unsatisfactory human performance has led even the best-designed power plants into significant operating events, well-known examples of which can be found in the nuclear industry. Equipment reliability is today recognized as the key to success. While human performance at most NPPs has been improving since the start of WANO / INPO / IAEA evaluations, the open energy market has forced nuclear plants to reduce production costs and operate more reliably and effectively. The balance between these two (opposing) goals has made equipment reliability even more important for safe, reliable and efficient production. Nowadays, in a well-developed safety culture and human performance environment, insisting on on-line operation while ignoring some principles of safety could cost more than the associated electricity losses. In the last decade the leading US nuclear companies have put a lot of effort into improving equipment reliability at their stations, primarily based on the INPO Equipment Reliability Program AP-913. The Equipment Reliability Program is the key program not only for safe and reliable operation, but also for Life Cycle Management and Aging Management on the way to nuclear power plant life extension. The purpose of the Equipment Reliability process is to identify, organize, integrate and coordinate equipment reliability activities (preventive and predictive maintenance, maintenance

  17. Reliable Communication in Wireless Meshed Networks using Network Coding

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Paramanathan, Achuthan; Hundebøll, Martin

    2012-01-01

    The advantages of network coding have been extensively studied in the field of wireless networks. Integrating network coding with the existing IEEE 802.11 MAC layer is a challenging problem. The IEEE 802.11 MAC does not provide any reliability mechanisms for overheard packets. This paper addresses...... this problem and suggests different mechanisms to support reliability as part of the MAC protocol. Analytical expressions for this problem are given to qualify the performance of the modified network coding. These expressions are confirmed by numerical results. While the suggested reliability mechanisms......

  18. NASA Applications and Lessons Learned in Reliability Engineering

    Science.gov (United States)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of these case studies discussed are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbo-pump development, the impact of ET foam reliability on the Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  19. Characteristics and application study of AP1000 NPPs equipment reliability classification method

    International Nuclear Information System (INIS)

    Guan Gao

    2013-01-01

    The AP1000 nuclear power plant applies an integrated approach to establish equipment reliability classification, which includes probabilistic risk assessment techniques, the maintenance rule administrative process, power production reliability classification and the functional equipment group bounding method, and eventually classifies equipment reliability into 4 levels. This classification process and its results are very different from classical RCM and streamlined RCM. This paper studies the characteristics of the AP1000 equipment reliability classification approach, considers that equipment reliability classification should effectively support maintenance strategy development and work process control, and recommends using a combined RCM method to establish the future equipment reliability program of AP1000 nuclear power plants. (authors)

  20. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)
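
    A hedged sketch of the man-machine integration described above treats the operator as a series element whose error rate depends on the period of the operator's life; the hardware failure rate, operator error rates and mission time below are illustrative assumptions only.

    # Sketch: operator treated as a series element whose error rate depends on
    # the career period; all numbers are illustrative assumptions.
    import math

    hardware_lambda = 2.0e-5             # hardware failure rate per hour (assumed)
    operator_lambda = {                  # operator error rate per hour (assumed)
        "training":         8.0e-5,
        "normal_operation": 2.0e-5,
        "near_retirement":  4.0e-5,
    }

    t = 720.0                            # one month of operation, hours
    for period, lam_op in operator_lambda.items():
        r_system = math.exp(-(hardware_lambda + lam_op) * t)   # series combination
        print(f"{period:16s}: R(1 month) = {r_system:.4f}")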

  1. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    Science.gov (United States)

    Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and the maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  2. 5{sup th} European-American workshop on reliability of NDE. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-07-01

    These proceedings contain 37 lectures and 26 posters relating to the following main topics: 1. New nondestructive evaluation methods and reliability; 2. Human factors; 3. Applications in industry; 4. Reliability of Structural Health Monitoring (SHM); and 5. Integrated solutions. Eight papers are separately analyzed for the INIS database.

  3. Reliability analysis of prestressed concrete containment structures

    International Nuclear Information System (INIS)

    Jiang, J.; Zhao, Y.; Sun, J.

    1993-01-01

    The reliability analysis of prestressed concrete containment structures subjected to combinations of static and dynamic loads, with consideration of uncertainties in structural and load parameters, is presented. Limit state probabilities for given parameters are calculated using the procedure developed at BNL, while those with consideration of parameter uncertainties are calculated by a fast integration method for time-variant structural reliability. The limit state surface of the prestressed concrete containment is constructed directly, incorporating the prestress. The sensitivities of the Cholesky decomposition matrix and the natural vibration characteristics are calculated by simplified procedures. (author)
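
    The two-level structure described above (limit state probabilities for given parameters, then consideration of parameter uncertainties) can be sketched with a simple nested Monte Carlo stand-in for the fast-integration procedure; the limit state, distributions and values below are assumed for illustration.

    # Sketch: limit-state probability for given parameters, then averaged over
    # parameter uncertainty (a nested Monte Carlo stand-in; the limit state is assumed).
    import numpy as np

    rng = np.random.default_rng(3)

    def conditional_pf(capacity_mean, n=50_000):
        """P(load > capacity) for a given (uncertain) mean capacity."""
        capacity = rng.normal(capacity_mean, 0.05 * capacity_mean, n)   # MPa
        load = rng.gumbel(loc=2.0, scale=0.25, size=n)                  # MPa (accident pressure)
        return np.mean(load > capacity)

    # Parameter uncertainty on the mean capacity of the containment.
    capacity_means = rng.normal(3.2, 0.15, size=200)
    pf_given_theta = np.array([conditional_pf(m) for m in capacity_means])

    print(f"Pf for nominal parameters:              {conditional_pf(3.2):.3e}")
    print(f"Pf averaged over parameter uncertainty: {pf_given_theta.mean():.3e}")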

  4. Systems analysis programs for Hands-on integrated reliability evaluations (SAPHIRE) Version 5.0: Verification and validation (V ampersand V) manual. Volume 9

    International Nuclear Information System (INIS)

    Jones, J.L.; Calley, M.B.; Capps, E.L.; Zeigler, S.L.; Galyean, W.J.; Novack, S.D.; Smith, C.L.; Wolfram, L.M.

    1995-03-01

    A verification and validation (V&V) process has been performed for the System Analysis Programs for Hands-on Integrated Reliability Evaluation (SAPHIRE) Version 5.0. SAPHIRE is a set of four computer programs that the NRC developed for performing probabilistic risk assessments. They allow an analyst to perform many of the functions necessary to create, quantify, and evaluate the risk associated with a facility or process being analyzed. The programs are the Integrated Reliability and Risk Analysis System (IRRAS), System Analysis and Risk Assessment (SARA), Models And Results Database (MAR-D), and the Fault tree, Event tree, and Piping and instrumentation diagram (FEP) graphical editor. The intent of this program is to perform a V&V of successive versions of SAPHIRE. Previous efforts have been the V&V of SAPHIRE Version 4.0. The SAPHIRE 5.0 V&V plan is based on the SAPHIRE 4.0 V&V plan, with revisions to incorporate lessons learned from the previous effort. Also, the SAPHIRE 5.0 vital and nonvital test procedures are based on the test procedures from SAPHIRE 4.0, with revisions to include the new SAPHIRE 5.0 features as well as to incorporate lessons learned from the previous effort. Most results from the testing were acceptable; however, some discrepancies between expected code operation and actual code operation were identified. Modifications made to SAPHIRE are identified.

  5. Validity and reliability of the Nintendo Wii Balance Board to assess standing balance and sensory integration in highly functional older adults.

    Science.gov (United States)

    Scaglioni-Solano, Pietro; Aragón-Vargas, Luis F

    2014-06-01

    Standing balance is an important motor task. Postural instability associated with age typically arises from deterioration of peripheral sensory systems. The modified Clinical Test of Sensory Integration for Balance and the Tandem test have been used to screen for balance. Timed tests present some limitations, whereas quantification of the motions of the center of pressure (CoP) with portable and inexpensive equipment may help to improve the sensitivity of these tests and give the possibility of widespread use. This study determines the validity and reliability of the Wii Balance Board (Wii BB) to quantify CoP motions during the mentioned tests. Thirty-seven older adults completed three repetitions of five balance conditions: eyes open, eyes closed, eyes open on a compliant surface, eyes closed on a compliant surface, and tandem stance, all performed on a force plate and a Wii BB simultaneously. Twenty participants repeated the trials for reliability purposes. CoP displacement was the main outcome measure. Regression analysis indicated that the Wii BB has excellent concurrent validity, and Bland-Altman plots showed good agreement between devices with small mean differences and no relationship between the difference and the mean. Intraclass correlation coefficients (ICCs) indicated modest-to-excellent test-retest reliability (ICC=0.64-0.85). Standard error of measurement and minimal detectable change were similar for both devices, except the 'eyes closed' condition, with greater standard error of measurement for the Wii BB. In conclusion, the Wii BB is shown to be a valid and reliable method to quantify CoP displacement in older adults.
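
    A hedged sketch of the Bland-Altman agreement analysis mentioned above is given below for synthetic paired device readings (the data are invented; the study's actual measurements are not reproduced here).

    # Sketch of a Bland-Altman agreement check between two devices measuring the
    # same CoP displacement (synthetic paired data, not the study's measurements).
    import numpy as np

    rng = np.random.default_rng(4)
    true_cop = rng.uniform(20.0, 120.0, size=37)          # CoP path length, cm (assumed)
    force_plate = true_cop + rng.normal(0.0, 2.0, 37)
    wii_bb = true_cop + rng.normal(0.5, 2.5, 37)          # small assumed device bias

    diff = wii_bb - force_plate
    mean_pair = (wii_bb + force_plate) / 2.0
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)

    print(f"Mean difference (bias): {bias:+.2f} cm")
    print(f"95% limits of agreement: {bias - loa:+.2f} to {bias + loa:+.2f} cm")
    print(f"Correlation of difference with mean (proportional bias check): "
          f"{np.corrcoef(mean_pair, diff)[0, 1]:+.2f}")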

  6. Formation of integrated structural units using the systematic and integrated method when implementing high-rise construction projects

    Science.gov (United States)

    Abramov, Ivan

    2018-03-01

    Development of design documentation for a future construction project gives rise to a number of issues with the main one being selection of manpower for structural units of the project's overall implementation system. Well planned and competently staffed integrated structural construction units will help achieve a high level of reliability and labor productivity and avoid negative (extraordinary) situations during the construction period eventually ensuring improved project performance. Research priorities include the development of theoretical recommendations for enhancing reliability of a structural unit staffed as an integrated construction crew. The author focuses on identification of destabilizing factors affecting formation of an integrated construction crew; assessment of these destabilizing factors; based on the developed mathematical model, highlighting the impact of these factors on the integration criterion with subsequent identification of an efficiency and reliability criterion for the structural unit in general. The purpose of this article is to develop theoretical recommendations and scientific and methodological provisions of an organizational and technological nature in order to identify a reliability criterion for a structural unit based on manpower integration and productivity criteria. With this purpose in mind, complex scientific tasks have been defined requiring special research, development of corresponding provisions and recommendations based on the system analysis findings presented herein.

  7. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 5.0: Data loading manual. Volume 10

    International Nuclear Information System (INIS)

    VanHorn, R.L.; Wolfram, L.M.; Fowler, R.D.; Beck, S.T.; Smith, C.L.

    1995-04-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) suite of programs can be used to organize and standardize, in an electronic format, information from probabilistic risk assessments or individual plant examinations. The Models and Results Database (MAR-D) program of the SAPHIRE suite serves as the repository for probabilistic risk assessment and individual plant examination data and information. This report demonstrates by examples the common electronic and manual methods used to load these types of data. It is not a stand-alone document but references documents that contribute information relative to the data loading process. This document provides a more detailed discussion and instructions for using SAPHIRE 5.0 only when enough information on a specific topic is not provided by another available source.

  8. STARS software tool for analysis of reliability and safety

    International Nuclear Information System (INIS)

    Poucet, A.; Guagnini, E.

    1989-01-01

    This paper reports on the STARS (Software Tool for the Analysis of Reliability and Safety) project, which aims at developing an integrated set of Computer Aided Reliability Analysis tools for the various tasks involved in systems safety and reliability analysis, including hazard identification, qualitative analysis, and logic model construction and evaluation. The expert system technology offers the most promising perspective for developing a Computer Aided Reliability Analysis tool. Combined with graphics and analysis capabilities, it can provide a natural, engineering-oriented environment for computer assisted reliability and safety modelling and analysis. For hazard identification and fault tree construction, a frame/rule based expert system is used, in which the deductive (goal-driven) reasoning and the heuristics applied during manual fault tree construction are modelled. Expert systems can explain their reasoning, so that the analyst can become aware of why and how the results are being obtained. Hence, the learning aspect involved in manual reliability and safety analysis can be maintained and improved.

  9. Living PRAs [probabilistic risk analysis] made easier with IRRAS [Integrated Reliability and Risk Analysis System

    International Nuclear Information System (INIS)

    Russell, K.D.; Sattison, M.B.; Rasmuson, D.M.

    1989-01-01

    The Integrated Reliability and Risk Analysis System (IRRAS) is an integrated PRA software tool that gives the user the ability to create and analyze fault trees and accident sequences using an IBM-compatible microcomputer. This program provides functions that range from graphical fault tree and event tree construction to cut set generation and quantification. IRRAS contains all the capabilities and functions required to create, modify, reduce, and analyze event tree and fault tree models used in the analysis of complex systems and processes. IRRAS uses advanced graphic and analytical techniques to achieve the greatest possible realization of the potential of the microcomputer. When the needs of the user exceed this potential, IRRAS can call upon the power of the mainframe computer. The role of the Idaho National Engineering Laboratory in the IRRAS program is that of software developer and interface to the user community. Version 1.0 of the IRRAS program was released in February 1987 to prove the concept of performing this kind of analysis on microcomputers. This version contained many of the basic features needed for fault tree analysis and was received very well by the PRA community. Since the release of Version 1.0, many user comments and enhancements have been incorporated into the program, providing a much more powerful and user-friendly system. This version is designated "IRRAS 2.0". Version 3.0 will contain all of the features required for efficient event tree and fault tree construction and analysis. 5 refs., 26 figs

  10. An Appropriate Wind Model for Wind Integrated Power Systems Reliability Evaluation Considering Wind Speed Correlations

    Directory of Open Access Journals (Sweden)

    Rajesh Karki

    2013-02-01

    Full Text Available Adverse environmental impacts of carbon emissions are causing increasing concerns to the general public throughout the world. Electric energy generation from conventional energy sources is considered to be a major contributor to these harmful emissions. High emphasis is therefore being given to green alternatives of energy, such as wind and solar. Wind energy is being perceived as a promising alternative. This source of energy technology and its applications have undergone significant research and development over the past decade. As a result, many modern power systems include a significant portion of power generation from wind energy sources. The impact of wind generation on the overall system performance increases substantially as wind penetration in power systems continues to increase to relatively high levels. It becomes increasingly important to accurately model the wind behavior, the interaction with other wind sources and conventional sources, and incorporate the characteristics of the energy demand in order to carry out a realistic evaluation of system reliability. Power systems with high wind penetrations are often connected to multiple wind farms at different geographic locations. Wind speed correlations between the different wind farms largely affect the total wind power generation characteristics of such systems, and therefore should be an important parameter in the wind modeling process. This paper evaluates the effect of the correlation between multiple wind farms on the adequacy indices of wind-integrated systems. The paper also proposes a simple and appropriate probabilistic analytical model that incorporates wind correlations, and can be used for adequacy evaluation of multiple wind-integrated systems.
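
    The role of wind speed correlation can be sketched with a Gaussian copula that imposes a chosen correlation between two farms with Weibull wind speed marginals, followed by a simplified power curve; all parameters are illustrative assumptions, not the paper's model. Increasing the correlation raises the probability of simultaneously low output, which is the adequacy concern discussed above.

    # Sketch: correlated wind speeds at two farms via a Gaussian copula with
    # Weibull marginals, then total wind power; all parameters are assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n = 100_000
    rho = 0.7                                      # wind speed correlation between farms
    cov = [[1.0, rho], [rho, 1.0]]

    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                          # Gaussian copula
    speeds = stats.weibull_min.ppf(u, c=2.0, scale=8.0)   # m/s, same marginal per farm

    def farm_power(v, rated_mw=100.0, v_in=3.0, v_rated=12.0, v_out=25.0):
        """Simplified aggregate power curve for one wind farm."""
        p = np.clip((v - v_in) / (v_rated - v_in), 0.0, 1.0) * rated_mw
        p[(v < v_in) | (v > v_out)] = 0.0
        return p

    total = farm_power(speeds[:, 0]) + farm_power(speeds[:, 1])
    print(f"Mean total wind power: {total.mean():.1f} MW")
    print(f"P(total power < 20 MW): {np.mean(total < 20.0):.3f}")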

  11. Quality assurance and reliability in the Japanese electronics industry

    Science.gov (United States)

    Pecht, Michael; Boulton, William R.

    1995-02-01

    Quality and reliability are two attributes required for all Japanese products, although the JTEC panel found these attributes to be secondary to customer cost requirements. While our Japanese hosts gave presentations on the challenges of technology, cost, and miniaturization, quality and reliability were infrequently the focus of our discussions. Quality and reliability were assumed to be sufficient to meet customer needs. Fujitsu's slogan, 'quality built-in, with cost and performance as prime consideration,' illustrates this point. Sony's definition of a next-generation product is 'one that is going to be half the size and half the price at the same performance of the existing one'. Quality and reliability are so integral to Japan's electronics industry that they need no new emphasis.

  12. Of Iron or Wax? The Effect of Economic Integration on the Reliability of Military Alliances***

    Directory of Open Access Journals (Sweden)

    Vobolevičius Vincentas

    2015-12-01

    Full Text Available In this paper we analyze what determines if a military alliance represents a credible commitment. More precisely, we verify if economic integration of military allies increases the deterrent capability of an alliance, and its effectiveness in the case of third-party aggression. We propose that growing intra-alliance trade creates audience costs and sunk costs for political leaders who venture to violate conditions of an alliance treaty. Therefore, intensive trade can be regarded as a signal of allies’ determination to aid one another in the case of third party aggression, and a deterrent of such aggression. Regression analysis of bilateral fixed-term mutual defense agreements concluded between 1945 and 2003 reveals that large trade volumes among military allies indeed reduce the likelihood that their political leaders will breach alliance commitments. Intra-alliance trade also displays a number of interesting interaction effects with the other common predictors of military alliance reliability such as shared allies’ interests and values, symmetry of their military capabilities, their geographic location and domestic political institutions.

  13. Performance Tuning of Fock Matrix and Two-Electron Integral Calculations for NWChem on Leading HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhan; Austin, Brian M.; De Jong, Wibe A.; Oliker, Leonid; Wright, Nicholas J.; Apra, Edoardo

    2014-10-01

    Attaining performance in the evaluation of two-electron repulsion integrals and constructing the Fock matrix is of considerable importance to the computational chemistry community. Due to their numerical complexity, improving the performance behavior of these kernels across a variety of leading supercomputing platforms is an increasing challenge, given the significant diversity in high-performance computing architectures. In this paper, we present our successful tuning methodology for these important numerical methods on the Cray XE6, the Cray XC30, the IBM BG/Q, as well as the Intel Xeon Phi. Our optimization schemes leverage key architectural features including vectorization and simultaneous multithreading, and result in speedups of up to 2.5x compared with the original implementation.

  14. A new approach for reliability analysis with time-variant performance characteristics

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2013-01-01

    Reliability represents the safety level in industry practice and may vary due to time-variant operating conditions and component deterioration throughout a product's life-cycle. Thus, the capability to perform time-variant reliability analysis is of vital importance in practical engineering applications. This paper presents a new approach, referred to as the nested extreme response surface (NERS), that can efficiently tackle the time-dependency issue in time-variant reliability analysis and enables such problems to be solved by easy integration with advanced time-independent tools. The key of the NERS approach is to build a nested response surface of time corresponding to the extreme value of the limit state function by employing a Kriging model. To obtain the data for the Kriging model, the efficient global optimization technique is integrated with NERS to extract the extreme time responses of the limit state function for any given system input. An adaptive response prediction and model maturation mechanism is developed based on the mean square error (MSE) to concurrently improve the accuracy and computational efficiency of the proposed approach. With the nested response surface of time, the time-variant reliability analysis can be converted into time-independent reliability analysis, and existing advanced reliability analysis methods can be used. Three case studies are used to demonstrate the efficiency and accuracy of the NERS approach
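
    A simplified sketch of the nested-extreme-response idea is shown below: the worst-case limit-state value over a time grid is computed at sampled inputs, a Kriging (Gaussian process) surrogate is fitted to that extreme response, and ordinary time-independent Monte Carlo is run on the surrogate. The limit state and all parameters are assumed, and the sketch omits the EGO-based extraction and adaptive MSE-based refinement used by the actual NERS method.

    # Simplified sketch: Kriging surrogate of the extreme (worst-case over time)
    # limit-state value, then time-independent Monte Carlo on the surrogate.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(6)
    t_grid = np.linspace(0.0, 10.0, 200)                 # years

    def g(x, t):
        """Assumed time-variant limit state: strength degrades, load cycles."""
        return 5.0 + x - 0.15 * t - 0.5 * np.sin(2.0 * np.pi * t / 3.0)

    # Training data: extreme (minimum over time) response at sampled inputs.
    x_train = np.linspace(-4.0, 4.0, 25)
    g_extreme = np.array([g(x, t_grid).min() for x in x_train])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(x_train.reshape(-1, 1), g_extreme)

    # Time-independent reliability analysis on the surrogate.
    x_mc = rng.normal(0.0, 1.5, size=200_000)
    g_hat = gp.predict(x_mc.reshape(-1, 1))
    print(f"Time-variant failure probability estimate: {np.mean(g_hat <= 0.0):.4f}")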

  15. Reliability Modeling of Electromechanical System with Meta-Action Chain Methodology

    Directory of Open Access Journals (Sweden)

    Genbao Zhang

    2018-01-01

    Full Text Available To establish a more flexible and accurate reliability model, a reliability modeling and solving algorithm based on the meta-action chain concept is used in this work. Instead of estimating the reliability of the whole system only in the standard operating mode, this dissertation adopts the structure chain and the operating action chain for system reliability modeling. The failure information and structure information for each component are integrated into the model to overcome the fixed factors assumed in traditional modeling. In industrial applications, there may be different operating modes for a multicomponent system. The meta-action chain methodology can estimate the system reliability under different operating modes by modeling the components with a variety of failure sensitivities. This approach has been verified by computing several electromechanical system cases. The results indicate that the process can improve system reliability estimation. It is an effective tool for solving the reliability estimation problem for systems under various operating modes.
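
    A minimal sketch of a meta-action chain evaluated in series, with mode-dependent failure probabilities per demand, is given below; the meta-actions and probabilities are placeholders chosen for illustration.

    # Sketch: a chain of meta-actions in series, with mode-dependent failure
    # probabilities per demand (all values are assumed for illustration).
    meta_actions = ["clamp", "rotate", "feed", "index", "unclamp"]

    # Failure probability per demand for each meta-action, by operating mode.
    p_fail = {
        "standard":  {"clamp": 1e-4, "rotate": 2e-4, "feed": 3e-4, "index": 1e-4, "unclamp": 1e-4},
        "high_load": {"clamp": 3e-4, "rotate": 8e-4, "feed": 9e-4, "index": 2e-4, "unclamp": 1e-4},
    }

    for mode, probs in p_fail.items():
        r_chain = 1.0
        for action in meta_actions:
            r_chain *= 1.0 - probs[action]    # series: every meta-action must succeed
        print(f"{mode:9s}: chain reliability per cycle = {r_chain:.6f}")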

  16. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the uses of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions in reliability, assumptions about MTBF, processes of probability distributions, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.

  17. Equipment reliability improvement process; implementation in Almaraz NPP and Trillo NPP

    International Nuclear Information System (INIS)

    Risquez Bailon, Aranzazu; Gutierrez Fernandez, Eduardo

    2010-01-01

    The Equipment Reliability Improvement Process (INPO AP-913) is a non-regulatory process developed by the US nuclear industry for improving plant availability. This process integrates and coordinates a broad range of equipment reliability activities into one process, performed by the plant in a non-centralized way. The integration and coordination of these activities will allow plant personnel to evaluate the trends of important station equipment, develop and implement long-term equipment health plans, monitor equipment performance and condition, and, if necessary, adjust preventive maintenance tasks and frequencies based on equipment operating experience, arbitrating operational and design improvements in order to reach failure-free operation. This paper describes the methodology of the Equipment Reliability Improvement Process, focusing on the main aspects of the implementation process relating to the scope and establishment of an Equipment Reliability Monitoring Plan, which should include and complement the existing mechanisms and organizations in the plant for monitoring the condition and performance of equipment, with the common aim of achieving operation free of failures. The paper describes the tools that Iberdrola Ingenieria has developed to support the implementation and monitoring of the Equipment Reliability Improvement Process, as well as the results and lessons learned from its implementation at Almaraz NPP and Trillo NPP. (authors)

  18. Investigating reliability attributes of silicon photovoltaic cells - An overview

    Science.gov (United States)

    Royal, E. L.

    1982-01-01

    Reliability attributes are being developed for a wide variety of advanced single-crystal silicon solar cells. Two separate investigations are discussed: cell-contact integrity (metal-to-silicon adherence), and cracked cells associated with fracture-strength-reducing flaws. In the cell-contact-integrity investigation, analysis of contact pull-strength data shows that cell types made with different metallization technologies, i.e., vacuum, plated, screen-printed and soldered, have appreciably different reliability attributes. In the second investigation, fracture strength was measured using Czochralski wafers and cells taken at various stages of processing, and differences were noted. Fracture strength, which is believed to be governed by flaws introduced during wafer sawing, was observed to improve (increase) after chemical polishing and other process steps that tend to remove surface and edge flaws.

  19. About water chemistry influence on equipment reliability of NPP with RBMK-1000

    International Nuclear Information System (INIS)

    Berezina, I.G.; Styazhkin, P.S.; Kritskij, V.G.

    2001-01-01

    This paper presents experience with the quantitative evaluation of the influence of coolant quality on the reliability of some equipment elements of NPPs with RBMK-1000 reactors. An integral parameter of coolant quality is chosen. The connection between coolant quality indices and the reliability of major elements of the circulation circuit equipment (including fuel claddings) is established. Improved reliability of equipment operation is supported by high water chemistry quality. (orig.)

  20. Who watches the watchers?: preventing fault in a fault tolerance library

    Energy Technology Data Exchange (ETDEWEB)

    Stanavige, C. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-14

    The Scalable Checkpoint/Restart library (SCR) was developed and is used by researchers at Lawrence Livermore National Laboratory to provide a fast and efficient method of saving and recovering large applications during runtime on high-performance computing (HPC) systems. Though SCR protects other programs, up until June 2017, nothing was actively protecting SCR. The goal of this project was to automate the building and testing of this library on the varying HPC architectures on which it is used. Our methods centered around the use of a continuous integration tool called Bamboo that allowed for automation agents to be installed on the HPC systems themselves. These agents provided a new and unique way for us to automate and customize the allocation of resources and the running of tests with CMake's unit testing framework, CTest, as well as integration testing scripts through an HPC package manager called Spack. These methods provided a parallel environment in which to test the more complex features of SCR. As a result, SCR is now automatically built and tested on several HPC architectures any time changes are made by developers to the library's source code. The results of these tests are then communicated back to the developers for immediate feedback, allowing them to fix functionality of SCR that may have broken. Hours of developers' time are now being saved from the tedious process of manually testing and debugging, which saves money and allows the SCR project team to focus their efforts on development. Thus, HPC system users can use SCR in conjunction with their own applications to efficiently and effectively checkpoint and restart as needed with the assurance that SCR itself is functioning properly.

  1. IN13B-1660: Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    Science.gov (United States)

    Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris

    2016-01-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD), where we are developing a new QA pipeline for the 25PB system.

  2. Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    Science.gov (United States)

    Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.

    2016-12-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD), where we are developing a new QA pipeline for the 25PB system.

  3. The integrity management cycle as a business process

    Energy Technology Data Exchange (ETDEWEB)

    Ackhurst, Trent B.; Peverelli, Romina P. [PIMS - Pipeline Integrity Management Specialists of London Ltd. (United Kingdom).

    2009-07-01

    It is a best-practice Oil and Gas pipeline integrity and reliability technique to apply integrity management cycles. This conforms to the business principle of continuous improvement. This paper examines the integrity management cycle - both its goals and objectives and its subsequent component steps - from a business perspective. Traits that businesses require to glean maximum benefit from such a cycle are highlighted. A case study focuses upon an integrity and reliability process developed to apply to pipeline operators' installations. This is compared and contrasted to the pipeline integrity management cycle to underline both cycles' consistency with the principles of continuous improvement. (author)

  4. Advanced reliability improvement of AC-modules (ARIA)

    International Nuclear Information System (INIS)

    Rooij, P.; Real, M.; Moschella, U.; Sample, T.; Kardolus, M.

    2001-09-01

    The AC-module is a relatively new development in PV-system technology and offers significant advantages over conventional PV-systems with a central inverter: e.g. increased modularity, ease of installation and freedom of system design. The Netherlands and Switzerland have a leading position in the field of AC-modules, both in terms of technology and of commercial and large-scale application. An obstacle towards large-scale market introduction of AC-modules is that the reliability and operational lifetime of AC-modules and the integrated inverters in particular are not yet proven. Despite the advantages, no module-integrated inverter has yet achieved large scale introduction. The AC-modules will lower the barrier towards market penetration. But due to the great interest in the new AC-module technology there is the risk of introducing a not fully proven product. This may damage the image of PV-systems. To speed up the development and to improve the reliability, research institutes and the PV industry will address the aspects of reliability and operational lifetime of AC-modules. From field experiences we learn that in general the inverter is still the weakest point in PV-systems. The lifetime of inverters is an important factor in reliability. Some authors are indicating a lifetime of 1.5 years, whereas the field experiences in Germany and Switzerland have shown that for central inverter systems, an availability of 97% has been achieved in recent years. From this point of view it is highly desirable that the operational lifetime and reliability of PV-inverters and especially AC-modules are demonstrated/improved to make large scale use of PV a success. Module Integrated Inverters will most likely be used in modules in the power range between 100 and 300 Watt DC-power. These are modules with more than 100 cells in series, assuming that the module inverter will benefit from the higher voltage. Hot-spot is the phenomenon that can occur when one or more cells of a string

  5. Product reliability and thin-film photovoltaics

    Science.gov (United States)

    Gaston, Ryan; Feist, Rebekah; Yeung, Simon; Hus, Mike; Bernius, Mark; Langlois, Marc; Bury, Scott; Granata, Jennifer; Quintana, Michael; Carlson, Carl; Sarakakis, Georgios; Ogden, Douglas; Mettas, Adamantios

    2009-08-01

    Despite significant growth in photovoltaics (PV) over the last few years, only approximately 1.07 billion kWhr of electricity is estimated to have been generated from PV in the US during 2008, or 0.27% of total electrical generation. PV market penetration is set for a paradigm shift, as fluctuating hydrocarbon prices and an acknowledgement of the environmental impacts associated with their use, combined with breakthrough new PV technologies, such as thin-film and BIPV, are driving the cost of energy generated with PV to parity or cost advantage versus more traditional forms of energy generation. In addition to reaching cost parity with grid supplied power, a key to the long-term success of PV as a viable energy alternative is the reliability of systems in the field. New technologies may or may not have the same failure modes as previous technologies. Reliability testing and product lifetime issues continue to be one of the key bottlenecks in the rapid commercialization of PV technologies today. In this paper, we highlight the critical need for moving away from relying on traditional qualification and safety tests as a measure of reliability and focus instead on designing for reliability and its integration into the product development process. A drive towards quantitative predictive accelerated testing is emphasized and an industrial collaboration model addressing reliability challenges is proposed.

  6. Reliability and Probabilistic Risk Assessment - How They Play Together

    Science.gov (United States)

    Safie, Fayssal M.; Stutts, Richard G.; Zhaofeng, Huang

    2015-01-01

    PRA methodology is one of the probabilistic analysis methods that NASA brought from the nuclear industry to assess the risk of LOM, LOV and LOC for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability and statistical data to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: What can go wrong? How likely is it? What is the severity of the degradation? Since 1986, NASA, along with industry partners, has conducted a number of PRA studies to predict the overall launch vehicles risks. Planning Research Corporation conducted the first of these studies in 1988. In 1995, Science Applications International Corporation (SAIC) conducted a comprehensive PRA study. In July 1996, NASA conducted a two-year study (October 1996 - September 1998) to develop a model that provided the overall Space Shuttle risk and estimates of risk changes due to proposed Space Shuttle upgrades. After the Columbia accident, NASA conducted a PRA on the Shuttle External Tank (ET) foam. This study was the most focused and extensive risk assessment that NASA has conducted in recent years. It used a dynamic, physics-based, integrated system analysis approach to understand the integrated system risk due to ET foam loss in flight. Most recently, a PRA for Ares I launch vehicle has been performed in support of the Constellation program. Reliability, on the other hand, addresses the loss of functions. In a broader sense, reliability engineering is a discipline that involves the application of engineering principles to the design and processing of products, both hardware and software, for meeting product reliability requirements or goals. It is a very broad design-support discipline. It has important interfaces with many other engineering disciplines. Reliability as a figure of merit (i.e. the metric) is the probability that an item will
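
    The fault-tree/event-tree arithmetic underlying PRA can be sketched as follows: basic-event probabilities roll up through AND/OR gates, and an accident sequence probability is the product of the initiating-event frequency and the branch failure probabilities. The events and numbers below are assumed for illustration, not taken from any NASA study.

    # Sketch of the basic PRA arithmetic: a small fault tree (AND/OR gates over
    # basic events) feeding an event-tree sequence; all probabilities assumed.
    def or_gate(*p):   # at least one input fails (independent inputs)
        q = 1.0
        for pi in p:
            q *= (1.0 - pi)
        return 1.0 - q

    def and_gate(*p):  # all inputs must fail
        q = 1.0
        for pi in p:
            q *= pi
        return q

    # Fault tree: "cooling function lost" = pump train A AND pump train B fail,
    # where each train fails if its pump OR its valve fails.
    p_train_a = or_gate(1e-3, 5e-4)          # pump, valve
    p_train_b = or_gate(1e-3, 5e-4)
    p_cooling_lost = and_gate(p_train_a, p_train_b)

    # Event tree: initiating event, then success/failure branches.
    p_initiator = 1e-2                       # per mission (assumed)
    p_sequence = p_initiator * p_cooling_lost
    print(f"P(cooling lost | demand) = {p_cooling_lost:.2e}")
    print(f"P(accident sequence)     = {p_sequence:.2e}")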

  7. Comparison of cryopreservation bags for hematopoietic progenitor cells using a WBC-enriched product.

    Science.gov (United States)

    Dijkstra-Tiekstra, Margriet J; Hazelaar, Sandra; Gkoumassi, Effimia; Weggemans, Margienus; de Wildt-Eggen, Janny

    2015-04-01

    Hematopoietic progenitor cells (HPC) are stored in cryopreservation bags that are resistant to liquid nitrogen. Since Cryocyte bags of Baxter (B-bags) are no longer available, an alternative bag was sought. Also, the influence of freezing volume was studied. Miltenyi Biotec (MB)- and MacoPharma (MP)-bags passed the integrity tests without failure. Comparing MB- and MP-bags with B-bags, no difference in WBC recovery or viability was found when using a WBC-enriched product as a "dummy" HPC product. Further, a freezing volume of 30 mL resulted in better WBC recovery and viability than 60 mL. Additional studies using real HPC might be necessary. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Reliability Analysis of the CERN Radiation Monitoring Electronic System CROME

    CERN Document Server

    AUTHOR|(CDS)2126870

    For the new in-house developed CERN Radiation Monitoring Electronic System (CROME) a reliability analysis is necessary to ensure compliance with the statutory requirements regarding the Safety Integrity Level. The Safety Integrity Level required by the IEC 60532 standard is SIL 2 (for the Safety Integrated Functions Measurement, Alarm Triggering and Interlock Triggering). The first step of the reliability analysis was a system and functional analysis, which served as the basis for the implementation of the CROME system in the software “Isograph”. In the “Prediction” module of Isograph the failure rates of all components were calculated. Failure rates for passive components were calculated according to Military Handbook 217, and failure rates for active components were obtained from lifetime tests by the manufacturers. The FMEA was carried out together with the board designers and implemented in the “FMECA” module of Isograph. The FMEA served as the basis for the Fault Tree Analysis and the detection of weak points...
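
    The kind of roll-up such a prediction produces can be sketched as follows: component failure rates are summed to a board-level rate, the dangerous undetected share is split off, and the result is compared against the SIL 2 band for high-demand operation commonly cited from IEC 61508 (1e-7 to 1e-6 dangerous failures per hour). All component values, the dangerous-failure fraction and the diagnostic coverage below are assumptions, not CROME data.

    # Toy roll-up (assumed values, not CROME data): sum component failure rates,
    # take the dangerous undetected share, and compare against the commonly cited
    # IEC 61508 SIL 2 high-demand band of 1e-7 .. 1e-6 dangerous failures per hour.

    components = {                      # failure rates in failures per hour
        "voltage_regulator": 120e-9,
        "adc":                80e-9,
        "fpga":              200e-9,
        "relay_driver":       60e-9,
    }

    lambda_total = sum(components.values())

    dangerous_fraction = 0.5            # assumed safe/dangerous split
    diagnostic_coverage = 0.9           # assumed fraction of dangerous failures detected

    lambda_du = lambda_total * dangerous_fraction * (1.0 - diagnostic_coverage)

    sil2_upper = 1e-6                   # dangerous failures per hour
    print(f"Total failure rate:        {lambda_total:.2e} /h")
    print(f"Dangerous undetected rate: {lambda_du:.2e} /h")
    print(f"Below SIL 2 upper bound:   {lambda_du < sil2_upper}")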

  9. Quantum Virtual Machine (QVM)

    Energy Technology Data Exchange (ETDEWEB)

    2016-11-18

    There is a lack of state-of-the-art HPC simulation tools for simulating general quantum computing. Furthermore, there are no real software tools that integrate current quantum computers into existing classical HPC workflows. This product, the Quantum Virtual Machine (QVM), solves this problem by providing an extensible framework for pluggable virtual, or physical, quantum processing units (QPUs). It enables the execution of low level quantum assembly codes and returns the results of such executions.

  10. Molecular Science Computing: 2010 Greenbook

    Energy Technology Data Exchange (ETDEWEB)

    De Jong, Wibe A.; Cowley, David E.; Dunning, Thom H.; Vorpagel, Erich R.

    2010-04-02

    This 2010 Greenbook outlines the science drivers for performing integrated computational environmental molecular research at EMSL and defines the next-generation HPC capabilities that must be developed at the MSC to address this critical research. The EMSL MSC Science Panel used EMSL’s vision and science focus and white papers from current and potential future EMSL scientific user communities to define the scientific direction and resulting HPC resource requirements presented in this 2010 Greenbook.

  11. Simulation Approach to Mission Risk and Reliability Analysis, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — It is proposed to develop and demonstrate an integrated total-system risk and reliability analysis approach that is based on dynamic, probabilistic simulation. This...

  12. The effect of loss functions on empirical Bayes reliability analysis

    Directory of Open Access Journals (Sweden)

    Camara Vincent A. R.

    1998-01-01

    Full Text Available The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function with respect to changing of the loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the use of the “popular” squared error loss function, we shall derive some expressions corresponding to empirical Bayes reliability estimates obtained with the Higgins–Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, will be used as a criterion to numerically compare our results. It is shown that empirical Bayes reliability functions are in general sensitive to the choice of the loss function, and that the squared error loss does not always yield the best empirical Bayes reliability estimate.
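
    The sensitivity the authors describe follows from the fact that the Bayes estimate minimizes the posterior expected loss, so changing the loss changes the estimate. The numerical sketch below uses a Beta posterior for a survival probability and a generic logarithmic-type loss as a stand-in; it does not reproduce the paper's Higgins-Tsokos or Harris loss functions, and the posterior parameters are invented.

    # Numerical sketch: the Bayes estimate minimizes posterior expected loss, so
    # different loss functions yield different reliability estimates. The Beta
    # posterior and the log-type loss are stand-ins, not the paper's exact forms.
    import numpy as np
    from scipy import stats

    a, b = 9, 2                                    # assumed Beta posterior for R
    grid = np.linspace(1e-4, 1 - 1e-4, 2000)
    post = stats.beta.pdf(grid, a, b)
    post /= post.sum()                             # discretized posterior weights

    def bayes_estimate(loss):
        """Estimate r_hat that minimizes the posterior expected loss."""
        risks = [(post * loss(grid, r)).sum() for r in grid]
        return grid[int(np.argmin(risks))]

    sq_loss  = lambda R, r: (R - r) ** 2
    log_loss = lambda R, r: (np.log(R) - np.log(r)) ** 2   # stand-in log-type loss

    print("Squared-error estimate:", round(bayes_estimate(sq_loss), 4))   # ~ posterior mean
    print("Log-type loss estimate:", round(bayes_estimate(log_loss), 4))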

  13. Advanced Simulation Capability for Environmental Management - Current Status and Phase II Demonstration Results - 13161

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, Roger R.; Flach, Greg [Savannah River National Laboratory, Savannah River Site, Bldg 773-43A, Aiken, SC 29808 (United States); Freshley, Mark D.; Freedman, Vicky; Gorton, Ian [Pacific Northwest National Laboratory, MSIN K9-33, P.O. Box 999, Richland, WA 99352 (United States); Dixon, Paul; Moulton, J. David [Los Alamos National Laboratory, MS B284, P.O. Box 1663, Los Alamos, NM 87544 (United States); Hubbard, Susan S.; Faybishenko, Boris; Steefel, Carl I.; Finsterle, Stefan [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, MS 50B-4230, Berkeley, CA 94720 (United States); Marble, Justin [Department of Energy, 19901 Germantown Road, Germantown, MD 20874-1290 (United States)

    2013-07-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and High-Performance Computing (HPC) Multi-process Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial tool-sets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  14. Advanced Simulation Capability for Environmental Management - Current Status and Phase II Demonstration Results - 13161

    International Nuclear Information System (INIS)

    Seitz, Roger R.; Flach, Greg; Freshley, Mark D.; Freedman, Vicky; Gorton, Ian; Dixon, Paul; Moulton, J. David; Hubbard, Susan S.; Faybishenko, Boris; Steefel, Carl I.; Finsterle, Stefan; Marble, Justin

    2013-01-01

    The U.S. Department of Energy (US DOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Tool-sets and High-Performance Computing (HPC) Multi-process Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, tool-sets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial tool-sets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  15. ADVANCED SIMULATION CAPABILITY FOR ENVIRONMENTAL MANAGEMENT- CURRENT STATUS AND PHASE II DEMONSTRATION RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.

    2013-02-26

    The U.S. Department of Energy (USDOE) Office of Environmental Management (EM), Office of Soil and Groundwater, is supporting development of the Advanced Simulation Capability for Environmental Management (ASCEM). ASCEM is a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The modular and open source high-performance computing tool facilitates integrated approaches to modeling and site characterization that enable robust and standardized assessments of performance and risk for EM cleanup and closure activities. The ASCEM project continues to make significant progress in development of computer software capabilities with an emphasis on integration of capabilities in FY12. Capability development is occurring for both the Platform and Integrated Toolsets and High-Performance Computing (HPC) Multiprocess Simulator. The Platform capabilities provide the user interface and tools for end-to-end model development, starting with definition of the conceptual model, management of data for model input, model calibration and uncertainty analysis, and processing of model output, including visualization. The HPC capabilities target increased functionality of process model representations, toolsets for interaction with Platform, and verification and model confidence testing. The Platform and HPC capabilities are being tested and evaluated for EM applications in a set of demonstrations as part of Site Applications Thrust Area activities. The Phase I demonstration focusing on individual capabilities of the initial toolsets was completed in 2010. The Phase II demonstration completed in 2012 focused on showcasing integrated ASCEM capabilities. For Phase II, the Hanford Site deep vadose zone (BC Cribs) served as an application site for an end-to-end demonstration of capabilities, with emphasis on integration and linkages between the Platform and HPC components. Other demonstrations

  16. Linkage reliability in local area network

    International Nuclear Information System (INIS)

    Buissson, J.; Sanchis, P.

    1984-11-01

    Local area networks for industrial applications, e.g. in nuclear power plants, differ from their counterparts intended for office use in that they are required to meet more stringent requirements in terms of reliability, security and availability. The designers of such networks take full advantage of office-oriented developments (more specifically, integrated circuits) and increase the performance capabilities to meet the industrial requirements [fr

  17. Reliable Rescue Routing Optimization for Urban Emergency Logistics under Travel Time Uncertainty

    Directory of Open Access Journals (Sweden)

    Qiuping Li

    2018-02-01

    Full Text Available The reliability of rescue routes is critical for urban emergency logistics during disasters. However, studies on reliable rescue routing under stochastic networks are still rare. This paper proposes a multiobjective rescue routing model for urban emergency logistics that accounts for travel time reliability. A hybrid metaheuristic integrating ant colony optimization (ACO) and tabu search (TS) was designed to solve the model. An experiment optimizing rescue routing plans under a real urban storm event was carried out to validate the proposed model. The experimental results showed how our approach can improve rescue efficiency with high travel time reliability.
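
    For a single route, the travel-time reliability objective can be illustrated with a simple calculation: if link travel times are treated as independent normal random variables (an assumption made here only for illustration), the probability of arriving within a deadline has a closed form. The link values below are invented.

    # Toy travel-time reliability: probability that a route meets a deadline when
    # link travel times are independent normal variables (illustrative assumption).
    import math

    def route_reliability(links, deadline):
        """links: list of (mean_minutes, std_minutes) per road segment."""
        mean = sum(m for m, s in links)
        var = sum(s * s for m, s in links)
        z = (deadline - mean) / math.sqrt(var)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

    route_a = [(5, 1.0), (8, 2.5), (4, 0.5)]   # hypothetical storm-affected links
    route_b = [(6, 0.8), (7, 1.0), (6, 0.9)]

    for name, route in (("A", route_a), ("B", route_b)):
        print(f"Route {name}: P(arrive within 20 min) = {route_reliability(route, 20):.3f}")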

  18. A computational Bayesian approach to dependency assessment in system reliability

    International Nuclear Information System (INIS)

    Yontay, Petek; Pan, Rong

    2016-01-01

    Due to the increasing complexity of engineered products, it is of great importance to develop a tool to assess reliability dependencies among components and systems under the uncertainty of system reliability structure. In this paper, a Bayesian network approach is proposed for evaluating the conditional probability of failure within a complex system, using a multilevel system configuration. Coupling with Bayesian inference, the posterior distributions of these conditional probabilities can be estimated by combining failure information and expert opinions at both system and component levels. Three data scenarios are considered in this study, and they demonstrate that, with the quantification of the stochastic relationship of reliability within a system, the dependency structure in system reliability can be gradually revealed by the data collected at different system levels. - Highlights: • A Bayesian network representation of system reliability is presented. • Bayesian inference methods for assessing dependencies in system reliability are developed. • Complete and incomplete data scenarios are discussed. • The proposed approach is able to integrate reliability information from multiple sources at multiple levels of the system.
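
    A minimal enumeration example of the idea, with invented numbers rather than the paper's model: two redundant components share a common stress condition, so their failures are dependent through that parent node, and observing a system failure updates the belief about the shared cause.

    # Minimal Bayesian-network-style enumeration (toy numbers): two redundant
    # components depend on a shared stress node; conditioning on system failure
    # updates the probability of the common cause.

    P_stress = {True: 0.10, False: 0.90}
    P_fail_given_stress = {True: 0.20, False: 0.02}   # per-component failure prob.

    def joint(stress, c1_fails, c2_fails):
        p = P_stress[stress]
        for fails in (c1_fails, c2_fails):
            pf = P_fail_given_stress[stress]
            p *= pf if fails else (1.0 - pf)
        return p

    # A 1-out-of-2 system fails only if both components fail.
    p_system_fail = sum(joint(s, True, True) for s in (True, False))
    p_stress_given_fail = joint(True, True, True) / p_system_fail

    print(f"P(system fails)           = {p_system_fail:.5f}")
    print(f"P(stress | system failed) = {p_stress_given_fail:.3f}")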

  19. The reliability of physical examination tests for the diagnosis of anterior cruciate ligament rupture--A systematic review.

    Science.gov (United States)

    Lange, Toni; Freiberg, Alice; Dröge, Patrik; Lützner, Jörg; Schmitt, Jochen; Kopkow, Christian

    2015-06-01

    Systematic literature review. Despite their frequent application in routine care, a systematic review on the reliability of clinical examination tests to evaluate the integrity of the ACL is missing. To summarize and evaluate intra- and interrater reliability research on physical examination tests used for the diagnosis of ACL tears. A comprehensive systematic literature search was conducted in MEDLINE, EMBASE and AMED until May 30th 2013. Studies were included if they assessed the intra- and/or interrater reliability of physical examination tests for the integrity of the ACL. Methodological quality was evaluated with the Quality Appraisal of Reliability Studies (QAREL) tool by two independent reviewers. The search yielded 110 hits, of which seven articles met the inclusion criteria. These studies examined the reliability of four physical examination tests. Intrarater reliability was assessed in three studies and ranged from fair to almost perfect (Cohen's k = 0.22-1.00). Interrater reliability was assessed in all included studies and ranged from slight to almost perfect (Cohen's k = 0.02-0.81). The Lachman test is the physical test with the highest intrarater reliability (Cohen's k = 1.00), and the Lachman test performed in the prone position is the test with the highest interrater reliability (Cohen's k = 0.81). Included studies were partly of low methodological quality. A meta-analysis could not be performed due to the heterogeneity in study populations, reliability measures and methodological quality of included studies. Systematic investigations on the reliability of physical examination tests to assess the integrity of the ACL are scarce and of varying methodological quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
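
    The reliability figures quoted are Cohen's kappa values; the short sketch below computes kappa for two raters scoring the same patients positive or negative on a test, using synthetic ratings purely to illustrate the statistic.

    # Cohen's kappa for two raters and a binary test outcome (synthetic ratings).
    from collections import Counter

    rater1 = ["+", "+", "-", "-", "+", "-", "+", "+", "-", "-"]
    rater2 = ["+", "+", "-", "+", "+", "-", "+", "-", "-", "-"]

    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n

    # Chance agreement from each rater's marginal frequencies.
    m1, m2 = Counter(rater1), Counter(rater2)
    chance = sum((m1[c] / n) * (m2[c] / n) for c in set(rater1) | set(rater2))

    kappa = (observed - chance) / (1.0 - chance)
    print(f"Observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")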

  20. On the reliability of seasonal climate forecasts

    Science.gov (United States)

    Weisheimer, A.; Palmer, T. N.

    2014-01-01

    Seasonal climate forecasts are being used increasingly across a range of application sectors. A recent UK governmental report asked: how good are seasonal forecasts on a scale of 1–5 (where 5 is very good), and how good can we expect them to be in 30 years' time? Seasonal forecasts are made from ensembles of integrations of numerical models of climate. We argue that ‘goodness’ should be assessed first and foremost in terms of the probabilistic reliability of these ensemble-based forecasts; reliable inputs are essential for any forecast-based decision-making. We propose that a ‘5’ should be reserved for systems that are not only reliable overall, but where, in particular, small ensemble spread is a reliable indicator of low ensemble forecast error. We study the reliability of regional temperature and precipitation forecasts of the current operational seasonal forecast system of the European Centre for Medium-Range Weather Forecasts, universally regarded as one of the world-leading operational institutes producing seasonal climate forecasts. A wide range of ‘goodness’ rankings, depending on region and variable (with summer forecasts of rainfall over Northern Europe performing exceptionally poorly), is found. Finally, we discuss the prospects of reaching ‘5’ across all regions and variables in 30 years' time. PMID:24789559

  1. Sensor Selection and Data Validation for Reliable Integrated System Health Management

    Science.gov (United States)

    Garg, Sanjay; Melcher, Kevin J.

    2008-01-01

    For new access to space systems with challenging mission requirements, effective implementation of integrated system health management (ISHM) must be available early in the program to support the design of systems that are safe, reliable, and highly autonomous. Early ISHM availability is also needed to promote design for affordable operations; increased knowledge of functional health provided by ISHM supports construction of more efficient operations infrastructure. Lack of early ISHM inclusion in the system design process could result in retrofitting health management systems to augment and expand operational and safety requirements, thereby increasing program cost and risk due to increased instrumentation and computational complexity. Having the right sensors generating the required data to perform condition assessment, such as fault detection and isolation, with a high degree of confidence is critical to reliable operation of ISHM. Also, the data being generated by the sensors need to be qualified to ensure that the assessments made by the ISHM are not based on faulty data. NASA Glenn Research Center has been developing technologies for sensor selection and data validation as part of the FDDR (Fault Detection, Diagnosis, and Response) element of the Upper Stage project of the Ares 1 launch vehicle development. This presentation will provide an overview of the GRC approach to sensor selection and data quality validation and will present recent results from applications that are representative of the complexity of propulsion systems for access to space vehicles. A brief overview of the sensor selection and data quality validation approaches is provided below. The NASA GRC developed Systematic Sensor Selection Strategy (S4) is a model-based procedure for systematically and quantitatively selecting an optimal sensor suite to provide overall health assessment of a host system. S4 can be logically partitioned into three major subdivisions: the knowledge base, the down
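
    S4 itself is a model-based, quantitative NASA GRC procedure; purely to illustrate the flavour of the selection problem, the toy below greedily adds sensors until a hypothetical fault list is covered. The sensor names and their fault coverage are invented.

    # Toy greedy selection (not the S4 algorithm): pick a small sensor suite whose
    # combined coverage detects every fault in a hypothetical fault list.

    def greedy_sensor_suite(coverage, faults):
        remaining = set(faults)
        suite = []
        while remaining:
            best = max(coverage, key=lambda s: len(coverage[s] & remaining))
            if not coverage[best] & remaining:
                break                              # nothing detects the leftovers
            suite.append(best)
            remaining -= coverage[best]
        return suite, remaining

    coverage = {                                   # hypothetical sensor -> faults
        "turbine_speed":    {"pump_degradation", "shaft_imbalance"},
        "chamber_pressure": {"injector_clog", "pump_degradation"},
        "skin_temp":        {"insulation_loss"},
        "vibration":        {"shaft_imbalance", "bearing_wear"},
    }
    faults = ["pump_degradation", "shaft_imbalance", "injector_clog",
              "insulation_loss", "bearing_wear"]

    suite, uncovered = greedy_sensor_suite(coverage, faults)
    print("Selected suite:", suite, "| uncovered:", uncovered)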

  2. Concept of turbines for ultrasupercritical, supercritical, and subcritical steam conditions

    Science.gov (United States)

    Mikhailov, V. E.; Khomenok, L. A.; Pichugin, I. I.; Kovalev, I. A.; Bozhko, V. V.; Vladimirskii, O. A.; Zaitsev, I. V.; Kachuriner, Yu. Ya.; Nosovitskii, I. A.; Orlik, V. G.

    2017-11-01

    The article describes the design features of condensing turbines for ultrasupercritical initial steam conditions (USSC) and large-capacity cogeneration turbines for super- and subcritical steam conditions having increased steam extractions for district heating purposes. For improving the efficiency and reliability indicators of USSC turbines, it is proposed to use forced cooling of the head high-temperature thermally stressed parts of the high- and intermediate-pressure rotors, reaction-type blades of the high-pressure cylinder (HPC) and at least the first stages of the intermediate-pressure cylinder (IPC), the double-wall HPC casing with narrow flanges of its horizontal joints, a rigid HPC rotor, an extended system of regenerative steam extractions without using extractions from the HPC flow path, and the low-pressure cylinder's inner casing moving in accordance with the IPC thermal expansions. For cogeneration turbines, it is proposed to shift the upper district heating extraction (or its significant part) to the feedwater pump turbine, which will make it possible to improve the turbine plant efficiency and arrange both district heating extractions in the IPC. In addition, in the case of using a disengaging coupling or precision conical bolts in the coupling, this solution will make it possible to disconnect the LPC in shifting the turbine to operate in the cogeneration mode. The article points out the need to intensify turbine development efforts with the use of modern methods for improving their efficiency and reliability involving, in particular, the use of relatively short 3D blades, last stages fitted with longer rotor blades, evaporation techniques for removing moisture in the last-stage diaphragm, and LPC rotor blades with radial grooves on their leading edges.

  3. Structural reliability methods: Code development status

    Science.gov (United States)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
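
    For the simplest case handled by such fast probability integration, a linear limit state with independent normal variables, the reliability index and failure probability have closed forms; the sketch below is a generic first-order example with invented numbers, not NESSUS itself.

    # Minimal first-order reliability example (generic, not NESSUS): linear limit
    # state g = R - S with independent normal strength R and load effect S.
    import math

    mu_R, sigma_R = 320.0, 25.0      # hypothetical strength parameters
    mu_S, sigma_S = 240.0, 30.0      # hypothetical load-effect parameters

    beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)
    p_failure = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))   # Phi(-beta)

    print(f"Reliability index beta = {beta:.2f}")
    print(f"P(failure) = Phi(-beta) = {p_failure:.2e}")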

  4. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  5. Research on the Reliability Analysis of the Integrated Modular Avionics System Based on the AADL Error Model

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2018-01-01

    Full Text Available In recent years, the integrated modular avionics (IMA) concept has been introduced to replace traditional federated avionics. Different avionics functions are hosted on a shared IMA platform, and IMA adopts partition technologies to provide logical isolation among different functions. The IMA architecture can provide more sophisticated and powerful avionics functionality; meanwhile, the failure propagation patterns in IMA are more complex. The resource-sharing feature introduces unintended interconnections among different functions, which makes the failure propagation modes more complex. Therefore, this paper proposes an architecture analysis and design language (AADL)-based method to establish a reliability model of the IMA platform. The error behavior of individual software and hardware components in the IMA system is modeled, and the corresponding AADL error model of failure propagation among components, and between software and hardware, is given. Finally, the display function of the IMA platform is taken as an example to illustrate the effectiveness of the proposed method.

  6. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It covers the definition and importance of reliability; the development of reliability engineering; the failure rate and the failure probability density function and their types; CFR and the exponential distribution; IFR and the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation for the exponential, normal and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by FTA.
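
    Two of the distribution families covered by the book have simple closed-form reliability functions, shown below with illustrative parameters.

    # Reliability functions for the constant-failure-rate (exponential) and
    # Weibull cases; the parameters are illustrative only.
    import math

    def reliability_exponential(t, lam):
        """R(t) = exp(-lambda * t) for a constant failure rate lambda (CFR)."""
        return math.exp(-lam * t)

    def reliability_weibull(t, beta, eta):
        """R(t) = exp(-(t/eta)**beta); beta < 1 gives DFR, beta > 1 gives IFR."""
        return math.exp(-((t / eta) ** beta))

    t = 1000.0                       # operating hours
    print("Exponential, lambda = 1e-4/h   :", round(reliability_exponential(t, 1e-4), 4))
    print("Weibull, beta = 2, eta = 5000 h:", round(reliability_weibull(t, 2.0, 5000.0), 4))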

  7. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest that future systems will experience very high fault rates. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Practical limits on power consumption in HPC systems will require future systems to embrace innovative architectures, increasing the levels of hardware and software complexity. The resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. These techniques must seek to improve resilience at reasonable overheads in power consumption and performance. While the HPC community has developed various solutions, application-level as well as system-based, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance and power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software ecosystems, which are expected to be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience based on the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. The catalog of resilience design patterns provides designers with reusable design elements. We define a design framework that enhances our understanding of the important

  8. The effect of loss functions on empirical Bayes reliability analysis

    Directory of Open Access Journals (Sweden)

    Vincent A. R. Camara

    1999-01-01

    Full Text Available The aim of the present study is to investigate the sensitivity of empirical Bayes estimates of the reliability function with respect to changing of the loss function. In addition to applying some of the basic analytical results on empirical Bayes reliability obtained with the use of the “popular” squared error loss function, we shall derive some expressions corresponding to empirical Bayes reliability estimates obtained with the Higgins–Tsokos, the Harris and our proposed logarithmic loss functions. The concept of efficiency, along with the notion of integrated mean square error, will be used as a criterion to numerically compare our results.

  9. Hyperthermia quality assurance

    International Nuclear Information System (INIS)

    Shrivastava, P.N.; Paliwal, B.R.

    1984-01-01

    The Hyperthermia Physics Center (HPC), operating under contract with the National Cancer Institute, is developing a Quality Assurance program for local and regional hyperthermia. The major clinical problem in hyperthermia treatments is that they are extremely difficult to plan, execute, monitor and reproduce. A scientific basis for treatment planning can be established only after ensuring that the performance of heat-generating and temperature-monitoring systems is reliable. The HPC is presently concentrating on providing uniform NBS-traceable calibration of thermometers and evaluation of reproducibility for power generator operation, applicator performance, phantom compositions, system calibrations and personnel shielding. The organizational plan, together with recommended evaluation measurements, procedures and criteria, is presented

  10. Use of reliability engineering in development and manufacturing of metal parts

    International Nuclear Information System (INIS)

    Khan, A.; Iqbal, M.A.; Asif, M.

    2005-01-01

    Reliability engineering predicts failure modes and weak links before the system is built, rather than relying on after-the-fact failure case studies. Reliability engineering analysis helps with manufacturing economy, assembly accuracy and qualification by testing, leading to the production of metal parts in the aerospace industry. This methodology also minimizes the performance constraints in any requirement for the application of metal components in aerospace systems. Reliability engineering predicts the life of parts under loading conditions, whether dynamic or static. Reliability predictions can help engineers make decisions about the design of components, materials selection and qualification under applied stress levels. Two methods of reliability prediction, i.e. Part Stress Analysis and Parts Count, have been used in this study. In this paper we discuss how these two methods can be used to measure the reliability of a system during development phases, which includes measuring the effect of environmental and operational variables. Equations are used to measure the reliability of each type of component, and these are then integrated to obtain the system-level reliability for the analysis. (author)
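
    As a rough sketch of a Parts Count style roll-up, the failure rates of part types are multiplied by their quantities and quality factors and summed to a system rate; the quantities, base rates and factors below are invented, not values taken from MIL-HDBK-217 or from the paper.

    # Parts Count style roll-up (invented rates and factors):
    # lambda_system = sum(N_i * lambda_base_i * pi_Q_i)

    parts = [
        # (description, quantity, base rate [failures / 1e6 h], quality factor)
        ("fastener set",     12, 0.02, 1.0),
        ("machined bracket",  4, 0.05, 1.5),
        ("weld joint",        8, 0.10, 2.0),
        ("bearing",           2, 0.80, 1.0),
    ]

    lambda_system = sum(n * lam * pi_q for _, n, lam, pi_q in parts)   # per 1e6 h
    mtbf_hours = 1e6 / lambda_system

    print(f"System failure rate = {lambda_system:.2f} failures per 1e6 h")
    print(f"MTBF ~= {mtbf_hours:,.0f} h")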

  11. Impacts of Contingency Reserve on Nodal Price and Nodal Reliability Risk in Deregulated Power Systems

    DEFF Research Database (Denmark)

    Zhao, Qian; Wang, Peng; Goel, Lalit

    2013-01-01

    The deregulation of power systems allows customers to participate in power market operation. In deregulated power systems, nodal price and nodal reliability are adopted to represent locational operation cost and reliability performance. Since contingency reserve (CR) plays an important role in reliable operation, the CR commitment should be considered in operational reliability analysis. In this paper, a CR model based on customer reliability requirements has been formulated and integrated into power market settlement. A two-step market clearing process has been proposed to determine generation

  12. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  13. Reliability analysis of digital I and C systems at KAERI

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2013-01-01

    This paper provides an overview of the ongoing research activities on a reliability analysis of digital instrumentation and control (I and C) systems of nuclear power plants (NPPs) performed by the Korea Atomic Energy Research Institute (KAERI). The research activities include the development of a new safety-critical software reliability analysis method by integrating the advantages of existing software reliability analysis methods, a fault coverage estimation method based on fault injection experiments, and a new human reliability analysis method for computer-based main control rooms (MCRs) based on human performance data from the APR-1400 full-scope simulator. The research results are expected to be used to address various issues such as the licensing issues related to digital I and C probabilistic safety assessment (PSA) for advanced digital-based NPPs. (author)

  14. ES-RBE Event sequence reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.E.J.

    1991-01-01

    The Event Sequence Reliability Benchmark Exercise (ES-RBE) can be considered a logical extension of the other three Reliability Benchmark Exercises: the RBE on Systems Analysis, the RBE on Common Cause Failures and the RBE on Human Factors. The latter, constituting Activity No. 1, was concluded by the end of 1987. The ES-RBE covered the techniques that are currently used for analysing and quantifying sequences of events starting from an initiating event to various plant damage states, including analysis of various system failures and/or successes, human intervention failure and/or success, and dependencies between systems. In this way, one of the aims of the ES-RBE was to integrate the experience gained in the previous exercises

  15. Study of complete interconnect reliability for a GaAs MMIC power amplifier

    Science.gov (United States)

    Lin, Qian; Wu, Haifeng; Chen, Shan-ji; Jia, Guoqing; Jiang, Wei; Chen, Chao

    2018-05-01

    By combining finite element analysis (FEA) and artificial neural network (ANN) techniques, a complete prediction of interconnect reliability for a monolithic microwave integrated circuit (MMIC) power amplifier (PA) under both direct current (DC) and alternating current (AC) operating conditions is achieved effectively in this article. As an example, an MMIC PA is modelled to study electromigration failure of the interconnect. This is the first time that the interconnect reliability of an MMIC PA has been studied under DC and AC operating conditions simultaneously. By training on data from the FEA, a high-accuracy ANN model for PA reliability is constructed. Then, based on the reliability database obtained from the ANN model, important guidance can be given for improving the reliability design of the IC.

  16. Transparent reliability model for fault-tolerant safety systems

    International Nuclear Information System (INIS)

    Bodsberg, Lars; Hokstad, Per

    1997-01-01

    A reliability model is presented which may serve as a tool for identification of cost-effective configurations and operating philosophies of computer-based process safety systems. The main merit of the model is the explicit relationship in the mathematical formulas between failure cause and the means used to improve system reliability such as self-test, redundancy, preventive maintenance and corrective maintenance. A component failure taxonomy has been developed which allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. Furthermore, the taxonomy distinguishes between failures due to excessive environmental stresses and failures initiated by humans during engineering and operation. Attention has been given to develop a transparent model which provides predictions which are in good agreement with observed system performance, and which is applicable for non-experts in the field of reliability

  17. NDT Reliability - Final Report. Reliability in non-destructive testing (NDT) of the canister components

    Energy Technology Data Exchange (ETDEWEB)

    Pavlovic, Mato; Takahashi, Kazunori; Mueller, Christina; Boehm, Rainer (BAM, Federal Inst. for Materials Research and Testing, Berlin (Germany)); Ronneteg, Ulf (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden))

    2008-12-15

    This report describes the methodology of the reliability investigation performed on the ultrasonic phased array NDT system, developed by SKB in collaboration with Posiva, for inspection of the canisters for permanent storage of spent nuclear fuel. The canister is composed of a cast iron insert surrounded by a copper shell. The shell is composed of the tube and the lid/base, which are welded to the tube after the fuel has been placed in the tube. The manufacturing process of the canister parts and the welding process are described. Possible defects, which might arise in the canister components during manufacturing or in the weld during welding, are identified. The number of real defects in manufactured components has been limited. Therefore the reliability of the NDT system has been determined using a number of test objects with artificial defects. The reliability analysis is based on signal response analysis. The conventional signal response analysis is adopted and further developed before being applied to the modern ultrasonic phased-array NDT system. The concept of multi-parameter a, where the response of the NDT system is dependent on more than just one parameter, is introduced. The weakness of using the peak signal response in the analysis is demonstrated, and integration of the amplitudes in the C-scan is proposed as an alternative. The calculation of the volume POD, when the part is inspected with multiple configurations, is also presented. The reliability analysis is supported by ultrasonic simulation based on the point source synthesis method
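
    A common, much simpler way to summarize NDT reliability than the signal-response analysis used in the report is an empirical probability-of-detection (POD) table from hit/miss results binned by defect size; the sketch below uses invented inspection records only.

    # Toy empirical probability-of-detection (POD) summary from hit/miss records
    # binned by defect size (invented data, not the SKB/Posiva results).
    from collections import defaultdict

    trials = [(0.5, False), (0.7, False), (0.8, True), (1.0, False), (1.2, True),
              (1.5, True), (1.6, True), (2.0, True), (2.3, True), (3.0, True)]

    bins = defaultdict(lambda: [0, 0])          # size bin -> [hits, opportunities]
    for size, hit in trials:
        key = int(size)                         # 1 mm wide bins
        bins[key][0] += int(hit)
        bins[key][1] += 1

    for key in sorted(bins):
        hits, total = bins[key]
        print(f"{key}-{key + 1} mm: POD = {hits}/{total} = {hits / total:.2f}")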

  18. Reliability program plan for the Kilowatt Isotope Power System (KIPS) technology verification phase

    International Nuclear Information System (INIS)

    1978-01-01

    This document is an integral part of the Kilowatt Isotope Power System (KIPS) Program Plan. It defines the KIPS Reliability Program Plan for the Technology Verification Phase and delineates the reliability assurance tasks that are to be accomplished by Sundstrand and its suppliers during the design, fabrication and testing of the KIPS

  19. Towards a HPC-oriented parallel implementation of a learning algorithm for bioinformatics applications.

    Science.gov (United States)

    D'Angelo, Gianni; Rampone, Salvatore

    2014-01-01

    The huge quantity of data produced in biomedical research needs sophisticated algorithmic methodologies for its storage, analysis, and processing. High Performance Computing (HPC) appears as a magic bullet in this challenge. However, several hard-to-solve parallelization and load balancing problems arise in this context. Here we discuss the HPC-oriented implementation of a general purpose learning algorithm, originally conceived for DNA analysis and recently extended to treat uncertainty on data (U-BRAIN). The U-BRAIN algorithm is a learning algorithm that finds a Boolean formula in disjunctive normal form (DNF), of approximately minimum complexity, that is consistent with a set of data (instances) which may have missing bits. The conjunctive terms of the formula are computed in an iterative way by identifying, from the given data, a family of sets of conditions that must be satisfied by all the positive instances and violated by all the negative ones; such conditions allow the computation of a set of coefficients (relevances) for each attribute (literal), forming a probability distribution that guides the selection of the term literals. The great versatility that characterizes it makes U-BRAIN applicable in many of the fields in which there are data to be analyzed. However, the memory and execution time required are of order O(n^3) and O(n^5), respectively, so the algorithm is unaffordable for huge data sets. We find mathematical and programming solutions able to lead us towards the implementation of the U-BRAIN algorithm on parallel computers. First we give a dynamic programming model of the U-BRAIN algorithm, then we minimize the representation of the relevances. When the data are of great size we are forced to use mass memory, and depending on where the data are actually stored, the access times can be quite different. According to the evaluation of algorithmic efficiency based on the Disk Model, in order to reduce the costs of
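
    The covering idea behind the conjunctive terms (satisfy as many positive instances as possible while violating every negative one) can be illustrated with a small greedy toy; this is not the U-BRAIN relevance computation, just an invented example of the intuition.

    # Tiny greedy illustration of the covering idea behind DNF learning: grow a
    # conjunctive term literal by literal until it excludes every negative
    # instance, keeping as many positives covered as possible. This is NOT the
    # U-BRAIN relevance computation.
    positives = [(1, 0, 1), (1, 1, 1), (1, 0, 0)]
    negatives = [(0, 0, 1), (0, 1, 0), (1, 1, 0)]

    def satisfies(instance, term):
        return all(instance[i] == v for i, v in term)

    def grow_term(positives, negatives, n_bits):
        term, neg_left = [], list(negatives)
        while neg_left:
            best = None
            for i in range(n_bits):
                for v in (0, 1):
                    if (i, v) in term:
                        continue
                    cand = term + [(i, v)]
                    pos_kept = sum(satisfies(p, cand) for p in positives)
                    neg_kept = sum(satisfies(n, cand) for n in neg_left)
                    score = (len(neg_left) - neg_kept, pos_kept)   # exclude negatives first
                    if best is None or score > best[0]:
                        best = (score, cand)
            term = best[1]
            neg_left = [n for n in neg_left if satisfies(n, term)]
        return term

    term = grow_term(positives, negatives, 3)
    print("Term (bit index, required value):", term)
    print("Positives covered:", [p for p in positives if satisfies(p, term)])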

  20. Reliability of Visual and Somatosensory Feedback in Skilled Movement: The Role of the Cerebellum.

    Science.gov (United States)

    Mizelle, J C; Oparah, Alexis; Wheaton, Lewis A

    2016-01-01

    The integration of vision and somatosensation is required to allow for accurate motor behavior. While both sensory systems contribute to an understanding of the state of the body through continuous updating and estimation, how the brain processes unreliable sensory information remains to be fully understood in the context of complex action. Using functional brain imaging, we sought to understand the role of the cerebellum in weighting visual and somatosensory feedback by selectively reducing the reliability of each sense individually during a tool use task. We broadly hypothesized upregulated activation of the sensorimotor and cerebellar areas during movement with reduced visual reliability, and upregulated activation of occipital brain areas during movement with reduced somatosensory reliability. As specifically compared to reduced somatosensory reliability, we expected greater activations of ipsilateral sensorimotor cerebellum for intact visual and somatosensory reliability. Further, we expected that ipsilateral posterior cognitive cerebellum would be affected with reduced visual reliability. We observed that reduced visual reliability results in a trend towards the relative consolidation of sensorimotor activation and an expansion of cerebellar activation. In contrast, reduced somatosensory reliability was characterized by the absence of cerebellar activations and a trend towards the increase of right frontal, left parietofrontal activation, and temporo-occipital areas. Our findings highlight the role of the cerebellum for specific aspects of skillful motor performance. This has relevance to understanding basic aspects of brain functions underlying sensorimotor integration, and provides a greater understanding of cerebellar function in tool use motor control.

  1. Understanding the compaction behaviour of low-substituted HPC: macro, micro, and nano-metric evaluations.

    Science.gov (United States)

    ElShaer, Amr; Al-Khattawi, Ali; Mohammed, Afzal R; Warzecha, Monika; Lamprou, Dimitrios A; Hassanin, Hany

    2018-06-01

    The fast development of materials science has resulted in the emergence of new pharmaceutical materials with superior physical and mechanical properties. Low-substituted hydroxypropyl cellulose is an ether derivative of cellulose and is praised for its multi-functionality as a binder, disintegrant, film coating agent and as a suitable material for medical dressings. Nevertheless, very little is known about the compaction behaviour of this polymer. The aim of the current study was to evaluate the compaction and disintegration behaviour of four grades of L-HPC, namely LH32, LH21, LH11, and LHB1. The macrometric properties of the four powders were studied and the compaction behaviour was evaluated using the out-of-die method. LH11 and LH22 showed poor flow properties as the powders were dominated by fibrous particles with high aspect ratios, which reduced the powder flow. LH32 showed a weak compressibility profile and demonstrated a large elastic region, making it harder for this polymer to deform plastically. These findings are supported by AFM, which revealed the high roughness of the LH32 powder (100.09 ± 18.84 nm), resulting in a small area of contact but promoting mechanical interlocking. On the contrary, LH21 and LH11 powders had smooth surfaces which enabled a larger contact area and higher adhesion forces of 21.01 ± 11.35 nN and 9.50 ± 5.78 nN, respectively. This promoted bond formation during compression, as the LH21 and LH11 powders had a low yield strength.

  2. Optimization Using Metamodeling in the Context of Integrated Computational Materials Engineering (ICME)

    Energy Technology Data Exchange (ETDEWEB)

    Hammi, Youssef; Horstemeyer, Mark F; Wang, Paul; David, Francis; Carino, Ricolindo

    2013-11-18

    Predictive Design Technologies, LLC (PDT) proposed to employ Integrated Computational Materials Engineering (ICME) tools to help the manufacturing industry in the United States regain the competitive advantage in the global economy. ICME uses computational materials science tools within a holistic system in order to accelerate materials development, improve design optimization, and unify design and manufacturing. With the advent of accurate modeling and simulation along with significant increases in high performance computing (HPC) power, virtual design and manufacturing using ICME tools provide the means to reduce product development time and cost by alleviating costly trial-and-error physical design iterations while improving overall quality and manufacturing efficiency. To reduce the computational cost necessary for the large-scale HPC simulations and to make the methodology accessible for small and medium-sized manufacturers (SMMs), metamodels are employed. Metamodels are approximate models (functional relationships between input and output variables) that can reduce the simulation times by one to two orders of magnitude. In Phase I, PDT, partnered with Mississippi State University (MSU), demonstrated the feasibility of the proposed methodology by employing MSU's internal state variable (ISV) plasticity-damage model with the help of metamodels to optimize the microstructure-process-property-cost for tube manufacturing processes used by Plymouth Tube Company (PTC), which involves complicated temperature and mechanical loading histories. PDT quantified the microstructure-property relationships for PTC's SAE J525 electric resistance-welded cold drawn low carbon hydraulic 1010 steel tube manufacturing processes at seven different material states and calibrated the ISV plasticity material parameters to fit experimental tensile stress-strain curves. PDT successfully performed large scale finite element (FE) simulations in an HPC environment using the ISV plasticity
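
    The role of a metamodel can be sketched generically: fit a cheap approximation to a handful of expensive simulation samples, then query the approximation instead of re-running the simulation. The "simulation" below is a synthetic stand-in, not the ISV plasticity model or PTC's process data.

    # Generic metamodel sketch: fit a cheap polynomial surrogate to a few samples
    # of an expensive simulation (synthetic stand-in for an FE run), then query
    # the surrogate instead of the simulation.
    import numpy as np

    def expensive_simulation(draw_temperature):
        """Synthetic stand-in for an FE run: residual stress vs. draw temperature."""
        return 0.002 * (draw_temperature - 650.0) ** 2 + 80.0

    samples = np.linspace(500.0, 800.0, 7)          # small design of experiments
    responses = np.array([expensive_simulation(t) for t in samples])

    coeffs = np.polyfit(samples, responses, deg=2)  # quadratic surrogate
    surrogate = np.poly1d(coeffs)

    query = 690.0
    print(f"Surrogate  at {query} C: {surrogate(query):.1f}")
    print(f"Simulation at {query} C: {expensive_simulation(query):.1f}")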

  3. Reliability database development and plant performance improvement effort at Korea Hydro and Nuclear Power Co

    International Nuclear Information System (INIS)

    Oh, S. J.; Hwang, S. W.; Na, J. H.; Lim, H. S.

    2008-01-01

    Nuclear utilities in recent years have focused on improved plant performance and equipment reliability. In the U.S., there is a movement toward process integration. Examples are the INPO AP-913 equipment reliability program and the standard nuclear performance model developed by NEI. The synergistic effect of an integrated approach can be far greater than the individual effects of each program. In Korea, PSA for all Korean NPPs (Nuclear Power Plants) has been completed. Plant performance monitoring and improvement is an important goal for KHNP (Korea Hydro and Nuclear Power Company), and a risk monitoring system called RIMS has been developed for all nuclear plants. KHNP is in the process of voluntarily implementing a maintenance rule program similar to that in the U.S. In the future, KHNP would like to expand this effort to an equipment reliability program and to achieve the highest equipment reliability and improved plant performance. For improving equipment reliability, the current trend is moving from corrective maintenance toward preventive/predictive maintenance. With the emphasis on preventive maintenance, the failure causes and the operating history and environment are important. Hence, the development of an accurate reliability database is necessary. Furthermore, the database should be updated regularly and maintained as a living program to reflect the current status of equipment reliability. This paper examines the development of the reliability database system and its application to maintenance optimization or Risk Informed Application (RIA). (authors)

  4. Reliability considerations of electronics components for the deep underwater muon and neutrino detection system

    International Nuclear Information System (INIS)

    Leskovar, B.

    1980-02-01

    The reliability of some electronic components for the Deep Underwater Muon and Neutrino Detection (DUMAND) System is discussed. An introductory overview of engineering concepts and techniques for reliability assessment is given. Component reliability is discussed in the context of the major factors causing failures, particularly with respect to physical and chemical causes, process technology, and testing and screening procedures. Failure rates are presented for discrete devices and for integrated circuits as well as for basic electronic components. Furthermore, the military reliability specifications and standards for semiconductor devices are reviewed

  5. Designing Fault-Injection Experiments for the Reliability of Embedded Systems

    Science.gov (United States)

    White, Allan L.

    2012-01-01

    This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of these efforts, which have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field-data, arguments-from-design, and fault-injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily-observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault-injection with field-data and arguments-from-design.

  6. Use of PRA methodology for enhancing operational safety and reliability

    International Nuclear Information System (INIS)

    Chu, B.; Rumble, E.; Najafi, B.; Putney, B.; Young, J.

    1985-01-01

    This paper describes a broad scope, on-going R and D study, sponsored by the Electric Power Research Institute (EPRI) to utilize key features of the state-of-the-art plant information management and system analysis techniques to develop and demonstrate a practical engineering tool for assisting plant engineering and operational staff to perform their activities more effectively. The study is foreseen to consist of two major activities: to develop a user-friendly, integrated software system; and to demonstrate the applications of this software on-site. This integrated software, Reliability Analysis Program with In-Plant Data (RAPID), will consist of three types of interrelated elements: an Executive Controller which will provide engineering and operations staff users with interface and control of the other two software elements, a Data Base Manager which can acquire, store, select, and transfer data, and Applications Modules which will perform the specific reliability-oriented functions. A broad range of these functions has been envisaged. The immediate emphasis will be focused on four application modules: a Plant Status Module, a Technical Specification Optimization Module, a Reliability Assessment Module, and a Utility Module for acquiring plant data

  7. Modularly Integrated MEMS Technology

    National Research Council Canada - National Science Library

    Eyoum, Marie-Angie N

    2006-01-01

    Process design, development and integration to fabricate reliable MEMS devices on top of VLSI-CMOS electronics without damaging the underlying circuitry have been investigated throughout this dissertation...

  8. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  9. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    Science.gov (United States)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. The man-machine-environment system is a complex system composed of human factors, machinery and equipment, and the environment. The reliability of each individual factor must be analyzed in order to gradually transition to the study of three-factor reliability. Meanwhile, the dynamic relationship among man, machine and environment should be considered to establish an effective fuzzy evaluation mechanism to truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact and machinery equipment failure theory, the reliabilities of the human factors, machinery equipment and environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated to obtain a weighted result, which indicated that the reliability value of this chemical production system was 86.29. Against the given evaluation domain it can be seen that the reliability of the integrated man-machine-environment system is in good standing, and effective measures for further improvement were proposed according to the fuzzy calculation results.
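
    The final weighted roll-up step can be illustrated simply: factor-level scores on a 0-100 scale are combined with weights into a system score and mapped to a grade. The weights, scores and grade thresholds below are invented, not the paper's data.

    # Illustrative weighted aggregation of factor-level reliability scores
    # (invented weights, scores and thresholds).
    factor_scores = {"human": 82.0, "machine": 90.0, "environment": 85.0}
    weights       = {"human": 0.40, "machine": 0.35, "environment": 0.25}

    system_score = sum(factor_scores[k] * weights[k] for k in factor_scores)

    grades = [(90, "excellent"), (80, "good"), (70, "acceptable"), (0, "poor")]
    grade = next(label for threshold, label in grades if system_score >= threshold)

    print(f"System reliability score = {system_score:.2f} ({grade})")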

  10. An overview of reliability methods in mechanical and structural design

    Science.gov (United States)

    Wirsching, P. H.; Ortiz, K.; Lee, S. J.

    1987-01-01

    An evaluation is made of modern methods of fast probability integration and Monte Carlo treatment for the assessment of structural systems' and components' reliability. Fast probability integration methods are noted to be more efficient than Monte Carlo ones. This is judged to be an important consideration when several point probability estimates must be made in order to construct a distribution function. An example illustrating the relative efficiency of the various methods is included.
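
    As a point of reference for the comparison above, the following minimal sketch estimates a failure probability by crude Monte Carlo sampling of a toy limit-state function g(R, S) = R - S; the distributions and sample size are arbitrary illustrations, not values from the paper.

    # Hedged sketch: crude Monte Carlo estimate of P(failure) = P(g(X) < 0)
    # for a toy limit state g = resistance - load. All parameters are illustrative.
    import random

    def g(resistance, load):
        return resistance - load  # failure when the load exceeds the resistance

    random.seed(0)
    n = 200_000
    failures = 0
    for _ in range(n):
        r = random.gauss(10.0, 1.5)   # resistance ~ N(10, 1.5^2)
        s = random.gauss(6.0, 2.0)    # load ~ N(6, 2.0^2)
        if g(r, s) < 0:
            failures += 1

    pf = failures / n
    print(f"estimated failure probability: {pf:.4f}")

    For this linear Gaussian case the exact answer is Phi(-beta) with beta = (10 - 6) / sqrt(1.5^2 + 2.0^2) = 1.6, i.e. roughly 0.055; this is the kind of closed-form result that fast probability integration methods exploit directly instead of sampling, which is why they tend to be more efficient.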

  11. Novel technique for reliability testing of silicon integrated circuits

    NARCIS (Netherlands)

    Le Minh, P.; Wallinga, Hans; Woerlee, P.H.; van den Berg, Albert; Holleman, J.

    2001-01-01

    We propose a simple, inexpensive technique with high resolution to identify the weak spots in integrated circuits by means of a non-destructive photochemical process in which photoresist is used as the photon detection tool. The experiment was done to localize the breakdown link of thin silicon

  12. Resolution of GSI B-56 - Emergency diesel generator reliability

    International Nuclear Information System (INIS)

    Serkiz, A.W.

    1989-01-01

    The need for an emergency diesel generator (EDG) reliability program has been established by 10 CFR Part 50, Section 50.63, Loss of All Alternating Current Power, which requires that licensees assess their station blackout coping and recovery capability. EDGs are the principal emergency ac power sources for avoiding a station blackout. Regulatory Guide 1.155, Station Blackout, identifies a need for (1) a nuclear unit EDG reliability level of at least 0.95, and (2) an EDG reliability program to monitor and maintain the required EDG reliability levels. NUMARC-8700, Guidelines and Technical Bases for NUMARC Initiatives Addressing Station Blackout at Light Water Reactors, also provides guidance on such needs. The resolution of GSI B-56, Diesel Reliability will be accomplished by issuing Regulatory Guide 1.9, Rev. 3, Selection, Design, Qualification, Testing, and Reliability of Diesel Generator Units Used as Onsite Electric Power Systems at Nuclear Plants. This revision will integrate into a single regulatory guide pertinent guidance previously addressed in R.G. 1.9, Rev. 2, R.G. 1.108, and Generic Letter 84-15. R.G. 1.9 has been expanded to define the principal elements of an EDG reliability program for monitoring and maintaining EDG reliability levels selected for SBO. In addition, alert levels and corrective actions have been defined to detect a deteriorating situation for all EDGs assigned to a particular nuclear unit, as well as an individual problem EDG

  13. Optimized Interface Diversity for Ultra-Reliable Low Latency Communication (URLLC)

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Liu, Rongkuan; Popovski, Petar

    2017-01-01

    An important ingredient of the future 5G systems will be Ultra-Reliable Low-Latency Communication (URLLC). A way to offer URLLC without intervention in the baseband/PHY layer design is to use interface diversity and integrate multiple communication interfaces, each interface based on a different...... technology. Our approach is to use rateless codes to seamlessly distribute coded payload and redundancy data across multiple available communication interfaces. We formulate an optimization problem to find the payload allocation weights that maximize the reliability at specific target latency values...

  14. Infrastructure for Multiphysics Software Integration in High Performance Computing-Aided Science and Engineering

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Michael T. [Illinois Rocstar LLC, Champaign, IL (United States); Safdari, Masoud [Illinois Rocstar LLC, Champaign, IL (United States); Kress, Jessica E. [Illinois Rocstar LLC, Champaign, IL (United States); Anderson, Michael J. [Illinois Rocstar LLC, Champaign, IL (United States); Horvath, Samantha [Illinois Rocstar LLC, Champaign, IL (United States); Brandyberry, Mark D. [Illinois Rocstar LLC, Champaign, IL (United States); Kim, Woohyun [Illinois Rocstar LLC, Champaign, IL (United States); Sarwal, Neil [Illinois Rocstar LLC, Champaign, IL (United States); Weisberg, Brian [Illinois Rocstar LLC, Champaign, IL (United States)

    2016-10-15

    The project described in this report constructed and exercised an innovative multiphysics coupling toolkit called the Illinois Rocstar MultiPhysics Application Coupling Toolkit (IMPACT). IMPACT is an open source, flexible, natively parallel infrastructure for coupling multiple uniphysics simulation codes into multiphysics computational systems. IMPACT works with codes written in several high-performance-computing (HPC) programming languages, and is designed from the beginning for HPC multiphysics code development. It is designed to be minimally invasive to the individual physics codes being integrated, and has few requirements on those physics codes for integration. The goal of IMPACT is to provide the support needed to enable coupling existing tools together in unique and innovative ways to produce powerful new multiphysics technologies without extensive modification and rewrite of the physics packages being integrated. There are three major outcomes from this project: 1) construction, testing, application, and open-source release of the IMPACT infrastructure, 2) production of example open-source multiphysics tools using IMPACT, and 3) identification and engagement of interested organizations in the tools and applications resulting from the project. This last outcome represents the incipient development of a user community and application ecosystem being built using IMPACT. Multiphysics coupling standardization can only come from organizations working together to define needs and processes that span the space of necessary multiphysics outcomes, which Illinois Rocstar plans to continue driving toward. The IMPACT system, including source code, documentation, and test problems, is now available through the public gitHUB.org system to anyone interested in multiphysics code coupling. Many of the basic documents explaining the use and architecture of IMPACT are also attached as appendices to this document. Online HTML documentation is available through the gitHUB site.

  15. SAPHIRE6.64, System Analysis Programs for Hands-on Integrated Reliability

    International Nuclear Information System (INIS)

    2001-01-01

    1 - Description of program or function: SAPHIRE is a collection of programs developed for the purpose of performing those functions necessary to create and analyze a complete Probabilistic Risk Assessment (PRA) primarily for nuclear power plants. The programs included in this suite are the Integrated Reliability and Risk Analysis System (IRRAS), the System Analysis and Risk Assessment (SARA) system, the Models And Results Database (MAR-D) system, and the Fault tree, Event tree and P and ID (FEP) editors. Previously these programs were released as separate packages. These programs include functions to allow the user to create event trees and fault trees, to define accident sequences and basic event failure data, to solve system and accident sequence fault trees, to quantify cut sets, and to perform uncertainty analysis on the results. Also included in this program are features to allow the analyst to generate reports and displays that can be used to document the results of an analysis. Since this software is a very detailed technical tool, the user of this program should be familiar with PRA concepts and the methods used to perform these analyses. 2 - Methods: SAPHIRE is written in MODULA-2 and uses an integrated commercial graphics package to interactively construct and edit fault trees. The fault tree solving methods used are industry recognized top down algorithms. For quantification, the program uses standard methods to propagate the failure information through the generated cut sets. SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE which automates the process for evaluating operational events at commercial nuclear power plants. Using GEM an analyst can estimate the risk associated with operational events (that is, perform a Level 1, Level 2, and Level 3 analysis for operational events) in a very efficient and expeditious manner. This on-line reference guide will
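
    The cut-set quantification step mentioned above can be illustrated with a small, self-contained sketch: given hypothetical minimal cut sets and basic-event probabilities (not taken from SAPHIRE or from any plant model), the top-event probability is approximated with the common rare-event and min-cut upper-bound formulas.

    # Hedged sketch: quantifying a top event from minimal cut sets.
    # Cut sets and basic-event probabilities are hypothetical illustrations only.

    basic_events = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4, "power": 2e-4}
    minimal_cut_sets = [{"pump_a", "pump_b"}, {"valve"}, {"pump_a", "power"}]

    def cut_set_prob(cut_set):
        """Probability of a single cut set, assuming independent basic events."""
        p = 1.0
        for event in cut_set:
            p *= basic_events[event]
        return p

    # Rare-event approximation: sum of cut-set probabilities.
    rare_event = sum(cut_set_prob(cs) for cs in minimal_cut_sets)

    # Min-cut upper bound: 1 - prod(1 - P(cut set)).
    upper_bound = 1.0
    for cs in minimal_cut_sets:
        upper_bound *= (1.0 - cut_set_prob(cs))
    upper_bound = 1.0 - upper_bound

    print(f"rare-event approximation: {rare_event:.3e}")
    print(f"min-cut upper bound:      {upper_bound:.3e}")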

  16. Building fast, reliable, and adaptive software for computational science

    International Nuclear Information System (INIS)

    Rendell, A P; Antony, J; Armstrong, W; Janes, P; Yang, R

    2008-01-01

    Building fast, reliable, and adaptive software is a constant challenge for computational science, especially given recent developments in computer architecture. This paper outlines some of our efforts to address these three issues in the context of computational chemistry. First, a simple linear performance model that can be used to predict the performance of Hartree-Fock calculations is discussed. Second, the use of interval arithmetic to assess the numerical reliability of the sort of integrals used in electronic structure methods is presented. Third, the use of dynamic code modification as part of a framework to support adaptive software is outlined

  17. Reliability consideration of low-power-grid-tied inverter for photovoltaic application

    OpenAIRE

    Liu, J.; Henze, N.

    2009-01-01

    In recent years PV modules have been improved markedly. An excellent reliability has been validated, corresponding to a Mean Time Between Failure (MTBF) of between 500 and 6000 years in commercial utility power systems. Manufacturers can provide performance guarantees for PV modules for at least 20 years. If an average inverter lifetime of 5 years is assumed, it is evident that the overall reliability of PV systems [PVSs] with integrated inverter is determined chiefly by the inverter i...

  18. Reliability of nine programs of topological predictions and their application to integral membrane channel and carrier proteins.

    Science.gov (United States)

    Reddy, Abhinay; Cho, Jaehoon; Ling, Sam; Reddy, Vamsee; Shlykov, Maksim; Saier, Milton H

    2014-01-01

    We evaluated topological predictions for nine different programs, HMMTOP, TMHMM, SVMTOP, DAS, SOSUI, TOPCONS, PHOBIUS, MEMSAT-SVM (hereinafter referred to as MEMSAT), and SPOCTOPUS. These programs were first evaluated using four large topologically well-defined families of secondary transporters, and the three best programs were further evaluated using topologically more diverse families of channels and carriers. In the initial studies, the order of accuracy was: SPOCTOPUS > MEMSAT > HMMTOP > TOPCONS > PHOBIUS > TMHMM > SVMTOP > DAS > SOSUI. Some families, such as the Sugar Porter Family (2.A.1.1) of the Major Facilitator Superfamily (MFS; TC #2.A.1) and the Amino Acid/Polyamine/Organocation (APC) Family (TC #2.A.3), were correctly predicted with high accuracy while others, such as the Mitochondrial Carrier (MC) (TC #2.A.29) and the K(+) transporter (Trk) families (TC #2.A.38), were predicted with much lower accuracy. For small, topologically homogeneous families, SPOCTOPUS and MEMSAT were generally most reliable, while with large, more diverse superfamilies, HMMTOP often proved to have the greatest prediction accuracy. We next developed a novel program, TM-STATS, that tabulates HMMTOP, SPOCTOPUS or MEMSAT-based topological predictions for any subdivision (class, subclass, superfamily, family, subfamily, or any combination of these) of the Transporter Classification Database (TCDB; www.tcdb.org) and examined the following subclasses: α-type channel proteins (TC subclasses 1.A and 1.E), secreted pore-forming toxins (TC subclass 1.C) and secondary carriers (subclass 2.A). Histograms were generated for each of these subclasses, and the results were analyzed according to subclass, family and protein. The results provide an update of topological predictions for integral membrane transport proteins as well as guides for the development of more reliable topological prediction programs, taking family-specific characteristics into account. © 2014 S. Karger AG, Basel.

  19. Reliability analysis of the reactor protection system with fault diagnosis

    International Nuclear Information System (INIS)

    Lee, D.Y.; Han, J.B.; Lyou, J.

    2004-01-01

    The main function of a reactor protection system (RPS) is to maintain the reactor core integrity and the reactor coolant system pressure boundary. The RPS uses a 2-out-of-m redundant architecture to assure reliable operation. The system reliability of the RPS is a very important factor in the probabilistic safety assessment (PSA) evaluation in the nuclear field. Evaluating the system failure rate of a k-out-of-m redundant system is not straightforward with deterministic methods. In this paper, a reliability analysis method using the binomial process is suggested to calculate the failure rate of an RPS with a fault diagnosis function. The suggested method is compared with the result of a Markov process to verify its validity, and is applied to several kinds of RPS architectures for a comparative evaluation of their reliability. (orig.)
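
    A minimal sketch of the binomial calculation referred to above is given below for a generic k-out-of-m architecture with identical, independent channels; the channel reliability used in the example is an arbitrary illustration, not a value from the paper.

    # Hedged sketch: reliability of a k-out-of-m redundant system with identical,
    # independent channels, computed from the binomial distribution.
    from math import comb

    def k_out_of_m_reliability(k, m, p_channel):
        """System succeeds if at least k of the m channels work (each with probability p_channel)."""
        return sum(comb(m, i) * p_channel**i * (1 - p_channel)**(m - i) for i in range(k, m + 1))

    # Example: a 2-out-of-4 protection system architecture with a hypothetical
    # channel reliability of 0.99.
    print(f"2-out-of-4 system reliability: {k_out_of_m_reliability(2, 4, 0.99):.8f}")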

  20. Reliability demonstration test planning using bayesian analysis

    International Nuclear Information System (INIS)

    Chandran, Senthil Kumar; Arul, John A.

    2003-01-01

    In Nuclear Power Plants, the reliability of all the safety systems is very critical from the safety viewpoint, and it is essential that the reliability requirements be met while satisfying the design constraints. From practical experience, it is found that the reliability of complex systems such as the Safety Rod Drive Mechanism is of the order of 10^-4 with an uncertainty factor of 10. To demonstrate the reliability of such systems is prohibitive in terms of cost and time, as the number of tests needed is very large. The purpose of this paper is to develop a Bayesian reliability demonstration testing procedure for exponentially distributed failure times with a gamma prior distribution on the failure rate, which can be easily and effectively used to demonstrate component/subsystem/system reliability conformance to stated requirements. The important questions addressed in this paper are: with zero failures, how long should one perform the tests and how many components are required to conclude, with a given degree of confidence, that the component under test meets the reliability requirement. The procedure is explained with an example. It can also be extended to demonstrations involving a larger number of failures. The approach presented is applicable for deriving test plans for demonstrating component failure rates of nuclear power plants, as failure data for similar components are becoming available from existing plants elsewhere. The advantages of this procedure are that the criterion upon which the procedure is based is simple and pertinent, that the fitting of the prior distribution is an integral part of the procedure and is based on the use of information regarding two percentiles of this distribution, and finally that the procedure is straightforward and easy to apply in practice. (author)
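
    The zero-failure test-planning question posed above can be sketched as follows: for exponential failure times with a gamma prior on the failure rate, zero failures in a total test time T give a gamma posterior again, and T is increased until the posterior probability that the failure rate meets the requirement reaches the desired confidence. The prior parameters, requirement and confidence below are hypothetical, and the gamma CDF is restricted to integer shape so the sketch needs no external libraries.

    # Hedged sketch: Bayesian zero-failure demonstration test planning for
    # exponential failure times with a gamma prior on the failure rate.
    # Prior parameters, reliability requirement and confidence are hypothetical.
    from math import exp, factorial

    def gamma_cdf(x, shape, rate):
        """CDF of a gamma distribution with integer shape (Erlang), rate parametrization."""
        return 1.0 - exp(-rate * x) * sum((rate * x) ** i / factorial(i) for i in range(shape))

    def required_test_time(lam_req, confidence, prior_shape, prior_rate, step=10.0):
        """Smallest total test time T (device-hours) such that, after observing zero
        failures, P(lambda <= lam_req | data) >= confidence. The posterior is
        Gamma(prior_shape, prior_rate + T)."""
        t = 0.0
        while gamma_cdf(lam_req, prior_shape, prior_rate + t) < confidence:
            t += step
        return t

    # Hypothetical example: demonstrate lambda <= 1e-4 per hour with 90% confidence,
    # starting from a Gamma(shape=2, rate=5000 h) prior.
    T = required_test_time(lam_req=1e-4, confidence=0.90, prior_shape=2, prior_rate=5000.0)
    print(f"total zero-failure test time needed: {T:.0f} device-hours")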

  1. Verification on reliability of heat exchanger for primary cooling system

    International Nuclear Information System (INIS)

    Koike, Sumio; Gorai, Shigeru; Onoue, Ryuji; Ohtsuka, Kaoru

    2010-07-01

    Prior to the JMTR refurbishment, verification of the reliability of the heat exchangers for the primary cooling system was carried out to investigate the integrity of these continuously used components. As a result, no significant corrosion, decrease of tube thickness or cracks were observed on the heat exchangers, and the integrity of the heat exchangers was confirmed. For long-term usage of the heat exchangers, maintenance based on periodic inspection and a long-term maintenance plan is scheduled. (author)

  2. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the „Todi” HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  3. SCEAPI: A unified Restful Web API for High-Performance Computing

    Science.gov (United States)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.

  4. Parallel computing in genomic research: advances and applications.

    Science.gov (United States)

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

    Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article brings a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities.

  5. On the reliability evaluation of communication equipment for SMART using FMEA

    International Nuclear Information System (INIS)

    Kim, D. H.; Suh, Y. S.; Koo, I. S.; Song, Ki Sang; Han, Byung Rae

    2000-07-01

    This report describes the reliability analysis method for communication equipment using FMEA and FTA. The major equipment applicable to the SMART communication networks comprises repeaters, bridges, routers and gateways, and the FMEA or FTA technique can be applied to them. The FMEA process includes analysis of the target system, decision on the level of analysis, drawing of a reliability block diagram according to function, determination of failure modes, recording of fault causes, completion of the FMEA sheet, and the FMEA level decision. With FTA, it is possible to identify the causes of the top event and to evaluate system reliability. With these considerations in mind, FMEA and FTA were carried out for the NIC, hub, client server and router. Also, an integrated network model for a nuclear power plant is suggested, and the reliability analysis procedure based on FTA is shown. If any proprietary communication device is developed, its reliability can be easily determined with the proposed procedures.

  6. On the reliability evaluation of communication equipment for SMART using FMEA

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. H.; Suh, Y. S.; Koo, I. S.; Song, Ki Sang; Han, Byung Rae

    2000-07-01

    This report describes the reliability analysis method for communication equipment using FMEA and FTA. The major equipment applicable to the SMART communication networks comprises repeaters, bridges, routers and gateways, and the FMEA or FTA technique can be applied to them. The FMEA process includes analysis of the target system, decision on the level of analysis, drawing of a reliability block diagram according to function, determination of failure modes, recording of fault causes, completion of the FMEA sheet, and the FMEA level decision. With FTA, it is possible to identify the causes of the top event and to evaluate system reliability. With these considerations in mind, FMEA and FTA were carried out for the NIC, hub, client server and router. Also, an integrated network model for a nuclear power plant is suggested, and the reliability analysis procedure based on FTA is shown. If any proprietary communication device is developed, its reliability can be easily determined with the proposed procedures.

  7. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so that the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model application to a software reliability analysis

  8. Equipment reliability process improvement and preventive maintenance optimization

    International Nuclear Information System (INIS)

    Darragi, M.; Georges, A.; Vaillancourt, R.; Komljenovic, D.; Croteau, M.

    2004-01-01

    The Gentilly-2 Nuclear Power Plant wants to optimize its preventive maintenance program through an Integrated Equipment Reliability Process. All equipment-reliability-related activities should be reviewed and optimized in a systematic approach, especially for aging plants such as G2. This new approach has to be founded on best-practice methods, with the purpose of rationalizing the preventive maintenance program and monitoring the performance of on-site systems, structures and components (SSC). A rational preventive maintenance strategy is based on optimized task scopes and frequencies depending on their applicability, critical effects on system safety and plant availability, as well as cost-effectiveness. The efficiency of the preventive maintenance strategy is systematically monitored through degradation indicators. (author)

  9. Reliability analysis of protection system of advanced pressurized water reactor - APR 1400

    International Nuclear Information System (INIS)

    Varde, P. V.; Choi, J. G.; Lee, D. Y.; Han, J. B.

    2003-04-01

    Reliability analysis was carried out for the protection system of the Korean Advanced Pressurized Water Reactor - APR 1400. The main focus of this study was the reliability analysis of the digital protection system; however, towards giving an integrated statement of complete protection reliability, an attempt has been made to include the shutdown devices and other related aspects based on the information available to date. A sensitivity analysis has been carried out for the critical components / functions in the system. Other aspects, such as importance analysis and human error reliability for the critical human actions, form part of this work. The framework provided by this study and the results obtained show that this analysis has the potential to be utilized as part of a risk-informed approach for future design / regulatory applications

  10. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume is the reference manual for the Systems Analysis and Risk Assessment (SARA) System Version 5.0, a microcomputer-based system used to analyze the safety issues of a "family" [i.e., a power plant, a manufacturing facility, any facility on which a probabilistic risk assessment (PRA) might be performed]. The SARA database contains PRA data primarily for the dominant accident sequences of a family and descriptive information about the family including event trees, fault trees, and system model diagrams. The number of facility databases that can be accessed is limited only by the amount of disk storage available. To simulate changes to family systems, SARA users change the failure rates of initiating and basic events and/or modify the structure of the cut sets that make up the event trees, fault trees, and systems. The user then evaluates the effects of these changes through the recalculation of the resultant accident sequence probabilities and importance measures. The results are displayed in tables and graphs that may be printed for reports. A preliminary version of the SARA program was completed in August 1985 and has undergone several updates in response to user suggestions and to maintain compatibility with the other SAPHIRE programs. Version 5.0 of SARA provides the same capability as earlier versions and adds the ability to process unlimited cut sets; display fire, flood, and seismic data; and perform more powerful cut set editing

  11. High Reliability Prototype Quadrupole for the Next Linear Collider

    International Nuclear Information System (INIS)

    Spencer, Cherrill M

    2001-01-01

    The Next Linear Collider (NLC) will require over 5600 magnets, each of which must be highly reliable and/or quickly repairable in order that the NLC reach its 85% overall availability goal. A multidiscipline engineering team was assembled at SLAC to develop a more reliable electromagnet design than historically had been achieved at SLAC. This team carried out a Failure Mode and Effects Analysis (FMEA) on a standard SLAC quadrupole magnet system. They overcame a number of longstanding design prejudices, producing 10 major design changes. This paper describes how a prototype magnet was constructed and the extensive testing carried out on it to prove full functionality with an improvement in reliability. The magnet's fabrication cost will be compared to the cost of a magnet with the same requirements made in the historic SLAC way. The NLC will use over 1600 of these 12.7 mm bore quadrupoles with a range of integrated strengths from 0.6 to 132 Tesla, a maximum gradient of 135 Tesla per meter, an adjustment range of 0 to -20% and core lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micron during the 20% adjustment. A magnetic measurement set-up has been developed that can measure sub-micron shifts of a magnetic center. The prototype satisfied the center shift requirement over the full range of integrated strengths

  12. Electronics/avionics integrity - Definition, measurement and improvement

    Science.gov (United States)

    Kolarik, W.; Rasty, J.; Chen, M.; Kim, Y.

    The authors report on the results obtained from an extensive, three-fold research project: (1) to search the open quality and reliability literature for documented information relative to electronics/avionics integrity; (2) to interpret and evaluate the literature as to significant concepts, strategies, and tools appropriate for use in electronics/avionics product and process integrity efforts; and (3) to develop a list of critical findings and recommendations that will lead to significant progress in product integrity definition, measurement, modeling, and improvements. The research consisted of examining a broad range of trade journals, scientific journals, and technical reports, as well as face-to-face discussions with reliability professionals. Ten significant recommendations have been supported by the research work.

  13. reliability reliability

    African Journals Online (AJOL)

    eobe

    Corresponding author, Tel: +234-703... RELIABILITY ... V, given by the code of practice. However, checks must ... an optimization procedure over the failure domain F corresponding ... of Concrete Members based on Utility Theory. Technical ...

  14. Non-utility generation and demand management reliability of customer delivery systems

    International Nuclear Information System (INIS)

    Hamoud, G.A.; Wang, L.

    1995-01-01

    A probabilistic methodology for evaluating the impact of non-utility generation (NUG) and demand management programs (DMP) on the supply reliability of customer delivery systems was presented. The proposed method was based on the criterion that the supply reliability to the customers on the delivery system should not be affected by the integration of either NUG or DMPs. The method considered the station load profile, the load forecast, and uncertainty in the size and availability of the NUG. Impacts on system reliability were expressed in terms of possible delays of the in-service date for new facilities or in terms of an increase in the system load-carrying capability. Examples to illustrate the proposed methodology were provided. 10 refs., 8 tabs., 2 figs

  15. Systems integration.

    Science.gov (United States)

    Siemieniuch, C E; Sinclair, M A

    2006-01-01

    The paper presents a view of systems integration, from an ergonomics/human factors perspective, emphasising the process of systems integration as is carried out by humans. The first section discusses some of the fundamental issues in systems integration, such as the significance of systems boundaries, systems lifecycle and systems entropy, issues arising from complexity, the implications of systems immortality, and so on. The next section outlines various generic processes for executing systems integration, to act as guides for practitioners. These address both the design of the system to be integrated and the preparation of the wider system in which the integration will occur. Then the next section outlines some of the human-specific issues that would need to be addressed in such processes; for example, indeterminacy and incompleteness, the prediction of human reliability, workload issues, extended situation awareness, and knowledge lifecycle management. For all of these, suggestions and further readings are proposed. Finally, the conclusions section reiterates in condensed form the major issues arising from the above.

  16. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom, otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  17. Passive safety systems reliability and integration of these systems in nuclear power plant PSA

    International Nuclear Information System (INIS)

    La Lumia, V.; Mercier, S.; Marques, M.; Pignatel, J.F.

    2004-01-01

    Innovative nuclear reactor concepts could make use of passive safety features in combination with active safety systems. A passive system does not need active components, external energy, signals or human interaction to operate. These are attractive advantages for improving nuclear plant safety and economic competitiveness. However, specific reliability problems linked to the physical phenomena involved can cause the physical process to stop. In this context, the European Commission (EC) started the RMPS (Reliability Methods for Passive Safety functions) programme. In the RMPS programme, a quantitative reliability evaluation of the RP2 system (Residual Passive heat Removal system on the Primary circuit) was carried out, and the results were introduced into a simplified PSA (Probabilistic Safety Assessment). The aim is to gain experience in defining characteristic parameters for reliability evaluation and in performing PSA including passive systems. The simplified PSA, using the event tree method, is carried out for the total loss of power supplies initiating event leading to severe core damage. Both failures of components and failures of the physical process involved (e.g. natural convection) are taken into account, the latter by a specific method. The physical process failure probabilities are assessed through uncertainty analyses based on assumed probability density functions for the characteristic parameters of the RP2 system. The probabilities are calculated by Monte Carlo simulation coupled to the CATHARE thermal-hydraulic code. The yearly frequency of severe core damage is evaluated for each accident sequence. This analysis has identified the influence of the passive system RP2 and proposes a re-dimensioning of the RP2 system in order to satisfy the probabilistic safety objectives for severe reactor core damage. (authors)

  18. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... This paper presents a reliability analysis of such a system using reliability ... Keywords-compressor system, reliability, reliability block diagram, RBD .... the same structure has been kept with the three subsystems: air flow, oil flow and .... and Safety in Engineering Design", Springer, 2009. [3] P. O'Connor ...

  19. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  20. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  1. Uncertainty management in integrated modelling, the IMAGE case

    International Nuclear Information System (INIS)

    Van der Sluijs, J.P.

    1995-01-01

    Integrated assessment models of global environmental problems play an increasingly important role in decision making. This use demands good insight into the reliability of these models. In this paper we analyze uncertainty management in the IMAGE project (Integrated Model to Assess the Greenhouse Effect). We use a classification scheme comprising the type and source of uncertainty. Our analysis identifies reliability analysis as the main area for improvement. We briefly review a recently developed methodology, NUSAP (Numerical, Unit, Spread, Assessment and Pedigree), that systematically addresses the strength of data in terms of spread, reliability and scientific status (pedigree) of information. This approach is being tested through interviews with model builders. 3 tabs., 20 refs

  2. Enhancing product robustness in reliability-based design optimization

    International Nuclear Information System (INIS)

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. Reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model that is derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • Implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • The comparison with traditional RBDO is provided

  3. Sustainable, Reliable Mission-Systems Architecture

    Science.gov (United States)

    O'Neil, Graham; Orr, James K.; Watson, Steve

    2007-01-01

    A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.

  4. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g. for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policy, shock models, spares, group maintenance and periodic inspection), analysis of common cause failure, and analysis models of repair effect.

  5. Exploiting Redundancy and Application Scalability for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2

    International Nuclear Information System (INIS)

    Marathe, Aniruddha P.; Harris, Rachel A.; Lowenthal, David K.; Supinski, Bronis R. de; Rountree, Barry L.; Schulz, Martin

    2015-01-01

    The use of clouds to execute high-performance computing (HPC) applications has greatly increased recently. Clouds provide several potential advantages over traditional supercomputers and in-house clusters. The most popular cloud is currently Amazon EC2, which provides fixed-cost and variable-cost, auction-based options. The auction market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 auction market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to seven times cheaper than using the on-demand market and up to 44 percent cheaper than the best non-redundant, auction-market algorithm. We extend our adaptive algorithm to incorporate application scalability characteristics for further cost savings. In conclusion, we show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56 percent cost savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale.

  6. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har

  7. Using three-dimension virtual reality main control room for integrated system validation and human reliability analysis

    International Nuclear Information System (INIS)

    Yang Chihwei; Cheng Tsungchieh

    2011-01-01

    This study proposes performance assessment in a three-dimensional virtual reality (3D-VR) main control room (MCR). The assessment is conducted for integrated system validation (ISV) purposes, and also for human reliability analyses (HRA). This paper describes the latest developments in 3D-VR applications designed for familiarization with the MCR, especially taking into account ISV and HRA. Experience with 3D-VR applications and the benefits and advantages of using VR in the training and maintenance of MCR operators in the target NPP are also presented in this paper. Results gathered from the performance measurement lead to hazard mitigation and reduce the risk of human error in the operation and maintenance of nuclear equipment. The latest developments in simulation techniques, including 3D presentation, enhance the above-mentioned benefits and bring MCR simulators closer to reality. In the near future, this type of 3D solution should be applied more and more often in the design of MCR simulators. The presented 3D-VR is related to the MCR in NPPs, but the concept of composition and navigation through the system's elements can easily be applied to any type of technical equipment and should contribute in a similar manner to hazard prevention. (author)

  8. Maximal network reliability for a stochastic power transmission network

    International Nuclear Information System (INIS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2011-01-01

    Many studies regarded a power transmission network as a binary-state network and constructed it with several arcs and vertices to evaluate network reliability. In practice, the power transmission network should be stochastic because each arc (transmission line) combined with several physical lines is multistate. Network reliability is the probability that the network can transmit d units of electric power from a power plant (source) to a high voltage substation at a specific area (sink). This study focuses on searching for the optimal transmission line assignment to the power transmission network such that network reliability is maximized. A genetic algorithm based method integrating the minimal paths and the Recursive Sum of Disjoint Products is developed to solve this assignment problem. A real power transmission network is adopted to demonstrate the computational efficiency of the proposed method while comparing with the random solution generation approach.
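
    To give a feel for how network reliability is evaluated from minimal paths, the sketch below applies inclusion-exclusion over the minimal path sets of a small, hypothetical binary-state network. This is a simplification for illustration only; the paper itself works with multistate arcs and the Recursive Sum of Disjoint Products.

    # Hedged sketch: two-terminal reliability of a small binary-state network,
    # computed by inclusion-exclusion over its minimal path sets.
    # The network, arc reliabilities and path sets are hypothetical.
    from itertools import combinations

    arc_reliability = {"a": 0.95, "b": 0.90, "c": 0.92, "d": 0.88}
    # Minimal path sets from source to sink for a hypothetical small network.
    minimal_paths = [{"a", "b"}, {"c", "d"}, {"a", "d"}]

    def union_prob(paths):
        """P(at least one path set has all of its arcs working), by inclusion-exclusion."""
        total = 0.0
        for k in range(1, len(paths) + 1):
            for subset in combinations(paths, k):
                arcs = set().union(*subset)
                term = 1.0
                for arc in arcs:
                    term *= arc_reliability[arc]
                total += (-1) ** (k + 1) * term
        return total

    print(f"two-terminal network reliability: {union_prob(minimal_paths):.4f}")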

  9. A reliable sewage quality abnormal event monitoring system.

    Science.gov (United States)

    Li, Tianling; Winnel, Melissa; Lin, Hao; Panther, Jared; Liu, Chang; O'Halloran, Roger; Wang, Kewen; An, Taicheng; Wong, Po Keung; Zhang, Shanqing; Zhao, Huijun

    2017-09-15

    With the closing of the water loop through purified recycled water, wastewater becomes a part of the source water, requiring a reliable wastewater quality monitoring system (WQMS) to manage the wastewater source and mitigate potential health risks. However, the development of reliable WQMSs is critically constrained by severe contamination and biofouling of sensors due to the hostile analytical environment of wastewaters, especially raw sewage, which challenges the limits of existing sensing technologies. In this work, we report a technological solution that enables the development of a WQMS for real-time abnormal event detection with high reliability and practicality. A vectored high-flow hydrodynamic self-cleaning approach and a dual-sensor self-diagnostic concept are adopted for the WQMS to effectively counter critical sensor failure issues caused by contamination and biofouling and to ensure the integrity of the sensing data. The performance of the WQMS has been evaluated over a 3-year trial period at different sewage catchment sites across three Australian states. It has been demonstrated that the developed WQMS is capable of continuously operating in raw sewage for a prolonged period of up to 24 months without maintenance and failure, signifying its high reliability and practicality. The demonstrated capability of the WQMS to reliably acquire real-time wastewater quality information advances the development of effective wastewater source management systems. The reported self-cleaning and self-diagnostic concepts should be applicable to other online water quality monitoring systems, opening a new way to counter the common reliability and stability issues caused by sensor contamination and biofouling. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)

  11. Shock and vibration effects on performance reliability and mechanical integrity of proton exchange membrane fuel cells: A critical review and discussion

    Science.gov (United States)

    Haji Hosseinloo, Ashkan; Ehteshami, Mohsen Mousavi

    2017-10-01

    Performance reliability and mechanical integrity are the main bottlenecks in mass commercialization of PEMFCs for applications with inherent harsh environment such as automotive and aerospace applications. Imparted shock and vibration to the fuel cell in such applications could bring about numerous issues including clamping torque loosening, gas leakage, increased electrical resistance, and structural damage and breakage. Here, we provide a comprehensive review and critique of the literature focusing on the effects of mechanically harsh environment on PEMFCs, and at the end, we suggest two main future directions in FC technology research that need immediate attention: (i) developing a generic and adequately accurate dynamic model of PEMFCs to assess the dynamic response of FC devices, and (ii) designing effective and robust shock and vibration protection systems based on the developed models in (i).

  12. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. Software systems improve during the testing phase, while they normally do not change during the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up or even misused in some of the existing literature. Using different reliability concepts leads to different estimated reliability values and, in turn, to different reliability-based decisions. The difference between the estimated reliabilities is studied and the effect on the optimal release time is investigated
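
    As a concrete illustration of the distinction drawn above, the sketch below uses the Goel-Okumoto NHPP model, one common choice that is not necessarily the model used in the paper, with invented parameters, to contrast testing-phase reliability (where the fault content keeps decreasing during the mission) with operational reliability evaluated at a frozen release time.

    # Hedged sketch: testing vs operational reliability under a Goel-Okumoto NHPP
    # software reliability model. Parameters a (total expected faults) and b
    # (detection rate) are invented for illustration.
    from math import exp

    a, b = 120.0, 0.02           # hypothetical Goel-Okumoto parameters
    t_release = 200.0            # hypothetical end of the testing phase (hours)
    mission = 50.0               # mission length of interest (hours)

    def m(t):
        """Goel-Okumoto mean value function: expected cumulative failures by time t."""
        return a * (1.0 - exp(-b * t))

    # Testing reliability: the code keeps improving, so use the NHPP increment.
    r_testing = exp(-(m(t_release + mission) - m(t_release)))

    # Operational reliability: the code is frozen at release, so the failure
    # intensity at release, lambda = a*b*exp(-b*t_release), applies throughout.
    lam_release = a * b * exp(-b * t_release)
    r_operational = exp(-lam_release * mission)

    print(f"testing reliability over the mission:     {r_testing:.4f}")
    print(f"operational reliability over the mission: {r_operational:.4f}")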

  13. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  14. Utilizing clad piping to improve process plant piping integrity, reliability, and operations

    International Nuclear Information System (INIS)

    Chakravarti, B.

    1996-01-01

    During the past four years, carbon steel piping clad with type 304L (UNS S30403) stainless steel has been used with exceptional success to solve the flow-accelerated corrosion (FAC) problem in nuclear power plants. The product is designed to allow "like for like" replacement of damaged carbon steel components, where the carbon steel remains the pressure boundary and the type 304L (UNS S30403) stainless steel provides the corrosion allowance. More than 3000 feet of piping and 500 fittings in sizes from 6 to 36-in. NPS have been installed in the extraction steam and other lines of these power plants to improve reliability, eliminate inspection programs, reduce O and M costs and provide operational benefits. This concept of utilizing clad piping, with a conservatively selected high alloy material as the cladding, can provide similar significant benefits in industrial and process plants by controlling corrosion problems, minimizing maintenance cost, and improving operation and reliability to control performance and risks in a highly cost-effective manner. This paper will present various material combinations and applications that appear ideally suited for the use of clad piping components in process plants

  15. DIRAC reliable data management for LHCb

    CERN Document Server

    Smith, A C

    2008-01-01

    DIRAC, LHCb's Grid Workload and Data Management System, utilizes WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb's Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for these files in replica and bookkeeping catalogues, allowing dataset selection and localization. The DMS controls the movement of files in a redundant fashion whilst providing utilities for accessing all metadata. To do these tasks effectively the DMS requires complete self integrity between its components and external physical storage. The DMS provides highly redundant management of all LHCb data to leverage available storage resources and to manage transient errors in underlying services. It provides data driven and reliable distribution of files as well as reliable job output upload, utilizing VO Boxes at LHCb Tier1 sites ...

  16. Reliability tests for reactor internals rejuvenation technology

    International Nuclear Information System (INIS)

    Fujimaki, Katsumi; Hitoki, Yoichi; Otsubo, Toru; Uchiyama, Junichi

    1998-01-01

    Structural damage due to aging degradation of LWR reactor internals has been reported at several nuclear plants. NUPEC has started a project to test the reliability of the technology for rejuvenating reactor internals, which has been funded by the Ministry of International Trade and Industry (MITI) of Japan since 1995. The project follows the policy of a report that MITI formally issued in April 1996 summarizing the countermeasures to be considered for aging nuclear plants and equipment. This paper gives an outline of the test plans and results, which are directed at preventive maintenance before damage and repair after damage for aging degradation of reactor internals. The test results for the replacement methods for the ICM housing and the BWR core shroud have shown that the methods were reliable and that the structural integrity was appropriate based on the evaluation. (author)

  17. Ultra Reliable Closed Loop Life Support for Long Space Missions

    Science.gov (United States)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.

  18. Proving Test on the Reliability for Reactor Containment Vessel

    International Nuclear Information System (INIS)

    Takumi, K.; Nonaka, A.

    1988-01-01

    NUPEC (Nuclear Power Engineering Test Center) started an eight-year project, the Proving Test on the Reliability for Reactor Containment Vessel, in June 1987. The objective of this project is to confirm the integrity of containment vessels under severe accident conditions. This paper shows the outline of this project. The test items are (1) hydrogen mixing and distribution test, (2) hydrogen burning test, (3) iodine trapping characteristics test, and (4) structural behavior test. Based on the test results, computer codes are verified, and containment integrity is to be confirmed through analysis and evaluation with these codes.

  19. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we have treated the reliability assessment problem of low and high DG penetration level of active distribution system using the Monte Carlo simulation method. The problem is formulated as a two-case program, the program of low penetration simulation and the program of high penetration simulation. The load shedding strategy and the simulation process were introduced in detail during each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system was operated actively.
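
    The assessment above rests on sequential Monte Carlo sampling of component outages. As a rough, hedged illustration only (not the authors' program), the Python sketch below estimates annual outage hours at a single load point with and without a backup DG; the failure rate, repair time, islanding success probability and transfer time are all assumed example values.

        import random

        def annual_outage_hours(failures_per_year, repair_hours, dg_backup, years=50000):
            """Average yearly outage duration (hours) at one load point."""
            total = 0.0
            for _ in range(years):
                t = 0.0
                while True:
                    t += random.expovariate(failures_per_year)       # years to next feeder fault
                    if t > 1.0:
                        break
                    outage = random.expovariate(1.0 / repair_hours)  # repair duration in hours
                    if dg_backup and random.random() < 0.9:          # assumed islanding success rate
                        outage = min(outage, 0.5)                    # assumed 0.5 h transfer time
                    total += outage
            return total / years

        print("no DG  :", round(annual_outage_hours(0.3, 4.0, dg_backup=False), 3), "h/yr")
        print("with DG:", round(annual_outage_hours(0.3, 4.0, dg_backup=True), 3), "h/yr")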

  20. Reducing Reliability Uncertainties for Marine Renewable Energy

    Directory of Open Access Journals (Sweden)

    Sam D. Weller

    2015-11-01

    Full Text Available Technology Readiness Levels (TRLs) are a widely used metric of technology maturity and risk for marine renewable energy (MRE) devices. To date, a large number of device concepts have been proposed which have reached the early validation stages of development (TRLs 1–3). Only a handful of mature designs have attained pre-commercial development status following prototype sea trials (TRLs 7–8). In order to navigate through the aptly named “valley of death” (TRLs 4–6) towards commercial realisation, it is necessary for new technologies to be de-risked in terms of component durability and reliability. In this paper the scope of the reliability assessment module of the DTOcean Design Tool is outlined including aspects of Tool integration, data provision and how prediction uncertainties are accounted for. In addition, two case studies are reported of mooring component fatigue testing providing insight into long-term component use and system design for MRE devices. The case studies are used to highlight how test data could be utilised to improve the prediction capabilities of statistical reliability assessment approaches, such as the bottom–up statistical method.

  1. Processing of poultry feathers by alkaline keratin hydrolyzing enzyme from Serratia sp. HPC 1383.

    Science.gov (United States)

    Khardenavis, Anshuman A; Kapley, Atya; Purohit, Hemant J

    2009-04-01

    The present study describes the production and characterization of a feather hydrolyzing enzyme by Serratia sp. HPC 1383 isolated from tannery sludge, which was identified by the ability to form clear zones around colonies on milk agar plates. The proteolytic activity was expressed in terms of the micromoles of tyrosine released from substrate casein per mL per min (U/mL·min). Induction of the inoculum with protein was essential to stimulate higher activity of the enzyme, with 0.03% feathermeal in the inoculum resulting in increased enzyme activity (45 U/mL) that further increased to 90 U/mL when a 3-d-old inoculum was used. The highest enzyme activity, 130 U/mL, was observed in the presence of 0.2% yeast extract. The optimum assay temperature and pH for the enzyme were found to be 60 °C and 10.0, respectively. The enzyme had a half-life of 10 min at 60 °C, which improved slightly to 18 min in the presence of 1 mM Ca²⁺. Inhibition of the enzyme by phenylmethyl sulfonyl fluoride (PMSF) indicated that the enzyme was a serine protease. The enzyme was also partially inhibited (39%) by the reducing agent β-mercaptoethanol and by divalent metal ions such as Zn²⁺ (41% inhibition). However, Ca²⁺ and Fe²⁺ resulted in increases in enzyme activity of 15% and 26%, respectively. The kinetic constants of the keratinase were found to be 3.84 µM (Km) and 108.7 µM/mL·min (Vmax). These results suggest that this extracellular keratinase may be a useful alternative and eco-friendly route for handling the abundant amount of waste feathers or for applications in other industrial processes.
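
    For reference, the quoted kinetic constants belong to the standard Michaelis–Menten rate law; only the association of the constants with this keratinase is taken from the abstract, the functional form itself is textbook material:

        v = \frac{V_{\max}\,[S]}{K_m + [S]}, \qquad
        K_m \approx 3.84\ \mu\mathrm{M}, \quad
        V_{\max} \approx 108.7\ \mu\mathrm{M\,mL^{-1}\,min^{-1}}

    At a substrate concentration equal to Km the rate is half of Vmax, which is how the two constants are normally read off a nonlinear or Lineweaver–Burk fit.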

  2. Mission reliability of semi-Markov systems under generalized operational time requirements

    International Nuclear Information System (INIS)

    Wu, Xiaoyue; Hillston, Jane

    2015-01-01

    Mission reliability of a system depends on specific criteria for mission success. To evaluate the mission reliability of some mission systems that do not need to work normally for the whole mission time, two types of mission reliability for such systems are studied. The first type corresponds to the mission requirement that the system must remain operational continuously for a minimum time within the given mission time interval, while the second corresponds to the mission requirement that the total operational time of the system within the mission time window must be greater than a given value. Based on Markov renewal properties, matrix integral equations are derived for semi-Markov systems. Numerical algorithms and a simulation procedure are provided for both types of mission reliability. Two examples are used for illustration purposes. One is a one-unit repairable Markov system, and the other is a cold standby semi-Markov system consisting of two components. By the proposed approaches, the mission reliability of systems with time redundancy can be more precisely estimated to avoid possible unnecessary redundancy of system resources. - Highlights: • Two types of mission reliability under generalized requirements are defined. • Equations for both types of reliability are derived for semi-Markov systems. • Numerical methods are given for solving both types of reliability. • Simulation procedure is given for estimating both types of reliability. • Verification of the numerical methods is given by the results of simulation
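
    As a companion to the simulation procedure mentioned above, the Python sketch below (a toy under assumed exponential up and repair times, a special case of the semi-Markov setting, and not the authors' algorithm) estimates both reliability types for a one-unit repairable system; the mission time T, the minimum continuous spell tau, the required total operating time c, MTTF and MTTR are made-up values.

        import random

        def mission_reliability(T, tau, c, mttf=50.0, mttr=5.0, runs=20000):
            """Estimate type-1 (continuous spell >= tau) and type-2 (total up time >= c)."""
            hit1 = hit2 = 0
            for _ in range(runs):
                t = up_total = longest = 0.0
                while t < T:
                    up = min(random.expovariate(1.0 / mttf), T - t)   # operational spell
                    up_total += up
                    longest = max(longest, up)
                    t += up
                    if t >= T:
                        break
                    t += random.expovariate(1.0 / mttr)               # repair spell
                hit1 += longest >= tau
                hit2 += up_total >= c
            return hit1 / runs, hit2 / runs

        r1, r2 = mission_reliability(T=100.0, tau=40.0, c=80.0)
        print(f"type-1 mission reliability ~ {r1:.3f}, type-2 ~ {r2:.3f}")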

  3. Improving Reliability and Durability of Efficient and Clean Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Prabhakar [Univ. of Connecticut, Storrs, CT (United States)

    2010-08-01

    Overall objective of the research program was to develop an in-depth understanding of the degradation processes in advanced electrochemical energy conversion systems. It was also the objective of the research program to transfer the technology to participating industries for implementation in manufacturing of cost effective and reliable integrated systems.

  4. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm, as a non-trivial extension of the Cloud, is considered and the reliability of the networks of smart devices is discussed. Combining the reliability...... requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible....

  5. Weighted integration of short-term memory and sensory signals in the oculomotor system.

    Science.gov (United States)

    Deravet, Nicolas; Blohm, Gunnar; de Xivry, Jean-Jacques Orban; Lefèvre, Philippe

    2018-05-01

    Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by visual and prior information reliability. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which supports modeling work. Furthermore, we suggest that saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.
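
    The reliability-based integration invoked here is usually formalised as inverse-variance (precision) weighting of the two sources; a minimal statement of that standard form, with notation chosen for illustration rather than taken from the paper:

        \hat{v} = w\, v_{\mathrm{vis}} + (1 - w)\, v_{\mathrm{prior}}, \qquad
        w = \frac{1/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{prior}}^{2}}

    Degrading visual reliability (larger sigma_vis) shifts the weight toward the stored target-motion estimate, which is the behavioural signature reported here for both pursuit velocity and catch-up saccade amplitude.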

  6. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where...... the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena...... identification. Application of the proposed method can be found in many real world systems....
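
    To make the role of failure-effect correlation concrete, the Python toy below (a hedged sketch under assumed Gaussian damage, an arbitrary capacity threshold and a shared load factor, not the authors' physics-of-failure model) compares the failure probability of a series system with correlated component damage against the usual independence assumption.

        import math
        import random

        def series_system_pf(n_comp=3, rho=0.7, threshold=2.0, runs=200000):
            """P(system failure) when component damage shares a common load factor."""
            failures = 0
            for _ in range(runs):
                common = random.gauss(0.0, 1.0)                  # shared (common-cause) part
                failed = False
                for _ in range(n_comp):
                    private = random.gauss(0.0, 1.0)             # component-specific part
                    damage = math.sqrt(rho) * common + math.sqrt(1.0 - rho) * private
                    if damage > threshold:
                        failed = True
                failures += failed
            return failures / runs

        pf_correlated = series_system_pf()
        pf_independent = 1.0 - (1.0 - 0.0228) ** 3               # 0.0228 ~ P(N(0,1) > 2)
        print(f"correlated: {pf_correlated:.4f}  independent assumption: {pf_independent:.4f}")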

  7. Evaluation of aileron actuator reliability with censored data

    Directory of Open Access Journals (Sweden)

    Li Huaiyuan

    2015-08-01

    Full Text Available To enhance the reliability of the aileron of the Airbus new-generation A350XWB, an evaluation of aileron reliability on the basis of maintenance data is presented in this paper. Practical maintenance data contain a large number of censored samples, whose information uncertainty makes it hard to evaluate the reliability of the aileron actuator. Considering that the true lifetime of a censored sample has the same distribution as complete samples, if a censored sample is transformed into a complete sample, the conversion frequency of the censored sample can be estimated from the frequency of complete samples. On the one hand, standard life table estimation and the product limit method are improved on the basis of this conversion frequency, enabling accurate estimation for various censored samples. On the other hand, by taking this frequency as one of the weight factors and integrating the variance of order statistics under a standard distribution, a weighted least squares estimation is formed for accurately estimating various censored samples. Extensive experiments and simulations show that the reliabilities given by the improved life table and the improved product limit method are closer to the true value and more conservative; moreover, the weighted least squares estimate (WLSE), with the conversion frequency of censored samples and the variances of order statistics as the weights, can still estimate accurately with a high proportion of censored data in the samples. The algorithm in this paper performs well and can accurately estimate the reliability of the aileron actuator even with small samples and high censoring rates. This research has certain significance in theory and engineering practice.
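
    The product limit method referred to above is the Kaplan–Meier estimator; the sketch below implements the plain, textbook version (not the improved one proposed in the paper) on made-up flight-hour data with right censoring, simply to show where the censored samples enter the calculation.

        def kaplan_meier(times, observed):
            """Return [(t, R(t))] for right-censored data; observed[i] = True means failure."""
            data = sorted(zip(times, observed))
            n_at_risk = len(data)
            survival, curve, i = 1.0, [], 0
            while i < len(data):
                t = data[i][0]
                deaths = sum(1 for tt, ev in data if tt == t and ev)
                censored = sum(1 for tt, ev in data if tt == t and not ev)
                if deaths:
                    survival *= 1.0 - deaths / n_at_risk
                    curve.append((t, survival))
                n_at_risk -= deaths + censored
                i += deaths + censored
            return curve

        # illustrative flight hours; False marks a unit removed while still working
        times    = [120, 150, 150, 200, 240, 240, 300, 360]
        observed = [True, True, False, True, False, True, True, False]
        for t, r in kaplan_meier(times, observed):
            print(f"R({t}) = {r:.3f}")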

  8. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).
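
    The heart of such an on-demand cloud manager is a demand-driven scaling loop. The Python sketch below is purely conceptual, does not reproduce the ROCED code base or any real batch-system or OpenStack API, and uses placeholder callbacks for the site-specific parts.

        import time

        def required_workers(idle_jobs, jobs_per_worker=8):
            """Number of virtual worker nodes needed for the current queue."""
            return (idle_jobs + jobs_per_worker - 1) // jobs_per_worker

        def scaling_loop(get_idle_jobs, get_running_workers, boot_worker, drain_worker):
            """All four arguments are assumed, site-specific callbacks."""
            while True:
                demand = required_workers(get_idle_jobs())
                running = get_running_workers()
                for _ in range(max(0, demand - running)):
                    boot_worker()        # start a VM and let it join the batch system
                for _ in range(max(0, running - demand)):
                    drain_worker()       # stop accepting jobs, then terminate the VM
                time.sleep(60)           # re-evaluate once per minute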

  9. Analyzing the reliability of shuffle-exchange networks using reliability block diagrams

    International Nuclear Information System (INIS)

    Bistouni, Fathollah; Jahanshahi, Mohsen

    2014-01-01

    Supercomputers and multi-processor systems are comprised of thousands of processors that need to communicate in an efficient way. One reasonable solution would be the utilization of multistage interconnection networks (MINs), where the challenge is to analyze the reliability of such networks. One of the methods to increase the reliability and fault-tolerance of MINs is the use of additional switching stages. Therefore, recently, the reliability of one of the most common MINs, namely the shuffle-exchange network (SEN), has been evaluated by investigating the impact of increasing the number of switching stages. It was concluded that the reliability of SEN with one additional stage (SEN+) is better than that of both SEN and SEN with two additional stages (SEN+2), and that SEN itself is more reliable than SEN+2. Here we re-evaluate the reliability of these networks; the results of the terminal, broadcast, and network reliability analyses demonstrate that SEN+ and SEN+2 consistently outperform SEN and are very similar to each other in terms of reliability. - Highlights: • The impact of increasing the number of stages on the reliability of MINs is investigated. • The RBD method, as an accurate method, is used for the reliability analysis of MINs. • Complex series–parallel RBDs are used to determine the reliability of the MINs. • All measures of reliability (i.e. terminal, broadcast, and network reliability) are analyzed. • All reliability equations are calculated for different network sizes N×N
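
    For intuition about the series–parallel evaluation, the Python toy below composes stage reliabilities in series and redundant paths in parallel; the per-switch reliability and the stage/path counts are arbitrary, and this is deliberately much simpler than the exact RBDs of an N×N SEN, SEN+ or SEN+2.

        def parallel(rels):
            """Redundant paths: 1 minus the product of path unreliabilities."""
            q = 1.0
            for r in rels:
                q *= 1.0 - r
            return 1.0 - q

        def series(rels):
            """Stages in series: product of stage reliabilities."""
            p = 1.0
            for r in rels:
                p *= r
            return p

        r_switch = 0.95
        plain_chain     = series([r_switch] * 3)                        # single path per stage
        redundant_chain = series([parallel([r_switch, r_switch])] * 4)  # two paths per stage
        print(f"single-path chain: {plain_chain:.4f}, redundant chain: {redundant_chain:.4f}")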

  10. 25. MPA-seminar: safety and reliability of plant technology with special emphasis on safety and reliability - integrity proofs, qualification of components, damage prevention. Vol. 1. Papers 1-29

    International Nuclear Information System (INIS)

    1999-01-01

    The proceedings of the 25th MPA Seminar on 'Safety and Reliability of Plant Technology' were issued in two volumes. The main topics of the first volume are: 1. Structural and safety analysis, 2. Reliability analysis, 3. Fracture mechanics, and 4. Nondestructive Testing.

  11. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. As the size of the software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults detected and removed through each debugging becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of a stochastic differential equation, performs comparatively better than existing NHPP-based models.
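
    To give a feel for what an Itô-type SRGM looks like numerically, the Python sketch below runs an Euler–Maruyama simulation of a generic fault-detection SDE of the form dN = b(t)(a - N) dt + sigma (a - N) dW; the logistic rate b(t), the noise level and all parameter values are assumptions for illustration and are not claimed to be the exact model proposed in the paper.

        import math
        import random

        def simulate_srgm(a=500.0, b0=0.1, k=0.05, sigma=0.03, T=60.0, dt=0.01):
            """One sample path of cumulative detected faults N(t) on [0, T]."""
            n, t, path = 0.0, 0.0, []
            while t < T:
                b_t = b0 / (1.0 + k * math.exp(-b0 * t))      # assumed logistic detection rate
                dW = random.gauss(0.0, math.sqrt(dt))         # Brownian increment
                n += b_t * (a - n) * dt + sigma * (a - n) * dW
                n = min(max(n, 0.0), a)                       # keep the path within [0, a]
                t += dt
                path.append((t, n))
            return path

        final_t, final_n = simulate_srgm()[-1]
        print(f"faults detected in one simulated path by t = {final_t:.0f}: {final_n:.1f}")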

  12. High-Reliable PLC RTOS Development and RPS Structure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H. [Enersys Co., Daejeon (Korea, Republic of)

    2008-04-15

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports the development of POSAFE-Q with the development of a high-reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety critical software is essential to make the digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V, the reliability and safety of a system can be improved, and development activities like software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  13. High-Reliable PLC RTOS Development and RPS Structure Analysis

    International Nuclear Information System (INIS)

    Sohn, H. S.; Song, D. Y.; Sohn, D. S.; Kim, J. H.

    2008-04-01

    One of the KNICS objectives is to develop a platform for Nuclear Power Plant (NPP) I and C (Instrumentation and Control) systems, especially the plant protection system. The developed platform is POSAFE-Q, and this work supports the development of POSAFE-Q with the development of a high-reliable real-time operating system (RTOS) and programmable logic device (PLD) software. Another KNICS objective is to develop safety I and C systems, such as the Reactor Protection System (RPS) and the Engineered Safety Feature-Component Control System (ESF-CCS). This work plays an important role in the structure analysis for the RPS. Validation and verification (V and V) of safety critical software is essential to make the digital plant protection system highly reliable and safe. Generally, the reliability and safety of a software based system can be improved by a strict quality assurance framework including the software development itself. In other words, through V and V, the reliability and safety of a system can be improved, and development activities like software requirement specification, software design specification, component tests, integration tests, and system tests shall be appropriately documented for V and V.

  14. Developing Ultra Reliable Life Support for the Moon and Mars

    Science.gov (United States)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.

  15. Integrated security system definition

    International Nuclear Information System (INIS)

    Campbell, G.K.; Hall, J.R. II

    1985-01-01

    The objectives of an integrated security system are to detect intruders and unauthorized activities with a high degree of reliability and to deter and delay them until effective response/engagement can be accomplished. Definition of an effective integrated security system requires proper application of a system engineering methodology. This paper summarizes a methodology and describes its application to the problem of integrated security system definition. This process includes requirements identification and analysis and allocation of identified system requirements to the subsystem level, and provides a basis for identification of synergistic subsystem elements and for synthesis into an integrated system. The paper discusses how this is accomplished, emphasizing at each step how system integration and subsystem synergism is considered. The paper concludes with the product of the process: implementation of an integrated security system.

  16. The French nuclear power plant reactor building containment contributions of prestressing and concrete performances in reliability improvements and cost savings

    International Nuclear Information System (INIS)

    Rouelle, P.; Roy, F.

    1998-01-01

    Electricite de France's N4 CHOOZ B nuclear power plant, comprising two units of the world's largest PWR model (1450 MWe each), has earned Electric Power International's 1997 Powerplant Award. This lead NPP for EDF's N4 series has been improved notably in terms of civil works. The presentation focuses on the Reactor Building's inner containment wall, which is one of the main civil structures from a technical and safety point of view. In order to take into account the necessary evolution of the concrete technical specification, such as compressive strength, low creep and shrinkage, HSC/HPC has been used on the last N4 unit, Civaux 2. As a result of the use of this type of high-performance concrete, the containment withstands a higher internal pressure related to severe accidents and ensures a higher level of leak-tightness, thus improving the overall safety of the NPP. On that occasion, a new type of prestressing has been tested locally through 55 C 15 S tendons using a new C 1500 FE Jack. These updated civil works techniques shall allow EDF to ensure a reactor containment lifespan of more than 50 years. The gains in terms of reliability and cost savings of these improved techniques are developed hereafter.

  17. Resilience Design Patterns - A Structured Approach to Resilience at Extreme Scale (version 1.1)

    Energy Technology Data Exchange (ETDEWEB)

    Hukerikar, Saurabh [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Engelmann, Christian [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-01

    Reliability is a serious concern for future extreme-scale high-performance computing (HPC) systems. Projections based on the current generation of HPC systems and technology roadmaps suggest the prevalence of very high fault rates in future systems. The errors resulting from these faults will propagate and generate various kinds of failures, which may result in outcomes ranging from result corruptions to catastrophic application crashes. Therefore the resilience challenge for extreme-scale HPC systems requires management of various hardware and software technologies that are capable of handling a broad set of fault models at accelerated fault rates. Also, due to practical limits on power consumption in HPC systems future systems are likely to embrace innovative architectures, increasing the levels of hardware and software complexities. As a result the techniques that seek to improve resilience must navigate the complex trade-off space between resilience and the overheads to power consumption and performance. While the HPC community has developed various resilience solutions, application-level techniques as well as system-based solutions, the solution space of HPC resilience techniques remains fragmented. There are no formal methods and metrics to investigate and evaluate resilience holistically in HPC systems that consider impact scope, handling coverage, and performance & power efficiency across the system stack. Additionally, few of the current approaches are portable to newer architectures and software environments that will be deployed on future systems. In this document, we develop a structured approach to the management of HPC resilience using the concept of resilience-based design patterns. A design pattern is a general repeatable solution to a commonly occurring problem. We identify the commonly occurring problems and solutions used to deal with faults, errors and failures in HPC systems. Each established solution is described in the form of a pattern that
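
    As a concrete illustration of the kind of repeatable solution such a catalogue captures, the Python sketch below shows a minimal checkpoint-and-rollback pattern; it follows the general idea of state-preservation patterns rather than the specific pattern names or taxonomy of the report, and the file name and step counts are arbitrary.

        import os
        import pickle

        CKPT = "state.ckpt"   # assumed checkpoint location

        def checkpoint(state):
            with open(CKPT, "wb") as f:
                pickle.dump(state, f)

        def restore(default):
            if os.path.exists(CKPT):
                with open(CKPT, "rb") as f:
                    return pickle.load(f)
            return default

        def run(steps=1000, ckpt_every=100):
            state = restore({"step": 0, "value": 0.0})   # roll back to the last checkpoint
            for step in range(state["step"], steps):
                state["value"] += 1.0                    # stand-in for one unit of real work
                state["step"] = step + 1
                if state["step"] % ckpt_every == 0:
                    checkpoint(state)                    # bound the work lost to a failure
            return state

        print(run()["value"])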

  18. Tracking the evolution of hospice palliative care in Canada: A comparative case study analysis of seven provinces

    Directory of Open Access Journals (Sweden)

    Richards Judy-Lynn

    2010-06-01

    of the established system of care were taken, often out of necessity. Three kinds of circumventions were identified from the data: (1) interventions to shift the system (e.g., the role of advocacy); (2) service innovations (e.g., educational initiatives); and (3) new alternative structures (e.g., the establishment of independent hospice organizations). Overall, the evolution of HPC across the case study provinces has been markedly slow, but steady and continuous. Conclusions: HPC in Canada remains at the margins of the health care system. Its integration into the primary health care system may ensure dedicated and ongoing funding, enhanced access, quality and service responsiveness. Though demographics are expected to influence HPC demand in Canada, our study confirms that concerned citizens, advocacy organizations and local champions will continue to be the agents of change that make the necessary and lasting impacts on HPC in Canada.

  19. Tracking the evolution of hospice palliative care in Canada: a comparative case study analysis of seven provinces.

    Science.gov (United States)

    Williams, Allison M; Crooks, Valorie A; Whitfield, Kyle; Kelley, Mary-Lou; Richards, Judy-Lynn; DeMiglio, Lily; Dykeman, Sarah

    2010-06-01

    , often out of necessity. Three kinds of circumventions were identified from the data: (1) interventions to shift the system (e.g., the role of advocacy); (2) service innovations (e.g., educational initiatives); and (3) new alternative structures (e.g., the establishment of independent hospice organizations). Overall, the evolution of HPC across the case study provinces has been markedly slow, but steady and continuous. HPC in Canada remains at the margins of the health care system. Its integration into the primary health care system may ensure dedicated and ongoing funding, enhanced access, quality and service responsiveness. Though demographics are expected to influence HPC demand in Canada, our study confirms that concerned citizens, advocacy organizations and local champions will continue to be the agents of change that make the necessary and lasting impacts on HPC in Canada.

  20. Reliability data banks

    International Nuclear Information System (INIS)

    Cannon, A.G.; Bendell, A.

    1991-01-01

    Following an introductory chapter on Reliability - what it is, why it is needed, and how it is achieved and measured - the principles of reliability data bases and analysis methodologies are the subject of the next two chapters. Achievements due to the development of data banks are mentioned for different industries in the next chapter. FACTS, a comprehensive information system for industrial safety and reliability data collection in process plants, is covered next. CREDO, the Central Reliability Data Organization, is described in the next chapter and is indexed separately, as is the chapter on DANTE, the fabrication reliability data analysis system. Reliability data banks at Electricite de France and the IAEA's experience in compiling a generic component reliability data base are also separately indexed. The European reliability data system, ERDS, and the development of a large data bank come next. The last three chapters look at 'Reliability data banks - friend, foe or a waste of time?' and future developments. (UK)