WorldWideScience

Sample records for prototype p2p supercomputing

  1. Towards P2P XML Database Technology

    NARCIS (Netherlands)

    Y. Zhang (Ying)

    2007-01-01

    To ease the development of data-intensive P2P applications, we envision a P2P XML Database Management System (P2P XDBMS) that acts as a database middle-ware, providing a uniform database abstraction on top of a dynamic set of distributed data sources. In this PhD work, we research which

  2. Anonymity in P2P Systems

    Science.gov (United States)

    Manzanares-Lopez, Pilar; Muñoz-Gea, Juan Pedro; Malgosa-Sanahuja, Josemaria; Sanchez-Aarnoutse, Juan Carlos

    In recent years, the use of peer-to-peer (P2P) applications to share and exchange knowledge among people around the world has grown exponentially. It is therefore understandable that, as in any successful communication mechanism used by large numbers of human beings, anonymity can be a desirable characteristic in this scenario. Anonymity in P2P networks can be obtained by means of different methods, the most significant being broadcast protocols, dining-cryptographer (DC) nets and multiple-hop paths. Each of these methods can be tuned in order to build a truly anonymous P2P application. In addition, there is a mathematical tool called entropy that can be used in some scenarios to quantify anonymity in communication networks. In some cases it can be calculated analytically, but in others it is necessary to use simulation to obtain the network entropy.
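
    The entropy measure mentioned in this abstract is simple to compute. The sketch below (Python, written for this listing rather than taken from the paper) assumes the attacker assigns a probability to each candidate sender and reports the Shannon entropy together with the normalised degree of anonymity.

```python
import math

def anonymity_entropy(probabilities):
    """Shannon entropy (bits) of the attacker's probability assignment
    over candidate senders; log2(N) corresponds to perfect anonymity."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Example: 4 suspected senders, one considered twice as likely as the others
probs = [0.4, 0.2, 0.2, 0.2]
h = anonymity_entropy(probs)
h_max = math.log2(len(probs))            # entropy of the uniform distribution
print(f"entropy = {h:.3f} bits, degree of anonymity = {h / h_max:.3f}")
```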

  3. Managing P2P services via the IMS

    NARCIS (Netherlands)

    Liotta, A.; Lin, L.

    2007-01-01

    The key aim of our work was to illustrate the benefits and means to deploy P2P services via the IMS. Having demonstrated the technical viability of P2P-IMS we have also found a way to add a new management dimension to existing P2P systems. P2P-IMS comes with a natural "data management" mechanism,

  4. Network-Aware DHT-Based P2P Systems

    Science.gov (United States)

    Fayçal, Marguerite; Serhrouchni, Ahmed

    P2P networks are laid over existing IP networks and infrastructure. This chapter investigates the relation between both layers, details the motivations for network awareness in P2P systems, and elucidates the requirements P2P systems have to meet for efficient network awareness. Since new P2P systems are mostly based on DHTs, we also present and analyse DHT-based architectures. After a brief presentation of different existing network-awareness solutions, the chapter goes on to discuss effective cooperation between P2P traffic and network providers' business agreements, and introduces emerging DHT-based P2P systems that are network aware through a semantic defined for resource sharing. These new systems also ensure a certain degree of context awareness. They are analyzed and compared before the chapter closes with prospects for network awareness in P2P systems.

  5. Risk Management of P2P Internet Financing Service Platform

    Science.gov (United States)

    Yalei, Li

    2017-09-01

    Since the world’s first P2P Internet financing service platform, Zopa, was introduced in the UK in 2005, P2P Internet financing service platforms have developed rapidly as part of the “Internet+” trend. In 2007, China’s first P2P platform, “filming loan”, was established, marking the entry of P2P Internet financing service platforms into China and the start of their rapid development. At the same time, China’s P2P Internet financing service platforms have also exhibited various forms of risk. This paper focuses on analysing the causes of risk in P2P Internet financing service platforms and how the risk management process performs. It provides an Internet risk management plan, and explains the risk management system of the whole P2P Internet financing service platform and its future development direction.

  6. (p,2p) experiments at the University of Maryland cyclotron

    International Nuclear Information System (INIS)

    Roos, P.G.

    1976-11-01

    Some of the (p,2p) work which has been carried out at the Maryland Cyclotron is discussed. A brief introduction to the (p,2p) reaction is presented, and the types of experimental techniques utilized in (p,2p) studies are discussed. A brief introduction is given to the various theoretical treatments presently available to analyze (p,2p) reaction data. Secondly, experimental and theoretical studies of (p,2p) on d, 3He, and 4He carried out by the Maryland group are presented. Thirdly, (p,2p) results are discussed for 6Li, 7Li, and 12C at 100 MeV. Fourthly, the effects of distortion on the experimental data are considered by presenting theoretical calculations for 12C and 40Ca at various bombarding energies.

  7. Cryptocurrency Networks: A New P2P Paradigm

    Directory of Open Access Journals (Sweden)

    Sergi Delgado-Segura

    2018-01-01

    P2P networks are the mechanism used by cryptocurrencies to disseminate system information while keeping the whole system as decentralized as possible. Cryptocurrency P2P networks have new characteristics that pose new challenges and avoid some problems of existing P2P networks. By characterizing the most relevant cryptocurrency network, Bitcoin, we provide details on different properties of cryptocurrency networks and their similarities and differences with standard P2P network paradigms. Our study allows us to conclude that cryptocurrency networks present a new paradigm of P2P networks due to the mechanisms they use to achieve high resilience and security. With this new paradigm, interesting research lines can be further developed, both in the focused field of P2P cryptocurrency networks and also when such networks are combined with other distributed scenarios.

  8. Supporting Collaboration and Creativity Through Mobile P2P Computing

    Science.gov (United States)

    Wierzbicki, Adam; Datta, Anwitaman; Żaczek, Łukasz; Rzadca, Krzysztof

    Among the many potential applications of mobile P2P systems, collaboration applications are some of the most prominent. Examples such as Groove (although not intended for mobile networks), collaboration tools for disaster recovery (the WORKPAD project), and Skype's collaboration extensions all demonstrate the potential of P2P collaborative applications. Yet the development of such applications for mobile P2P systems is still difficult because of the lack of middleware.

  9. Controlling P2P File-Sharing Networks Traffic

    OpenAIRE

    García Pineda, Miguel; HAMMOUMI, MOHAMMED; Canovas Solbes, Alejandro; Lloret, Jaime

    2011-01-01

    Since the appearance of Peer-To-Peer (P2P) file-sharing networks some time ago, many Internet users have chosen this technology to share and search for programs, videos, music, documents, etc. The total number of P2P file-sharing users has risen and fallen over the last decade, depending on the creation or demise of some well-known P2P file-sharing systems. P2P file-sharing network traffic is currently overloading some data networks and it is a major headache for netw...

  10. Resource trade-off in P2P streaming

    NARCIS (Netherlands)

    Alhaisoni, M.; Liotta, A.; Ghanbari, M.

    2009-01-01

    P2P TV has emerged as a powerful alternative to the traditional client-server paradigm for multimedia streaming. It has proven to be a valid substitute for online applications which offer video-on-demand and real-time video. This is mainly due to the scalability and resiliency that P2P

  11. Comparing Pedophile Activity in Different P2P Systems

    OpenAIRE

    Raphaël Fournier; Thibault Cholez; Matthieu Latapy; Isabelle Chrisment; Clémence Magnien; Olivier Festor; Ivan Daniloff

    2014-01-01

    Peer-to-peer (P2P) systems are widely used to exchange content over the Internet. Knowledge of pedophile activity in such networks remains limited, despite having important social consequences. Moreover, though there are different P2P systems in use, previous academic works on this topic focused on one system at a time and their results are not directly comparable. We design a methodology for comparing KAD and eDonkey, two P2P systems among the most prominent ones and ...

  12. Data Sharing in DHT Based P2P Systems

    Science.gov (United States)

    Roncancio, Claudia; Del Pilar Villamil, María; Labbé, Cyril; Serrano-Alvarado, Patricia

    The evolution of peer-to-peer (P2P) systems triggered the building of large-scale distributed applications. The main application domain is data sharing across a very large number of highly autonomous participants. Building such data sharing systems is particularly challenging because of the “extreme” characteristics of P2P infrastructures: massive distribution, high churn rate, no global control, potentially untrusted participants... This article focuses on declarative querying support, query optimization and data privacy in a major class of P2P systems, those based on Distributed Hash Tables (P2P DHT). The usual approaches and the algorithms used by classic distributed systems and databases for providing data privacy and querying services are not well suited to P2P DHT systems. A considerable amount of work was required to adapt them to the new challenges such systems present. This paper describes the most important solutions found. It also identifies important future research trends in data management in P2P DHT systems.

  13. P2P Data Management in Mobile Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Nida Sahar Sayeda

    2013-04-01

    The rapid growth in wireless technologies has made wireless communication an important means of transporting data across different domains. Likewise, there are many potential applications that can be deployed using WSNs (Wireless Sensor Networks). However, very few applications are deployed in real life due to the uncertainty and dynamics of the environment and scarce resources. This makes data management in WSNs a challenging area in which to find an approach that suits their characteristics. Currently, the trend is to find efficient data management schemes using evolving technologies, i.e. P2P (Peer-to-Peer) systems. Many P2P approaches have been applied in WSNs to carry out data management due to the similarities between WSNs and P2P. Along with the similarities, there are differences too, which make P2P protocols inefficient in WSNs. Furthermore, to increase efficiency and to exploit the delay-tolerant nature of WSNs wherever possible, mobile WSNs are gaining importance. This creates a three-dimensional problem space to consider: mobility, WSNs and P2P. In this paper, an efficient algorithm is proposed for data management using P2P techniques for mobile WSNs. The real-world implementation and deployment of the proposed algorithm is also presented.

  14. Determinants of Default in P2P Lending.

    Directory of Open Access Journals (Sweden)

    Carlos Serrano-Cinca

    This paper studies P2P lending and the factors explaining loan default. This is an important issue because in P2P lending individual investors bear the credit risk, instead of financial institutions, which are experts in dealing with this risk. P2P lenders suffer a severe problem of information asymmetry, because they are at a disadvantage facing the borrower. For this reason, P2P lending sites provide potential lenders with information about borrowers and their loan purpose. They also assign a grade to each loan. The empirical study is based on loan data collected from Lending Club (N = 24,449) from 2008 to 2014, first analyzed by using univariate means tests and survival analysis. Factors explaining default are loan purpose, annual income, current housing situation, credit history and indebtedness. Secondly, a logistic regression model is developed to predict defaults. The grade assigned by the P2P lending site is the most predictive factor of default, but the accuracy of the model is improved by adding other information, especially the borrower's debt level.

  15. Determinants of Default in P2P Lending.

    Science.gov (United States)

    Serrano-Cinca, Carlos; Gutiérrez-Nieto, Begoña; López-Palacios, Luz

    2015-01-01

    This paper studies P2P lending and the factors explaining loan default. This is an important issue because in P2P lending individual investors bear the credit risk, instead of financial institutions, which are experts in dealing with this risk. P2P lenders suffer a severe problem of information asymmetry, because they are at a disadvantage facing the borrower. For this reason, P2P lending sites provide potential lenders with information about borrowers and their loan purpose. They also assign a grade to each loan. The empirical study is based on loans' data collected from Lending Club (N = 24,449) from 2008 to 2014 that are first analyzed by using univariate means tests and survival analysis. Factors explaining default are loan purpose, annual income, current housing situation, credit history and indebtedness. Secondly, a logistic regression model is developed to predict defaults. The grade assigned by the P2P lending site is the most predictive factor of default, but the accuracy of the model is improved by adding other information, especially the borrower's debt level.
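
    As a rough illustration of the second step described in this abstract, the sketch below fits a logistic regression to predict default from a few loan attributes. The data are synthetic and the variable choices are assumptions made for this listing; this is not the study's dataset or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(1, 8, n),        # loan grade encoded 1 (A) .. 7 (G)
    rng.normal(60, 20, n),        # annual income (in thousands)
    rng.uniform(0, 40, n),        # debt-to-income ratio
])
# Synthetic labels: worse grade and higher indebtedness raise default odds
logit = -4.0 + 0.5 * X[:, 0] - 0.01 * X[:, 1] + 0.05 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))
print("coefficients:", model.coef_.round(4))
```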

  16. Mobile P2P Web Services Using SIP

    Directory of Open Access Journals (Sweden)

    Guido Gehlen

    2007-01-01

    Telecommunication networks and the Internet are growing together. Peer-to-Peer (P2P) services which were originally offered by network providers, like telephony and messaging, are now also provided through VoIP and Instant Messaging (IM) by Internet service providers. The IP Multimedia Subsystem (IMS) is the answer of the telecommunication industry to this trend and aims at providing Internet P2P and multimedia services controlled by the network operators. The IMS provides mobility and session management as well as message routing, security, and billing.

  17. The P2P approach to interorganizational workflows

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Weske, M.H.; Dittrich, K.R.; Geppert, A.; Norrie, M.C.

    2001-01-01

    This paper describes in an informal way the Public-To-Private (P2P) approach to interorganizational workflows, which is based on a notion of inheritance. The approach consists of three steps: (1) create a common understanding of the interorganizational workflow by specifying a shared public

  18. Measurement and Analysis of P2P IPTV Program Resource

    Directory of Open Access Journals (Sweden)

    Wenxian Wang

    2014-01-01

    With the rapid development of P2P technology, P2P IPTV applications have received more and more attention, and program resource distribution is very important to these applications. In order to collect IPTV program resources, a distributed multi-protocol crawler is proposed. The crawler collected more than 13 million items of IPTV program information from 2009 to 2012. In addition, the distribution of IPTV programs is dispersed and loosely organized, resulting in chaotic program names, which obstructs searching for and organizing programs. Thus, we focus on the characteristic analysis of program resources, including the distributions of program name length, the entropy of character types, and the hierarchy depth of programs. These analyses reveal the disorderly naming conventions of P2P IPTV programs. The analysis results can help to purify and extract useful information from chaotic names for better retrieval and can accelerate the automatic sorting of programs and the establishment of an IPTV repository. In order to represent the popularity of programs and to predict user behavior and the popularity of hot programs over a period, we also put forward an analytical model of hot programs.

  19. Measurement and analysis of P2P IPTV program resource.

    Science.gov (United States)

    Wang, Wenxian; Chen, Xingshu; Wang, Haizhou; Zhang, Qi; Wang, Cheng

    2014-01-01

    With the rapid development of P2P technology, P2P IPTV applications have received more and more attention, and program resource distribution is very important to these applications. In order to collect IPTV program resources, a distributed multi-protocol crawler is proposed. The crawler collected more than 13 million items of IPTV program information from 2009 to 2012. In addition, the distribution of IPTV programs is dispersed and loosely organized, resulting in chaotic program names, which obstructs searching for and organizing programs. Thus, we focus on the characteristic analysis of program resources, including the distributions of program name length, the entropy of character types, and the hierarchy depth of programs. These analyses reveal the disorderly naming conventions of P2P IPTV programs. The analysis results can help to purify and extract useful information from chaotic names for better retrieval and can accelerate the automatic sorting of programs and the establishment of an IPTV repository. In order to represent the popularity of programs and to predict user behavior and the popularity of hot programs over a period, we also put forward an analytical model of hot programs.

  20. Comparing Pedophile Activity in Different P2P Systems

    Directory of Open Access Journals (Sweden)

    Raphaël Fournier

    2014-07-01

    Peer-to-peer (P2P) systems are widely used to exchange content over the Internet. Knowledge of pedophile activity in such networks remains limited, despite having important social consequences. Moreover, though there are different P2P systems in use, previous academic works on this topic focused on one system at a time and their results are not directly comparable. We design a methodology for comparing KAD and eDonkey, two P2P systems among the most prominent ones and with different anonymity levels. We monitor two eDonkey servers and the KAD network during several days and record hundreds of thousands of keyword-based queries. We detect pedophile-related queries with a previously validated tool and we propose, for the first time, a large-scale comparison of pedophile activity in two different P2P systems. We conclude that there are significantly fewer pedophile queries in KAD than in eDonkey (approximately 0.09% vs. 0.25%).

  1. Supporting seamless mobility for P2P live streaming.

    Science.gov (United States)

    Kim, Eunsam; Kim, Sangjin; Lee, Choonhwa

    2014-01-01

    With the advent of various mobile devices with powerful networking and computing capabilities, users' demand to enjoy live video streaming services such as IPTV on mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality due to data loss caused by handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. Through simulation experiments, we show that a P2P live streaming system with our proposed handover scheme can improve playback continuity significantly compared to one without our scheme.

  2. Supporting Seamless Mobility for P2P Live Streaming

    Directory of Open Access Journals (Sweden)

    Eunsam Kim

    2014-01-01

    With the advent of various mobile devices with powerful networking and computing capabilities, users' demand to enjoy live video streaming services such as IPTV on mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality due to data loss caused by handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. Through simulation experiments, we show that a P2P live streaming system with our proposed handover scheme can improve playback continuity significantly compared to one without our scheme.

  3. Towards secure mobile P2P applications using JXME

    OpenAIRE

    Domingo Prieto, Marc; Prieto Blázquez, Josep; Herrera Joancomartí, Jordi; Arnedo Moreno, Joan

    2014-01-01

    Mobile devices have become ubiquitous, allowing the integration of new information from a large range of devices. However, the development of new applications requires a powerful framework which simplifies their construction. JXME is the JXTA implementation for mobile devices using J2ME; its main value is its simplicity when creating peer-to-peer (P2P) applications on limited devices. In that regard, an issue that has become very important in recent times is being able to provide ...

  4. Detecting P2P Botnet in Software Defined Networks

    Directory of Open Access Journals (Sweden)

    Shang-Chiuan Su

    2018-01-01

    Software Defined Networking separates the control plane from the network equipment and has great advantages in network management compared with traditional approaches. With this paradigm, security issues persist and could become even worse because of the flexibility in handling packets. In this paper we propose an effective framework integrating SDN and machine learning to detect and categorize P2P network traffic. This work provides experimental evidence showing that our approach can automatically analyze network traffic and flexibly change flow entries in OpenFlow switches through the SDN controller. This can effectively help network administrators manage related security problems.

  5. Improved Degree Search Algorithms in Unstructured P2P Networks

    Directory of Open Access Journals (Sweden)

    Guole Liu

    2012-01-01

    Searching for and retrieving the desired correct information is an important problem in networks; in particular, designing an efficient search algorithm is a key challenge in unstructured peer-to-peer (P2P) networks. Breadth-first search (BFS) and depth-first search (DFS) are the two typical search methods currently in use. BFS-based algorithms achieve excellent search success rates for network resources, but generate a huge number of search messages. In contrast, DFS-based algorithms reduce the number of search messages but also lower the search success ratio. To address the problem that only one of these performance measures is excellent at a time, we propose two memory function degree search algorithms: the memory function maximum degree algorithm (MD) and the memory function preference degree algorithm (PD). We study their performance, including the search success rate and the search message quantity, in different networks: scale-free networks, random graph networks, and small-world networks. Simulations show that both performance measures are excellent at the same time, and that performance is improved by at least a factor of 10.
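
    To make the idea of a degree-biased walk with memory concrete, the sketch below forwards a query to the highest-degree neighbour that has not yet been visited, in the spirit of the MD algorithm described above. Function names, the termination rule and the test topology are assumptions for illustration, not the paper's implementation.

```python
import networkx as nx

def memory_max_degree_search(graph, start, target, ttl=50):
    """Walk the overlay, always moving to the unvisited neighbour of
    highest degree; return the hop count if the target is reached."""
    current, visited, hops = start, {start}, 0
    while hops < ttl:
        if target == current or target in graph[current]:
            return hops
        candidates = [n for n in graph[current] if n not in visited]
        if not candidates:                           # dead end: walk terminates
            return None
        current = max(candidates, key=graph.degree)  # prefer high-degree peers
        visited.add(current)
        hops += 1
    return None

g = nx.barabasi_albert_graph(1000, 3, seed=1)        # scale-free test overlay
print("hops to reach node 999:", memory_max_degree_search(g, 0, 999))
```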

  6. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  7. Music2Share - Copyright-Compliant Music Sharing in P2P Systems

    NARCIS (Netherlands)

    Kalker, Ton; Epema, Dick H.J.; Hartel, Pieter H.; Lagendijk, R. (Inald) L.; van Steen, Maarten

    Peer-to-Peer (P2P) networks are generally considered to be free havens for pirated content, in particular with respect to music. We describe a solution for the problem of copyright infringement in P2P networks for music sharing. In particular, we propose a P2P protocol that integrates the functions

  8. Music2Share --- Copyright-Compliant Music Sharing in P2P Systems

    NARCIS (Netherlands)

    Kalker, T.; Epema, D.; Hartel, P.; Lagendijk, I.; van Steen, M.R.

    2004-01-01

    Peer-to-peer (P2P) networks are generally considered to be free havens for pirated content, in particular with respect to music. We describe a solution for the problem of copyright infringement in P2P networks for music sharing. In particular, we propose a P2P protocol that integrates the functions

  9. Research of using mobile agents for information discovery in P2P networks

    International Nuclear Information System (INIS)

    Lan Yan; Yao Qing

    2003-01-01

    P2P technology is a new network-computing model of great commercial and technological value. After analyzing the current information discovery technology in P2P networks, a new solution based on mobile agents is proposed. The mobile agent solution can reduce bandwidth requirements, adapt to the dynamics of the P2P network, operate asynchronously, and be very fault tolerant. (authors)

  10. Evaluating Application-Layer Traffic Optimization Cost Metrics for P2P Multimedia Streaming

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2017-01-01

    To help users of P2P communication systems perform better-than-random selection of communication peers, the Internet Engineering Task Force standardized the Application Layer Traffic Optimization (ALTO) protocol. The ALTO-provided data-routing cost metric can be used to rank peers in P2P communicati...

  11. P2P XQuery and the StreetTiVo application

    NARCIS (Netherlands)

    P.A. Boncz (Peter); Y. Zhang (Ying)

    2007-01-01

    In the AmbientDB project, we are building MonetDB/XQuery, an open-source XML DBMS (XDBMS) with support for distributed querying and P2P services. Our work is motivated by the hypothesis that P2P is a disruptive paradigm that should change the nature of database technology. Most of the

  12. Convergence of Internet and TV: The Commercial Viability of P2P Content Delivery

    Science.gov (United States)

    de Boever, Jorn

    The popularity of (illegal) P2P (peer-to-peer) file sharing has a disruptive impact on Internet traffic and business models of content providers. In addition, several studies have found an increasing demand for bandwidth consuming content, such as video, on the Internet. Although P2P systems have been put forward as a scalable and inexpensive model to deliver such content, there has been relatively little economic analysis of the potentials and obstacles of P2P systems as a legal and commercial content distribution model. Many content providers encounter uncertainties regarding the adoption or rejection of P2P networks to spread content over the Internet. The recent launch of several commercial, legal P2P content distribution platforms increases the importance of an integrated analysis of the Strengths, Weaknesses, Opportunities and Threats (SWOT).

  13. Fuzzy-rule-based Adaptive Resource Control for Information Sharing in P2P Networks

    Science.gov (United States)

    Wu, Zhengping; Wu, Hao

    With more and more peer-to-peer (P2P) technologies available for online collaboration and information sharing, people can launch more and more collaborative work in online social networks with friends, colleagues, and even strangers. Without face-to-face interactions, the question of who can be trusted and then share information with becomes a big concern of a user in these online social networks. This paper introduces an adaptive control service using fuzzy logic in preference definition for P2P information sharing control, and designs a novel decision-making mechanism using formal fuzzy rules and reasoning mechanisms adjusting P2P information sharing status following individual users' preferences. Applications of this adaptive control service into different information sharing environments show that this service can provide a convenient and accurate P2P information sharing control for individual users in P2P networks.

  14. Bandwidth Reduction via Localized Peer-to-Peer (P2P) Video

    Directory of Open Access Journals (Sweden)

    Ken Kerpez

    2010-01-01

    This paper presents recent research into P2P distribution of video that can be highly localized, preferably sharing content among users on the same access network and Central Office (CO). Models of video demand and localized P2P serving areas are presented. Detailed simulations of passive optical networks (PON) are run, and these generate statistics of P2P video localization. Next-Generation PON (NG-PON) is shown to fully enable P2P video localization, but the lower rates of Gigabit-PON (GPON) restrict performance. Results here show that nearly all of the traffic volume of unicast video could be delivered via localized P2P. Strong growth in video delivery via localized P2P could lower overall future aggregation and core network bandwidth of IP video traffic by 58.2%, and total consumer Internet traffic by 43.5%. This assumes aggressive adoption of technologies and business practices that enable highly localized P2P video.

  15. A Simple FSPN Model of P2P Live Video Streaming System

    OpenAIRE

    Kotevski, Zoran; Mitrevski, Pece

    2011-01-01

    Peer-to-Peer (P2P) live streaming is a relatively new paradigm that aims at streaming live video to a large number of clients at low cost. Many such applications already exist in the market, but prior to creating such a system it is necessary to analyze its performance via a representative model that can provide good insight into the system's behavior. Modeling and performance analysis of P2P live video streaming systems is a challenging task which requires addressing many properties and issues of P2P s...

  16. A Local Scalable Distributed EM Algorithm for Large P2P Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  17. P2P-Based Data System for the EAST Experiment

    Science.gov (United States)

    Shu, Yantai; Zhang, Liang; Zhao, Weifeng; Chen, Haiming; Luo, Jiarong

    2006-06-01

    A peer-to-peer (P2P)-based EAST Data System is being designed to provide data acquisition and analysis support for the EAST superconducting tokamak. Instead of transferring data to the servers, all collected data are stored in the data acquisition subsystems locally and the PC clients can access the raw data directly using the P2P architecture. Both online and offline systems are based on Napster-like P2P architecture. This allows the peer (PC) to act both as a client and as a server. A simulation-based method and a steady-state operational analysis technique are used for performance evaluation. These analyses show that the P2P technique can significantly reduce the completion time of raw data display and real-time processing on the online system, and raise the workload capacity and reduce the delay on the offline system.

  18. Data transport and management in P2P Data Management in Mobile Wireless Sensor Network

    International Nuclear Information System (INIS)

    Sahar, S.; Shaikh, F.K.

    2013-01-01

    The rapid growth in wireless technologies has made wireless communication an important means of transporting data across different domains. Likewise, there are many potential applications that can be deployed using WSNs (Wireless Sensor Networks). However, very few applications are deployed in real life due to the uncertainty and dynamics of the environment and scarce resources. This makes data management in WSNs a challenging area in which to find an approach that suits their characteristics. Currently, the trend is to find efficient data management schemes using evolving technologies, i.e. P2P (Peer-to-Peer) systems. Many P2P approaches have been applied in WSNs to carry out data management due to the similarities between WSNs and P2P. Along with the similarities, there are differences too, which make P2P protocols inefficient in WSNs. Furthermore, to increase efficiency and to exploit the delay-tolerant nature of WSNs wherever possible, mobile WSNs are gaining importance. This creates a three-dimensional problem space to consider: mobility, WSNs and P2P. In this paper, an efficient algorithm is proposed for data management using P2P techniques for mobile WSNs. The real-world implementation and deployment of the proposed algorithm is also presented. (author)

  19. Strategies for P2P connectivity in reconfigurable converged wired/wireless access networks.

    Science.gov (United States)

    Puerto, Gustavo; Mora, José; Ortega, Beatriz; Capmany, José

    2010-12-06

    This paper presents different strategies to define the architecture of Radio-over-Fiber (RoF) access networks enabling Peer-to-Peer (P2P) functionalities. The architectures fully exploit the flexibility of a wavelength router based on the feedback configuration of an Arrayed Waveguide Grating (AWG) and an optical switch to broadcast P2P services among diverse infrastructures, featuring dynamic channel allocation and enabling an optical platform for 3G and beyond wireless backhaul requirements. The first architecture incorporates a tunable laser to generate a dedicated wavelength for P2P purposes, and the second architecture takes advantage of reused wavelengths to enable P2P connectivity among Optical Network Units (ONUs) or Base Stations (BSs). While these two approaches allow P2P connectivity on a one-at-a-time basis (1:1), the third architecture enables the broadcasting of P2P sessions among different ONUs or BSs at the same time (1:M). Experimental assessment of the proposed architectures shows approximately 0.6% Error Vector Magnitude (EVM) degradation for wireless services and a 1 dB penalty on average for a 1 × 10⁻¹² Bit Error Rate (BER) for wired baseband services.

  20. A distributed incentive compatible pricing mechanism for P2P networks

    Science.gov (United States)

    Zhang, Jie; Zhao, Zheng; Xiong, Xiao; Shi, Qingwei

    2007-09-01

    Peer-to-Peer (P2P) systems are currently receiving considerable interest. However, as experience with P2P networks shows, the selfish behavior of peers may lead to serious problems in P2P networks, such as free-riding and whitewashing. In order to solve these problems, increasing attention is being given to reputation system design in the study of P2P networks. Most existing work uses probabilistic estimation or social networks to evaluate the trustworthiness of a peer to others. However, these models are not always efficient. In this paper, our aim is to provide a general mechanism that maximizes the social welfare of P2P networks in the manner of the Vickrey-Clarke-Groves family, while assuming every peer in the P2P network is rational and selfish, i.e., concerned only with its own outcome. This mechanism has some desirable properties and uses an O(n) algorithm: (1) incentive compatibility: every peer truthfully reports its connection type; (2) individual rationality; and (3) full decentralization: we design a multiple-principal, multiple-agent model that considers the service provider and the service requester individually.
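
    For readers unfamiliar with the Vickrey-Clarke-Groves (VCG) family this abstract appeals to, the sketch below shows the core idea on a toy problem: choose the welfare-maximising outcome and charge each peer the loss its presence imposes on the others. The setting (choosing one shared outcome from declared valuations) and all names are illustrative assumptions, not the paper's O(n) mechanism.

```python
def vcg(valuations, outcomes):
    """valuations: peer -> {outcome: value}. Returns the chosen outcome and
    each peer's payment (the externality it imposes on the other peers)."""
    peers = list(valuations)

    def welfare(group, outcome):
        return sum(valuations[p][outcome] for p in group)

    chosen = max(outcomes, key=lambda o: welfare(peers, o))
    payments = {}
    for p in peers:
        others = [q for q in peers if q != p]
        best_without_p = max(outcomes, key=lambda o: welfare(others, o))
        payments[p] = welfare(others, best_without_p) - welfare(others, chosen)
    return chosen, payments

vals = {"p1": {"relay": 3, "direct": 1},
        "p2": {"relay": 0, "direct": 4},
        "p3": {"relay": 2, "direct": 1}}
print(vcg(vals, ["relay", "direct"]))   # ('direct', {'p1': 0, 'p2': 3, 'p3': 0})
```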

  1. P2P-based botnets: structural analysis, monitoring, and mitigation

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Guanhua [Los Alamos National Laboratory]; Eidenbenz, Stephan [Los Alamos National Laboratory]; Ha, Duc T [UNIV AT BUFFALO]; Ngo, Hung Q [UNIV AT BUFFALO]

    2008-01-01

    Botnets, which are networks of compromised machines that are controlled by one or a group of attackers, have emerged as one of the most serious security threats on the Internet. With an army of bots at the scale of tens of thousands of hosts or even as large as 1.5 million PCs, the computational power of botnets can be leveraged to launch large-scale DDoS (Distributed Denial of Service) attacks, send spam emails, steal identities and financial information, etc. As detection and mitigation techniques against botnets have been stepped up in recent years, attackers are also constantly improving their strategies to operate these botnets. The first generation of botnets typically employed IRC (Internet Relay Chat) channels as their command and control (C&C) centers. Though simple and easy to deploy, the centralized C&C mechanism of such botnets has made them prone to being detected and disabled. Against this backdrop, peer-to-peer (P2P) based botnets have emerged as a new generation of botnets which can conceal their C&C communication. Recently, P2P networks have emerged as a covert communication platform for malicious programs known as bots. As popular distributed systems, they allow bots to communicate easily while protecting the botmaster from being discovered. Existing work on P2P-based botnets mainly focuses on measurement of botnet sizes. In this work, through simulation, we study extensively the structure of P2P networks running Kademlia, one of the few P2P protocols widely used in practice. Our simulation testbed incorporates the actual code of a real Kademlia client to achieve great realism, and distributed event-driven simulation techniques to achieve high scalability. Using this testbed, we analyze the scaling, reachability, clustering, and centrality properties of P2P-based botnets from a graph-theoretical perspective. We further demonstrate experimentally and theoretically that monitoring bot activities in a P2P network is difficult

  2. P2P Network Lending, Loss Given Default and Credit Risks

    Directory of Open Access Journals (Sweden)

    Guangyou Zhou

    2018-03-01

    Peer-to-peer (P2P) network lending is a new mode of internet finance whose main risk is still credit risk. According to the internal rating method of the New Basel Accord, in addition to the probability of default, loss given default is also one of the important indicators for evaluating credit risk. Proceeding from the perspective of loss given default (LGD), this paper conducts an empirical study on the probability distribution of the LGDs of P2P loans as well as its influencing factors, using transaction data from Lending Club. The results show that: (1) the LGDs of P2P loans present an obvious unimodal distribution; the peak value is relatively high and tends to concentrate as the borrower's credit rating decreases, indicating that the distribution of the LGDs of P2P lending is similar to that of unsecured bonds; (2) the total assets of the borrower have no significant impact on LGD, the credit rating and the debt-to-income ratio exert a significant negative impact, while the term and amount of the loan produce a relatively strong positive impact. Therefore, when evaluating the borrower's repayment ability, more attention should be paid to its asset structure rather than the size of its total assets. When carrying out risk control for the P2P platform, it is necessary to give priority to the control of the default rate.

  3. A New Caching Technique to Support Conjunctive Queries in P2P DHT

    Science.gov (United States)

    Kobatake, Koji; Tagashira, Shigeaki; Fujita, Satoshi

    P2P DHT (Peer-to-Peer Distributed Hash Table) is one of the typical techniques for realizing efficient management of shared resources distributed over a network, and keyword search over such networks, in a fully distributed manner. In this paper, we propose a new method for supporting conjunctive queries in P2P DHT. The basic idea of the proposed technique is to share global information on past trials by locally caching search results for conjunctive queries and by registering that fact in the global DHT. Such result caching is expected to significantly reduce the amount of transmitted data compared with conventional schemes. The effect of the proposed method is evaluated experimentally by simulation. The results indicate that by using the proposed method, the amount of returned data is reduced by 60% compared with a conventional P2P DHT that does not support conjunctive queries.
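
    A minimal sketch of the caching idea described above: cache the result of a conjunctive query under a canonical key and register that key in the DHT, so a later peer issuing the same conjunction can reuse the result instead of intersecting per-keyword posting lists again. The interfaces and names are assumptions for illustration, not the paper's design.

```python
def conjunctive_key(keywords):
    """Canonical key for a conjunction, independent of keyword order."""
    return "&".join(sorted(k.lower() for k in keywords))

class ConjunctiveCache:
    def __init__(self, dht):
        self.dht = dht                        # any dict-like put/get store

    def search(self, keywords, fetch_posting_list):
        key = conjunctive_key(keywords)
        cached = self.dht.get(key)
        if cached is not None:                # hit: no per-keyword traffic
            return cached
        postings = [set(fetch_posting_list(k)) for k in keywords]
        result = sorted(set.intersection(*postings))
        self.dht[key] = result                # register the result globally
        return result

dht = {}                                      # stand-in for the global DHT
index = {"p2p": ["doc1", "doc2", "doc3"], "dht": ["doc2", "doc3"]}
cache = ConjunctiveCache(dht)
print(cache.search(["P2P", "DHT"], lambda k: index.get(k, [])))
print(cache.search(["DHT", "P2P"], lambda k: index.get(k, [])))  # cache hit
```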

  4. 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing

    CERN Document Server

    Barolli, Leonard; Amato, Flora

    2017-01-01

    P2P, Grid, Cloud and Internet computing technologies have very quickly become established as breakthrough paradigms for solving complex problems by enabling the aggregation and sharing of an increasing variety of distributed computational resources at large scale. The aim of this volume is to provide the latest research findings, innovative research results, methods and development techniques, from both theoretical and practical perspectives, related to P2P, Grid, Cloud and Internet computing, as well as to reveal synergies among such large-scale computing paradigms. This proceedings volume presents the results of the 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC-2016), held November 5-7, 2016, at Soonchunhyang University, Asan, Korea.

  5. P2P Lending Risk Contagion Analysis Based on a Complex Network Model

    Directory of Open Access Journals (Sweden)

    Qi Wei

    2016-01-01

    This paper analyzes two major channels of P2P lending risk contagion in China: direct risk contagion between platforms, and indirect risk contagion with other financial organizations as the contagion medium. Based on this analysis, the current study constructs a complex network model of P2P lending risk contagion in China and performs dynamic analogue simulations in order to analyze the general characteristics of direct risk contagion among China’s online P2P lending platforms. The assumed conditions are that other financial organizations act as the contagion medium, with variations in the risk contagion characteristics set under the condition of significant information asymmetry in Internet lending. It is shown that, compared to direct risk contagion among platforms, both financial organizations acting as the contagion medium and information asymmetry magnify the effect of risk contagion. It is also found that the superposition of media effects and information asymmetry is even more likely to magnify the risk contagion effect.

  6. TinCan: User-Defined P2P Virtual Network Overlays for Ad-hoc Collaboration

    Directory of Open Access Journals (Sweden)

    Pierre St Juste

    2014-10-01

    Virtual private networking (VPN) has become an increasingly important component of a collaboration environment because it ensures private, authenticated communication among participants, using existing collaboration tools, where users are distributed across multiple institutions and can be mobile. The majority of current VPN solutions are based on a centralized VPN model, where all IP traffic is tunneled through a VPN gateway. Nonetheless, there are several use case scenarios that require a model where end-to-end VPN links are tunneled upon existing Internet infrastructure in a peer-to-peer (P2P) fashion, removing the bottleneck of a centralized VPN gateway. We propose a novel virtual network, TinCan, based on peer-to-peer private network tunnels. It reuses existing standards and implementations of services for discovery notification (XMPP), reflection (STUN) and relaying (TURN), facilitating configuration. In this approach, trust relationships maintained by centralized (or federated) services are automatically mapped to TinCan links. In one use scenario, TinCan allows unstructured P2P overlays connecting trusted end-user devices, while only requiring VPN software on user devices and leveraging online social network (OSN) infrastructure already widely deployed. This paper describes the architecture and design of TinCan and presents an experimental evaluation of a prototype supporting Windows, Linux, and Android mobile devices. Results quantify the overhead introduced by the network virtualization layer and the resource requirements imposed on the services needed to bootstrap TinCan links.

  7. Testing a Cloud Provider Network for Hybrid P2P and Cloud Streaming Architectures

    OpenAIRE

    Cerviño Arriba, Javier; Rodríguez, Pedro; Trajkovska, Irena; Mozo Velasco, Alberto; Salvachúa Rodríguez, Joaquín

    2011-01-01

    The number of online real-time streaming services deployed over network topologies like P2P or centralized ones has increased remarkably in recent years. This has revealed the lack of networks that are well prepared to respond to this kind of traffic. A hybrid distribution network can be an efficient solution for real-time streaming services. This paper contains the experimental results of streaming distribution in a hybrid architecture that consists of mixed connections among P2P and Clou...

  8. Integrating XQuery and P2P in MonetDB/XQuery*

    NARCIS (Netherlands)

    Y. Zhang (Ying); P.A. Boncz (Peter); M. Arenas (Marcelo); J. Hidders

    2007-01-01

    MonetDB/XQuery* is a fully functional publicly available XML DBMS that has been extended with distributed and P2P data management functionality. Our (minimal) XQuery language extension XRPC adds the concept of RPC to XQuery, and exploits the set-at-a-time database processing model to

  9. The "P2P" Educational Model Providing Innovative Learning by Linking Technology, Business and Research

    Science.gov (United States)

    Dickinson, Paul Gordon

    2017-01-01

    This paper evaluates the effect and potential of a new educational learning model called Peer to Peer (P2P). The study focused on the Hyvinkää campus of Laurea in Finland and its response to bridging the gap between traditional educational methods and working reality, where modern technology plays an important role. The study describes and evaluates…

  10. Load Balancing Scheme on the Basis of Huffman Coding for P2P Information Retrieval

    Science.gov (United States)

    Kurasawa, Hisashi; Takasu, Atsuhiro; Adachi, Jun

    Although a distributed index on a distributed hash table (DHT) enables efficient document query processing in Peer-to-Peer information retrieval (P2P IR), the index is costly to construct, and it tends to be managed unfairly because of the unbalanced term frequency distribution. We devised a new distributed index, named Huffman-DHT, for P2P IR. The new index uses an algorithm similar to Huffman coding, with a modification to the DHT structure based on the term distribution. In a Huffman-DHT, a frequent term is assigned a short ID and allocated a large space in the node ID space of the DHT. Through this ID management, the Huffman-DHT balances index registration accesses among peers and reduces load concentrations. Huffman-DHT is the first approach to adapt concepts from coding theory and the term frequency distribution to load balancing. We evaluated this approach in experiments using a document collection and assessed its load balancing capabilities in P2P IR. The experimental results indicated that it is most effective when the P2P system consists of about 30,000 nodes and contains many documents. Moreover, we showed that a Huffman-DHT can be constructed easily by estimating the probability distribution of term occurrence from a small number of sample documents.
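
    The load-balancing intuition, allocating more of the identifier space to more frequent terms, can be illustrated in a few lines. The proportional allocation rule and all names below are assumptions made for this listing; the actual Huffman-DHT uses a Huffman-coding-like ID assignment rather than this simple scheme.

```python
ID_SPACE = 2 ** 16                       # size of the (toy) node ID space

def allocate_id_ranges(term_frequencies):
    """Give each term a contiguous slice of the ID space proportional to
    its frequency, so popular terms are spread over more nodes."""
    total = sum(term_frequencies.values())
    ranges, start = {}, 0
    for term, freq in sorted(term_frequencies.items(), key=lambda kv: -kv[1]):
        width = max(1, round(ID_SPACE * freq / total))
        ranges[term] = (start, min(start + width, ID_SPACE))
        start += width
    return ranges

freqs = {"p2p": 5000, "dht": 1200, "huffman": 60}
for term, (lo, hi) in allocate_id_ranges(freqs).items():
    print(f"{term:8s} -> IDs [{lo}, {hi})  ({hi - lo} slots)")
```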

  11. P2P systems in a regulated environment : challenges and opportunities for the operator

    NARCIS (Netherlands)

    Liotta, A.

    2008-01-01

    P2P networks, systems, and applications have been the subject of intensive studies in recent years. They have created new business opportunities, providing a low-cost mechanism for communication and for online content distribution. They have also sparked a spate of legal disputes with adverse

  12. Comparing manually-developed and data-driven rules for P2P learning

    CSIR Research Space (South Africa)

    Loots, L

    2009-11-01

    Phoneme-to-phoneme (P2P) learning provides a mechanism for predicting the pronunciation of a word based on its pronunciation in a different accent, dialect or language. The authors evaluate the effectiveness of manually-developed as well...

  13. Analysis of the probability of channel satisfactory state in P2P live

    African Journals Online (AJOL)

    userpc

    churn and bit flow were modelled as fluid flow. The applicability of the theory of probability was deduced from Kelly (1991). Section II of the paper provides the model of P2P live streaming systems taking into account peer behaviour, and an expression was obtained for the computation of the probability of channel-satisfactory ...

  14. G-ROME : semantic-driven capacity sharing among P2P networks

    NARCIS (Netherlands)

    Exarchakos, G.; Antonopoulos, N.; Salter, J.

    2007-01-01

    Purpose – The purpose of this paper is to propose a model for sharing network capacity on demand among different underloaded and overloaded P2P ROME-enabled networks. The paper aims to target networks of nodes with highly dynamic workload fluctuations that may experience a burst of traffic and/or

  15. Enabling Co-located Learning over Mobile Ad Hoc P2P with LightPeers

    DEFF Research Database (Denmark)

    Christensen, Bent Guldbjerg; Kristensen, Mads Darø; Hansen, Frank Allan

    2008-01-01

    This paper presents LightPeers – a new mobile P2P framework specifically tailored for use in a nomadic learning environment. A set of key requirements for the framework is identified based on nomadic learning, and these requirements are used as the starting point for designing and implementing the architectu...

  16. Scaling laws for file dissemination in P2P networks with random contacts

    NARCIS (Netherlands)

    Nunez-Queija, R.; Prabhu, B.

    2008-01-01

    In this paper we obtain the scaling law for the mean broadcast time of a file in a P2P network with an initial population of N nodes. In the model, at Poisson rate λ a node initiates a contact with another node chosen uniformly at random. This contact is said to be successful if the contacted node
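
    The random-contact model described in this abstract lends itself to a very small simulation. The sketch below uses a push-pull variant, which is an assumption since the abstract is truncated: the file spreads whenever a contact involves exactly one node that already holds it, and the mean broadcast time grows roughly logarithmically in N.

```python
import random

def broadcast_time(n, lam=1.0, seed=0):
    """Time until all n nodes hold the file; each node initiates contacts
    with uniformly random nodes at Poisson rate lam (push-pull variant)."""
    rng = random.Random(seed)
    has_file = [False] * n
    has_file[0] = True                       # one initial holder of the file
    t, informed = 0.0, 1
    while informed < n:
        t += rng.expovariate(n * lam)        # next contact in the whole system
        a, b = rng.randrange(n), rng.randrange(n)
        if has_file[a] != has_file[b]:       # exactly one side holds the file
            has_file[a] = has_file[b] = True
            informed += 1
    return t

for n in (100, 1000, 10000):
    print(n, round(broadcast_time(n), 2))
```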

  17. Scaling laws for file dissemination in P2P networks with random contacts

    NARCIS (Netherlands)

    Núñez-Queija, R.; Prabhu, B.

    2008-01-01

    In this paper we obtain the scaling law for the mean broadcast time of a file in a P2P network with an initial population of N nodes. In the model, at Poisson rate lambda a node initiates a contact with another node chosen uniformly at random. This contact is said to be successful if the contacted

  18. Analysis of the probability of channel satisfactory state in P2P live ...

    African Journals Online (AJOL)

    In this paper a model based on the user behaviour of P2P live streaming systems was developed in order to analyse one of the key QoS parameters of such systems, i.e. the probability of channel-satisfactory state; the impact of upload bandwidths and channels' popularity on the probability of channel-satisfactory state was also ...

  19. Extending an Afrikaans pronunciation dictionary using Dutch resources and P2P/GP2P

    CSIR Research Space (South Africa)

    Loots, L

    2010-11-01

    ... This is compared to the more common approach of extending the Afrikaans dictionary by means of grapheme-to-phoneme (G2P) conversion. The results indicate that the Afrikaans pronunciations obtained by P2P and GP2P from the Dutch dictionary are more accurate than...

  20. Photoionization from the 6p 2P3/2 state of neutral cesium

    International Nuclear Information System (INIS)

    Haq, S. U.; Nadeem, Ali

    2010-01-01

    We report the photoionization studies of cesium from the 6p 2P3/2 excited state to measure the photoionization cross section at and above the first ionization threshold, oscillator strength of the highly excited transitions, and extension in the Rydberg series. The photoionization cross section at the first ionization threshold is measured as 25(4) Mb and at excess energies 0.02, 0.04, 0.07, and 0.09 eV as 21, 19, 17, and 16 Mb, respectively. Oscillator strength of the 6p 2P3/2 → nd 2D5/2 (23 ≤ n ≤ 60) Rydberg transitions has been extracted utilizing the threshold value of photoionization cross section and the recorded nd 2D5/2 photoionization spectra.

  1. Using of P2P Networks for Acceleration of RTE Tasks Solving

    Directory of Open Access Journals (Sweden)

    Adrian Iftene

    2008-07-01

    In recent years computational Grids have become an important research area in large-scale scientific and engineering research. Our approach is based on Peer-to-Peer (P2P) networks, which are recognized as one of the most used architectures for achieving scalability in key components of Grid systems. The main aim of using a computational Grid was to improve the computational speed of systems that solve complex problems from the Natural Language Processing field. We show how a computational Grid can be implemented using the P2P model, and how the SMB protocol can be used for file transfer. After that we show how this computational Grid can be used to improve the computational speed of a system used in the RTE competition [1], a new complex challenge from the Natural Language Processing field.

  2. End-to-End Key Exchange through Disjoint Paths in P2P Networks

    Directory of Open Access Journals (Sweden)

    Daouda Ahmat

    2015-01-01

    Due to their inherent features, P2P networks have proven to be effective in the exchange of data between autonomous peers. Unfortunately, these networks are subject to various security threats that cannot be addressed readily since traditional security infrastructures, which are centralized, cannot be applied to them. Furthermore, communication reliability across the Internet is threatened by various attacks, including usurpation of identity, eavesdropping or traffic modification. Thus, in order to overcome these security issues and allow peers to securely exchange data, we propose a new key management scheme over P2P networks. Our approach introduces a new method that enables a secret key exchange through disjoint paths in the absence of a trusted central coordination point which would be required in traditional centralized security systems.
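
    One common way to realise a key exchange over disjoint paths, shown below purely as an illustration and not as the authors' protocol, is to split a fresh session key into XOR shares and send one share per path, so that an eavesdropper on any single path learns nothing about the key.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_paths: int) -> list:
    """Return n_paths shares whose XOR equals the key (one per disjoint path)."""
    shares = [os.urandom(len(key)) for _ in range(n_paths - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def recombine(shares: list) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

key = os.urandom(16)                    # fresh 128-bit session key
shares = split_key(key, n_paths=3)      # send each share over a different path
assert recombine(shares) == key         # the receiver reassembles the key
```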

  3. n-p Short-Range Correlations from (p,2p+n) Measurements

    Science.gov (United States)

    Tang, A.; Watson, J. W.; Aclander, J.; Alster, J.; Asryan, G.; Averichev, Y.; Barton, D.; Baturin, V.; Bukhtoyarova, N.; Carroll, A.; Gushue, S.; Heppelmann, S.; Leksanov, A.; Makdisi, Y.; Malki, A.; Minina, E.; Navon, I.; Nicholson, H.; Ogawa, A.; Panebratsev, Yu.; Piasetzky, E.; Schetkovsky, A.; Shimanskiy, S.; Zhalov, D.

    2003-01-01

    We studied the 12C(p,2p+n) reaction at beam momenta of 5.9, 8.0, and 9.0 GeV/c. For quasielastic (p,2p) events pf, the momentum of the knocked-out proton before the reaction, was compared (event by event) with pn, the coincident neutron momentum. For |pn|>kF=0.220 GeV/c (the Fermi momentum) a strong back-to-back directional correlation between pf and pn was observed, indicative of short-range n-p correlations. From pn and pf we constructed the distributions of c.m. and relative motion in the longitudinal direction for correlated pairs. We also determined that 49±13% of events with |pf|>kF had directionally correlated neutrons with |pn|>kF.

  4. Exploring the Feasibility of Reputation Models for Improving P2P Routing under Churn

    Science.gov (United States)

    Sànchez-Artigas, Marc; García-López, Pedro; Herrera, Blas

    Reputation mechanisms help peer-to-peer (P2P) networks to detect and avoid unreliable or uncooperative peers. Recently, it has been discussed that routing protocols can be improved by conditioning routing decisions to the past behavior of forwarding peers. However, churn — the continuous process of node arrival and departure — may severely hinder the applicability of rating mechanisms. In particular, short lifetimes mean that reputations are often generated from a small number of transactions.

  5. JXTA: A Technology Facilitating Mobile P2P Health Management System

    OpenAIRE

    Rajkumar, Rajasekaran; Iyengar, Nallani Chackravatula Sriman Naraya

    2012-01-01

    Objectives Mobile JXTA (Juxtapose) is gaining momentum and has attracted the interest of doctors and patients through a P2P service that transmits messages. Audio and video can also be transmitted through JXTA. The use of a mobile streaming mechanism with the support of mobile hospital management and healthcare systems would enable better interaction between doctors, nurses, and the hospital. Experimental results demonstrate good performance in comparison with conventional systems. This study evaluat...

  6. Emergence of Financial Intermediaries in Electronic Markets: The Case of Online P2P Lending

    OpenAIRE

    Sven C. Berger; Fabian Gleisner

    2009-01-01

    We analyze the role of intermediaries in electronic markets using detailed data of more than 14,000 originated loans on an electronic P2P (peer-to-peer) lending platform. In such an electronic credit market, lenders bid to supply a private loan. Screening of potential borrowers and the monitoring of loan repayment can be delegated to designated group leaders. We find that these market participants act as financial intermediaries and significantly improve borrowers' credit conditions by reducing information asymmetries, predominantly for borrowers with less attractive risk characteristics. Our findings may be surprising given the replacement of a bank by an electronic marketplace.

  7. Personalized Trust Management for Open and Flat P2P Communities

    Institute of Scientific and Technical Information of China (English)

    ZUO Min; LI Jian-hua

    2008-01-01

    A personalized trust management scheme is proposed to help peers build up trust between each other in open and flat P2P communities. This scheme totally abandons the attempt to achieve a global view. It evaluates trust from a subjective point of view and gives personalized decision support to each peer. Simulation experiments prove its three advantages: free of central control, stronger immunity to misleading recommendations, and limited traffic overload.

  8. A decision support model for investment on P2P lending platform.

    Science.gov (United States)

    Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone.
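
    The iterative computation on the lender-loan bipartite graph can be pictured with a small HITS-like sketch in which lender scores and loan scores reinforce each other until they stabilize. This is a minimal illustration of the general idea, not the model evaluated in the paper; the scoring rule, normalization, and the toy data below are assumptions.

```python
# HITS-like mutual reinforcement on a lender-loan bipartite graph (illustrative only).
def iterate_scores(edges, n_iters=50):
    """edges: list of (lender_id, loan_id) pairs meaning the lender invested in the loan."""
    lenders = {l for l, _ in edges}
    loans = {x for _, x in edges}
    lender_score = {l: 1.0 for l in lenders}
    loan_score = {x: 1.0 for x in loans}
    for _ in range(n_iters):
        # a loan looks better when it is backed by well-scored lenders
        loan_score = {x: sum(lender_score[l] for l, y in edges if y == x) for x in loans}
        # a lender looks better when it holds well-scored loans
        lender_score = {l: sum(loan_score[x] for m, x in edges if m == l) for l in lenders}
        # normalize to keep the iteration numerically stable
        ls, xs = sum(lender_score.values()) or 1.0, sum(loan_score.values()) or 1.0
        lender_score = {k: v / ls for k, v in lender_score.items()}
        loan_score = {k: v / xs for k, v in loan_score.items()}
    return lender_score, loan_score

if __name__ == "__main__":
    edges = [("l1", "a"), ("l1", "b"), ("l2", "a"), ("l3", "c")]
    _, loans = iterate_scores(edges)
    print(sorted(loans.items(), key=lambda kv: -kv[1]))   # loan "a" ranks highest
```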

  9. A decision support model for investment on P2P lending platform.

    Directory of Open Access Journals (Sweden)

    Xiangxiang Zeng

    Full Text Available Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges in making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on the bipartite graph model, we built an iterative computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the Logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the Logistic classification model) is more efficient and stable than either individual model alone.

  10. What is supercomputing ?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term "supercomputing" have spread over the past ten years. The performances of the main computers installed so far in the Japan Atomic Energy Research Institute are compared. There are two methods to increase computing speed using existing circuit elements: parallel processor systems and vector processor systems. CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents are divided into increasing the computing speed of existing simulation calculations and accelerating new technical development in atomic energy. Examples of supercomputing at the Japan Atomic Energy Research Institute are reported. (K.I.)

  11. Manejo de Identidades en Sistemas P2P Basado en DHT

    Directory of Open Access Journals (Sweden)

    Ricardo Villanueva

    2016-01-01

    Full Text Available This article presents P2P networks, analyzing how nodes relate to one another and how they are distributed when they join the network. Each node, upon creating a network or joining an existing one, has an identifier that determines how the subsequent nodes joining the network are distributed. The weakness lies in the fact that a node already linked to the existing network may be malicious and create attack points in the network, compromising the confidentiality of the information distributed among the other nodes or modifying the routing of the information supplied through the application layer, since these nodes, simply by being in the network, are responsible for the communication between certain nodes located on the ring. The processes of connection, communication and stabilization of the nodes are described in detail through the simulation of P2P networks in Overlay Weaver, presenting the characteristics and results of the simulation.
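
    The identifier mechanism discussed above can be pictured with a small Chord-style sketch: node names are hashed onto a ring, and each key is served by the first node clockwise from it, which is exactly what makes a malicious node on that arc of the ring dangerous. This is a generic illustration, not the Overlay Weaver configuration used in the article; the identifier size and function names are assumptions.

```python
# Chord-style placement of nodes and keys on an identifier ring (illustrative sketch).
import hashlib
from bisect import bisect_right

M = 2 ** 16   # small identifier space for illustration; real DHTs use 128/160-bit IDs

def ring_id(name: str) -> int:
    """Hash an arbitrary node or content name onto the ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(ring: list[int], key: int) -> int:
    """Return the first node identifier clockwise from the key."""
    i = bisect_right(ring, key)
    return ring[i % len(ring)]      # wrap around the ring

if __name__ == "__main__":
    nodes = ["peer-A", "peer-B", "peer-C", "peer-D"]
    ring = sorted(ring_id(n) for n in nodes)
    k = ring_id("some-shared-file")
    print(f"key {k} is the responsibility of node {successor(ring, k)}")
```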

  12. The Measurement and Modeling of a P2P Streaming Video Service

    Science.gov (United States)

    Gao, Peng; Liu, Tao; Chen, Yanming; Wu, Xingyao; El-Khatib, Yehia; Edwards, Christopher

    Most of the work on grid technology in the video area has generally been restricted to aspects of resource scheduling and replica management. The traffic of such a service has many characteristics in common with that of the traditional video service. However, the architecture and user behavior in Grid networks are quite different from those of the traditional Internet. Considering the potential of grid networks and video sharing services, measuring and analyzing P2P IPTV traffic are important and fundamental tasks in the field of grid networks.

  13. (p,2p) study of high-momentum components at 2.1 GeV

    International Nuclear Information System (INIS)

    Treuhaft, R.N.

    1982-07-01

    A (p,2p) experiment designed to isolate interactions with small numbers of fast nuclear constituents is described. Special attention is paid to the experimental manifestation and description of a correlated pair of nucleons in the nucleus. Phase space calculations are presented for the proton-pair three-body final state and for final states with larger numbers of particles. The Two Armed Spectrometer System (TASS) is described in detail. The data suggest the possibility of isolating an interaction with one or two nucleons in the nucleus which may have momenta far in excess of those described in a Fermi gas model

  14. Emergence of Financial Intermediaries in Electronic Markets: The Case of Online P2P Lending

    Directory of Open Access Journals (Sweden)

    Sven C. Berger

    2009-05-01

    Full Text Available We analyze the role of intermediaries in electronic markets using detailed data of more than 14,000 originated loans on an electronic P2P (peer-to-peer) lending platform. In such an electronic credit market, lenders bid to supply a private loan. Screening of potential borrowers and the monitoring of loan repayment can be delegated to designated group leaders. We find that these market participants act as financial intermediaries and significantly improve borrowers' credit conditions by reducing information asymmetries, predominantly for borrowers with less attractive risk characteristics. Our findings may be surprising given the replacement of a bank by an electronic marketplace.

  15. WebVR——Web Virtual Reality Engine Based on P2P network

    OpenAIRE

    zhihan LV; Tengfei Yin; Yong Han; Yong Chen; Ge Chen

    2011-01-01

    WebVR, a multi-user online virtual reality engine, is introduced. The main contributions are mapping the geographical space and virtual space to the P2P overlay network space, and dividing the three spaces by a quad-tree method. The geocoding is identified with a hash value, which is used to index the user list, terrain data, and the model object data. Sharing of data through an improved Kademlia network model is designed and implemented. In this model, the XOR algorithm is used to calculate the distanc...

  16. A P2P Service Discovery Strategy Based on Content Catalogues

    Directory of Open Access Journals (Sweden)

    Lican Huang

    2007-08-01

    Full Text Available This paper presents a framework for distributed service discovery based on VIRGO P2P technologies. The services are classified as multi-layer, hierarchical catalogue domains according to their contents. The service providers, which have their own service registries such as UDDIs, register the services they provide and establish a virtual tree in a VIRGO network according to the domain of their service. The service location done by the proposed strategy is effective and guaranteed. This paper also discusses the primary implementation of service discovery based on Tomcat/Axis and jUDDI.

  17. Nuclear transparency in 90 deg.c.m. quasielastic A(p,2p) reactions

    International Nuclear Information System (INIS)

    Aclander, J.; Alster, J.; Kosonovsky, I.; Malki, A.; Mardor, I.; Mardor, Y.; Navon, I.; Piasetzky, E.; Asryan, G.; Barton, D.S.; Buktoyarova, N.; Bunce, G.; Carroll, A.S.; Gushue, S.; Makdisi, Y.I.; Roser, T.; Tanaka, M.; Averiche, Y.; Panebratsev, Y.; Shimanskiy, S.

    2004-01-01

    We summarize the results of two experimental programs at the Alternating Gradient Synchrotron of BNL to measure the nuclear transparency of nuclei in the A(p,2p) quasielastic scattering process near 90 deg. in the pp center of mass. The incident momenta varied from 5.9 to 14.4 GeV/c, corresponding to 4.8 < Q² < 12.7 (GeV/c)². Taking into account the motion of the target proton in the nucleus, the effective incident momenta extended from 5.0 to 15.8 GeV/c. First, we describe the measurements with the newer experiment, E850, which had more complete kinematic definition of quasielastic events. E850 covered a larger range of incident momenta, and thus provided more information regarding the nature of the energy dependence of the nuclear transparency. In E850 the angular dependence of the nuclear transparency near 90 deg. and the nuclear transparency of deuterons were studied. Second, we review the techniques used in an earlier experiment, E834, and show that the two experiments are consistent for the carbon data. E834 also determines the nuclear transparencies for lithium, aluminum, copper, and lead nuclei as well as for carbon. A determination of the (π⁺,π⁺p) transparencies is also reported. We find for both E850 and E834 that the A(p,2p) nuclear transparency, unlike the A(e,e′p) nuclear transparency, is incompatible with a constant value versus energy as predicted by Glauber calculations. The A(p,2p) nuclear transparency for carbon and aluminum increases by a factor of two between 5.9 and 9.5 GeV/c incident proton momentum. At its peak the A(p,2p) nuclear transparency is ∼80% of the constant A(e,e′p) nuclear transparency. The nuclear transparency then falls back to a value at least as small as that at 5.9 GeV/c, and is compatible with the Glauber level again. This oscillating behavior is generally interpreted as an interplay between two components of the pN scattering amplitude: one short-ranged and perturbative, the other long-ranged and strongly absorbed.

  18. Towards Accurate Node-Based Detection of P2P Botnets

    Directory of Open Access Journals (Sweden)

    Chunyong Yin

    2014-01-01

    Full Text Available Botnets are a serious security threat to the current Internet infrastructure. In this paper, we propose a novel direction for P2P botnet detection called node-based detection. This approach focuses on the network characteristics of individual nodes. Based on our model, we examine a node's flows and extract the useful features over a given time period. We have tested our approach on real-life data sets and achieved detection rates of 99-100% and low false positive rates of 0-2%. Comparison with other similar approaches on the same data sets shows that our approach outperforms the existing approaches.
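
    Node-based detection works on features aggregated per node rather than per flow. The sketch below extracts a few plausible per-node features over one time window; the feature set, field layout, and thresholds used in the paper are not reproduced, so everything here should be read as an assumed stand-in.

```python
# Aggregating simple per-node features from flow records over one time window
# (illustrative feature set, not the paper's).
from collections import defaultdict
from statistics import mean

# flow record: (src, dst, bytes_sent, packets, duration_seconds)
flows = [
    ("10.0.0.5", "198.51.100.7", 1200, 10, 2.0),
    ("10.0.0.5", "203.0.113.9", 900, 8, 1.5),
    ("10.0.0.6", "198.51.100.7", 50000, 400, 30.0),
]

def node_features(flow_records):
    by_node = defaultdict(list)
    for src, dst, nbytes, pkts, dur in flow_records:
        by_node[src].append((dst, nbytes, pkts, dur))
    feats = {}
    for node, fl in by_node.items():
        feats[node] = {
            "distinct_peers": len({dst for dst, *_ in fl}),
            "mean_bytes": mean(nbytes for _, nbytes, _, _ in fl),
            "mean_duration": mean(dur for *_, dur in fl),
        }
    return feats                      # these vectors would feed a classifier

print(node_features(flows))
```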

  19. (p,2p) study of high-momentum components at 2. 1 GeV

    Energy Technology Data Exchange (ETDEWEB)

    Treuhaft, R.N.

    1982-07-01

    A (p,2p) experiment designed to isolate interactions with small numbers of fast nuclear constituents is described. Special attention is paid to the experimental manifestation and description of a correlated pair of nucleons in the nucleus. Phase space calculations are presented for the proton-pair three-body final state and for final states with larger numbers of particles. The Two Armed Spectrometer System (TASS) is described in detail. The data suggest the possibility of isolating an interaction with one or two nucleons in the nucleus which may have momenta far in excess of those described in a Fermi gas model.

  20. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  1. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  2. Minimizing Redundant Messages and Improving Search Efficiency under Highly Dynamic Mobile P2P Network

    Directory of Open Access Journals (Sweden)

    Ajay Arunachalam

    2016-02-01

    Full Text Available Resource searching is one of the key functional tasks in large complex networks. With the P2P architecture, millions of peers connect together instantly, building a communication pattern. Searching in mobile networks faces additional limitations and challenges. The flooding technique can cope with churn and searches aggressively by visiting almost all nodes, but it exponentially increases the network traffic and thus does not scale well. Furthermore, duplicated query messages consume extra battery power and network bandwidth. Blind flooding also suffers from the long-delay problem in P2P networks. In this paper, we propose optimal density-based flooding resource discovery schemes. Our first model takes into account local graph topology information to supplement the resource discovery process, while in our extended version we also consider the neighboring node topology information along with the local node information to use mobile and network resources more effectively. Our proposed method reduces collisions while minimizing the effect of redundant messages and failures. Overall, the methods reduce network overhead, battery power consumption, query delay, routing load, MAC load, and bandwidth usage while also achieving a good success rate in comparison with other techniques. We also perform a comprehensive analysis of the resource discovery schemes to verify the impact of varying node speed and different network conditions.
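
    One way to read density-based flooding is that each node forwards a query with a probability that shrinks as its neighborhood gets denser, so dense regions do not generate piles of duplicates. The sketch below implements that reading on a toy adjacency map; the forwarding rule, the target-coverage constant, and the TTL are assumptions, not the schemes proposed in the paper.

```python
# Probabilistic flooding where the forwarding probability falls with local density
# (an illustrative reading of density-based flooding, not the paper's exact scheme).
import random

def forward_probability(num_neighbors: int, target_coverage: float = 3.0) -> float:
    """Aim for roughly target_coverage forwarded copies per node, regardless of density."""
    if num_neighbors == 0:
        return 0.0
    return min(1.0, target_coverage / num_neighbors)

def flood(graph, src, ttl=5, seen=None):
    """graph is {node: [neighbors]}; returns the set of nodes the query reached."""
    seen = seen if seen is not None else {src}
    if ttl == 0:
        return seen
    p = forward_probability(len(graph[src]))
    for nb in graph[src]:
        if nb not in seen and random.random() < p:
            seen.add(nb)
            flood(graph, nb, ttl - 1, seen)
    return seen

graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(flood(graph, 0))
```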

  3. Performance Evaluation of an Object Management Policy Approach for P2P Networks

    Directory of Open Access Journals (Sweden)

    Dario Vieira

    2012-01-01

    Full Text Available The increasing popularity of network-based multimedia applications poses many challenges for content providers to supply efficient and scalable services. Peer-to-peer (P2P) systems have been shown to be a promising approach to provide large-scale video services over the Internet since, by nature, these systems show high scalability and robustness. In this paper, we propose and analyze an object management policy approach for video web caches in a P2P context, taking advantage of objects' metadata, for example, video popularity, and objects' encoding techniques, for example, scalable video coding (SVC). We carry out trace-driven simulations so as to evaluate the performance of our approach and compare it against traditional object management policy approaches. In addition, we study the impact of churn on our approach and on other object management policies that implement different caching strategies. A YouTube video collection containing the logs of over 1.6 million videos was used in our experimental studies. The experimental results show that our proposed approach can improve the performance of the cache substantially. Moreover, we have found that neither simply enlarging peers' storage capacity nor a zero-replication strategy is an effective action to improve the performance of an object management policy.
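
    A metadata-aware object management policy can be sketched as a cache that evicts the object with the lowest popularity-per-byte utility. This is a deliberately simplified stand-in: the paper's policy also exploits SVC layering and churn-awareness, which are omitted here, and the utility function and capacities below are assumptions.

```python
# Popularity-per-byte eviction as a toy stand-in for a metadata-aware cache policy.
class PopularityCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.store = {}                         # video_id -> (size_bytes, popularity)

    def used(self) -> int:
        return sum(size for size, _ in self.store.values())

    def admit(self, video_id: str, size: int, popularity: float) -> None:
        # Evict the lowest popularity-per-byte objects until the new one fits
        # (naive: it does not compare the victim's utility against the newcomer's).
        while self.used() + size > self.capacity and self.store:
            victim = min(self.store, key=lambda v: self.store[v][1] / self.store[v][0])
            del self.store[victim]
        if size <= self.capacity:
            self.store[video_id] = (size, popularity)

cache = PopularityCache(capacity_bytes=100)
cache.admit("a", size=60, popularity=500.0)
cache.admit("b", size=30, popularity=5.0)
cache.admit("c", size=40, popularity=300.0)     # evicts "b", the lowest-utility object
print(sorted(cache.store))                      # ['a', 'c']
```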

  4. Cooperation enhanced by indirect reciprocity in spatial prisoner's dilemma games for social P2P systems

    Science.gov (United States)

    Tian, Lin-Lin; Li, Ming-Chu; Wang, Zhen

    2016-11-01

    With the growing interest in social Peer-to-Peer (P2P) applications, relationships of individuals are further exploited to improve the performances of reputation systems. It is an on-going challenge to investigate how spatial reciprocity aids indirect reciprocity in sustaining cooperation in practical P2P environments. This paper describes the construction of an extended prisoner's dilemma game on square lattice networks with three strategies, i.e., defection, unconditional cooperation, and reciprocal cooperation. Reciprocators discriminate partners according to their reputations based on image scoring, where mistakes in judgment of reputations may occur. The independent structures of interaction and learning neighborhood are discussed, with respect to the situation in which learning environments differ from interaction networks. The simulation results have indicated that the incentive mechanism enhances cooperation better in structured peers than among a well-mixed population. Given the realistic condition of inaccurate reputation scores, defection is still successfully held down when the players interact and learn within the unified neighborhoods. Extensive simulations have further confirmed the positive impact of spatial structure on cooperation with different sizes of lattice neighborhoods. And similar conclusions can also be drawn on regular random networks and scale-free networks. Moreover, for the separated structures of the neighborhoods, the interaction network has a critical effect on the evolution dynamics of cooperation and learning environments only have weaker impacts on the process. Our findings further provide some insights concerning the evolution of collective behaviors in social systems.
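
    One round of the image-scoring interaction described above can be sketched as follows: a reciprocator cooperates only with partners whose image score is non-negative, and misreads that score with a small probability. The payoff values, the judgment-error rate, and the one-step score update below are illustrative assumptions, not the parameters used in the paper's lattice simulations.

```python
# One round of a prisoner's dilemma with image scoring and judgment errors (illustrative).
import random

T, R, P, S = 5, 3, 1, 0             # standard PD payoffs (temptation > reward > punishment > sucker)
JUDGE_ERROR = 0.1                   # probability of misreading a partner's reputation

def act(strategy, partner_score):
    if strategy == "D":             # unconditional defector
        return "defect"
    if strategy == "C":             # unconditional cooperator
        return "cooperate"
    # "R": reciprocator, cooperates only with reputable partners, sometimes misjudging
    seen = -partner_score if random.random() < JUDGE_ERROR else partner_score
    return "cooperate" if seen >= 0 else "defect"

def play(s1, s2, score1, score2):
    """Returns the payoffs and the updated image scores after one interaction."""
    a1, a2 = act(s1, score2), act(s2, score1)
    payoff = {("cooperate", "cooperate"): (R, R), ("cooperate", "defect"): (S, T),
              ("defect", "cooperate"): (T, S), ("defect", "defect"): (P, P)}[(a1, a2)]
    new1 = score1 + (1 if a1 == "cooperate" else -1)
    new2 = score2 + (1 if a2 == "cooperate" else -1)
    return payoff, new1, new2

print(play("R", "D", 0, -2))        # the reciprocator usually refuses to cooperate here
```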

  5. A multi-state reliability evaluation model for P2P networks

    International Nuclear Information System (INIS)

    Fan Hehong; Sun Xiaohan

    2010-01-01

    The appearance of new service types and the convergence tendency of communication networks have endowed networks with more and more P2P (peer-to-peer) properties. These networks can be more robust and tolerant of a series of non-perfect operational states due to the non-deterministic server-client distributions. Thus, a reliability model taking into account the multi-state and non-deterministic server-client distribution properties is needed for appropriate evaluation of the networks. In this paper, two new performance measures are defined to quantify the overall and local states of the networks. A new time-evolving state-transition Monte Carlo (TEST-MC) simulation model is presented for the reliability analysis of P2P networks in multiple states. The results show that the model is not only valid for estimating the traditional binary-state network reliability parameters, but also adequate for acquiring the parameters in a series of non-perfect operational states, with good efficiency, especially for highly reliable networks. Furthermore, the model is versatile for reliability and maintainability analyses in that both the links and the nodes can be failure-prone with arbitrary life distributions, and various maintainability schemes can be applied.

  6. A Measurement Study of the Structured Overlay Network in P2P File-Sharing Systems

    Directory of Open Access Journals (Sweden)

    Mo Zhou

    2007-01-01

    Full Text Available The architecture of P2P file-sharing applications has been developing to meet the needs of large-scale demands. The structured overlay network, also known as a DHT, has been used in these applications to improve the scalability and robustness of the system, and to make it free from single points of failure. We believe that a measurement study of the overlay network used in real file-sharing P2P systems can provide guidance for the design of such systems and improve their performance. In this paper, we perform the measurement in two different aspects. First, a modified client is designed to provide a view of the overlay network from a single user's perspective. Second, instances of a crawler program deployed on many nodes collect as much user information from the overlay network as possible. We also find a vulnerability in the overlay network; combined with the characteristics of the DNS service, a more serious DDoS attack can be launched.

  7. A P2P Botnet detection scheme based on decision tree and adaptive multilayer neural networks.

    Science.gov (United States)

    Alauthaman, Mohammad; Aslam, Nauman; Zhang, Li; Alasem, Rafe; Hossain, M A

    2018-01-01

    In recent years, botnets have been adopted as a popular method to carry and spread many malicious codes on the Internet. These malicious codes pave the way to execute many fraudulent activities including spam mail, distributed denial-of-service attacks and click fraud. While many botnets are set up using a centralized communication architecture, peer-to-peer (P2P) botnets can adopt a decentralized architecture using an overlay network for exchanging command and control data, making their detection even more difficult. This work presents a method of P2P bot detection based on an adaptive multilayer feed-forward neural network in cooperation with decision trees. A classification and regression tree is applied as a feature selection technique to select relevant features. With these features, a multilayer feed-forward neural network training model is created using a resilient back-propagation learning algorithm. A comparison of feature set selection based on the decision tree, principal component analysis and the ReliefF algorithm indicated that the neural network model with feature selection based on the decision tree has better identification accuracy along with lower rates of false positives. The usefulness of the proposed approach is demonstrated by conducting experiments on real network traffic datasets. In these experiments, an average detection rate of 99.08% with a false positive rate of 0.75% was observed.

  8. Incentive Mechanism for P2P Content Sharing over Heterogenous Access Networks

    Science.gov (United States)

    Sato, Kenichiro; Hashimoto, Ryo; Yoshino, Makoto; Shinkuma, Ryoichi; Takahashi, Tatsuro

    In peer-to-peer (P2P) content sharing, users can share their content by contributing their own resources to one another. However, since there is no incentive for contributing content or resources to others, users may attempt to obtain content without any contribution. To motivate users to contribute their resources to the service, incentive-rewarding mechanisms have been proposed. On the other hand, emerging wireless technologies, such as IEEE 802.11 wireless local area networks, beyond third generation (B3G) cellular networks and mobile WiMAX, provide high-speed Internet access for wireless users. Using this high-speed wireless access, wireless users can use P2P services and share their content with other wireless users and with fixed users. However, this diversification of access networks makes it difficult to appropriately assign rewards to each user according to their contributions, because the cost necessary for contribution differs among access networks. In this paper, we propose a novel incentive-rewarding mechanism called EMOTIVER that can assign rewards to users appropriately. The proposed mechanism uses an external evaluator and interactive learning agents. We also investigate a way of appropriately controlling rewards based on the system service's quality and managing policy.

  9. Determinants of default in p2p lending: the Mexican case

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Canfield

    2018-03-01

    Full Text Available P2P lending is a new method of informal finance that uses the internet to directly connect borrowers with on-line communities. With a unique dataset provided by Prestadero, the largest on-line lending platform with national presence in Mexico, this research explores the effect of credit scores and other variables related to loan and borrower traits in determining default behavior in P2P lending. Moreover, using a logistic regression model, it tested whether investors might benefit from screening loan applicants by gender after controlling for loan quality. The results showed that information provided by the platform is relevant for analyzing credit risk, yet not conclusive. In congruence with the literature, on a scale going from the safest to the riskiest, loan quality is positively associated with default behavior. Other determinants that increase the odds of default are the payment-to-income ratio and refinancing on the same platform. On the contrary, loan purpose and being a female applicant reduce such odds. No categorical evidence of differential default behavior by gender was found under equal credit conditions. However, it was found that, controlling for loan quality, women have longer loan survival times than men. This is one of the first studies about debt crowdfunding in Latin America and Mexico. Implications for lenders, researchers and policy-makers are also discussed.
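
    The kind of logistic regression described above can be sketched with scikit-learn on synthetic data. The feature names below (loan grade, payment-to-income ratio, refinancing flag, gender flag) mirror the variables mentioned in the abstract, but the data, coefficients, and schema are fabricated for illustration and have nothing to do with Prestadero's actual dataset; numpy and scikit-learn are assumed to be available.

```python
# Logistic regression on synthetic loan data (illustrative; not Prestadero's data or model).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
loan_grade = rng.integers(1, 8, n)            # 1 = safest ... 7 = riskiest
pti = rng.uniform(0.05, 0.6, n)               # payment-to-income ratio
refinanced = rng.integers(0, 2, n)
female = rng.integers(0, 2, n)

# synthetic ground truth: riskier grade and higher PTI raise the default probability
logit = -3 + 0.4 * loan_grade + 2.5 * pti + 0.6 * refinanced - 0.3 * female
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([loan_grade, pti, refinanced, female])
model = LogisticRegression().fit(X, default)
print(dict(zip(["grade", "pti", "refinanced", "female"], model.coef_[0].round(2))))
```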

  10. A P2P Framework for Developing Bioinformatics Applications in Dynamic Cloud Environments

    Directory of Open Access Journals (Sweden)

    Chun-Hung Richard Lin

    2013-01-01

    Full Text Available Bioinformatics has advanced from in-house computing infrastructure to cloud computing to tackle the vast quantity of biological data. This advance enables a large number of collaborating researchers to share their work around the world. In view of that, retrieving biological data over the internet becomes more and more difficult because of its explosive growth and frequent changes. Various efforts have been made to address the problems of data discovery and delivery in the cloud framework, but most of them suffer from the hindrance of relying on a MapReduce master server to track all available data. In this paper, we propose an alternative approach, called PRKad, which exploits a Peer-to-Peer (P2P) model to achieve efficient data discovery and delivery. PRKad is a Kademlia-based implementation with Round-Trip Time (RTT) as the associated key, and it locates data according to a Distributed Hash Table (DHT) and the XOR metric. The simulation results show that PRKad achieves low link latency when retrieving data. As an interdisciplinary application of P2P computing for bioinformatics, PRKad also provides good scalability for serving a greater number of users in dynamic cloud environments.
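
    At the core of any Kademlia-based design is the XOR distance between 160-bit keys. The sketch below shows that metric and a closest-node lookup; deriving the key from an RTT profile is only mimicked here by hashing a coordinate-like tuple, since the abstract does not spell out PRKad's exact key construction.

```python
# Kademlia-style XOR distance and closest-node selection (PRKad's exact key
# construction from RTT is not specified here; the hash below is a stand-in).
import hashlib

def key_from_rtt_profile(rtt_ms: tuple) -> int:
    """Map an RTT-derived profile to a 160-bit identifier (illustrative stand-in)."""
    return int(hashlib.sha1(repr(rtt_ms).encode()).hexdigest(), 16)

def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: bitwise XOR interpreted as an unsigned integer."""
    return a ^ b

def closest_nodes(target: int, node_keys: dict, k: int = 2):
    """Return the k node ids whose keys are XOR-closest to the target key."""
    return sorted(node_keys, key=lambda n: xor_distance(node_keys[n], target))[:k]

nodes = {f"peer{i}": key_from_rtt_profile((10 * i, 20 * i, 5)) for i in range(1, 6)}
target = key_from_rtt_profile((30, 60, 5))
print(closest_nodes(target, nodes))
```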

  11. ATLAAS-P2P: a two layer network solution for easing the resource discovery process in unstructured networks

    OpenAIRE

    Baraglia, Ranieri; Dazzi, Patrizio; Mordacchini, Matteo; Ricci, Laura

    2013-01-01

    ATLAAS-P2P is a two-layered P2P architecture for developing systems providing resource aggregation and approximated discovery in P2P networks. Such systems allow users to search the desired resources by specifying their requirements in a flexible and easy way. From the point of view of resource providers, this system makes available an effective solution supporting providers in being reached by resource requests.

  12. Multilevel Bloom Filters for P2P Flows Identification Based on Cluster Analysis in Wireless Mesh Network

    Directory of Open Access Journals (Sweden)

    Xia-an Bi

    2015-01-01

    Full Text Available With the development of wireless mesh networks and distributed computing, many new P2P services have been deployed, enriching Internet content and applications. The rapid growth of P2P flows puts great pressure on regular network operation, so effective flow identification and management of P2P applications become increasingly urgent. In this paper, we build a multilevel Bloom filter data structure to identify P2P flows, based on research into the locality characteristics of P2P flows. Each level of the structure stores a different number of P2P flow rules. According to the characteristic values of the P2P flows, we adjust the parameters of the Bloom filter data structure. A search traverses from the first level to the last level. Compared with traditional algorithms, our method overcomes the drawbacks of previous schemes. The simulation results demonstrate that our algorithm effectively enhances the performance of P2P flow identification. We then deploy our flow identification algorithm in the traffic monitoring sensors that belong to the network traffic monitoring system at the export link of the campus network. In this real environment, the experimental results demonstrate that our algorithm identifies P2P flows quickly and accurately; therefore, it is suitable for actual deployment.
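
    The building block of the scheme is the Bloom filter itself, with several filters arranged in levels that a lookup traverses in order. The sketch below shows a minimal Bloom filter and a two-level arrangement; the bit-array size, hash count, and the way rules are split across levels are illustrative assumptions rather than the parameters tuned in the paper.

```python
# A minimal Bloom filter plus a two-level lookup (illustrative parameters).
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0                              # big integer used as a bit array

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item: str) -> bool:
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

# Two levels holding different numbers of flow rules; a query falls through the levels.
levels = [BloomFilter(), BloomFilter()]
levels[0].add("10.0.0.5:6881->10.0.0.9:6881")      # frequently matched P2P flow rules
levels[1].add("10.0.0.7:4662->10.0.0.3:4662")      # less frequently matched rules

def is_p2p_flow(flow_id: str) -> bool:
    return any(flow_id in level for level in levels)

print(is_p2p_flow("10.0.0.5:6881->10.0.0.9:6881"), is_p2p_flow("10.0.0.1:80->10.0.0.2:443"))
```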

  13. Research of trust model base on P2P and grid system

    International Nuclear Information System (INIS)

    Jiang Zhuoming; Wu Huan; Xu Rongsheng

    2009-01-01

    Oriented to the architectural and service characteristics of P2P and Grid systems, a trust management model (PG-TM) based on cluster partitioning is presented. In this model, a protocol is described in which trustworthiness is computed before a service interaction and recommendation values are fed back after the interaction. Several methods of computing trustworthiness are compared, and the geometric mean is adopted to reflect the non-linear trust principle. In addition, the trustworthiness is adjusted by the contribution rate, network stability and history accumulation. Finally, the factors involved in maintaining the management server and clusters are discussed. The PG-TM model can ensure security and availability in the computation and storage of high energy physics experiments. (authors)

  14. Characterizing the Global Impact of P2P Overlays on the AS-Level Underlay

    Science.gov (United States)

    Rasti, Amir Hassan; Rejaie, Reza; Willinger, Walter

    This paper examines the problem of characterizing and assessing the global impact of the load imposed by a Peer-to-Peer (P2P) overlay on the AS-level underlay. In particular, we capture Gnutella snapshots for four consecutive years, obtain the corresponding AS-level topology snapshots of the Internet and infer the AS-paths associated with each overlay connection. Assuming a simple model of overlay traffic, we analyze the observed load imposed by these Gnutella snapshots on the AS-level underlay using metrics that characterize the load seen on individual AS-paths and by the transit ASes, illustrate the churn among the top transit ASes during this 4-year period, and describe the propagation of traffic within the AS-level hierarchy.

  15. A Hybrid P2P Overlay Network for Non-strictly Hierarchically Categorized Content

    Science.gov (United States)

    Wan, Yi; Asaka, Takuya; Takahashi, Tatsuro

    In P2P content distribution systems, there are many cases in which the content can be classified into hierarchically organized categories. In this paper, we propose a hybrid overlay network design suitable for such content called Pastry/NSHCC (Pastry for Non-Strictly Hierarchically Categorized Content). The semantic information of classification hierarchies of the content can be utilized regardless of whether they are in a strict tree structure or not. By doing so, the search scope can be restrained to any granularity, and the number of query messages also decreases while maintaining keyword searching availability. Through simulation, we showed that the proposed method provides better performance and lower overhead than unstructured overlays exploiting the same semantic information.

  16. Group Clustering Mechanism for P2P Large Scale Data Sharing Collaboration

    Institute of Scientific and Technical Information of China (English)

    DENG Qianni; LU Xinda; CHEN Li

    2005-01-01

    Research shows that a P2P scientific collaboration network will exhibit small-world topology, as do a large number of social networks for which the same pattern has been documented. In this paper we propose a topology building protocol to benefit from the small-world feature. We find that the idea of Freenet resembles the dynamic pattern of social interactions in scientific data sharing, and the small-world characteristic of Freenet is propitious for improving file-locating performance in scientific data sharing. However, the LRU (Least Recently Used) datastore cache replacement scheme of Freenet is not suitable for scientific data-sharing networks. Based on the group locality of scientific collaboration, we propose an enhanced group clustering cache replacement scheme. Simulation shows that this scheme improves the request hit ratio dramatically while keeping the average number of hops per successful request small and comparable to LRU.

  17. SNMS: an intelligent transportation system network architecture based on WSN and P2P network

    Institute of Scientific and Technical Information of China (English)

    LI Li; LIU Yuan-an; TANG Bi-hua

    2007-01-01

    With the development of city road networks, the question of how to obtain information about the roads is becoming more and more important. In this article, sensor network with mobile station (SNMS), a novel two-tiered intelligent transportation system (ITS) network architecture based on wireless sensor network (WSN) and peer-to-peer (P2P) network technologies, is proposed to provide significant traffic information about the road and thereby assist travelers in making optimal decisions while driving. A detailed explanation of the strategy of each tier, as well as the design of the two main components in the network, the sensor unit (SU) and the mobile station (MS), is presented. Finally, a representative scenario is described to illustrate the operation of the system.

  18. Energy Dependence of Nuclear Transparency in C (p,2p) Scattering

    Science.gov (United States)

    Leksanov, A.; Alster, J.; Asryan, G.; Averichev, Y.; Barton, D.; Baturin, V.; Bukhtoyarova, N.; Carroll, A.; Heppelmann, S.; Kawabata, T.; Makdisi, Y.; Malki, A.; Minina, E.; Navon, I.; Nicholson, H.; Ogawa, A.; Panebratsev, Yu.; Piasetzky, E.; Schetkovsky, A.; Shimanskiy, S.; Tang, A.; Watson, J. W.; Yoshida, H.; Zhalov, D.

    2001-11-01

    The transparency of carbon for (p,2p) quasielastic events was measured at beam momenta ranging from 5.9 to 14.5 GeV/c at 90° c.m. The four-momentum transfer squared (Q2) ranged from 4.7 to 12.7 (GeV/c)2. We present the observed beam momentum dependence of the ratio of the carbon to hydrogen cross sections. We also apply a model for the nuclear momentum distribution of carbon to obtain the nuclear transparency. We find a sharp rise in transparency as the beam momentum is increased to 9 GeV/c and a reduction to approximately the Glauber level at higher energies.
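
    In both this record and the A(p,2p) summaries above, the nuclear transparency is essentially a normalized cross-section ratio. A schematic definition consistent with those descriptions is given below; this is an illustrative form only, since the experiments' actual estimator also folds in the nuclear momentum distribution of the struck proton.

```latex
% Schematic nuclear transparency: quasielastic yield per proton on the nucleus
% divided by the free pp elastic cross section at the same effective kinematics.
T(p_{\text{beam}}) \;=\;
  \frac{\left( d\sigma/dt \right)_{A(p,2p)}}
       {Z \,\left( d\sigma/dt \right)_{pp}}
```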

  19. Characterizing Economic and Social Properties of Trust and Reputation Systems in P2P Environment

    Institute of Scientific and Technical Information of China (English)

    Yu-Feng Wang; Yoshiaki Hori; Kouichi Sakurai

    2008-01-01

    Considering the fact that P2P (Peer-to-Peer) systems are self-organized and autonomous, social-control mechanisms (like trust and reputation) are essential to evaluate the trustworthiness of participating peers and to combat selfish, dishonest and malicious peer behaviors. So, naturally, we advocate that P2P systems, which gradually act as an important information infrastructure, should be a multi-disciplinary research topic and reflect certain features of our society. From an economic and social perspective, this paper designs an incentive-compatible reputation feedback scheme based on a well-known economic model, and characterizes the social features of the trust network in terms of efficiency and cost. Specifically, our framework has two distinctive purposes: first, from a high-level perspective, we argue that a trust system is a special kind of social network, and an accurate characterization of the structural properties of the network can be of fundamental importance for understanding the dynamics of the system. Thus, inspired by the concept of the weighted small world, this paper proposes new measurements to characterize the social properties of the trust system, that is, high global and local efficiency, and low cost. Then, from a relatively low-level perspective, we argue that reputation feedback is a special kind of information, and it is not free. So, based on an economic model, a VCG (Vickrey-Clarke-Groves)-like reputation remuneration mechanism is proposed to stimulate rational peers not only to provide reputation feedback, but to offer it truthfully. Furthermore, considering that trust and reputation are subjective, we classify trust into functional trust and referral trust, and extend referral trust to include two factors, similarity and truthfulness, which can efficiently reduce the trust inference error. The preliminary simulation results show the benefits of our proposal and the emergence of certain social properties in the trust network.

  20. Folksonomical P2P File Sharing Networks Using Vectorized KANSEI Information as Search Tags

    Science.gov (United States)

    Ohnishi, Kei; Yoshida, Kaori; Oie, Yuji

    We present the concept of folksonomical peer-to-peer (P2P) file sharing networks that allow participants (peers) to freely assign structured search tags to files. These networks are similar to folksonomies in the present Web from the point of view that users assign search tags to information distributed over a network. As a concrete example, we consider an unstructured P2P network using vectorized Kansei (human sensitivity) information as structured search tags for file search. Vectorized Kansei information as search tags indicates what participants feel about their files and is assigned by the participants to each of their files. A search query has the same form of search tags and indicates what participants want to feel about files that they will eventually obtain. A method that enables file search using vectorized Kansei information is the Kansei query-forwarding method, which probabilistically propagates a search query to peers that are likely to hold more files having search tags similar to the query. The similarity between the search query and the search tags is measured in terms of their dot product. The simulation experiments examine whether the Kansei query-forwarding method can provide equal search performance for all peers in a network in which only the Kansei information and the tendency with respect to file collection differ among the peers. The simulation results show that the Kansei query-forwarding method and a random-walk-based query-forwarding method, used for comparison, work effectively in different situations and are complementary. Furthermore, the Kansei query-forwarding method is shown, through simulations, to be superior or equal to the random-walk-based one in terms of search speed.
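
    The forwarding rule reduces to a dot product between the query vector and the tag vectors held by each neighbor, with the query handed on probabilistically in proportion to that similarity. The sketch below is a minimal rendering of that idea; the weighting, the non-negativity clipping, and the random-walk fallback are assumptions rather than the exact rule evaluated in the paper.

```python
# Probabilistic query forwarding weighted by Kansei-vector dot products (illustrative).
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def choose_next_peer(query_vec, neighbor_tags):
    """Pick a neighbor with probability proportional to how similar its files' tag
    vectors are to the query vector."""
    weights = []
    for peer, tag_vectors in neighbor_tags.items():
        similarity = sum(max(dot(query_vec, t), 0.0) for t in tag_vectors)
        weights.append((peer, similarity))
    total = sum(w for _, w in weights)
    if total == 0:                              # no useful similarity: plain random walk
        return random.choice(list(neighbor_tags))
    r = random.uniform(0, total)
    acc = 0.0
    for peer, w in weights:
        acc += w
        if r <= acc:
            return peer
    return weights[-1][0]

neighbors = {
    "peerA": [(0.9, 0.1, 0.0)],                         # e.g. (calm, bright, energetic)
    "peerB": [(0.1, 0.8, 0.7), (0.0, 0.9, 0.6)],
}
print(choose_next_peer((0.0, 1.0, 0.5), neighbors))     # usually forwards to peerB
```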

  1. Coalition-based multimedia peer matching strategies for P2P networks

    Science.gov (United States)

    Park, Hyunggon; van der Schaar, Mihaela

    2008-01-01

    In this paper, we consider the problem of matching users for multimedia transmission in peer-to-peer (P2P) networks and identify strategies for fair resource division among the matched multimedia peers. We propose a framework for coalition formation, which enables users to form a group of matched peers where they can interact cooperatively and negotiate resources based on their satisfaction with the coalition, determined by explicitly considering the peer's multimedia attributes. In addition, our proposed approach goes a step further by introducing the concept of marginal contribution, which is the value improvement of the coalition induced by an incoming peer. We show that the best way for a peer to select a coalition is to choose the coalition that provides the largest division of marginal contribution given a deployed value-division scheme. Moreover, we model the utility function by explicitly considering each peer's attributes as well as the cost for uploading content. To quantify the benefit that users derive from a coalition, we define the value of a coalition based on the total utility that all peers can achieve jointly in the coalition. Based on this definition of the coalition value, we use an axiomatic bargaining solution in order to fairly negotiate the value division of the upload bandwidth given each peer's attributes.
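
    The marginal contribution idea admits a compact expression. In the notation below, v(S) is the value of coalition S, i the incoming peer, and x_i the share of the marginal contribution that the deployed value-division scheme assigns to i; the symbols are introduced here for exposition, and the formula is a schematic rendering of the rule described above rather than a quotation from the paper.

```latex
% Marginal contribution of an incoming peer i to coalition S, and the coalition
% choice rule: join the coalition offering the largest assigned share.
MC_i(S) \;=\; v\bigl(S \cup \{i\}\bigr) \;-\; v(S),
\qquad
S^{*} \;=\; \arg\max_{S}\; x_i\bigl(MC_i(S)\bigr)
```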

  2. Las simulaciones, una alternativa para el estudio de los protocolos P2P

    Directory of Open Access Journals (Sweden)

    Armando de Jesús Ruiz Calderón

    2015-12-01

    Full Text Available The architecture and functionality of P2P networks make them attractive for use in local distributed environments and in widely distributed applications. The analysis of their routing protocols under different attacks, such as denial of existence and denial of service, as well as their statistical analysis, makes simulation very important and an appropriate alternative for their study. Several protocols of this category exist, such as Pastry or Chord, which are of great importance given their wide use in different applications for the successful sending and retrieval of information both in the cloud and in distributed applications, which is why their analysis matters. This work focuses on Pastry, since it is used in the Azure version of Microsoft Windows.

  3. Measurement of color transparency by C(p,2p) reactions at large momentum transfer

    International Nuclear Information System (INIS)

    Carroll, A.S.

    1997-12-01

    The subject of color transparency, the enhancement of the ability of hadrons to penetrate nuclear matter by kinematic selection, is both interesting and controversial. The description of the collision of hadrons with nucleons inside nuclei, and the connection with initial and final state interactions, involve fundamental questions of quantum mechanics, and nuclear and particle physics. Interest in color transparency was greatly increased by AGS Experiment 834, which observed dramatic changes with incident momentum for a variety of nuclei. A new experiment, E850, has studied the (p,2p) quasi-elastic reaction near 90° c.m. for momenta between 5.9 and 9 GeV/c. The quasi-elastic reaction was compared to the elastic reaction on free protons to determine the transparency. With limited statistics, but with better kinematic definition in a new detector, the authors have confirmed the rise in the carbon transparency ratio seen in Expt. 834. The Tr(D/H) for deuterium is consistent with no energy dependence. Unlike the free dσ/dt for hydrogen, the dσ/dt from protons in a nucleus is consistent with exact s⁻¹⁰ scaling. This suggests two components to the pp scattering amplitude: one small and perturbative, the other spatially large and varying, but filtered away by the nuclear matter in the carbon nucleus. The plan is to complete the repairs of the superconducting solenoid early this fall, reassemble the detector, and collect data starting next spring.

  4. A P2P Query Algorithm for Opportunistic Networks Utilizing betweenness Centrality Forwarding

    Directory of Open Access Journals (Sweden)

    Jianwei Niu

    2013-01-01

    Full Text Available With the proliferation of high-end mobile devices that feature wireless interfaces, many promising applications are enabled in opportunistic networks. In contrast to traditional networks, opportunistic networks utilize the mobility of nodes to relay messages in a store-carry-forward paradigm. Thus, the relay process in opportunistic networks faces several practical challenges in terms of delay and delivery rate. In this paper, we propose a novel P2P query algorithm, namely Betweenness Centrality Forwarding (PQBCF), for opportunistic networking. PQBCF adopts a forwarding metric called Betweenness Centrality (BC), borrowed from social network analysis, to quantify the active degree of nodes in the network. In PQBCF, nodes with a higher BC are preferred as relays, leading to a higher query success rate and lower query delay. A comparison with state-of-the-art algorithms reveals that PQBCF provides better performance on both the query success ratio and query delay, and approaches the performance of Epidemic Routing (ER) with much lower resource consumption.
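
    The forwarding rule itself is simple: hand the query to an encountered node only if its betweenness centrality exceeds the current carrier's. The sketch below uses networkx (assumed available) and a global contact graph purely for illustration; in a deployed opportunistic network the BC values would have to be estimated from encounter histories rather than computed centrally.

```python
# Betweenness-centrality-based forwarding decision (illustrative, centralized BC).
import networkx as nx

contact_graph = nx.Graph([("a", "b"), ("b", "c"), ("b", "d"), ("d", "e"), ("c", "e")])
bc = nx.betweenness_centrality(contact_graph)

def should_forward(current_carrier: str, encountered: str) -> bool:
    """Hand the query over only if the encountered node is more 'between' than we are."""
    return bc[encountered] > bc[current_carrier]

print(bc)
print(should_forward("a", "b"))     # True: node b lies on many shortest paths
```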

  5. Enhancing Scalability in On-Demand Video Streaming Services for P2P Systems

    Directory of Open Access Journals (Sweden)

    R. Arockia Xavier Annie

    2012-01-01

    Full Text Available Recently, many video applications like video telephony, video conferencing, Video-on-Demand (VoD), and so forth have produced heterogeneous consumers on the Internet. In such a scenario, media servers play a vital role when a large number of concurrent requests are sent by heterogeneous users. Moreover, the server and the distributed client systems participating in the Internet communication have to provide suitable resources to heterogeneous users to meet their requirements satisfactorily. The challenges in providing suitable resources are to analyze the user service pattern, bandwidth and buffer availability, the nature of the applications used, and the Quality of Service (QoS) requirements of the heterogeneous users. Therefore, it is necessary to provide suitable techniques to handle these challenges. In this paper, we propose a framework for peer-to-peer (P2P) based VoD service in order to provide effective video streaming. It consists of four functional modules, namely, a Quality Preserving Multivariate Video Model (QPMVM) for efficient server management, a tracker for efficient peer management, heuristic-based content distribution, and a lightweight incentivized sharing mechanism. The first two of these modules are confined to a single entity of the framework while the other two are distributed across entities. Experimental results show that the proposed framework avoids overloading the server, increases the number of clients served, and does not compromise on QoS, irrespective of the fact that the expected framework is slightly reduced.

  6. Review of Brookhaven nuclear transparency measurements in (p, 2p) reactions at large Q2

    International Nuclear Information System (INIS)

    Carroll, Alan S.

    2003-01-01

    In this contribution we summarize the results of two experiments to measure the transparency of nuclei in the (p, 2p) quasi-elastic scattering process near 90 deg in the pp center-of-mass. The incident momenta went from 6 to 14.4 GeV/c, corresponding to 4.8 < Q² < 12.7 (GeV/c)². First, we describe the measurements with the newer experiment, E850, which has more complete kinematic definition of quasi-elastic events. E850 covers a larger range of incident momenta, and thus provides more information regarding the nature of the unexpected fall in the transparency above 9 GeV/c. Second, we review the techniques used in an earlier experiment, E834, and show that the two experiments are consistent for the carbon data. We use the transparencies measured in the five nuclei from Li to Pb to set limits on the rate of expansion for protons involved in quasi-elastic scattering at large momentum transfer. (author)

  7. Structured P2P overlay of mobile brokers for realizing publish/subscribe communication in VANET.

    Science.gov (United States)

    Pandey, Tulika; Garg, Deepak; Gore, Manoj Madhava

    2014-01-01

    The publish/subscribe communication paradigm provides asynchrony and decoupling, making it an elegant alternative for designing applications in distributed and dynamic environments such as vehicular ad hoc networks (VANETs). In this paradigm, the broker is the most important component, decoupling the other two components, namely, the publisher and the subscriber. Previous research efforts have either utilized the deployment of distributed brokers on stationary roadside info-stations or have assigned the role of broker to any moving vehicle on an ad hoc basis. In one approach, a lot of preinstalled infrastructure is needed whereas, in the other, the quality of service is not guaranteed due to unpredictable moving and stopping patterns of vehicles. In this paper, we present the architecture of distributed mobile brokers which are dynamically reconfigurable in the form of a structured P2P overlay and act as rendezvous points for matching publications and subscriptions. We have taken city buses in urban settings to act as mobile brokers, whereas other vehicles are considered to be in the role of publishers and subscribers. These mobile brokers also assist in locating a vehicle for successful and timely transfer of notifications. We have performed an extensive simulation study to compare our approach with previously proposed approaches. Simulation results establish the applicability of our approach.

  8. Structured P2P Overlay of Mobile Brokers for Realizing Publish/Subscribe Communication in VANET

    Directory of Open Access Journals (Sweden)

    Tulika Pandey

    2014-01-01

    Full Text Available The publish/subscribe communication paradigm provides asynchrony and decoupling, making it an elegant alternative for designing applications in distributed and dynamic environments such as vehicular ad hoc networks (VANETs). In this paradigm, the broker is the most important component, decoupling the other two components, namely, the publisher and the subscriber. Previous research efforts have either utilized the deployment of distributed brokers on stationary roadside info-stations or have assigned the role of broker to any moving vehicle on an ad hoc basis. In one approach, a lot of preinstalled infrastructure is needed whereas, in the other, the quality of service is not guaranteed due to unpredictable moving and stopping patterns of vehicles. In this paper, we present the architecture of distributed mobile brokers which are dynamically reconfigurable in the form of a structured P2P overlay and act as rendezvous points for matching publications and subscriptions. We have taken city buses in urban settings to act as mobile brokers, whereas other vehicles are considered to be in the role of publishers and subscribers. These mobile brokers also assist in locating a vehicle for successful and timely transfer of notifications. We have performed an extensive simulation study to compare our approach with previously proposed approaches. Simulation results establish the applicability of our approach.

  9. Relay discovery and selection for large-scale P2P streaming.

    Directory of Open Access Journals (Sweden)

    Chengwei Zhang

    Full Text Available In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly, with tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network locations, and methods based on purely indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification in the commonly used "best-out-of-K" selection methodology, using three publicly available RTT data sets. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates, and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using a Distributed Hash Table (DHT). When the DHT is constructed, the node keys carry the location information, and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn, and message costs.

  10. Towards the Engineering of Dependable P2P-Based Network Control — The Case of Timely Routing Control Messages

    Science.gov (United States)

    Tutschku, Kurt; Nakao, Akihiro

    This paper introduces a methodology for engineering best-effort P2P algorithms into dependable P2P-based network control mechanisms. The proposed method is built upon an iterative approach that consists of improving the original P2P algorithm with appropriate mechanisms and of thoroughly assessing performance with respect to dependability measures. The potential of the methodology is outlined by the example of timely routing control for vertical handover in B3G wireless networks. In detail, the well-known Pastry and CAN algorithms are enhanced to include locality. By showing how to combine algorithmic enhancements with performance indicators, this case study paves the way for future engineering of dependable network control mechanisms through P2P algorithms.

  11. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecast, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described

  12. Analysis and Implementation of Gossip-Based P2P Streaming with Distributed Incentive Mechanisms for Peer Cooperation

    Directory of Open Access Journals (Sweden)

    Sachin Agarwal

    2007-10-01

    Peer-to-peer (P2P) systems are becoming a popular means of streaming audio and video content, but they are prone to bandwidth starvation if selfish peers do not contribute bandwidth to other peers. We prove that an incentive mechanism can be created for a live streaming P2P protocol while preserving the asymptotic properties of randomized gossip-based streaming. In order to show the utility of our result, we adapt a distributed incentive scheme from the P2P file storage literature to the live streaming scenario. We provide simulation results that confirm the ability to achieve the constant download rate (in time, per peer) that is needed for streaming applications. The incentive scheme fairly differentiates peers' download rates according to the amount of useful bandwidth they contribute back to the P2P system, thus creating a powerful quality-of-service incentive for peers to contribute bandwidth to other peers. We propose a functional architecture and protocol format for a gossip-based streaming system with incentive mechanisms, and present evaluation data from a real implementation of a P2P streaming application.
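
    A rough sketch of the "contribute more, receive more" differentiation the abstract describes follows; the proportional-surplus rule and all field names are hypothetical, chosen only to make the incentive idea concrete.

```python
def allocate_download_rates(peers, stream_rate, total_capacity):
    """Toy allocation: every peer gets the base stream rate if capacity allows,
    and any surplus upload capacity is shared in proportion to each peer's
    contributed bandwidth, mirroring the 'contribute more, receive more' idea."""
    base = min(stream_rate, total_capacity / len(peers))
    surplus = max(0.0, total_capacity - base * len(peers))
    total_contribution = sum(p["upload_kbps"] for p in peers) or 1.0
    return {
        p["id"]: base + surplus * p["upload_kbps"] / total_contribution
        for p in peers
    }

peers = [{"id": "a", "upload_kbps": 800}, {"id": "b", "upload_kbps": 200},
         {"id": "c", "upload_kbps": 0}]
print(allocate_download_rates(peers, stream_rate=300, total_capacity=1000))
```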

  13. A Distributed Dynamic Super Peer Selection Method Based on Evolutionary Game for Heterogeneous P2P Streaming Systems

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2013-01-01

    Due to high efficiency and good scalability, hierarchical hybrid P2P architectures have drawn more and more attention in P2P streaming research and application fields recently. The super peer selection problem, which is the key problem in hybrid heterogeneous P2P architectures, is becoming highly challenging because super peers must be selected from a huge and dynamically changing network. A distributed super peer selection (SPS) algorithm for hybrid heterogeneous P2P streaming systems based on evolutionary game theory is proposed in this paper. The super peer selection procedure is first modeled within an evolutionary game framework, and its evolutionarily stable strategies (ESSs) are analyzed. Then a distributed Q-learning algorithm (ESS-SPS), based on the mixed strategies obtained from this analysis, is proposed to let each peer converge to the ESSs using its own payoff history. Compared with the traditional random super peer selection scheme, experimental results show that the proposed ESS-SPS algorithm achieves better performance in terms of social welfare and average upload rate of super peers, and keeps the upload capacity of the P2P streaming system increasing steadily as the number of peers grows.
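
    The ESS-SPS idea, each peer learning from its own payoff history whether to volunteer as a super peer, can be caricatured with a tiny Q-learning loop. The payoff function, learning parameters, and class name below are hypothetical stand-ins rather than the scheme from the paper.

```python
import random

class PeerLearner:
    """Toy Q-learning over two strategies ('super', 'ordinary'); payoffs are hypothetical."""
    def __init__(self, upload_capacity, alpha=0.1, epsilon=0.1):
        self.capacity = upload_capacity
        self.q = {"super": 0.0, "ordinary": 0.0}
        self.alpha, self.epsilon = alpha, epsilon

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def payoff(self, action, fraction_super):
        # Hypothetical payoff: super peers earn service credit proportional to
        # capacity but pay an upload cost; the benefit shrinks as supers become abundant.
        if action == "super":
            return self.capacity * (1.0 - fraction_super) - 0.5 * self.capacity
        return 0.3 * fraction_super * self.capacity

    def update(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

peers = [PeerLearner(upload_capacity=random.uniform(0.5, 2.0)) for _ in range(200)]
for _ in range(500):
    actions = [p.choose() for p in peers]
    frac = actions.count("super") / len(peers)
    for p, a in zip(peers, actions):
        p.update(a, p.payoff(a, frac))
print("fraction electing to be super peers:", frac)
```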

  14. Evolutionary Game Theory-Based Evaluation of P2P File-Sharing Systems in Heterogeneous Environments

    Directory of Open Access Journals (Sweden)

    Yusuke Matsuda

    2010-01-01

    Peer-to-Peer (P2P) file sharing is one of the key technologies for achieving attractive P2P multimedia social networking. In P2P file-sharing systems, file availability is improved by cooperative users who cache and share files. Note that file caching carries costs such as storage consumption and processing load. In addition, users have different degrees of cooperativeness in file caching, and they are in different surrounding environments arising from the topological structure of P2P networks. Using evolutionary game theory, this paper evaluates the performance of P2P file-sharing systems in such heterogeneous environments. Using micro-macro dynamics, we analyze the impact of the heterogeneity of user selfishness on file availability and system stability. Further, through simulation experiments with agent-based dynamics, we reveal how other aspects, for example, synchronization among nodes and topological structure, affect the system performance. Both analytical and simulation results show that the environmental heterogeneity contributes to the file availability and system stability.
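
    The micro-macro dynamics mentioned above are essentially replicator dynamics over the fraction of peers that cache. The toy model below, with hypothetical payoffs (a cacher always holds the file but pays a cost, while a free rider relies on its neighbours), only illustrates how an interior equilibrium caching fraction can emerge.

```python
def availability(x, n=5):
    """Chance that at least one of n contacted neighbours caches the file."""
    return 1.0 - (1.0 - x) ** n

def replicator_step(x, benefit=1.0, cost=0.3, dt=0.05):
    """One Euler step of replicator dynamics for the fraction x of caching peers."""
    payoff_cache = benefit - cost                  # a cacher always has the file, pays a cost
    payoff_free = benefit * availability(x)        # a free rider depends on cachers nearby
    mean = x * payoff_cache + (1 - x) * payoff_free
    return x + dt * x * (payoff_cache - mean)

x = 0.5
for _ in range(2000):
    x = replicator_step(x)
print("equilibrium caching fraction:", round(x, 3))
```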

  15. A 2-layer and P2P-based architecture on resource location in future grid environment

    International Nuclear Information System (INIS)

    Pei Erming; Sun Gongxin; Zhang Weiyi; Pang Yangguang; Gu Ming; Ma Nan

    2004-01-01

    Grid and Peer-to-Peer computing are two distributed resource sharing environments that have developed rapidly in recent years. The final objective of Grid, as well as that of P2P technology, is to pool large sets of resources effectively to be used in a more convenient, fast and transparent way. We can speculate that, though many differences exist, Grid and P2P environments will converge into a large-scale resource sharing environment that combines the characteristics of the two: large diversity, high heterogeneity (of resources), dynamism, and lack of central control. Resource discovery in this future Grid environment is a basic but important problem. In this article, we propose a two-layer, P2P-based architecture for resource discovery and design a detailed algorithm for resource request propagation in the computing environment discussed above. (authors)
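
    A minimal reading of the proposed two-layer design, group heads forming a structured upper layer that indexes resources while requests are first checked locally and then routed one hop by key, might look like the sketch below; all names and the hashing choice are assumptions for illustration.

```python
import hashlib

class GroupHead:
    """One upper-layer node: indexes the resources advertised to it."""
    def __init__(self, name):
        self.name = name
        self.index = {}                       # resource key -> list of providers

    def register(self, resource, provider):
        self.index.setdefault(resource, []).append(provider)

def responsible_head(resource, heads):
    """Upper layer: hash the resource key onto the set of group heads (DHT-style)."""
    return heads[int(hashlib.sha1(resource.encode()).hexdigest(), 16) % len(heads)]

def discover(resource, local_head, heads):
    """Lower layer first (the requester's own group head), then one upper-layer hop
    to the head responsible for this resource key."""
    return local_head.index.get(resource) or responsible_head(resource, heads).index.get(resource, [])

heads = [GroupHead("site-A"), GroupHead("site-B"), GroupHead("site-C")]
# A provider advertises a resource: the advertisement is indexed at the responsible head.
responsible_head("cpu/128-cores", heads).register("cpu/128-cores", "node-17.site-B")
print(discover("cpu/128-cores", heads[0], heads))
```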

  16. An Effective Interval-Valued Intuitionistic Fuzzy Entropy to Evaluate Entrepreneurship Orientation of Online P2P Lending Platforms

    Directory of Open Access Journals (Sweden)

    Xiaohong Chen

    2013-01-01

    This paper describes an approach to measure the entrepreneurship orientation of online P2P lending platforms. The limitations of existing methods for calculating the entropy of interval-valued intuitionistic fuzzy sets (IVIFSs) are significantly improved by a new entropy measure of IVIFSs considered in this paper, and the essential properties of the proposed entropy are introduced. Moreover, an evaluation procedure is proposed to measure the entrepreneurship orientation of online P2P lending platforms. Finally, a case is used to demonstrate the effectiveness of this method.
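
    As a rough illustration of how an IVIFS entropy can score a platform's evaluation data, the sketch below uses a deliberately simplified measure (high when the membership and non-membership intervals coincide, low when one dominates); it is not the entropy proposed in the paper, and the criterion values are invented.

```python
def ivifs_entropy(elements):
    """Toy entropy for an interval-valued intuitionistic fuzzy set.
    Each element is ((mu_lo, mu_hi), (nu_lo, nu_hi)) with mu_hi + nu_hi <= 1.
    This simplified measure (NOT the paper's formula) equals 1 when membership
    and non-membership intervals coincide (maximal uncertainty) and falls
    towards 0 as one of them dominates."""
    scores = []
    for (mu_lo, mu_hi), (nu_lo, nu_hi) in elements:
        scores.append(1.0 - (abs(mu_lo - nu_lo) + abs(mu_hi - nu_hi)) / 2.0)
    return sum(scores) / len(scores)

# Hypothetical platform rated on three entrepreneurship criteria.
platform = [((0.5, 0.6), (0.2, 0.3)),
            ((0.3, 0.4), (0.3, 0.5)),
            ((0.6, 0.8), (0.1, 0.2))]
print(round(ivifs_entropy(platform), 3))
```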

  17. Crossed molecular beam-tunable laser determination of velocity dependence of intramultiplet mixing: K(4p2P1/2)+He →K(4p2P3/2)+He

    International Nuclear Information System (INIS)

    Anderson, R.W.; Goddard, T.P.; Parravano, C.; Warner, J.

    1976-01-01

    The velocity dependence of intramultiplet mixing, K(4p 2P1/2) + He → K(4p 2P3/2) + He, has been measured over the relative velocity range v = 1.3-3.4 km/sec. The cross section appears to fit a linear function Q(v) = A(v - v0), where A = 6.3 x 10^-4 Å^2 and v0 = 7.9 x 10^4 cm/sec. The value of A is obtained by normalization to the literature thermal average cross section. The intramultiplet mixing theory of Nikitin is modified to yield Q(v) for the process. The modified theory correctly exhibits detailed balancing, and it is normalized to provide a very good fit to the observed Q(v). The magnitude of the normalization factor, however, is larger than that predicted from recent pseudopotential calculations of the excited state potentials. The temperature dependence of intramultiplet mixing is predicted. The use of laser polarization to determine the m_j dependence of the process K(4p 2P3/2) + He → K(4p 2P1/2) + He and other collision processes of excited 2P3/2 states is examined

  18. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  19. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  20. StreetTiVo: Using a P2P XML Database System to Manage Multimedia Data in Your Living Room

    NARCIS (Netherlands)

    Zhang, Ying; de Vries, A.P.; Boncz, P.; Hiemstra, Djoerd; Ordelman, Roeland J.F.; Li, Qing; Feng, Ling; Pei, Jian; Wang, Sean X.

    StreetTiVo is a project that aims at bringing research results into the living room; in particular, a mix of current results in the areas of Peer-to-Peer XML Database Management System (P2P XDBMS), advanced multimedia analysis techniques, and advanced information retrieval techniques. The project

  1. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur

  2. KaZaA and similar Peer-to-Peer (P2P) file-sharing applications

    CERN Multimedia

    2003-01-01

    Personal use of Peer-to-Peer (P2P) file sharing applications is NOT permitted at CERN. A non-exhaustive list of such applications, popular for exchanging music, videos, software etc, is: KaZaA, Napster, Gnutella, Edonkey2000, Napigator, Limewire, Bearshare, WinMX, Aimster, Morpheus, BitTorrent, ... You are reminded that use of CERN's Computing Facilities is governed by CERN's Computing Rules (Operational Circular No 5). They require that all users of CERN's Computing Facilities respect copyright, license and confidentiality agreements for data of any form (software, music, videos, etc). Sanctions are applicable in case of non-respect of the Computing Rules. Further details on restrictions for P2P applications are at: http://cern.ch/security/file-sharing CERN's Computing Rules are at: http://cern.ch/ComputingRules Denise Heagerty, CERN Computer Security Officer, Computer.Security@cern.ch

  3. Bottomonium spectroscopy and radiative transitions involving the chi(bJ)(1P, 2P) states at BABAR

    NARCIS (Netherlands)

    Lees, J. P.; Poireau, V.; Tisserand, V.; Grauges, E.; Palano, A.; Eigen, G.; Stugu, B.; Brown, D. N.; Kerth, L. T.; Kolomensky, Yu. G.; Lynch, G.; Schroeder, T.; Hearty, C.; Mattison, T. S.; McKenna, J. A.; So, R. Y.; Khan, A.; Blinov, V. E.; Buzykaev, A. R.; Druzhinin, V. P.; Golubev, V. B.; Kravchenko, E. A.; Onuchin, A. P.; Serednyakov, S. I.; Skovpen, Yu. I.; Solodov, E. P.; Todyshev, K. Yu.; Lankford, A. J.; Mandelkern, M.; Dey, B.; Gary, J. W.; Long, O.; Campagnari, C.; Sevilla, M. Franco; Hong, T. M.; Kovalskyi, D.; Richman, J. D.; West, C. A.; Eisner, A. M.; Lockman, W. S.; Vazquez, W. Panduro; Schumm, B. A.; Seiden, A.; Chao, D. S.; Echenard, B.; Flood, K. T.; Hitlin, D. G.; Miyashita, T. S.; Ongmongkolkul, P.; Roehrken, M.; Andreassen, R.; Huard, Z.; Meadows, B. T.; Pushpawela, B. G.; Sokoloff, M. D.; Sun, L.; Bloom, P. C.; Ford, W. T.; Gaz, A.; Smith, J. G.; Wagner, S. R.; Ayad, R.; Toki, W. H.; Spaan, B.; Bernard, D.; Verderi, M.; Playfer, S.; Bettoni, D.; Bozzi, C.; Calabrese, R.; Cibinetto, G.; Fioravanti, E.; Garzia, I.; Luppi, E.; Piemontese, L.; Santoro, V.; Calcaterra, A.; de Sangro, R.; Finocchiaro, G.; Martellotti, S.; Patteri, P.; Peruzzi, I. M.; Piccolo, M.; Rama, M.; Zallo, A.; Contri, R.; Lo Vetere, M.; Monge, M. R.; Passaggio, S.; Patrignani, C.; Robutti, E.; Bhuyan, B.; Prasad, V.; Adametz, A.; Uwer, U.; Lacker, M.; Dauncey, P. D.; Mallik, U.; Cochran, J.; Prell, S.; Ahmed, H.; Gritsan, A. V.; Arnaud, N.; Davier, M.; Derkach, D.; Grosdidier, G.; Le Diberder, F.; Lutz, A. M.; Malaescu, B.; Roudeau, P.; Stocchi, A.; Wormser, G.; Lange, D. J.; Wright, D. M.; Coleman, J. P.; Fry, J. R.; Gabathuler, E.; Hutchcroft, D. E.; Payne, D. J.; Touramanis, C.; Bevan, A. J.; Di Lodovico, F.; Sacco, R.; Cowan, G.; Bougher, J.; Brown, D. N.; Davis, C. L.; Denig, A. G.; Fritsch, M.; Gradl, W.; Griessinger, K.; Hafner, A.; Schubert, K. R.; Barlow, R. J.; Lafferty, G. D.; Cenci, R.; Hamilton, B.; Jawahery, A.; Roberts, D. A.; Cowan, R.; Sciolla, G.; Cheaib, R.; Patel, P. M.; Robertson, S. H.; Neri, N.; Palombo, F.; Cremaldi, L.; Godang, R.; Sonnek, P.; Summers, D. J.; Simard, M.; Taras, P.; De Nardo, G.; Onorato, G.; Sciacca, C.; Martinelli, M.; Raven, G.; Jessop, C. P.; LoSecco, J. M.; Honscheid, K.; Kass, R.; Feltresi, E.; Margoni, M.; Morandin, M.; Posocco, M.; Rotondo, M.; Simi, G.; Simonetto, F.; Stroili, R.; Akar, S.; Ben-Haim, E.; Bomben, M.; Bonneaud, G. R.; Briand, H.; Calderini, G.; Chauveau, J.; Leruste, Ph.; Marchiori, G.; Ocariz, J.; Biasini, M.; Manoni, E.; Pacetti, S.; Rossi, A.; Angelini, C.; Batignani, G.; Bettarini, S.; Carpinelli, M.; Casarosa, G.; Cervelli, A.; Chrzaszcz, M.; Forti, F.; Giorgi, M. A.; Lusiani, A.; Oberhof, B.; Paoloni, E.; Perez, A.; Rizzo, G.; Walsh, J. J.; Pegna, D. Lopes; Olsen, J.; Smith, A. J. S.; Faccini, R.; Ferrarotto, F.; Ferroni, F.; Gaspero, M.; Gioi, L. Li; Pilloni, A.; Piredda, G.; Buenger, C.; Dittrich, S.; Gruenber, O.; Hess, M.; Leddig, T.; Voss, C.; Waldi, R.; Adye, T.; Olaiya, E. O.; Wilson, F. F.; Emery, S.; Vasseur, G.; Anulli, F.; Aston, D.; Bard, D. J.; Cartaro, C.; Convery, M. R.; Dorfan, J.; Dubois-Felsmann, G. P.; Dunwoodie, W.; Ebert, M.; Field, R. C.; Fulsom, B. G.; Graham, M. T.; Hast, C.; Innes, W. R.; Kim, P.; Leith, D. W. G. S.; Lewis, P.; Lindemann, D.; Luitz, S.; Luth, V.; Lynch, H. L.; MacFarlane, D. B.; Muller, D. R.; Neal, H.; Perl, M.; Pulliam, T.; Ratcliff, B. N.; Roodman, A.; Salnikov, A. A.; Schindler, R. H.; Snyder, A.; Su, D.; Sullivan, M. K.; Va'vra, J.; Wisniewski, W. J.; Wulsin, H. W.; Purohit, M. 
V.; White, R. M.; Wilson, J. R.; Randle-Conde, A.; Sekula, S. J.; Bellis, M.; Burchat, P. R.; Puccio, E. M. T.; Alam, M. S.; Ernst, J. A.; Gorodeisky, R.; Guttman, N.; Peimer, D. R.; Soffer, A.; Spanier, S. M.; Ritchie, J. L.; Ruland, A. M.; Schwitters, R. F.; Wray, B. C.; Izen, J. M.; Lou, X. C.; Bianchi, F.; De Mori, F.; Filippi, A.; Gamba, D.; Lanceri, L.; Vitale, L.; Martinez-Vidal, F.; Oyanguren, A.; Villanueva-Perez, P.; Albert, J.; Banerjee, Sw.; Beaulieu, A.; Bernlochner, F. U.; Choi, H. H. F.; Kowalewski, R.; Lewczuk, M. J.; Lueck, T.; Nugent, I. M.; Roney, J. M.; Sobie, R. J.; Tasneem, N.; Gershon, T. J.; Harrison, P. F.; Latham, T. E.; Band, H. R.; Dasu, S.; Pan, Y.; Prepost, R.

    2014-01-01

    We use (121±1) million Υ(3S) and (98±1) million Υ(2S) mesons recorded by the BABAR detector at the PEP-II e+e− collider at SLAC to perform a study of radiative transitions involving the χbJ(1P,2P) states in exclusive decays with μ+μ−γγ final states. We reconstruct twelve channels in four cascades

  4. Efficient File Sharing by Multicast - P2P Protocol Using Network Coding and Rank Based Peer Selection

    Science.gov (United States)

    Stoenescu, Tudor M.; Woo, Simon S.

    2009-01-01

    In this work, we consider information dissemination and sharing in a distributed, highly dynamic peer-to-peer (P2P) communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by space network environments.

  5. How signaling and search costs affect information asymmetry in P2P lending: the economics of big data

    OpenAIRE

    Yan, Jiaqi; Yu, Wayne; Zhao, J. Leon

    2015-01-01

    In the past decade, online Peer-to-Peer (P2P) lending platforms have transformed the lending industry, which has been historically dominated by commercial banks. Information technology breakthroughs such as big data-based financial technologies (Fintech) have been identified as important disruptive driving forces for this paradigm shift. In this paper, we take an information economics perspective to investigate how big data affects the transformation of the lending industry. By identifying ho...

  6. Temporal Patterns of Pedophile Activity in a P2P Network: First Insights about User Profiles from Big Data

    OpenAIRE

    Fournier , Raphaël; Latapy , Matthieu

    2015-01-01

    Recent studies have shown that child abuse material is shared through peer-to-peer (P2P) networks, which allow users to exchange files without a central server. Obtaining knowledge on the extent of this activity has major consequences for child protection, policy making and Internet regulation. Previous works have developed tools and analyses to provide overall figures in temporally-limited measurements. Offenders' behavior is mostly studied through small-scale intervi

  7. The Comparison of Distributed P2P Trust Models Based on Quantitative Parameters in the File Downloading Scenarios

    Directory of Open Access Journals (Sweden)

    Jingpei Wang

    2016-01-01

    Varied P2P trust models have been proposed recently; it is necessary to develop an effective method to evaluate these trust models in order to resolve issues of commonality (guiding newly generated trust models in theory) and individuality (assisting a decision maker in choosing an optimal trust model to implement in a specific context). A new method for analyzing and comparing P2P trust models, based on hierarchical parameter quantization in file downloading scenarios, is proposed in this paper. Several parameters are extracted from the functional attributes and quality features of the trust relationship, as well as from the requirements of the specific network context and the evaluators. Several distributed P2P trust models are analyzed quantitatively, with the extracted parameters organized into a hierarchical model. A fuzzy inference method is applied to this hierarchical model of parameters to fuse the evaluated values of the candidate trust models, and then the relative optimum is selected based on the sorted overall quantitative values. Finally, analyses and simulation are performed. The results show that the proposed method is reasonable and effective compared with the previous algorithms.
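
    The hierarchical-parameter comparison can be pictured as scoring each candidate trust model on quantified parameters and aggregating per category. The sketch below substitutes plain weighted averaging for the paper's fuzzy inference step, and the model names, parameters, and weights are all hypothetical.

```python
def score_trust_model(parameter_scores, weights):
    """Toy hierarchical aggregation: average the quantified parameters within each
    category, then combine the category averages with the given weights."""
    total = 0.0
    for category, weight in weights.items():
        values = parameter_scores[category].values()
        total += weight * sum(values) / len(values)
    return total

weights = {"functional": 0.4, "quality": 0.4, "context": 0.2}
candidates = {
    "EigenTrust-like": {"functional": {"convergence": 0.8, "granularity": 0.6},
                        "quality": {"robustness": 0.7, "accuracy": 0.8},
                        "context": {"overhead": 0.5}},
    "PeerTrust-like":  {"functional": {"convergence": 0.7, "granularity": 0.8},
                        "quality": {"robustness": 0.8, "accuracy": 0.7},
                        "context": {"overhead": 0.7}},
}
best = max(candidates, key=lambda m: score_trust_model(candidates[m], weights))
print("relative optimum:", best)
```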

  8. Theoretical investigation of the Omega(g,u)(+/-) states of K2 dissociating adiabatically up to K(4p 2P(3/2)) + K(4p 2P(3/2)).

    Science.gov (United States)

    Jraij, A; Allouche, A R; Magnier, S; Aubert-Frécon, M

    2009-06-28

    A theoretical investigation of the electronic structure of the K2 molecule, including spin-orbit effects, has been performed. Potential energies have been calculated over a large range of R, up to 75 a0, for the 88 Omega(g,u)(+/-) states dissociating adiabatically into the limits up to K(4p 2P(3/2)) + K(4p 2P(3/2)). Equilibrium distances, transition energies, harmonic frequencies, as well as depths for wells and heights for barriers are reported for all of the bound Omega(g,u)(+/-) states. The present ab initio calculations are shown to reproduce quite accurately the small structures (wells and barrier) displayed at very long range (R > 50 a0) by the (2,3)1u and (2)0g- purely long-range states. As the present data could help experimentalists, we make available extensive tables of energy values versus internuclear distances in our database at the web address http://www-lasim.univ-lyon1.fr/spip.php?rubrique99.

  9. Can I trust you? : The importance of trust when doing business on P2P online platforms

    OpenAIRE

    Andersson, David; Kobaslic, Bojan

    2016-01-01

    This report has focused on how important a buyer's eWOM is compared to his/her visual information when sellers decide whether they can trust this buyer. The focus company was Airbnb, an online P2P platform where private individuals can rent out their living quarters to other private persons. The method involved sending out online web surveys to approximately 200 students at Högskolan Kristianstad. Results from these surveys suggest that a buyer's eWOM and visual information had little or no impact u

  10. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  11. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  12. Quasifree (p, 2p) Reactions on Oxygen Isotopes: Observation of Isospin Independence of the Reduced Single-Particle Strength

    Science.gov (United States)

    Atar, L.; Paschalis, S.; Barbieri, C.; Bertulani, C. A.; Díaz Fernández, P.; Holl, M.; Najafi, M. A.; Panin, V.; Alvarez-Pol, H.; Aumann, T.; Avdeichikov, V.; Beceiro-Novo, S.; Bemmerer, D.; Benlliure, J.; Boillos, J. M.; Boretzky, K.; Borge, M. J. G.; Caamaño, M.; Caesar, C.; Casarejos, E.; Catford, W.; Cederkall, J.; Chartier, M.; Chulkov, L.; Cortina-Gil, D.; Cravo, E.; Crespo, R.; Dillmann, I.; Elekes, Z.; Enders, J.; Ershova, O.; Estrade, A.; Farinon, F.; Fraile, L. M.; Freer, M.; Galaviz Redondo, D.; Geissel, H.; Gernhäuser, R.; Golubev, P.; Göbel, K.; Hagdahl, J.; Heftrich, T.; Heil, M.; Heine, M.; Heinz, A.; Henriques, A.; Hufnagel, A.; Ignatov, A.; Johansson, H. T.; Jonson, B.; Kahlbow, J.; Kalantar-Nayestanaki, N.; Kanungo, R.; Kelic-Heil, A.; Knyazev, A.; Kröll, T.; Kurz, N.; Labiche, M.; Langer, C.; Le Bleis, T.; Lemmon, R.; Lindberg, S.; Machado, J.; Marganiec-Gałązka, J.; Movsesyan, A.; Nacher, E.; Nikolskii, E. Y.; Nilsson, T.; Nociforo, C.; Perea, A.; Petri, M.; Pietri, S.; Plag, R.; Reifarth, R.; Ribeiro, G.; Rigollet, C.; Rossi, D. M.; Röder, M.; Savran, D.; Scheit, H.; Simon, H.; Sorlin, O.; Syndikus, I.; Taylor, J. T.; Tengblad, O.; Thies, R.; Togano, Y.; Vandebrouck, M.; Velho, P.; Volkov, V.; Wagner, A.; Wamers, F.; Weick, H.; Wheldon, C.; Wilson, G. L.; Winfield, J. S.; Woods, P.; Yakorev, D.; Zhukov, M.; Zilges, A.; Zuber, K.; R3B Collaboration

    2018-01-01

    Quasifree one-proton knockout reactions have been employed in inverse kinematics for a systematic study of the structure of stable and exotic oxygen isotopes at the R3B/LAND setup with incident beam energies in the range of 300-450 MeV/u. The oxygen isotopic chain offers a large variation of separation energies that allows for a quantitative understanding of single-particle strength with changing isospin asymmetry. Quasifree knockout reactions provide a complementary approach to intermediate-energy one-nucleon removal reactions. Inclusive cross sections for quasifree knockout reactions of the type ^{A}O(p,2p)^{A-1}N have been determined and compared to calculations based on the eikonal reaction theory. The reduction factors for the single-particle strength with respect to the independent-particle model were obtained and compared to state-of-the-art ab initio predictions. The results do not show any significant dependence on proton-neutron asymmetry.

  13. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  14. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  15. Los nuevos modelos: una solución equilibrada a la problemática del P2P

    Directory of Open Access Journals (Sweden)

    Santiago Piñeros Durán

    2013-11-01

    With the arrival of the illegal content exchanges brought about by P2P networks on the Internet, copyright law has shown substantive and procedural weaknesses in protecting the works of rights holders on digital networks. For this reason, an imbalance arose in the market economies of the entertainment industries worldwide, which have chosen to push for stronger liability schemes against infringing users and ISPs, regardless of the implications this entails for consumers of online content and their fundamental rights. A scheme is envisioned in which the new business, licensing and compensation models afforded by digital networks and collective culture show that copyright can serve as a point of balance to protect the rights of the affected holders, while freedom of the online market to disseminate culture and education prevails. This takes concrete form in an alternative compensation model that establishes the parameters under which the free exchange of content should operate with state control and oversight, namely the Fisher model.

  16. The 2H(p,2p)n reaction at 508 MeV. Part I

    International Nuclear Information System (INIS)

    Punjabi, V.; Perdrisat, C.F.; Aniol, K.A.; Epstein, M.B.; Huber, J.P.; Margaziotis, D.J.; Bracco, A.; Davis, C.A.; Gubler, H.P.; Lee, W.P.; Poffenberger, P.R.; van Oers, W.T.H.; Postma, H.; Sebel, H.J.; Stetz, A.W.

    1988-09-01

    Differential cross sections for the reaction 2H(p,2p)n at T_p = 507 and 508 MeV are presented. The proton angle pairs chosen were 41.5 degrees with 41.4 and 50.0 degrees; 30.1 degrees with 44.0, 53.75, 61.0, and 68.0 degrees; 38.1-38.0 degrees; 44.1-44.0 degrees; 47.1-47.0 degrees; and 50.0-50.0 degrees. The data range over an energy window 100 MeV wide on one of the proton energies, the second energy being defined by the kinematic condition of a single neutron recoiling. The data are compared with the impulse approximation (IA) prediction and with the results of a nonrelativistic calculation of the six lowest-order Feynman diagrams describing the reaction. A previously known missing strength for the reaction in the small neutron recoil region is confirmed with much smaller experimental uncertainty; the missing strength persists up to 150 MeV/c neutron recoil. The onset of a systematic cross-section excess relative to the IA near a neutron recoil momentum of 200 MeV/c is explored in detail. (Author) (37 refs., 17 figs.)

  17. Quasifree (p, 2p) Reactions on Oxygen Isotopes: Observation of Isospin Independence of the Reduced Single-Particle Strength.

    Science.gov (United States)

    Atar, L; Paschalis, S; Barbieri, C; Bertulani, C A; Díaz Fernández, P; Holl, M; Najafi, M A; Panin, V; Alvarez-Pol, H; Aumann, T; Avdeichikov, V; Beceiro-Novo, S; Bemmerer, D; Benlliure, J; Boillos, J M; Boretzky, K; Borge, M J G; Caamaño, M; Caesar, C; Casarejos, E; Catford, W; Cederkall, J; Chartier, M; Chulkov, L; Cortina-Gil, D; Cravo, E; Crespo, R; Dillmann, I; Elekes, Z; Enders, J; Ershova, O; Estrade, A; Farinon, F; Fraile, L M; Freer, M; Galaviz Redondo, D; Geissel, H; Gernhäuser, R; Golubev, P; Göbel, K; Hagdahl, J; Heftrich, T; Heil, M; Heine, M; Heinz, A; Henriques, A; Hufnagel, A; Ignatov, A; Johansson, H T; Jonson, B; Kahlbow, J; Kalantar-Nayestanaki, N; Kanungo, R; Kelic-Heil, A; Knyazev, A; Kröll, T; Kurz, N; Labiche, M; Langer, C; Le Bleis, T; Lemmon, R; Lindberg, S; Machado, J; Marganiec-Gałązka, J; Movsesyan, A; Nacher, E; Nikolskii, E Y; Nilsson, T; Nociforo, C; Perea, A; Petri, M; Pietri, S; Plag, R; Reifarth, R; Ribeiro, G; Rigollet, C; Rossi, D M; Röder, M; Savran, D; Scheit, H; Simon, H; Sorlin, O; Syndikus, I; Taylor, J T; Tengblad, O; Thies, R; Togano, Y; Vandebrouck, M; Velho, P; Volkov, V; Wagner, A; Wamers, F; Weick, H; Wheldon, C; Wilson, G L; Winfield, J S; Woods, P; Yakorev, D; Zhukov, M; Zilges, A; Zuber, K

    2018-02-02

    Quasifree one-proton knockout reactions have been employed in inverse kinematics for a systematic study of the structure of stable and exotic oxygen isotopes at the R^{3}B/LAND setup with incident beam energies in the range of 300-450  MeV/u. The oxygen isotopic chain offers a large variation of separation energies that allows for a quantitative understanding of single-particle strength with changing isospin asymmetry. Quasifree knockout reactions provide a complementary approach to intermediate-energy one-nucleon removal reactions. Inclusive cross sections for quasifree knockout reactions of the type ^{A}O(p,2p)^{A-1}N have been determined and compared to calculations based on the eikonal reaction theory. The reduction factors for the single-particle strength with respect to the independent-particle model were obtained and compared to state-of-the-art ab initio predictions. The results do not show any significant dependence on proton-neutron asymmetry.

  18. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of the Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper focuses only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics [77], [79], [80], and Visualization by Supercomputing Data Mining [81]. [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of the Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of the 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of the 2010 Conference on Applied Research in Information Technology, sponsored by

  19. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid-nitrogen-cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  20. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  1. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  2. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from petascale supercomputer Mistral, which is ranked as 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  3. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  4. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  5. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  6. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  7. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use the resources of hundreds of thousands of CPUs joined together. However their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin

  8. Efficient data replication for the delivery of high-quality video content over P2P VoD advertising networks

    Science.gov (United States)

    Ho, Chien-Peng; Yu, Jen-Yu; Lee, Suh-Yin

    2011-12-01

    Recent advances in modern television systems have had profound consequences for the scalability, stability, and quality of transmitted digital data signals. This is of particular significance for peer-to-peer (P2P) video-on-demand (VoD) related platforms, faced with an immediate and growing demand for reliable service delivery. In response to demands for high-quality video, the key objectives in the construction of the proposed framework were user satisfaction with perceived video quality and the effective utilization of available resources on P2P VoD networks. This study developed a peer-based promoter to support online advertising in P2P VoD networks based on an estimation of video distortion prior to the replication of data stream chunks. The proposed technology enables the recovery of lost video using replicated stream chunks in real time. Load balance is achieved by adjusting the replication level of each candidate group according to the degree-of-distortion, thereby enabling a significant reduction in server load and increased scalability in the P2P VoD system. This approach also promotes the use of advertising as an efficient tool for commercial promotion. Results indicate that the proposed system efficiently satisfies the given fault tolerances.
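
    The degree-of-distortion-driven replication the abstract describes can be sketched as a budget allocation problem: chunks whose loss would distort the video most are placed on more peers. The allocation rule and field names below are illustrative assumptions, not the system's actual policy.

```python
def replication_levels(chunks, total_replicas):
    """Toy allocation: hand out a fixed replica budget in proportion to each
    chunk's estimated degree-of-distortion if it were lost, so distortion-critical
    chunks end up on more peers. Field names are hypothetical."""
    total_distortion = sum(c["distortion"] for c in chunks) or 1.0
    plan = {}
    for c in chunks:
        share = c["distortion"] / total_distortion
        plan[c["id"]] = max(1, round(share * total_replicas))  # at least one copy
    return plan

chunks = [{"id": "gop-1", "distortion": 9.0},   # I-frame heavy, costly to lose
          {"id": "gop-2", "distortion": 3.0},
          {"id": "gop-3", "distortion": 1.0}]
print(replication_levels(chunks, total_replicas=20))
```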

  9. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

    Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies are springing up everywhere with the intent of either competing directly in the supercomputer arena or providing entry-level systems from which to graduate to supercomputers. Even well-established organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that from both the hardware and software points of view they may be superior. We may be facing the same problems in supercomputers that we faced in video systems

  10. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  11. Isotope shift of 40,42,44,48Ca in the 4s 2S1/2 → 4p 2P3/2 transition

    Science.gov (United States)

    Gorges, C.; Blaum, K.; Frömmgen, N.; Geppert, Ch; Hammen, M.; Kaufmann, S.; Krämer, J.; Krieger, A.; Neugart, R.; Sánchez, R.; Nörtershäuser, W.

    2015-12-01

    We report on improved isotope shift measurements of the isotopes 40,42,44,48Ca in the 4s 2S1/2 → 4p 2P3/2 (D2) transition using collinear laser spectroscopy. Accurately known isotope shifts in the 4s 2S1/2 → 4p 2P1/2 (D1) transition were used to calibrate the ion beam energy with an uncertainty of ΔU ≈ ±0.25 V. The accuracy in the D2 transition was improved by a factor of 5-10. A King-plot analysis of the two transitions revealed that the field shift factor in the D2 line is about 1.8(13)% larger than in the D1 transition, which is ascribed to relativistic contributions of the 4p1/2 wave function.
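
    For reference, the King-plot relation behind the quoted 1.8(13)% result can be written in its standard textbook form, in which the slope of the straight line formed by the modified isotope shifts of the two transitions is the ratio of the field-shift factors F_D2/F_D1 (the notation below is assumed, not quoted from the paper).

```latex
% K_i = mass-shift constant, F_i = field-shift factor,
% \delta\langle r^2\rangle = change in mean-square nuclear charge radius,
% \mu^{A,A'} = m_A m_{A'} / (m_{A'} - m_A) (modified-isotope-shift mass factor).
\delta\nu_i^{A,A'} = K_i\,\frac{m_{A'} - m_A}{m_A m_{A'}} + F_i\,\delta\langle r^2\rangle^{A,A'}
\quad\Longrightarrow\quad
\mu^{A,A'}\,\delta\nu_{D2}^{A,A'}
  = \frac{F_{D2}}{F_{D1}}\,\mu^{A,A'}\,\delta\nu_{D1}^{A,A'}
  + \left(K_{D2} - \frac{F_{D2}}{F_{D1}}\,K_{D1}\right)
```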

  12. Feasibility study of P2P-type system architecture with 3D medical image data support for medical integrated network systems

    International Nuclear Information System (INIS)

    Noji, Tamotsu; Arino, Masashi; Suto, Yasuzo

    2010-01-01

    We are investigating an integrated medical network system with an electronic letter of introduction function and a 3D image support function operating in the Internet environment. However, the problems with current C/S (client/server)-type systems are inadequate security countermeasures and insufficient transmission availability. In this report, we propose a medical information cooperation system architecture that employs a P2P (peer-to-peer)-type communication method rather than a C/S-type method, which helps to prevent a reduction in processing speed when large amounts of data (such as 3D images) are transferred. In addition, a virtual clinic was created and a feasibility study was conducted to evaluate the P2P-type system. The results showed that efficiency was improved by about 77% in real-time transmission, suggesting that this system may be suitable for practical application. (author)

  13. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    Science.gov (United States)

    2015-06-01

    implementation of the direct interaction, called the particle-to-particle kernel, for a shared-memory single GPU device using the Compute Unified Device Architecture (CUDA). ... GPU-defined P2P kernel we developed using CUDA. A brief outline of the rest of this work follows. ... The computing environment used for this work is a 64-node heterogeneous cluster consisting of 48 IBM dx360M4 nodes, each with one Intel Phi
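
    The P2P (particle-to-particle) kernel of the fast multipole method is simply the direct near-field sum between particles in neighbouring boxes. Below is a plain Python sketch of the computation that the report's GPU kernel parallelises; the function names and the 1/r potential are illustrative assumptions, and the actual CUDA implementation is not reproduced here.

```python
import math

def p2p_direct(targets, sources, eps=1.0e-12):
    """Naive particle-to-particle (P2P) direct interaction: for every target,
    accumulate the 1/r potential from every source particle. This is the
    O(N*M) near-field work that a GPU kernel would spread over threads."""
    potentials = []
    for tx, ty, tz, _ in targets:
        phi = 0.0
        for sx, sy, sz, charge in sources:
            r = math.sqrt((tx - sx) ** 2 + (ty - sy) ** 2 + (tz - sz) ** 2)
            if r > eps:                      # skip coincident (self) pairs
                phi += charge / r
        potentials.append(phi)
    return potentials

sources = [(0.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, -0.5)]
targets = [(0.5, 0.5, 0.0, 0.0), (2.0, 0.0, 0.0, 0.0)]
print(p2p_direct(targets, sources))
```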

  14. Energy-Crossing and Its Effect on Lifetime of the 4s^2 4p 2P3/2 Level for Highly Charged Ga-Like Ions

    International Nuclear Information System (INIS)

    Fan Jian-Zhong; Zhang Deng-Hong; Chang Zhi-Wei; Shi Ying-Long; Dong Chen-Zhong

    2012-01-01

    The multi-configuration Dirac-Fock method is employed to calculate the energy levels and transition probabilities for the electric-dipole-allowed (E1) and forbidden (M1, E2) lines of the 4s^2 4p, 4s4p^2 and 4s^2 4d configurations of highly charged Ga-like ions from Z = 68-95. The lifetimes of the 4s^2 4p 2P3/2 level of the ground configuration are also derived. Based on our calculations, it is found that the energy of the 4s^2 4p 2P3/2 level is higher than that of the 4s4p^2 4P1/2 level for high-Z Ga-like ions with Z ≥ 74, so that an energy crossing occurs at Z = 74. The effect of the energy crossing is important for the calculation of the 4s^2 4p 2P3/2 level lifetime for Ga-like ions with Z ≥ 74. (atomic and molecular physics)

  15. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers at the Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research which use supercomputers frequently and the contents of their computations are outlined. Vectorization is briefly explained, and nuclear fusion, nuclear reactor physics, the hydrothermal safety of nuclear reactors, the parallelism inherent in atomic energy computations such as those for fluids, algorithms for vector processing, and the speedup obtained by vectorization are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation changed from criticality computation around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new types of reactors and reactor safety assessment at present. Also the method of using computers advanced from batch processing to time-sharing processing, from one-dimensional to three-dimensional computation, from steady, linear to unsteady, nonlinear computation, from experimental analysis to numerical simulation, and so on. (K.I.)

  16. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  17. Absolute emission cross sections for electron-impact excitation of Zn+(4p 2P) and (5s 2S) terms

    International Nuclear Information System (INIS)

    Rogers, W.T.; Dunn, G.H.; Olsen, J.O.; Reading, M.; Stefani, G.

    1982-01-01

    Absolute emission cross sections for electron-impact excitation of the 3d^10 4p 2P and 3d^10 5s 2S terms of Zn+ have been measured from below threshold to about 790 eV (2P) and 390 eV (2S) using the crossed-charged-beams technique. Both transitions have the abrupt onset at threshold characteristic of positive-ion excitation. The 2P cross section shows considerable structure in the interval from threshold to near 20 eV, above which it falls off smoothly. Agreement with five-state close-coupling theory is excellent below 100 eV when cascading is included in the theory. Above 100 eV, the data lie above the theory. The peak value of the 2P cross section is 9.4 x 10^-16 cm^2, essentially at threshold, while the peak value of the 2S cross section is about 0.47 x 10^-16 cm^2. The net linear polarization of the 3d^10 4p 2P emission was measured (unresolved from the 3d^10 4d 2D → 3d^10 4p 2P cascading transition), and these data were used to correct the cross-section data for anisotropy of the emitted light. The effective lifetime of the 3d^9 4s^2 2D3/2 level was measured by observing the exponential decay of the 589.6-nm photons resulting from its decay

  18. Isoelectronic comparison of the Al-like 3s^2 3p 2P - 3s3p^2 4P transitions in the ions P III-Mo XXX

    International Nuclear Information System (INIS)

    Jupen, C.; Curtis, L.J.

    1996-01-01

    New observations of the 3s^2 3p 2P - 3s3p^2 4P intercombination transitions in Al-like ions have been made for Cl V from spark spectra recorded at Lund and for Kr XXIV and Mo XXX from spectra obtained at the JET tokamak. The new results have been combined with other identifications of these transitions along the sequence and empirically systematized and compared with theoretical calculations. A set of smoothed and interpolated values for the excitation energies of the 3s3p^2 4P levels in P III-Mo XXX is presented. (orig.)

  19. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  20. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  1. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  2. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations is also discussed

  3. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along the field lines, the wandering lines carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem.

  4. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  5. Study of fission barriers in neutron-rich nuclei using the (p,2p) reaction. Status of SAMURAI-experiment NP1306 SAMURAI14

    Energy Technology Data Exchange (ETDEWEB)

    Reichert, Sebastian [TU Munich (Germany); Collaboration: NP1306-SAMURAI14-Collaboration

    2015-07-01

    Violent stellar processes are currently assumed to be a major origin of the elements beyond iron and of their abundances. The conditions during stellar explosions lead to the so-called r-process, in which the rapid capture of neutrons and subsequent β decays form heavier elements. This build-up of heavier nuclei stops at the point where the repulsive Coulomb energy induces fission. Fission recycling is one key aspect in describing the macroscopic structure of the r-process and the well-known elemental abundance pattern. The RIBF at RIKEN is able to provide such neutron-rich heavy-element beams, and a first test with the primary beam ²³⁸U was performed to understand the response of the SAMURAI spectrometer and detectors for heavy beams. The final goal is the determination of the fission barrier height with a resolution of 1 MeV (in σ) via the missing-mass method, using (p,2p) reactions in inverse kinematics.
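
    As background on the missing-mass method mentioned above (a standard kinematic relation, not a formula quoted from the proposal): for a quasi-free A(p,2p)B reaction in inverse kinematics with the target proton at rest, four-momentum conservation gives the mass of the unobserved residue from the measured beam and proton four-momenta,

        \[
        M_{\mathrm{miss}}^{2}c^{4} = \bigl(E_{A} + m_{p}c^{2} - E_{1} - E_{2}\bigr)^{2} - \bigl|\vec{p}_{A} - \vec{p}_{1} - \vec{p}_{2}\bigr|^{2}c^{2},
        \]

    where E_A and p_A refer to the incoming heavy ion, m_p is the proton mass, and E_1, E_2, p_1, p_2 are the energies and momenta of the two detected protons. The excitation energy of the residue, and from it the fission barrier height, then follows from comparing M_miss with the residue's ground-state mass.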

  6. An unusual presentation of a customs importation seizure containing amphetamine, possibly synthesized by the APAAN-P2P-Leuckart route.

    Science.gov (United States)

    Power, John D; Barry, Michael G; Scott, Kenneth R; Kavanagh, Pierce V

    2014-01-01

    In an Irish customs seizure, 14 packages, each containing approximately one kilogram of a white wet paste, were analysed for the suspected presence of controlled drugs. The samples were found to contain amphetamine and also characteristic by-products including benzyl cyanide, phenylacetone (P2P), methyl-phenyl-pyrimidines, N-formylamphetamine, naphthalene derivatives and amphetamine dimers. The analytical results corresponded with the impurity profile observed and recently reported for the synthesis of 4-methylamphetamine from 4-methylphenylacetoacetonitrile [1]. The synthesis of amphetamine from alpha-phenylacetoacetonitrile (APAAN) was performed (via an acid hydrolysis and subsequent Leuckart reaction) and the impurity profile of the product obtained was compared to those observed in the customs seizure. Observations are made regarding the route specificity of these by-products. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Measurement of the hyperfine structure of the 4d2D3/2,5/2 levels and isotope shifts of the 4p2P3/2->4d2D3/2 and 4p2P3/2->4d2D5/2 transitions in gallium 69 and 71

    International Nuclear Information System (INIS)

    Rehse, Steven J.; Fairbank, William M.; Lee, Siu Au

    2001-01-01

    The hyperfine structure of the 4d ²D₃/₂,₅/₂ levels of ⁶⁹,⁷¹Ga is determined. The 4p ²P₃/₂ → 4d ²D₃/₂ (294.50-nm) and 4p ²P₃/₂ → 4d ²D₅/₂ (294.45-nm) transitions are studied by laser-induced fluorescence in an atomic Ga beam. The hyperfine A constant measured for the 4d ²D₅/₂ level is 77.3±0.9 MHz for ⁶⁹Ga and 97.9±0.7 MHz for ⁷¹Ga (3σ errors). The A constant measured for the 4d ²D₃/₂ level is -36.3±2.2 MHz for ⁶⁹Ga and -46.2±3.8 MHz for ⁷¹Ga. These measurements correct sign errors in the previous determination of these constants. For ⁶⁹Ga the hyperfine B constants measured for the 4d ²D₅/₂ and the 4d ²D₃/₂ levels are 5.3±4.1 MHz and 4.6±4.2 MHz, respectively. The isotope shift is determined to be 114±8 MHz for the 4p ²P₃/₂ → 4d ²D₃/₂ transition and 115±7 MHz for the 4p ²P₃/₂ → 4d ²D₅/₂ transition. The lines of ⁷¹Ga are shifted to the blue. This is in agreement with previous measurement. © 2001 Optical Society of America

  8. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB NAND Flash parallel disk array, the Fusion-io. The Fusion system specs are as follows
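
    The abstract does not give details of the iotrace tool itself. As a rough, hypothetical illustration of the kind of per-process I/O accounting available on Linux, the sketch below simply dumps the counters the kernel keeps in /proc/<pid>/io (rchar, wchar, read_bytes, write_bytes, and so on); it is not the LLNL iotrace tool.

        /* Minimal, hypothetical sketch of per-process I/O accounting on Linux:
         * print the counters kept in /proc/<pid>/io. This is an illustration
         * only, not the iotrace tool described in the report.
         * Usage: ./procio <pid>
         */
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                return 1;
            }

            char path[64];
            snprintf(path, sizeof path, "/proc/%s/io", argv[1]);

            FILE *f = fopen(path, "r");
            if (!f) {
                perror(path);
                return 1;
            }

            /* Each line of /proc/<pid>/io looks like "read_bytes: 12345". */
            char key[32];
            unsigned long long value;
            while (fscanf(f, "%31[^:]: %llu\n", key, &value) == 2)
                printf("%-22s %llu\n", key, value);

            fclose(f);
            return 0;
        }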

  9. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  10. Alignment of Ar+ [3P]4p2P03/2 satellite state from the polarization analysis of fluorescent radiation after photoionization

    International Nuclear Information System (INIS)

    Yenen, O.; McLaughlin, K.W.; Jaecks, D.H.

    1997-01-01

    The measurement of the polarization of radiation from satellite states of Ar⁺ formed after the photoionization of Ar provides detailed information about the nature of doubly excited states, magnetic sublevel cross sections and partial wave ratios of the photo-ejected electrons. Since the formation of these satellite states is a weak process, it is necessary to use a high-flux beam of incoming photons. In addition, in order to resolve the many narrow doubly excited Ar resonances, the incoming photons must have a high resolution. The characteristics of beam line 9.0.1 of the Advanced Light Source fulfill these requirements. The authors determined the polarization of 4765 Å fluorescence from the Ar⁺ [³P] 4p ²P°₃/₂ satellite state formed after photoionization of Ar by photons from the 9.0.1 beam line of the ALS in the 35.620-38.261 eV energy range, using a resolution of approximately 12,700. This is accomplished by measuring the intensities of the fluorescent light polarized parallel (I∥) and perpendicular (I⊥) to the polarization axis of the incident synchrotron radiation, using a Sterling Optics 105MB polarizing filter. The optical system placed at 90 degrees with respect to the polarization axis of the incident light had a narrow-band interference filter (δλ=0.3 nm) to isolate the fluorescent radiation.

  11. Modelling of P2P-Based Video Sharing Performance for Content-Oriented Community-Based VoD Systems in Wireless Mobile Networks

    Directory of Open Access Journals (Sweden)

    Shijie Jia

    2016-01-01

    The video sharing performance is a key factor for the scalability and quality of service of P2P VoD systems in wireless mobile networks. Several factors affect the sharing performance, such as the available upload bandwidth, the resource distribution in overlay networks, and the mobility of mobile nodes. In this paper, we first model user behaviors (joining, playback, and departure) for content-oriented community-based VoD systems in wireless mobile networks and construct a resource assignment model by analysing the transitions between node states (suspend, wait, and playback). We analyse the influence of upload bandwidth, startup delay, and resource distribution on the sharing performance and QoS of the systems. We further propose improved resource sharing strategies from the perspectives of community architecture, resource distribution, and data transmission. Extensive tests show that the improved strategies achieve much better performance than the original strategies.

  12. ASCI's Vision for supercomputing future

    International Nuclear Information System (INIS)

    Nowak, N.D.

    2003-01-01

    The full text of publication follows. Advanced Simulation and Computing (ASC, formerly Accelerated Strategic Computing Initiative [ASCI]) was established in 1995 to help Defense Programs shift from test-based confidence to simulation-based confidence. Specifically, ASC is a focused and balanced program that is accelerating the development of simulation capabilities needed to analyze and predict the performance, safety, and reliability of nuclear weapons and certify their functionality - far exceeding what might have been achieved in the absence of a focused initiative. To realize its vision, ASC is creating simulation and prototyping capabilities, based on advanced weapon codes and high-performance computing.

  13. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  14. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  15. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  16. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  17. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 physicists and astronomers attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to focus on the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among the numerical experimentalists working on supercomputing techniques. The presented papers, covering hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity, are all stimulating. In fact, such numerical calculations have now become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  18. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, the mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers.

  19. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA 10 from ETA Systems is a class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high-performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid-nitrogen-cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high-density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  20. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The high computation speed of supercomputers is based on vector processing. The authors investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of atomic energy codes to the vector-computation capability of supercomputers, the problems regarding their utilization, and the future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithm and program structure used in the codes. The speedup achieved with pipeline vector systems, the investigations at the Japan Atomic Energy Research Institute and their results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
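
    A toy illustration of the point about algorithm and program structure (not code from the surveyed atomic-energy codes): the first loop below has independent iterations and maps directly onto a pipelined vector unit or an auto-vectorizing compiler, while the second carries a dependence from one iteration to the next and resists vectorization.

        /* Toy illustration (not from the surveyed atomic-energy codes) of how
         * loop structure decides vectorizability. */
        #include <stdio.h>

        #define N 8

        /* Independent iterations: a pipelined vector unit (or an auto-vectorizing
         * compiler) can process many elements per clock. */
        static void axpy(double *restrict y, const double *restrict x, double a, int n)
        {
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }

        /* Loop-carried dependence: y[i] needs y[i-1], so the loop resists
         * straightforward vectorization and runs at essentially scalar speed. */
        static void prefix_sum(double *y, const double *x, int n)
        {
            y[0] = x[0];
            for (int i = 1; i < n; i++)
                y[i] = y[i - 1] + x[i];
        }

        int main(void)
        {
            double x[N] = {1, 2, 3, 4, 5, 6, 7, 8}, y[N] = {0};

            axpy(y, x, 2.0, N);      /* vector-friendly */
            prefix_sum(y, x, N);     /* dependence-limited */
            printf("y[%d] = %g\n", N - 1, y[N - 1]);
            return 0;
        }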

  1. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  2. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer (PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted to a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  3. Architectural prototyping

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2004-01-01

    A major part of software architecture design is learning how specific architectural designs balance the concerns of stakeholders. We explore the notion of "architectural prototypes", correspondingly architectural prototyping, as a means of using executable prototypes to investigate stakeholders...

  4. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  5. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.)

  6. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: bandwidth and latency, both for main memory and for the internal network, are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, technical problems are not the only ones that prevent scientists from fully exploiting the potential of modern supercomputers. More and more, organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  7. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources. This enables adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis, and preferably adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
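
    To picture the interconnect described in the abstract: on a five-dimensional torus, each node has ten nearest neighbours, two per dimension, with coordinates wrapping around at the edges. The sketch below computes those neighbour coordinates; the torus extents and node position are made-up values, and the code is an illustration rather than anything taken from the patent.

        /* Illustrative sketch (not code from the patent): neighbour coordinates
         * of a node on a 5-D torus. Each node has two neighbours per dimension,
         * ten in total, with indices wrapping modulo the torus extent. */
        #include <stdio.h>

        #define DIMS 5

        static void torus_neighbors(const int coord[DIMS], const int extent[DIMS],
                                    int nbr[2 * DIMS][DIMS])
        {
            for (int d = 0; d < DIMS; d++) {
                for (int k = 0; k < DIMS; k++) {
                    nbr[2 * d][k]     = coord[k];   /* "+1" neighbour in dimension d */
                    nbr[2 * d + 1][k] = coord[k];   /* "-1" neighbour in dimension d */
                }
                nbr[2 * d][d]     = (coord[d] + 1) % extent[d];
                nbr[2 * d + 1][d] = (coord[d] - 1 + extent[d]) % extent[d];
            }
        }

        int main(void)
        {
            int extent[DIMS] = {4, 4, 4, 8, 2};   /* made-up torus shape */
            int coord[DIMS]  = {0, 3, 2, 7, 1};   /* made-up node position */
            int nbr[2 * DIMS][DIMS];

            torus_neighbors(coord, extent, nbr);
            for (int i = 0; i < 2 * DIMS; i++) {
                for (int d = 0; d < DIMS; d++)
                    printf("%d ", nbr[i][d]);
                printf("\n");
            }
            return 0;
        }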

  8. Collaborative Prototyping

    DEFF Research Database (Denmark)

    Bogers, Marcel; Horst, Willem

    2014-01-01

    This paper presents an inductive study that shows how collaborative prototyping across functional, hierarchical, and organizational boundaries can improve the overall prototyping process. Our combined action research and case study approach provides new insights into how collaborative prototyping can provide a platform for prototype-driven problem solving in early new product development (NPD). In different phases of the prototyping process, the actual prototype was used as a tool for communication or development, thus serving as a platform for the cross-fertilization of knowledge. In this way, collaborative prototyping leads to a better balance between functionality and usability; it translates usability problems into design ... Our findings have important implications for how to facilitate multistakeholder collaboration in prototyping and problem solving, and more generally for how to organize collaborative and open innovation ...

  9. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  10. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  11. Design and development of a P2P messaging application for ESPOL, using JXTA technology

    OpenAIRE

    Calle Peña, Xavier Fernando; Cedeño Mieles, Vanessa Ines; Abad Robalino, Cristina Lucia

    2009-01-01

    This abstract presents a peer-to-peer (P2P) instant messaging application for ESPOL. Since the institution's policies prevent the use of communication applications in the computer labs, we decided to present our application, developed with JXTA, as an alternative. JXTA technology is a set of protocols that allows any device connected to the network to communicate and collaborate in a P2P fashion. JXTA nodes, also called peers, create a virtual network...

  12. Collective anticipative strategic intelligence and crowdfunding: application of the L.E.SCAnning method in a peer-to-peer (P2P) economy social enterprise

    Directory of Open Access Journals (Sweden)

    Mery Blanck

    2014-03-01

    This article presents the results of a qualitative study whose objective was to investigate the applicability of the L.E.SCAnning method in social enterprises of the peer-to-peer (P2P) economy. The motivation came from the idea that self-sustainability is, in the long run, one of the greatest challenges for organizations, especially those grounded in the social economy, among them P2P enterprises. Nevertheless, social enterprises are potentially dynamic and progressive businesses from which the corporate market could learn, since they experiment and innovate. Building on exactly this innovative spirit, many social enterprises have turned to the crowdfunding model of the P2P economy, an emerging trend of collaborative organization of resources on the Web. From this perspective, one of the new developments in management that applies to the activity of organizations with a systemic focus is the practice of Collective Anticipative Strategic Intelligence (IEAc). Accordingly, the case study investigated the French social enterprise Babyloan to understand how the organization seeks, monitors and uses information captured from the external environment for its operations, prototyping, on the basis of this diagnosis, the application of one cycle of the L.E.SCAnning method. The results of this study suggest that a pragmatic understanding of the external scenario, by means of IEAc, favors decisions that carry a mark of entrepreneurship and innovation, and has a significant potential impact in the universe of the P2P social economy, an environment strongly based on perception.

  13. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  14. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
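
    As background on the STREAM benchmark mentioned above: STREAM measures sustainable memory bandwidth with four simple array kernels. The sketch below shows the "triad" kernel with a naive bandwidth estimate; it is a simplified illustration, not the official STREAM code or the SANAM configuration.

        /* Simplified sketch of the STREAM "triad" kernel, c[i] = a[i] + s*b[i];
         * bandwidth is estimated from bytes moved over elapsed time. This is an
         * illustration, not the official STREAM benchmark or the SANAM setup. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 24)   /* ~16.8 million doubles per array (~134 MB each) */

        int main(void)
        {
            double *a = malloc(N * sizeof *a);
            double *b = malloc(N * sizeof *b);
            double *c = malloc(N * sizeof *c);
            if (!a || !b || !c) return 1;

            for (long i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

            const double s = 3.0;
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < N; i++)
                c[i] = a[i] + s * b[i];
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
            double gb  = 3.0 * N * sizeof(double) / 1e9;   /* read a, read b, write c */
            printf("triad: %.2f GB/s\n", gb / sec);

            free(a); free(b); free(c);
            return 0;
        }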

  15. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  16. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provide global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality for parallel message passing.

  17. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (10/20/04). It was conceived, designed, built, and deployed in just 120 days as a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  18. Prototyping Practice

    DEFF Research Database (Denmark)

    Ramsgaard Thomsen, Mette; Tamke, Martin

    2015-01-01

    This paper examines the role of prototyping in digital architecture. During the past decade, a new research field has emerged exploring the digital technology's impact on the way we think, design and build our environment. In this practice the prototype, the pavilion, installation or demonstrator ...

  19. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  20. Unikabeton Prototype

    DEFF Research Database (Denmark)

    Søndergaard, Asbjørn; Dombernowsky, Per

    2011-01-01

    The Unikabeton prototype structure was developed as the finalization of the cross-disciplinary research project Unikabeton, exploring the architectural potential in linking the computational process of topology optimisation with robot fabrication of concrete casting moulds. The project was elaborated ... of Architecture was to develop a series of optimisation experiments, concluding in the design and optimisation of a full-scale prototype concrete structure.

  1. Optical frequency measurements of 6s 2S1/2-6p 2P3/2 transition in a 133Cs atomic beam using a femtosecond laser frequency comb

    International Nuclear Information System (INIS)

    Gerginov, V.; Tanner, C.E.; Diddams, S.; Bartels, A.; Hollberg, L.

    2004-01-01

    Optical frequencies of the hyperfine components of the D₂ line in ¹³³Cs are determined using high-resolution spectroscopy and a femtosecond laser frequency comb. A narrow-linewidth probe laser excites the 6s ²S₁/₂ (F=3,4) → 6p ²P₃/₂ (F=2,3,4,5) transition in a highly collimated atomic beam. Fluorescence spectra are taken by scanning the laser frequency over the excited-state hyperfine structure. The laser optical frequency is referenced to a Cs fountain clock via a reference laser and a femtosecond laser frequency comb. A retroreflected laser beam is used to estimate and minimize the Doppler shift due to misalignment between the probe laser and the atomic beam. We achieve an angular resolution on the order of 5×10⁻⁶ rad. The final uncertainties (∼±5 kHz) in the frequencies of the optical transitions are a factor of 20 better than previous results [T. Udem et al., Phys. Rev. A 62, 031801 (2000)]. We find the centroid of the 6s ²S₁/₂ → 6p ²P₃/₂ transition to be f(D₂) = 351 725 718.4744(51) MHz.

  2. Malicious code united-defense network based on Cloud-P2P model

    Institute of Scientific and Technical Information of China (English)

    徐小龙; 吴家兴; 杨庚

    2012-01-01

    Current anti-virus systems are usually unable to respond in time to the endless stream of newly emerging malicious codes. To solve this problem, this paper proposes and constructs a new malicious code united-defense network based on the Cloud-P2P computing model. The Cloud-P2P model integrates cloud computing and P2P computing systems organically. Servers and user terminals in the malicious code united-defense network carry out their own duties, forming a high-security collaborative defense network against malicious codes and quickly producing whole-group immunity. The paper also proposes two kinds of hierarchical network topology, C-DHT and D-DHT, which are based on distributed hash table technology and are suitable for the Cloud-P2P computing model. By introducing mobile agent technology, the vaccine agent and the patrol agent of the malicious code united-defense network are realized. The malicious code united-defense network based on the Cloud-P2P computing model shows ideal performance characteristics, such as network load balance, rapid response, comprehensive defense and good compatibility.
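
    The abstract does not spell out the C-DHT and D-DHT structures; as general background on the distributed-hash-table idea they build on, the sketch below hashes a key and assigns it to the first node whose identifier follows the hash on a ring (the successor rule). The hash function, node identifiers and keys are placeholders, not the paper's design.

        /* Background sketch of the core DHT idea (not the paper's C-DHT/D-DHT):
         * hash a key and assign it to the first node whose identifier is >= the
         * hash on a ring. Hash, node IDs and keys are placeholders. */
        #include <stdio.h>
        #include <stdint.h>

        /* FNV-1a, a simple non-cryptographic string hash. */
        static uint32_t fnv1a(const char *s)
        {
            uint32_t h = 2166136261u;
            while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
            return h;
        }

        /* Node identifiers, kept sorted, viewed as points on a 2^32 ring. */
        static const uint32_t ring[] = { 0x10000000u, 0x50000000u, 0x90000000u, 0xd0000000u };
        static const int nodes = sizeof ring / sizeof ring[0];

        static int responsible_node(const char *key)
        {
            uint32_t h = fnv1a(key);
            for (int i = 0; i < nodes; i++)
                if (h <= ring[i])
                    return i;
            return 0;   /* past the last identifier: wrap around to the first node */
        }

        int main(void)
        {
            const char *keys[] = { "malware-signature-001", "vaccine-agent-17", "patrol-agent-3" };
            for (int i = 0; i < 3; i++)
                printf("%-24s -> node %d\n", keys[i], responsible_node(keys[i]));
            return 0;
        }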

  3. Solution Prototype

    DEFF Research Database (Denmark)

    Efeoglu, Arkin; Møller, Charles; Serie, Michel

    2013-01-01

    This paper outlines an artifact building and evaluation proposal. Design Science Research (DSR) studies usually consider an encapsulated artifact that has relationships with other artifacts. The solution prototype, as a composed artifact, demands a more comprehensive consideration in its systematic environment. The solution prototype that is composed by blending product and service prototypes has particular impacts on the dualism of DSR's "Build" and "Evaluate". Since the mix between product and service prototyping can be varied, there is a demand for a more agile and iterative framework. Van de Ven's research framework seems to fit this purpose. Van de Ven allows for an iterative research approach to problem solving with a flexible starting point. The research activity is the result of the iteration between two dimensions. This framework focuses on the natural evaluation, particularly on ex...

  4. Status of the Fermilab lattice supercomputer project

    International Nuclear Information System (INIS)

    Mackenzie, P.; Eichten, E.; Hockney, G.

    1988-10-01

    Fermilab has completed construction of a sixteen node (320 megaflop peak speed) parallel computer for lattice gauge theory calculations. The architecture was designed to provide the highest possible cost effectiveness while maintaining a high level of programmability and constraining as little as possible the types of lattice problems which can be done on it. The machine is programmed in C. It is a prototype for a 256 node (5 gigaflop peak speed) computer which will be assembled this winter. 6 refs

  5. Software Prototyping

    Science.gov (United States)

    Del Fiol, Guilherme; Hanseler, Haley; Crouch, Barbara Insley; Cummins, Mollie R.

    2016-01-01

    Background: Health information exchange (HIE) between Poison Control Centers (PCCs) and Emergency Departments (EDs) could improve care of poisoned patients. However, PCC information systems are not designed to facilitate HIE with EDs; therefore, we are developing specialized software to support HIE within the normal workflow of the PCC using user-centered design and rapid prototyping. Objective: To describe the design of an HIE dashboard and the refinement of user requirements through rapid prototyping. Methods: Using previously elicited user requirements, we designed low-fidelity sketches of designs on paper with iterative refinement. Next, we designed an interactive high-fidelity prototype and conducted scenario-based usability tests with end users. Users were asked to think aloud while accomplishing tasks related to a case vignette. After testing, the users provided feedback and evaluated the prototype using the System Usability Scale (SUS). Results: Survey results from three users provided useful feedback that was then incorporated into the design. After achieving a stable design, we used the prototype itself as the specification for development of the actual software. Benefits of prototyping included 1) having subject-matter experts heavily involved with the design; 2) flexibility to make rapid changes; 3) the ability to minimize software development efforts early in the design stage; 4) rapid finalization of requirements; 5) early visualization of designs; and 6) a powerful vehicle for communication of the design to the programmers. Challenges included 1) time and effort to develop the prototypes and case scenarios; 2) no simulation of system performance; 3) not having all proposed functionality available in the final product; and 4) missing needed data elements in the PCC information system. PMID:27081404

  6. Design of novel HIV-1 protease inhibitors incorporating isophthalamide-derived P2-P3 ligands: Synthesis, biological evaluation and X-ray structural studies of inhibitor-HIV-1 protease complex

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Arun K.; Brindisi, Margherita; Nyalapatla, Prasanth R.; Takayama, Jun; Ella-Menye, Jean-Rene; Yashchuk, Sofiya; Agniswamy, Johnson; Wang, Yuan-Fang; Aoki, Manabu; Amano, Masayuki; Weber, Irene T.; Mitsuya, Hiroaki

    2017-10-01

    Based upon molecular insights from the X-ray structures of inhibitor-bound HIV-1 protease complexes, we have designed a series of isophthalamide-derived inhibitors incorporating substituted pyrrolidines, piperidines and thiazolidines as P2-P3 ligands for specific interactions in the S2-S3 extended site. Compound 4b has shown an enzyme Ki of 0.025 nM and antiviral IC50 of 69 nM. An X-ray crystal structure of inhibitor 4b-HIV-1 protease complex was determined at 1.33 Å resolution. We have also determined X-ray structure of 3b-bound HIV-1 protease at 1.27 Å resolution. These structures revealed important molecular insight into the inhibitor–HIV-1 protease interactions in the active site.

  7. Electron-impact excitation of multiply-charged ions using energy loss in merged beams: e + Si3+(3s2S1/2) → e + Si3+(3p2P1/2,3/2)

    International Nuclear Information System (INIS)

    Wahlin, E.K.; Thompson, J.S.; Dunn, G.H.; Phaneuf, R.A.; Gregory, D.C.; Smith, A.C.H.

    1990-01-01

    For the first time, absolute total cross sections for electron-impact excitation of a multiply-charged ion have been measured using an electron-energy-loss technique. Measurements were made near threshold for the process e + Si³⁺(3s ²S₁/₂) → e + Si³⁺(3p ²P₁/₂,₃/₂) − 8.88 eV. The measured cross section, of order 10⁻¹⁵ cm², agrees with the results of 7-state close-coupling calculations to better than the ±20% (90% CL) total uncertainty of the measurements. Convoluting the theoretical curve with a Gaussian energy distribution indicates an energy width of 0.15 ≲ ΔE ≲ 0.20 eV. 12 refs., 2 figs.

  8. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  9. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  10. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization allows one to solve JINR data analysis problems for large spectrometers (in particular, for the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS each. The processors are combined by means of VME standard buses. A MicroVAX-II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX-II computer periphery. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, and all JINR users have access to the suggested system

  11. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences applying it to a real-world ocean modeling application code, MPAS-Ocean. We present a detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
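
    The block-over-threads strategy sketched below is a minimal illustration of the threading idea discussed above; MPAS-Ocean itself is Fortran with MPI+OpenMP, and the mesh blocks, field names, and toy update used here are assumptions made purely for illustration.

```python
# Illustrative sketch: thread-parallel update over mesh "blocks", mimicking the
# threads-over-blocks strategy described for MPAS-Ocean (names and the stencil-free
# update are hypothetical; the real code is Fortran with MPI+OpenMP).
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def make_blocks(n_cells, n_blocks):
    """Split cell indices into contiguous blocks, one unit of thread work each."""
    return np.array_split(np.arange(n_cells), n_blocks)

def update_block(field, tendency, dt, cells):
    """Advance one block of cells; large NumPy ops release the GIL, so blocks overlap."""
    field[cells] += dt * tendency[cells]

def threaded_timestep(field, tendency, dt, blocks, n_threads=4):
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = [pool.submit(update_block, field, tendency, dt, c) for c in blocks]
        for f in futures:
            f.result()  # propagate any exceptions raised inside a thread

if __name__ == "__main__":
    n_cells = 1_000_000
    field = np.zeros(n_cells)
    tendency = np.random.rand(n_cells)
    blocks = make_blocks(n_cells, n_blocks=16)
    threaded_timestep(field, tendency, dt=0.1, blocks=blocks)
    print("max field value:", field.max())
```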

  12. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using this metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...

  13. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers-namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  14. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  15. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  16. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  17. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  18. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  19. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  20. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
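
    The quoted throughput figures can be sanity-checked with a short calculation; the total number of Kepler target stars (~200,000) is an assumption not stated in the abstract.

```python
# Back-of-the-envelope check of the "shallow" FLTI experiment cost quoted above.
# Assumed: ~200,000 Kepler target stars in total (not stated in the abstract).
injections_per_core_hour = 16
injections_per_star = 2000
fraction_of_targets = 0.16
total_targets = 200_000            # assumption
wall_clock_hours = 200

stars = fraction_of_targets * total_targets
core_hours_per_star = injections_per_star / injections_per_core_hour   # 125
total_core_hours = stars * core_hours_per_star                         # 4,000,000
cores_needed = total_core_hours / wall_clock_hours                     # 20,000

print(f"stars processed:     {stars:,.0f}")
print(f"core-hours per star: {core_hours_per_star:,.0f}")
print(f"total core-hours:    {total_core_hours:,.0f}")
print(f"cores for {wall_clock_hours} h:      {cores_needed:,.0f}")
```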

  1. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run singlethreaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
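
    A minimal sketch of the light-weight MPI wrapper pattern mentioned above (one serial payload per MPI rank inside a single batch allocation); the payload script and file names are hypothetical, and this is not the actual PanDA pilot code.

```python
# Minimal sketch of an MPI wrapper that fans out independent single-threaded
# payloads across the ranks of one large batch job (hypothetical payload;
# the real PanDA pilot machinery is considerably more involved).
from mpi4py import MPI
import subprocess
import sys

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# One input file per rank, e.g. prepared by the pilot before submission.
input_file = f"events_{rank:05d}.txt"
log_file = f"payload_{rank:05d}.log"

with open(log_file, "w") as log:
    # Each rank runs one serial payload; ranks only synchronize at the end.
    result = subprocess.run(
        ["./run_simulation.sh", input_file],   # hypothetical serial executable
        stdout=log, stderr=subprocess.STDOUT,
    )

# Gather exit codes on rank 0 so the wrapper can report overall success.
codes = comm.gather(result.returncode, root=0)
if rank == 0:
    failed = sum(1 for c in codes if c != 0)
    print(f"{size - failed}/{size} payloads succeeded")
    sys.exit(1 if failed else 0)
```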

  2. Study of quasi-free scattering in the reaction ²H(p,2p)n at E_p = 14.1 MeV

    International Nuclear Information System (INIS)

    Helten, H.J.

    1980-01-01

    The breakup reaction ²H(p,2p)n was studied at E_p = 14.1 MeV in complete coincidence experiments on quasifree pp scattering, covering a systematic range of kinematic situations of the pp subsystem for c.m. production angles between 90° and 140°, different degrees of violation of the quasifree condition, and interferences with final-state interaction processes. The absolute differential breakup cross section was compared with approximate solutions of the Faddeev equations with separable s-wave potentials without explicit Coulomb interaction, following Ebenhoeh. The agreement is generally good as regards the shape of the spectra, but the theoretical amplitude is on average 20% too high. The persistent independence of the quasifree breakup from the scattering parameter a_pp does not suggest using this process for the determination of nn scattering lengths from the mirror reaction ²H(n,2n)p. (orig.)

  3. Public action and collaborative consumption: regulation of tourist-use dwellings in the p2p context

    Directory of Open Access Journals (Sweden)

    Nicolás Alejandro Guillén Navarro

    2016-01-01

    Full Text Available Within the framework of so-called collaborative tourism, tourist-use dwellings are revolutionizing the accommodation model worldwide. Supported by their marketing through p2p environments and by the legal vacuum surrounding them, in recent years they have acquired such importance that public authorities have found it necessary to regulate them and thereby curb such problematic aspects as the underground economy generated by this activity or unfair competition with other regulated tourist accommodation establishments. Owners, tourists, the hotel sector and public administrations have generated an interesting debate about the implications and repercussions associated with tourist-use dwellings and the extent to which control should be exercised over them. Hence, this study seeks to analyse all these points of view and to show how this phenomenon is being addressed in Spain.

  4. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for job...

  5. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for jo...

  6. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean that the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience suggests that a 5-6 fold increase in computing resources is needed, which is impossible within the anticipated flat computing budgets in the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  7. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massively parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the usage of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry or oil-spill simulation on the sea surface.

  8. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
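
    One widely used way to visualize a complex wave function, as referenced above, is to map its phase to hue and its magnitude to brightness. The sketch below illustrates that mapping on an assumed Gaussian wave packet; it is not the CM-2 code or the scattering problems of the paper.

```python
# Illustrative domain-coloring visualization of a complex wave function:
# phase -> hue, magnitude -> brightness (the wave packet here is just an example).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

x, y = np.meshgrid(np.linspace(-10, 10, 400), np.linspace(-10, 10, 400))
k = 2.0                                         # wave number of the packet
psi = np.exp(-((x + 4) ** 2 + y ** 2) / 4.0) * np.exp(1j * k * x)

hue = (np.angle(psi) + np.pi) / (2 * np.pi)     # phase mapped to [0, 1)
value = np.abs(psi) / np.abs(psi).max()         # magnitude mapped to brightness
rgb = hsv_to_rgb(np.stack([hue, np.ones_like(hue), value], axis=-1))

plt.imshow(rgb, origin="lower", extent=(-10, 10, -10, 10))
plt.title("Phase as hue, |psi| as brightness")
plt.savefig("wavefunction.png", dpi=150)
```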

  9. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
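
    The linearized inversion step described above, in which a regularized system maps travel-time residuals to model adjustments, can be sketched as a damped least-squares solve; the matrix sizes, sparsity, and damping value below are arbitrary assumptions.

```python
# Sketch of the linearized tomography update: solve a damped least-squares
# system G * dm ≈ dt for model adjustments dm from travel-time residuals dt.
# The matrix is random here purely for illustration.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_rays, n_cells = 5000, 2000

# Tomographic matrix: each row holds the ray-path lengths through model cells.
G = sparse_random(n_rays, n_cells, density=0.01, format="csr", random_state=42)
dt = rng.normal(size=n_rays)            # travel-time residuals (synthetic)

# 'damp' adds Tikhonov regularization, stabilizing the under-determined problem.
result = lsqr(G, dt, damp=0.1)
dm = result[0]
print("model update norm:", np.linalg.norm(dm))
```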

  10. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  11. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software was designed for the automatic investigation and solution of computational mathematics problems with approximate data of different structures. Applied software was implemented to support mathematical modeling problems in construction, welding and filtration processes.

  12. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer

  13. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques-sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
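
    The sub-jobs idea, running many small independent payloads inside one large resource block, can be illustrated with a simple launcher that keeps a fixed number of payloads in flight; the executable name and task list below are hypothetical, and this is not the Cobalt or Swift implementation described in the paper.

```python
# Toy launcher illustrating the "many small jobs inside one big allocation" idea:
# keep N payloads running at once until the task list is exhausted.
# (Hypothetical executable and inputs; the paper's sub-jobs use Cobalt + Swift.)
import subprocess
from concurrent.futures import ThreadPoolExecutor

TASKS = [f"input_{i:04d}.dat" for i in range(256)]   # hypothetical work items
SLOTS = 32                                           # payloads kept in flight

def run_task(path):
    proc = subprocess.run(["./app.exe", path], capture_output=True, text=True)
    return path, proc.returncode

with ThreadPoolExecutor(max_workers=SLOTS) as pool:
    for path, code in pool.map(run_task, TASKS):
        status = "ok" if code == 0 else f"failed ({code})"
        print(f"{path}: {status}")
```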

  14. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. ''Power users'' were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  15. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
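
    The heart of the method described above, recovering the syntactic template of each log message so that messages can be grouped, can be approximated by masking variable fields such as numbers and hexadecimal identifiers; the log lines in the sketch below are invented, and the clustering is deliberately simplistic compared with the paper's online algorithm.

```python
# Minimal sketch of syntactic log clustering: mask variable tokens so messages
# with the same "template" fall into the same group (example log lines invented).
import re
from collections import defaultdict

def template(msg):
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)   # hex addresses
    msg = re.sub(r"\d+", "<NUM>", msg)              # counters, node ids, times
    return msg

logs = [
    "node 1742 memory error at 0x7f3a2c",
    "node 88 memory error at 0x1b00ff",
    "link 12 retransmit count 442",
    "link 7 retransmit count 9",
]

groups = defaultdict(list)
for line in logs:
    groups[template(line)].append(line)

for tmpl, members in groups.items():
    print(f"{len(members):3d}  {tmpl}")
```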

  16. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks

  17. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  18. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  19. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  20. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  1. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860 microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.

  2. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  3. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer of around 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU, owing to the difficult programming techniques needed to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.
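
    The embarrassingly parallel Monte Carlo workload that maps well onto a GPU can be illustrated with a vectorized estimate of pi; NumPy stands in here for the CUDA kernels benchmarked in the paper, and the example is not taken from it.

```python
# Vectorized Monte Carlo estimate of pi: the same independent-sample pattern a
# CUDA kernel would execute per thread (NumPy used here only as an illustration).
import numpy as np

def pi_monte_carlo(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    inside = np.count_nonzero(x * x + y * y <= 1.0)
    return 4.0 * inside / n_samples

for n in (10_000, 1_000_000, 10_000_000):
    print(f"n = {n:>10,d}  pi ≈ {pi_monte_carlo(n):.6f}")
```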

  4. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or involves long time scales, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer of around 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU, owing to the difficult programming techniques needed to convert a calculation matrix into a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation

  5. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms

  6. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declarative software for the modelling of complex systems.

  7. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³-grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology, the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  8. KfK seminar series on supercomputing and visualization from May to September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period from May 1992 to September 1992, a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP)

  9. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which makes it possible to identify different typical classes of programs, to explore the structure of the supercomputer job flow, and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
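
    The third approach above, flagging jobs whose behavior deviates strongly from the overall job flow, can be sketched as a simple z-score screen over per-job monitoring metrics; the metric names, values, and threshold below are invented for illustration and are much cruder than the detection used in practice.

```python
# Toy anomaly screen over per-job monitoring metrics: flag jobs whose metrics
# deviate strongly from the job-flow average (metrics and threshold invented).
import numpy as np

# rows = jobs, columns = (CPU utilization %, memory GB/node, network MB/s)
jobs = np.array([
    [92.0, 45.0, 120.0],
    [88.0, 46.0, 115.0],
    [90.0, 47.0, 130.0],
    [ 3.0, 48.0,   0.5],   # suspicious: allocation mostly idle
    [91.0, 46.0, 118.0],
])

z = (jobs - jobs.mean(axis=0)) / jobs.std(axis=0)
score = np.abs(z).max(axis=1)          # worst deviation across metrics
THRESHOLD = 1.5

for i, s in enumerate(score):
    flag = "ABNORMAL" if s > THRESHOLD else "ok"
    print(f"job {i}: max |z| = {s:4.2f}  {flag}")
```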

  10. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
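
    The software model mentioned above describes devices and their interconnections as a graph. The sketch below shows the basic idea of building such a graph from discovered links and answering a reachability question; the device names and links are invented, and Octotron's actual model is far richer.

```python
# Tiny sketch of the graph model: build an adjacency map from discovered
# (device, device) links and check which devices a node can reach
# (device names and links are invented for illustration).
from collections import defaultdict, deque

links = [
    ("node-001", "switch-A"), ("node-002", "switch-A"),
    ("node-003", "switch-B"), ("switch-A", "switch-core"),
    ("switch-B", "switch-core"),
]

graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

def reachable(start):
    """Breadth-first search over the link graph starting from one device."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(reachable("node-001")))
```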

  11. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within the framework of a complex networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. The performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including the organization of distributed calculations, crawling of data in social networks, and visualization of results. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, the evolution of financial networks, and epidemic spreading.

  12. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on certain problem types are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are further explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are explained, which fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  13. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems to be able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  14. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, ACME-MMF component of the U.S. Department of Energy(DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of global climate model. Super-parameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  15. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor design is often an intuition-driven process in which designers first develop or use simplified simulation tools for each physical phenomenon involved. As the project develops, the complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so that the task becomes increasingly challenging. In the context of a renewal in reactor designs, first-realization projects are often run in parallel with advanced design, although they are very dependent on the final options. As a consequence, tools are needed to globally assess and optimize reactor core features, with the accuracy of the on-going design methods. This should be possible within reasonable simulation time and without advanced computer skills at the project management scale. Also, these tools should easily cope with modeling progress in each discipline throughout the project lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the Data Analysis Framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks...) and optimization techniques (genetic algorithms). Also, database management and visualization are made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermal-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics will be presented. Also, the flexibility of the URANIE tool will be illustrated with the presentation of several approaches to improve Pareto front quality. (author)
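
    The sampling, fitting, and optimization workflow described above can be sketched in a few lines: a Latin hypercube design, a cheap surrogate fit, and a global optimizer. The toy objective stands in for a coupled neutronics/thermal-hydraulics calculation, differential evolution stands in for the genetic algorithm, and URANIE itself is a ROOT-based platform rather than this Python sketch.

```python
# Sketch of the design-of-experiments workflow described above: Latin hypercube
# sampling of design parameters, a cheap surrogate fit, then optimization.
# The quadratic "core model" is a stand-in for coupled physics codes.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import differential_evolution

def core_model(x):
    """Toy objective: pretend this is an expensive multi-physics calculation."""
    enrichment, coolant_flow = x
    return (enrichment - 3.2) ** 2 + 0.5 * (coolant_flow - 1.8) ** 2

# 1) Latin hypercube design over the two design parameters.
sampler = qmc.LatinHypercube(d=2, seed=1)
unit = sampler.random(n=64)
samples = qmc.scale(unit, l_bounds=[2.0, 1.0], u_bounds=[5.0, 3.0])
responses = np.array([core_model(s) for s in samples])

# 2) Cheap polynomial surrogate fitted to the sampled responses.
design = np.column_stack([np.ones(len(samples)), samples, samples ** 2])
coeffs, *_ = np.linalg.lstsq(design, responses, rcond=None)

def surrogate(x):
    feats = np.concatenate([[1.0], x, np.asarray(x) ** 2])
    return feats @ coeffs

# 3) Global optimization on the surrogate (differential evolution here stands
#    in for the genetic algorithm mentioned in the abstract).
result = differential_evolution(surrogate, bounds=[(2.0, 5.0), (1.0, 3.0)], seed=1)
print("optimum found at:", result.x, "surrogate value:", result.fun)
```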

  16. Rethink! prototyping transdisciplinary concepts of prototyping

    CERN Document Server

    Nagy, Emilia; Stark, Rainer

    2016-01-01

    In this book, the authors describe the findings derived from interaction and cooperation between scientific actors employing diverse practices. They reflect on distinct prototyping concepts and examine the transformation of development culture in their fusion to hybrid approaches and solutions. The products of tomorrow are going to be multifunctional, interactive systems – and already are to some degree today. Collaboration across multiple disciplines is the only way to grasp their complexity in design concepts. This underscores the importance of reconsidering the prototyping process for the development of these systems, particularly in transdisciplinary research teams. “Rethinking Prototyping – new hybrid concepts for prototyping” was a transdisciplinary project that took up this challenge. The aim of this programmatic rethinking was to come up with a general concept of prototyping by combining innovative prototyping concepts, which had been researched and developed in three sub-projects: “Hybrid P...

  17. Architectures of prototypes and architectural prototyping

    DEFF Research Database (Denmark)

    Hansen, Klaus Marius; Christensen, Michael; Sandvad, Elmer

    1998-01-01

    This paper reports from experience obtained through development of a prototype of a global customer service system in a project involving a large shipping company and a university research group. The research group had no previous knowledge of the complex business of shipping and had never worked together as a team, but developed a prototype that more than fulfilled the expectations of the shipping company. The prototype should: - complete the first major phase within 10 weeks, - be highly vertical illustrating future work practice, - continuously live up to new requirements from prototyping sessions with users, - evolve over a long period of time to contain more functionality, - allow for 6-7 developers working intensively in parallel. Explicit focus on the software architecture and letting the architecture evolve with the prototype played a major role in resolving these conflicting...

  18. Detection of radiation transitions between 4d⁹(²D₅/₂,₃/₂)5s²nl and 4d¹⁰5p(²P°₁/₂,₃/₂)nl autoionizing states of the cadmium atom in electron-ion collisions

    International Nuclear Information System (INIS)

    Gomonaj, A.N.; Imre, A.I.

    2005-01-01

    Radiation transitions between 4d⁹(²D₅/₂,₃/₂)5s²nl and 4d¹⁰5p(²P°₁/₂,₃/₂)nl autoionizing states of the Cd atom, which are dielectronic satellites of the λ325.0 nm (4d⁹5s² ²D₃/₂ → 4d¹⁰5p ²P°₁/₂) and λ353.6 nm (4d⁹5s² ²D₃/₂ → 4d¹⁰5p ²P°₃/₂) laser lines of the Cd⁺ ion, were detected for the first time in electron-ion collisions. The energy dependences of the effective cross sections for electron excitation of the satellite lines were studied in the 7-10 eV energy range. The effective cross sections for excitation of the dielectronic satellites are of order 10⁻¹⁷ cm², which is comparable with the excitation efficiency of the laser lines.

  19. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for job submission to Titan’s batch queues and local data management, with lightweight MPI wrappers to run single node workloads in parallel on Titan’s multi-core worker nodes. It provides for running of standard ATLAS production jobs on unused resources (backfill) on Titan. The system already allowed ATLAS to collect on Titan millions of core-hours per month, execute hundreds of thousands jobs, while simultaneously improving Titans utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to

  20. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  1. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  2. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  3. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  4. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of complex engineering systems on a CRAY X-MP supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a Production-Rule Analysis System, HAL-1986, that uses both backward-chaining and forward-chaining. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations, in Nuclear Reactor Safety Analysis. The use of these methodologies for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed.

  5. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement and the Trubal program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)". Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  6. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis Workload Management System, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split input files into chunks which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data (see the sketch below). We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
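
    A hedged illustration of the split-process-merge pattern described above (not the production PanDA/PALEOMIX code; the file name and four-lines-per-record FASTQ layout are assumptions):

```python
# Illustrative sketch of the split-process-merge pattern: the input is cut into
# chunks, each chunk becomes an independent job, and per-chunk outputs are merged.
from pathlib import Path


def split_into_chunks(lines, records_per_chunk, outdir, lines_per_record=4):
    """Split a FASTQ-like text (4 lines per record) into chunk files."""
    outdir = Path(outdir)
    outdir.mkdir(parents=True, exist_ok=True)
    chunk_paths, step = [], records_per_chunk * lines_per_record
    for i in range(0, len(lines), step):
        path = outdir / f"chunk_{i // step:04d}.fastq"
        path.write_text("".join(lines[i:i + step]))
        chunk_paths.append(path)
    return chunk_paths


def merge_outputs(result_paths, merged_path):
    """Concatenate per-chunk results into a single output file."""
    with open(merged_path, "w") as merged:
        for path in sorted(result_paths):
            merged.write(Path(path).read_text())


if __name__ == "__main__":
    # "sample.fastq" is a placeholder input file name, not from the paper.
    lines = Path("sample.fastq").read_text().splitlines(keepends=True)
    chunks = split_into_chunks(lines, records_per_chunk=100000, outdir="chunks")
    print(f"created {len(chunks)} chunk files; submit one job per chunk, then merge")
```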

  7. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that uses the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can not only help improve the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics
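
    As a toy illustration of the kind of time-series correlation analysis such a system builds on (the column names and data below are fabricated for the example, not taken from the paper):

```python
# Toy sketch: given per-link utilization counters sampled over time, compute
# pairwise correlations to flag links whose congestion tends to rise together.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
t = 200  # number of time samples

# Fabricated example: two local links and one global link in one router group.
frame = pd.DataFrame({
    "g0_local0": rng.random(t),
    "g0_local1": rng.random(t),
    "g0_global": rng.random(t),
})
# Make two series deliberately correlated to show what the analysis surfaces.
frame["g0_local1"] = 0.8 * frame["g0_local0"] + 0.2 * frame["g0_local1"]

corr = frame.corr()
print(corr.round(2))
# Highly correlated pairs (off the diagonal) would be candidates for deeper,
# visual inspection of routing or job-placement effects.
```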

  8. Enhanced P2P Services Providing Multimedia Content

    Directory of Open Access Journals (Sweden)

    E. Ardizzone

    2007-01-01

    To address this major limitation, we propose an original image and video sharing system in which a user is able to interactively search for interesting resources by means of content-based image and video retrieval techniques. In order to limit the network traffic load, maximizing the usefulness of each peer contacted in the query process, we also propose the adoption of an adaptive overlay routing algorithm exploiting compact representations of the multimedia resources shared by each peer. Experimental results confirm the validity of the proposed approach, which is capable of dynamically adapting the network topology to peer interests on the basis of query interactions among users.
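
    The routing idea can be sketched as follows, under the assumption that each neighbour advertises a compact summary vector of its shared content and that a query carries a comparable feature vector; the peer then forwards the query to the most similar neighbours. This is only an illustration of the principle, not the authors' algorithm.

```python
# Hedged sketch of content-aware query forwarding: rank neighbours by the
# cosine similarity between their content summaries and the query vector.
import numpy as np


def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


def pick_neighbours(query_vec, neighbour_summaries, k=2):
    """Return the ids of the k neighbours most similar to the query."""
    scored = [(cosine(query_vec, vec), peer_id)
              for peer_id, vec in neighbour_summaries.items()]
    scored.sort(reverse=True)
    return [peer_id for _, peer_id in scored[:k]]


if __name__ == "__main__":
    # Toy 4-dimensional content summaries (e.g. colour/texture histograms).
    summaries = {
        "peerA": np.array([0.9, 0.1, 0.0, 0.0]),
        "peerB": np.array([0.1, 0.8, 0.1, 0.0]),
        "peerC": np.array([0.5, 0.4, 0.1, 0.0]),
    }
    query = np.array([0.8, 0.2, 0.0, 0.0])
    print(pick_neighbours(query, summaries, k=2))  # likely ['peerA', 'peerC']
```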

  9. Managing Network Partitions in Structured P2P Networks

    Science.gov (United States)

    Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif

    Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. Consequently, the problem of network partitions and mergers is highly related to fault-tolerance and self-management in large-scale systems, which makes resilience to network partitions a crucial requirement for building any structured peer-to-peer system. Despite this, the problem has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, as it is similar to massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger. We present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how our solution is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible, as the tradeoff between message complexity and time complexity can be adjusted by a parameter.
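
    A drastically simplified, centralized illustration of what merging two rings means structurally follows; the chapter's actual algorithm is decentralized and gossip-based, so this sketch only shows the end state being computed.

```python
# Simplified illustration of a ring merger: take the node identifiers of both
# rings and recompute each node's successor over the union of identifiers.
def merge_rings(ring_a, ring_b):
    """Return the merged ring as an ordered list and a successor map."""
    ids = sorted(set(ring_a) | set(ring_b))
    successor = {ids[i]: ids[(i + 1) % len(ids)] for i in range(len(ids))}
    return ids, successor


if __name__ == "__main__":
    # Two overlays that formed independently during a network partition.
    ring_a = [5, 20, 41, 87]
    ring_b = [12, 33, 60, 95]
    merged, succ = merge_rings(ring_a, ring_b)
    print(merged)              # [5, 12, 20, 33, 41, 60, 87, 95]
    print(succ[87], succ[95])  # 95 5  (the ring wraps around)
```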

  10. Distributing Workflows over a Ubiquitous P2P Network

    Directory of Open Access Journals (Sweden)

    Eddie Al-Shakarchi

    2007-01-01

    Full Text Available This paper discusses issues in the distribution of bundled workflows across ubiquitous peer-to-peer networks for the application of music information retrieval. The underlying motivation for this work is provided by the DART project, which aims to develop a novel music recommendation system by gathering statistical data using collaborative filtering techniques and the analysis of the audio itself, in order to create a reliable and comprehensive database of the music that people own and which they listen to. To achieve this, the DART scientists creating the algorithms need the ability to distribute the Triana workflows they create, representing the analysis to be performed, across the network on a regular basis (perhaps even daily) in order to update the network as a whole with new workflows to be executed for the analysis. DART uses a similar approach to BOINC but differs in that the workers receive input data in the form of a bundled Triana workflow, which is executed in order to process any MP3 files that they own on their machine. Once analysed, the results are returned to DART's distributed database that collects and aggregates the resulting information. DART employs the use of package repositories to decentralise the distribution of such workflow bundles, and this approach is validated in this paper through simulations that show that suitable scalability is maintained as the number of participants increases. The results clearly illustrate the effectiveness of the approach.

  11. Streaming layered video over P2P networks

    NARCIS (Netherlands)

    Alhaisoni, M.; Ghanbari, M.; Liotta, A.

    2009-01-01

    Peer-to-peer streaming has been increasingly deployed recently. This stems from its ability to convey a stream over the IP network to a large number of end-users (or peers). However, due to the heterogeneous nature of the peers, some of them will not be capable of relaying or uploading the

  12. Review of Brookhaven nuclear transparency measurements in (p,2p ...

    Indian Academy of Sciences (India)

    In this contribution we summarize the results of two experiments to measure ... Keywords: nuclear; color transparency; protons; alternating gradient synchrotron; large angle. Proceedings of the XIII International Symposium on Multi-particle Dynamics.

  13. Nonmonotonic Trust Management for P2P Applications

    NARCIS (Netherlands)

    Czenko, M.R.; Tran, H.M.; Doumen, J.M.; Etalle, Sandro; Hartel, Pieter H.; den Hartog, Jeremy

    Community decisions about access control in virtual communities are non-monotonic in nature. This means that they cannot be expressed in current, monotonic trust management languages such as the family of Role Based Trust Management languages (RT). To solve this problem we propose RTo, which adds a

  14. Imagining the prototype

    OpenAIRE

    Brouwer, C. E.; Bhomer, ten, M.; Melkas, H.; Buur, J.

    2013-01-01

    This article reports on the analysis of a design session, employing conversation analysis. In the design session three experts and a designer discuss a prototype of a shirt, which has been developed with the input from these experts. The analysis focuses on the type of involvement of the participants with the prototype and how they explicate the points they make in the discussion with or without making use of the prototype. Three techniques for explicating design issues that exploit the proto...

  15. Rapid Prototyping Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The ARDEC Rapid Prototyping (RP) Laboratory was established in December 1992 to provide low cost RP capabilities to the ARDEC engineering community. The Stratasys,...

  16. Fabrication and Prototyping Lab

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: The Fabrication and Prototyping Lab for composite structures provides a wide variety of fabrication capabilities critical to enabling hands-on research and...

  17. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  18. Designing and testing prototypes

    NARCIS (Netherlands)

    Vereijken, P.; Wijnands, F.; Stol, W.

    1995-01-01

    This second progress report focuses on designing a theoretical prototype by linking parameters to methods and designing the methods in this context until they are ready for initial testing. The report focuses also on testing and improving the prototype in general and the methods in particular until

  19. EUCLID ARCHIVE SYSTEM PROTOTYPE

    NARCIS (Netherlands)

    Belikov, Andrey; Williams, Owen; Droge, Bob; Tsyganov, Andrey; Boxhoorn, Danny; McFarland, John; Verdoes Kleijn, Gijs; Valentijn, E; Altieri, Bruno; Dabin, Christophe; Pasian, F.; Osuna, Pedro; Soille, P.; Marchetti, P.G.

    2014-01-01

    The Euclid Archive System prototype is a functional information system which is used to address the numerous challenges in the development of a fully functional data processing system for Euclid. The prototype must support the highly distributed nature of the Euclid Science Ground System, with Science

  20. Specifications in software prototyping

    OpenAIRE

    Luqi; Chang, Carl K.; Zhu, Hong

    1998-01-01

    We explore the use of software specifications for software prototyping. This paper describes a process model for software prototyping, and shows how specifications can be used to support such a process via a cellular mobile phone switch example.

  1. EPCiR prototype

    DEFF Research Database (Denmark)

    2003-01-01

    A prototype of a residential pervasive computing platform based on OSGi, involving among other things a mock-up of a health care bandage.

  2. Cooperative Prototyping Experiments

    DEFF Research Database (Denmark)

    Bødker, Susanne; Grønbæk, Kaj

    1989-01-01

    This paper describes experiments with a design technique that we denote cooperative prototyping. The experiments consider design of a patient case record system for municipal dental clinics, in which we used HyperCard, an off-the-shelf programming environment for the Macintosh. In the experiments we tried to achieve a fluent, work-like evaluation of prototypes where users envisioned future work with a computer tool, at the same time as we made on-line modifications of prototypes in cooperation with the users when breakdowns occurred in their work-like evaluation. The experiments showed that it was possible to make a number of direct manipulation changes of prototypes in cooperation with the users, in interplay with their fluent work-like evaluation of these. However, breakdowns occurred in the prototyping process when we reached the limits of the direct manipulation support for modification. From...

  3. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM2 Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM2 project. The DISCOM2 communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  4. Excitation function measurements of ^40Ar(p,3n)^38K, ^40Ar(p,2pn)^38Cl and ^40Ar(p,2p)^39Cl reactions

    CERN Document Server

    Nagatsu, K; Suzuki, K

    1999-01-01

    For the production of ^38K, excitation functions of the ^40Ar(p,3n)^38K reaction and its accompanying reactions ^40Ar(p,2pn)^38Cl and ^40Ar(p,2p)^39Cl were measured at proton energies of 20.5-39.5 MeV to determine the optimum conditions of irradiation. Target cells containing argon gas were prepared using specially developed tools in an argon-replaced glove box. In the ^40Ar(p,3n)^38K, ^40Ar(p,2pn)^38Cl and ^40Ar(p,2p)^39Cl reactions, the maximum cross sections were 6.7±0.7, 34±3.3 and 11±1.2 mbarn at 37.6, 39.5 and 32.0 MeV, respectively, and the saturation thick-target yields were calculated to be 560, 2200, and 1300* MBq/μA, respectively, at an incident energy of 39.5 MeV (*integral yield above 21 MeV).
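
    For context, the reported yields relate to the measured excitation function through the standard thick-target yield integral; the following is a hedged sketch in our own notation, not necessarily the exact convention used by the authors:

$$
Y_{\mathrm{sat}} \;=\; \frac{N_A}{M\,e}\int_{E_{\mathrm{out}}}^{E_{\mathrm{in}}}
\frac{\sigma(E)}{\bigl|\,dE/d(\rho x)\,\bigr|}\,dE ,
$$

    where $N_A$ is Avogadro's number, $M$ the molar mass of the target gas, $e$ the elementary charge, $dE/d(\rho x)$ the mass stopping power of the target for protons, and $E_{\mathrm{in}}$, $E_{\mathrm{out}}$ the beam energies at the entrance and exit of the gas cell; multiplying by the beam current gives the saturation activity quoted in MBq/μA.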

  5. PRMS Data Warehousing Prototype

    Science.gov (United States)

    Guruvadoo, Eranna K.

    2002-01-01

    Project and Resource Management System (PRMS) is a web-based, mid-level management tool developed at KSC to provide a unified enterprise framework for Project and Mission management. The addition of a data warehouse as a strategic component to the PRMS is investigated through the analysis, design and implementation processes of a data warehouse prototype. As a proof of concept, a demonstration of the prototype with its OLAP technology for multidimensional data analysis is made. The results of the data analysis and the design constraints are discussed. The prototype can be used to motivate interest in and support for an operational data warehouse.

  6. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation

  7. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  8. Violación derechos de autor a través de redes p2p, ¿responsabilidad de los prestadores de servicios de la sociedad de la información o de los miembros de las redes?

    Directory of Open Access Journals (Sweden)

    Javier Andrés Moreno

    2010-11-01

    Full Text Available P2P networks are one of the most important advances in the world of electronic commerce, owing to the accelerated growth in both the number of users and the number of functions they have acquired in recent years. The migration from the analogue world to the digital world is now a reality; for this reason the network becomes a social environment that demands legislation aimed at safeguarding the proper development of the conduct carried out in this space, in keeping with the role that the virtual environment has come to play. It is therefore necessary to study what happens when the conduct carried out within these networks constitutes a clear infringement of protected rights, in order to determine who is liable for the damage caused. This paper therefore begins by establishing the legal framework within which the liability of the participants in P2P networks will be analysed, then defines in legal terms who those participants are and which conducts amount to an infringement of rights, and finally determines under which circumstances those participants are liable for conduct that results in the violation of rights.

  9. Determination of the 1s2l2l' state production ratios ^4P^o/^2P, ^2D/^2P and ^2P_+/^2P_- from fast (1s^2, 1s2s ^3S) mixed-state He-like ion beams in collisions with H2 targets

    Science.gov (United States)

    Benis, E. P.; Zouros, T. J. M.

    2016-12-01

    New results are presented on the ratio R_m = σ_T2p(^4P)/σ_T2p(^2P) concerning the production cross sections of Li-like 1s2s2p quartet and doublet P states formed in energetic ion-atom collisions by single 2p electron transfer to the metastable 1s2s ^3S component of the He-like ion beam. Spin statistics predict a value of R_m = 2 independent of the collision system, in disagreement with most reported measurements of R_m ≈ 1-9. A new experimental approach is presented for the evaluation of R_m having some practical advantages over earlier approaches. It also allows for the determination of the separate contributions of ground- and metastable-state beam components to the measured spectra. Applying our technique to zero-degree Auger projectile spectra from 4.5 MeV B^3+ (Benis et al 2002 Phys. Rev. A 65 064701) and 25.3 MeV F^7+ (Zamkov et al 2002 Phys. Rev. A 65 062706) mixed-state (1s^2 ^1S, 1s2s ^3S) He-like ion collisions with H2 targets, we report new values of R_m = 3.5 ± 0.4 for boron and R_m = 1.8 ± 0.3 for fluorine. In addition, the ratios of ^2D/^2P and ^2P_+/^2P_- populations from either the metastable and/or ground-state beam component, also relevant to this analysis, are evaluated and compared to previously reported results for carbon collisions on helium (Strohschein et al 2008 Phys. Rev. A 77 022706), including a critical comparison to theory.
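
    As a reminder of where the benchmark value quoted above comes from, here is a hedged sketch of the spin-statistics argument in our own notation: if the captured 2p electron couples purely statistically to the 1s2s ^3S core, each final term is populated in proportion to its spin multiplicity 2S+1, so

$$
R_m \;\equiv\; \frac{\sigma_{T2p}({}^{4}P)}{\sigma_{T2p}({}^{2}P)}
\;=\; \frac{2\cdot\frac{3}{2}+1}{2\cdot\frac{1}{2}+1} \;=\; \frac{4}{2} \;=\; 2 ,
$$

    independent of the collision system, which is the prediction that the measured ratios of roughly 1.8 to 3.5 are compared against.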

  10. Violación de derechos de autor a través de redes p2p, ¿responsabilidad de los prestadores de servicios de la sociedad de la información o de los miembros de las redes?

    Directory of Open Access Journals (Sweden)

    Javier Andrés Moreno

    2010-11-01

    Full Text Available P2P networks are one of the most important advances in the world of electronic commerce, owing to the accelerated growth in both the number of users and the number of functions they have taken on in recent years. The migration from the analogue world to the digital world is now a reality; for this reason the network becomes a social environment that demands legislation aimed at safeguarding the proper development of the conduct carried out in this space, in keeping with the role that the virtual environment has come to play. It is therefore necessary to study what happens when the conduct carried out within this kind of network constitutes a clear infringement of protected rights, in order to determine who answers for the damage caused. This paper therefore begins by establishing the legal framework within which the duty of liability of the participants in P2P networks will be analysed, then defines in legal terms who the participants are and which conducts amount to an infringement of rights, and finally determines under which circumstances those participants are liable for carrying out the conducts that result in the violation of rights.

  11. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real-time or quicker on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)
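
    The "real-time synchronization" mentioned above amounts to pacing the numerical model so that simulated time tracks wall-clock time. A generic sketch of such a pacing loop, not the plant simulator's actual code, is:

```python
# Sketch of one common way to establish real-time synchronization for an
# interactive simulator: advance the model in fixed steps and sleep so that
# simulated time never runs ahead of wall-clock time.
import time


def run_real_time(step_fn, dt=0.1, duration=5.0):
    """Advance step_fn(t, dt) once every dt seconds of wall-clock time."""
    start = time.monotonic()
    t = 0.0
    while t < duration:
        step_fn(t, dt)                      # one fixed-size integration step
        t += dt
        lag = (start + t) - time.monotonic()
        if lag > 0:
            time.sleep(lag)                 # wait for the wall clock to catch up
        # if lag <= 0 the step overran its budget: real time is not being met


if __name__ == "__main__":
    run_real_time(lambda t, dt: print(f"simulated t = {t:.1f} s"), dt=0.5, duration=2.0)
```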

  12. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  13. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
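
    To make the structure of such a framework concrete, a deliberately simplified schematic of the decomposition it describes (our own notation, not the authors' exact formulas) is

$$
T_{\mathrm{total}}(p) \;\approx\; T_{\mathrm{comp}} + T_{\mathrm{mem}}(p) + T_{\mathrm{comm}}(p),
\qquad
T_{\mathrm{comm}}(p) \;\approx\; n_{\mathrm{msg}}\,\alpha + \frac{V_{\mathrm{msg}}}{\beta(p)},
$$

    where $T_{\mathrm{mem}}(p)$ is the extra time attributed to memory-bandwidth contention when $p$ cores share a node (estimated from the sustained STREAM bandwidth), and the communication term is a parameterized latency/bandwidth model ($\alpha$, $\beta$) calibrated with MPI benchmarks.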

  14. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  15. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community

  16. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  17. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi-Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. A sample source code is given.
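
    For readers unfamiliar with the algorithm being parallelized, a minimal serial sketch of backpropagation for a single-hidden-layer perceptron follows; it illustrates only the algorithm itself, not the Quadrics/SIMD data layout used in the paper.

```python
# Minimal single-hidden-layer MLP trained with plain backpropagation on XOR.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (mean squared error, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Outputs should be approximately [[0], [1], [1], [0]] after training.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```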

  18. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

    In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reduction of visualization processing time as well as efficient use of the JAEA network is required. As a solution, we introduced a remote visualization system which has the ability to utilize parallel processors on the supercomputer and to reduce the usage of network resources by transferring data of the intermediate visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also shown. (author)

  19. From prototype to product

    DEFF Research Database (Denmark)

    Andersen, Tariq Osman; Bansler, Jørgen P.; Kensing, Finn

    2017-01-01

    This paper delves into the challenges of engaging patients, clinicians and industry stakeholders in the participatory design of an mHealth platform for patient-clinician collaboration. It follows the process from the development of a research prototype to a commercial software product. In particular, we draw attention to four major challenges of (a) aligning the different concerns of patients and clinicians, (b) designing according to clinical accountability, (c) ensuring commercial interest, and (d) dealing with regulatory constraints when prototyping safety-critical health Information Technology. Using four illustrative cases, we discuss what these challenges entail and the implications they pose to Participatory Design. We conclude the paper by presenting lessons learned.

  20. PANDA Muon System Prototype

    Science.gov (United States)

    Abazov, Victor; Alexeev, Gennady; Alexeev, Maxim; Frolov, Vladimir; Golovanov, Georgy; Kutuzov, Sergey; Piskun, Alexei; Samartsev, Alexander; Tokmenin, Valeri; Verkheev, Alexander; Vertogradov, Leonid; Zhuravlev, Nikolai

    2018-04-01

    The PANDA Experiment will be one of the key experiments at the Facility for Antiproton and Ion Research (FAIR), which is now under construction on the territory of the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. PANDA aims to study hadron spectroscopy and various topics of the weak and strong forces. The Muon System is chosen as the most suitable technology for detecting muons. The prototype of the PANDA Muon System is installed on the test beam line T9 at the Proton Synchrotron (PS) at CERN. The status of the PANDA Muon System prototype is presented with a few preliminary results.

  1. Prototyping a Smart City

    DEFF Research Database (Denmark)

    Korsgaard, Henrik; Brynskov, Martin

    In this paper, we argue that by approaching the so-called Smart City as a design challenge, from an interaction design perspective, it is possible both to uncover existing challenges in the interplay between people, technology and society, and to prototype possible futures. We present a case in which we exposed data about the online communication between the citizens and the municipality on a highly visible media facade, while at the same time prototyping a tool that enabled citizens to report ‘bugs’ within the city.

  2. PANDA Muon System Prototype

    Directory of Open Access Journals (Sweden)

    Abazov Victor

    2018-01-01

    Full Text Available The PANDA Experiment will be one of the key experiments at the Facility for Antiproton and Ion Research (FAIR), which is now under construction on the territory of the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. PANDA aims to study hadron spectroscopy and various topics of the weak and strong forces. The Muon System is chosen as the most suitable technology for detecting muons. The prototype of the PANDA Muon System is installed on the test beam line T9 at the Proton Synchrotron (PS) at CERN. The status of the PANDA Muon System prototype is presented with a few preliminary results.

  3. LEP vacuum chamber, prototype

    CERN Multimedia

    CERN PhotoLab

    1983-01-01

    Final prototype for the LEP vacuum chamber, see 8305170 for more details. Here we see the strips of the NEG pump, providing "distributed pumping". The strips are made from a Zr-Ti-Fe alloy. By passing an electrical current, they were heated to 700 deg C.

  4. Imagining the prototype

    NARCIS (Netherlands)

    Brouwer, C. E.; Bhomer, ten M.; Melkas, H.; Buur, J.

    2013-01-01

    This article reports on the analysis of a design session, employing conversation analysis. In the design session three experts and a designer discuss a prototype of a shirt, which has been developed with the input from these experts. The analysis focuses on the type of involvement of the

  5. MIND performance and prototyping

    International Nuclear Information System (INIS)

    Cervera-Villanueva, A.

    2008-01-01

    The performance of MIND (Magnetised Iron Neutrino Detector) at a neutrino factory has been revisited in a new analysis. In particular, the low neutrino energy region is studied, obtaining an efficiency plateau around 5 GeV for a background level below 10^-3. A first look has been given into the detector optimisation and prototyping.

  6. The prototype fast reactor

    International Nuclear Information System (INIS)

    Broomfield, A.M.

    1985-01-01

    The paper concerns the Prototype Fast Reactor (PFR), which is a liquid metal cooled fast reactor power station, situated at Dounreay, Scotland. The principal design features of a Fast Reactor and the PFR are given, along with key points of operating history, and health and safety features. The role of the PFR in the development programme for commercial reactors is discussed. (U.K.)

  7. AGS Booster prototype magnets

    Energy Technology Data Exchange (ETDEWEB)

    Danby, G.; Jackson, J.; Lee, Y.Y.; Phillips, R.; Brodowski, J.; Jablonski, E.; Keohane, G.; McDowell, B.; Rodger, E.

    1987-03-19

    Prototype magnets have been designed and constructed for two half cells of the AGS Booster. The lattice requires 2.4 m long dipoles, each curved by 10°. The multi-use Booster injector requires several very different standard magnet cycles, capable of instantaneous interchange using computer control from dc up to 10 Hz.

  8. AGS booster prototype magnets

    International Nuclear Information System (INIS)

    Danby, G.; Jackson, J.; Lee, Y.Y.; Phillips, R.; Brodowski, J.; Jablonski, E.; Keohane, G.; McDowell, B.; Rodger, E.

    1987-01-01

    Prototype magnets have been designed and constructed for two half cells of the AGS Booster. The lattice requires 2.4 m long dipoles, each curved by 10°. The multi-use Booster injector requires several very different standard magnet cycles, capable of instantaneous interchange using computer control from dc up to 10 Hz.

  9. Cockroft Walton accelerator prototype

    International Nuclear Information System (INIS)

    Hutapea, Sumihar.

    1976-01-01

    Prototype of a Cockroft Walton generator using ceramic and plastic capacitors is discussed. Compared to the previous generator, the construction and components are much more improved. Pralon is used for the high voltage insulation column and plastic is used as a dielectric material for the high voltage capacitor. Cockroft Walton generator is used as a high tension supply for an accelerator. (author)

  10. Prompt and Precise Prototyping

    Science.gov (United States)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  11. Surrogates-based prototyping

    NARCIS (Netherlands)

    Du Bois, E.; Horvath, I.

    2014-01-01

    The research is situated in the system development phase of interactive software products. In this detailed design phase, we found a need for fast testable prototyping to achieve qualitative change proposals on the system design. In this paper, we discuss a literature study on current software

  12. Z Andromedae: the prototype

    International Nuclear Information System (INIS)

    Viotti, R.; Giangrande, A.; Ricciardi, O.; Cassatella, A.

    1982-01-01

    Z And is considered as the ''prototype'' of the symbiotic stars. Besides its symbiotic spectrum, the star is also known for its characteristic light curve (and for the related spectral variations). Since many theoretical speculations on Z And and similar objects have been based on the luminosity and spectral variations of this star, the authors critically analyse the observational data concerning it. (Auth.)

  13. Prototype ATLAS straw tracker

    CERN Multimedia

    Laurent Guiraud

    1998-01-01

    This is an early prototype of the straw tracking device for the ATLAS detector at CERN. This detector will be part of the LHC project, scheduled to start operation in 2008. The straw tracker will consist of thousands of gas-filled straws, each containing a wire, allowing the tracks of particles to be followed.

  14. Courthouse Prototype Building

    Energy Technology Data Exchange (ETDEWEB)

    Malhotra, Mini [ORNL; New, Joshua Ryan [ORNL; Im, Piljae [ORNL

    2018-02-01

    As part of DOE's support of ANSI/ASHRAE/IES Standard 90.1 and the IECC, researchers at Pacific Northwest National Laboratory (PNNL) apply a suite of prototype buildings covering 80% of the commercial building floor area in the U.S. for new construction. Efforts have started on expanding the prototype building suite to cover 90% of the commercial building floor area in the U.S. by developing prototype models for additional building types, including place of worship, public order and safety, and public assembly. Courthouse is a sub-category under the "Public Order and Safety" building type category; other sub-categories include police station, fire station, and jail, reformatory or penitentiary. ORNL used building design guides, databases, and documented courthouse projects, supplemented by personal communication with courthouse facility planning and design experts, to systematically conduct research on courthouse building and system characteristics. This report documents the research conducted for the courthouse building type and proposes building and system characteristics for developing a prototype building energy model to be included in the Commercial Building Prototype Model suite. According to the 2012 CBECS, courthouses occupy a total of 436 million sqft of floor space, or 0.5% of the total floor space in all commercial buildings in the US, next to the fast food (0.35%), grocery store or food market (0.88%), and restaurant or cafeteria (1.2%) building types currently included in the Commercial Prototype Building Model suite. Considering aggregated averages, courthouses are among the larger building types, with a mean floor area of 69,400 sqft, and have an average fuel consumption intensity of 94.7 kBtu/sqft, compared to 77.8 kBtu/sqft for office and 80 kBtu/sqft for all commercial buildings. Courthouses range in size from 1,000 sqft to over a million square feet of gross building area, and from 1 courtroom to over 100 courtrooms. Small courthouses

  15. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  16. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments, and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  17. Database Replication Prototype

    OpenAIRE

    Vandewall, R.

    2000-01-01

    This report describes the design of a Replication Framework that facilitates the implementation and comparison of database replication techniques. Furthermore, it discusses the implementation of a Database Replication Prototype and compares the performance measurements of two replication techniques based on the Atomic Broadcast communication primitive: pessimistic active replication and optimistic active replication. The main contributions of this report can be split into four parts....

  18. Brachial Plexus Blocker Prototype

    OpenAIRE

    Stéphanie Coelho Monteiro

    2017-01-01

    Although the area of surgical simulation has been the subject of study in recent years, it is still necessary to develop artificial experimental models with a view to dispensing with biological models. To make simulators more realistic, transferring the health professional's working environment to a physical or virtual reality, an anesthetic prototype has been developed in which the motor response is replicated when the brachial plexus is subjected to a proximal nervous stimulus....

  19. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  20. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  1. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  2. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
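
    The speedups reported above rest on a load balancing strategy whose details are not given in this record. A common, simple choice for such document-level parallelism is a greedy longest-task-first assignment, sketched below with hypothetical document sizes standing in for per-document processing cost; ParaBTM's actual strategy may differ.

        # Hedged sketch of a simple static load balancer: assign the largest remaining
        # document to the least-loaded worker (longest-processing-time-first heuristic).
        # Illustrative only; not ParaBTM's actual strategy.
        import heapq

        def balance(doc_sizes, n_workers):
            # Min-heap of (current_load, worker_id); bigger documents are placed first.
            heap = [(0, w) for w in range(n_workers)]
            heapq.heapify(heap)
            assignment = {w: [] for w in range(n_workers)}
            for doc, size in sorted(doc_sizes.items(), key=lambda kv: -kv[1]):
                load, w = heapq.heappop(heap)
                assignment[w].append(doc)
                heapq.heappush(heap, (load + size, w))
            return assignment

        if __name__ == "__main__":
            sizes = {"pmid_%d" % i: (i * 37) % 100 + 1 for i in range(20)}  # hypothetical costs
            for worker, docs in balance(sizes, 4).items():
                print(worker, sum(sizes[d] for d in docs), docs)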

  3. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers with their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented. They reveal the details of the losses for a single operation. Then we analyze the global performance of a whole supercomputer by identifying reduction factors that bring down the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. Then the price-performance ratio for different architectures in a snapshot of January 1991 is briefly mentioned. Finally, some remarks on a user-friendly architecture for a supercomputer will be made. (orig.)
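
    The idea of reduction factors multiplying down the theoretical peak can be made concrete with a small numeric sketch; every number below (the node peak and the individual factors) is invented purely for illustration.

        # Hedged numeric illustration of how multiplicative reduction factors shrink
        # theoretical peak performance; the factor values are invented, not measured.
        peak_gflops = 2.7 * 8 * 16          # clock (GHz) * flops/cycle * cores: hypothetical node peak
        factors = {
            "memory bandwidth limit": 0.35,
            "vectorization efficiency": 0.70,
            "parallel load imbalance": 0.85,
            "communication/synchronization": 0.80,
        }
        real = peak_gflops
        for name, f in factors.items():
            real *= f
            print("after %-30s %8.1f GFLOP/s" % (name, real))
        print("overall fraction of peak: %.1f%%" % (100 * real / peak_gflops))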

  4. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single node performance of our solution using KVM on a Cray is very efficient with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  5. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created the basis for a new research area, Economics of Quality, whose tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities governing complex socio-economic systems. We believe that the extensive application and development of such models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, this research contributes significantly to the simulation of multi-agent social systems and, no less important, belongs to the priority areas of science and technology development in our country. This article is devoted to the application of supercomputer technologies in the social sciences, above all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that growing computing power makes it possible to describe the behavior of many separate fragments of a complex system, as socio-economic systems are. The article also reviews the experience of foreign scientists and practitioners in running AFMs on supercomputers, presents an AFM developed at CEMI RAS, and analyzes the stages and methods for mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation to forecast the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  6. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Curves of the response were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.
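
    The finite difference approach referred to above advances the temperature field step by step in time; a one-dimensional sketch of the generic explicit update for the heat equation is shown below. The diffusivity, grid and boundary treatment are invented for illustration and do not reproduce the paper's toroidal electrode geometry or soil models.

        # Hedged 1-D illustration of an explicit finite-difference step for the heat
        # equation dT/dt = alpha * d2T/dx2; the real computation was multi-dimensional
        # and used measured soil parameters.
        import numpy as np

        alpha = 1.0e-6                 # thermal diffusivity (m^2/s), illustrative value
        dx = 0.1                       # grid spacing (m)
        dt = 0.4 * dx * dx / alpha     # time step below the explicit stability limit dx^2/(2*alpha)
        T = np.full(200, 10.0)         # initial soil temperature (deg C)
        T[100] = 60.0                  # hot spot standing in for heat injected near the electrode

        for step in range(1000):
            lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / (dx * dx)
            lap[0] = lap[-1] = 0.0     # hold the boundary nodes fixed
            T = T + alpha * dt * lap

        print("peak temperature after 1000 steps: %.2f C" % T.max())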

  7. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  8. Prototyping real-time systems

    OpenAIRE

    Clynch, Gary

    1994-01-01

    The traditional software development paradigm, the waterfall life cycle model, is defective when used for developing real-time systems. This thesis puts forward an executable prototyping approach for the development of real-time systems. A prototyping system is proposed which uses ESML (Extended Systems Modelling Language) as a prototype specification language. The prototyping system advocates the translation of non-executable ESML specifications into executable LOOPN (Language of Object ...

  9. Federated data storage system prototype for LHC experiments and data intensive science

    Science.gov (United States)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    Rapid increase of the data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities' clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and reformations of computing style, for instance how a bioinformatics program running on supercomputers can read/write data from the federated storage.

  10. MITRE sensor layer prototype

    Science.gov (United States)

    Duff, Francis; McGarry, Donald; Zasada, David; Foote, Scott

    2009-05-01

    The MITRE Sensor Layer Prototype is an initial design effort to enable every sensor to help create new capabilities through collaborative data sharing. By making both upstream (raw) and downstream (processed) sensor data visible, users can access the specific level, type, and quantities of data needed to create new data products that were never anticipated by the original designers of the individual sensors. The major characteristic that sets sensor data services apart from typical enterprise services is the volume (on the order of multiple terabytes) of raw data that can be generated by most sensors. Traditional tightly coupled processing approaches extract pre-determined information from the incoming raw sensor data, format it, and send it to predetermined users. The community is rapidly reaching the conclusion that tightly coupled sensor processing loses too much potentially critical information.1 Hence upstream (raw and partially processed) data must be extracted, rapidly archived, and advertised to the enterprise for unanticipated uses. The authors believe layered sensing net-centric integration can be achieved through a standardize-encapsulate-syndicate-aggregate-manipulate-process paradigm. The Sensor Layer Prototype's technical approach focuses on implementing this proof of concept framework to make sensor data visible, accessible and useful to the enterprise. To achieve this, a "raw" data tap between physical transducers associated with sensor arrays and the embedded sensor signal processing hardware and software has been exploited. Second, we encapsulate and expose both raw and partially processed data to the enterprise within the context of a service-oriented architecture. Third, we advertise the presence of multiple types, and multiple layers of data through geographic-enabled Really Simple Syndication (GeoRSS) services. These GeoRSS feeds are aggregated, manipulated, and filtered by a feed aggregator. After filtering these feeds to bring just the type
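
    The advertise-aggregate-filter chain built on GeoRSS can be illustrated with a minimal feed-filtering sketch; the feed content, item titles and bounding box below are made up and are not MITRE's actual services or data.

        # Hedged sketch: parse a GeoRSS (RSS + georss:point) document and keep only items
        # inside a bounding box. The XML is a made-up example feed, not a MITRE service.
        import xml.etree.ElementTree as ET

        FEED = """<rss version="2.0" xmlns:georss="http://www.georss.org/georss">
          <channel>
            <item><title>raw radar frame 17</title><georss:point>38.92 -77.20</georss:point></item>
            <item><title>processed track 42</title><georss:point>40.10 -75.00</georss:point></item>
          </channel>
        </rss>"""

        def items_in_bbox(feed_xml, lat_min, lat_max, lon_min, lon_max):
            ns = {"georss": "http://www.georss.org/georss"}
            root = ET.fromstring(feed_xml)
            hits = []
            for item in root.iter("item"):
                point = item.find("georss:point", ns)
                if point is None:
                    continue
                lat, lon = (float(v) for v in point.text.split())
                if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
                    hits.append(item.findtext("title"))
            return hits

        print(items_in_bbox(FEED, 38.0, 39.5, -78.0, -77.0))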

  11. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
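
    The sorted k-mer lists mentioned above are the seeding data structure of progressiveMauve-style alignment: once each genome's k-mers are sorted, exact matches between genomes can be found by a linear merge instead of repeated scans. A toy sketch with made-up sequences follows; the Blue Gene/P data layout itself is not reproduced.

        # Hedged sketch of building sorted k-mer lists and finding shared seeds by a
        # linear merge; toy sequences only, not the BG/P implementation.
        def sorted_kmer_list(seq, k):
            # Each entry is (k-mer, position), sorted lexicographically by k-mer.
            return sorted((seq[i:i + k], i) for i in range(len(seq) - k + 1))

        def shared_kmers(list_a, list_b):
            # Linear merge of two sorted k-mer lists; duplicate k-mers are handled
            # simplistically (first occurrence only) since this is only a sketch.
            matches, i, j = [], 0, 0
            while i < len(list_a) and j < len(list_b):
                ka, kb = list_a[i][0], list_b[j][0]
                if ka == kb:
                    matches.append((ka, list_a[i][1], list_b[j][1]))
                    i, j = i + 1, j + 1
                elif ka < kb:
                    i += 1
                else:
                    j += 1
            return matches

        genome_a = "ACGTACGGACGT"
        genome_b = "TTACGTACGAAC"
        print(shared_kmers(sorted_kmer_list(genome_a, 5), sorted_kmer_list(genome_b, 5)))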

  12. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  13. A prototype analysis of vengeance

    NARCIS (Netherlands)

    Elshout, Maartje; Nelissen, Rob; van Beest, Ilja

    2015-01-01

    The authors examined the concept of vengeance from a prototype perspective. In 6 studies, the prototype structure of vengeance was mapped. Sixty-nine features of vengeance were identified (Study 1), and rated on centrality (Study 2). Further studies confirmed the prototype structure. Compared to

  14. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  15. OPAL Jet Chamber Prototype

    CERN Multimedia

    OPAL was one of the four experiments installed at the LEP particle accelerator from 1989 - 2000. OPAL's central tracking system consists of (in order of increasing radius) a silicon microvertex detector, a vertex detector, a jet chamber, and z-chambers. All the tracking detectors work by observing the ionization of atoms by charged particles passing by: when the atoms are ionized, electrons are knocked out of their atomic orbitals, and are then able to move freely in the detector. These ionization electrons are detected in the different parts of the tracking system. This piece is a prototype of the jet chambers.

  16. Prototyping Augmented Reality

    CERN Document Server

    Mullen, Tony

    2011-01-01

    Learn to create augmented reality apps using Processing open-source programming language Augmented reality (AR) is used all over, and you may not even realize it. Smartphones overlay data onto live camera views to show homes for sale, restaurants, or historical sites. American football broadcasts use AR to show the invisible first-down line on the field to TV viewers. Nike and Budweiser, among others, have used AR in ads. Now, you can learn to create AR prototypes using 3D data, Processing open-source programming language, and other languages. This unique book is an easy-to-follow guide on how

  17. Nightshade Prototype Experiments (Silverleaf)

    Energy Technology Data Exchange (ETDEWEB)

    Danielson, Jeremy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Amy L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-23

    The Red Sage campaign is a series of subcritical dynamic plutonium experiments designed to measure ejecta. Nightshade, the first experiments in Red Sage scheduled for fiscal year 2019, will measure the amount of ejecta emission into vacuum from a double-shocked plutonium surface. To address the major technical risks in Nightshade, a Level 2 milestone was developed for fiscal year 2016. Silverleaf, a series of four experiments, was executed at the Los Alamos National Laboratory in July and August 2016 to demonstrate a prototype of the Nightshade package and to satisfy this Level 2 milestone. This report is documentation that Red Sage Level 2 milestone requirements were successfully met.

  18. DataCollection Prototyping

    CERN Multimedia

    Beck, H.P.

    DataCollection is a subsystem of the Trigger, DAQ & DCS project responsible for the movement of event data from the ROS to the High Level Triggers. This includes data from Regions of Interest (RoIs) for Level 2, building complete events for the Event Filter and finally transferring accepted events to Mass Storage. It also handles passing the LVL1 RoI pointers and the allocation of Level 2 processors and load balancing of Event Building. During the last 18 months DataCollection has developed a common architecture for the hardware and software required. This involved a radical redesign integrating ideas from separate parts of earlier TDAQ work. An important milestone for this work, now achieved, has been to demonstrate this subsystem in the so-called Phase 2A Integrated Prototype. This prototype comprises the various TDAQ hardware and software components (ROSs, LVL2, etc.) under the control of the TDAQ Online software. The basic functionality has been demonstrated on small testbeds (~8-10 processing nodes)...

  19. OMS FDIR: Initial prototyping

    Science.gov (United States)

    Taylor, Eric W.; Hanson, Matthew A.

    1990-01-01

    The Space Station Freedom Program (SSFP) Operations Management System (OMS) will automate major management functions which coordinate the operations of onboard systems, elements and payloads. The objectives of OMS are to improve safety, reliability and productivity while reducing maintenance and operations cost. This will be accomplished by using advanced automation techniques to automate much of the activity currently performed by the flight crew and ground personnel. OMS requirements have been organized into five task groups: (1) Planning, Execution and Replanning; (2) Data Gathering, Preprocessing and Storage; (3) Testing and Training; (4) Resource Management; and (5) Caution and Warning and Fault Management for onboard subsystems. The scope of this prototyping effort falls within the Fault Management requirements group. The prototyping will be performed in two phases. Phase 1 is the development of an onboard communications network fault detection, isolation, and reconfiguration (FDIR) system. Phase 2 will incorporate global FDIR for onboard systems. Research into the applicability of expert systems, object-oriented programming, fuzzy sets, neural networks and other advanced techniques will be conducted. The goals and technical approach for this new SSFP research project are discussed here.

  20. Live Piloting and Prototyping

    Directory of Open Access Journals (Sweden)

    Francesca Rizzo

    2013-07-01

    Full Text Available This paper presents current trends in service design research concerning large-scale projects aimed at generating changes at a local scale. The strategy adopted to achieve this is to co-design solutions, including future users in the development process and prototyping and testing systems of products and services before their actual implementation. On the basis of experience gained in the European Project Life 2.0, this paper discusses which methods and competencies are applied in the development of these projects, eliciting the lessons learnt especially from the piloting phase, in which the participatory design (PD) approach plays a major role. In the first part, the topic is introduced jointly with the theoretical background, where user-centered design and participatory design methods are presented; then the Life 2.0 project development is described; finally the experience is discussed from a service design perspective, eliciting guidelines for piloting and prototyping services in a real context of use. The paper concludes by reflecting on the designers’ role and the competencies needed in this process.

  1. Prototypes as Platforms for Participation

    DEFF Research Database (Denmark)

    Horst, Willem

    developers, and design it accordingly. Designing a flexible prototype in combination with supportive tools to be used by both interaction designers and non-designers during development is introduced as a way to open up the prototyping process to these users. Furthermore I demonstrate how such a flexible...... on prototyping, by bringing to attention that the prototype itself is an object of design, with its users and use context, which deserves further attention. Moreover, in this work I present concrete tools and methods that can be used by interaction designers in practice. As such this work addresses both......The development of interactive products in industry is an activity involving different disciplines – such as different kinds of designers, engineers, marketers and managers – in which prototypes play an important role. On the one hand, prototypes can be powerful boundary objects and an effective...

  2. Prototype Stilbene Neutron Collar

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, M. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shumaker, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Snyderman, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Verbeke, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wong, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-10-26

    A neutron collar using stilbene organic scintillator cells for fast neutron counting is described for the assay of fresh low enriched uranium (LEU) fuel assemblies. The prototype stilbene collar has a form factor similar to standard He-3 based collars and uses an AmLi interrogation neutron source. This report describes the simulation of list mode neutron correlation data on various fuel assemblies including some with neutron absorbers (burnable Gd poisons). Calibration curves (doubles vs 235U linear mass density) are presented for both thermal and fast (with Cd lining) modes of operation. It is shown that the stilbene collar meets or exceeds the current capabilities of He-3 based neutron collars. A self-consistent assay methodology, uniquely suited to the stilbene collar, using triples is described which complements traditional assay based on doubles calibration curves.

  3. Brachial Plexus Blocker Prototype

    Directory of Open Access Journals (Sweden)

    Stéphanie Coelho Monteiro

    2017-08-01

    Full Text Available Although the area of surgical simulation has been the subject of study in recent years, it is still necessary to develop artificial experimental models with a view to dispensing with biological models. To make simulators more realistic, transferring the health professional's working environment to a physical or virtual reality, an anesthetic prototype has been developed in which the motor response is replicated when the brachial plexus is subjected to a proximal nervous stimulus. Using action-research techniques, with this simulator it was possible to validate that the human nerve response can be replicated, which will aid the training of health professionals, reducing possible risks in a surgical environment.

  4. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
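
    The backfill capability described above amounts to shaping each submission so that it fits inside an idle window of nodes. The sketch below shows one way such a sizing rule could look; the snapshot data, limits and function are hypothetical and do not represent OLCF's actual scheduler interface or the PanDA implementation.

        # Hedged sketch of shaping a PanDA-style submission to fit an idle backfill
        # window. The window list is made up; a real implementation would query the
        # batch system for current free nodes and how long they stay free.
        def shape_job(windows, cores_per_node, max_nodes, min_minutes):
            """Pick the backfill window that maximises usable node-minutes."""
            best = None
            for free_nodes, free_minutes in windows:
                nodes = min(free_nodes, max_nodes)
                if nodes == 0 or free_minutes < min_minutes:
                    continue
                score = nodes * free_minutes
                if best is None or score > best[0]:
                    best = (score, nodes, free_minutes)
            if best is None:
                return None
            _, nodes, minutes = best
            return {"nodes": nodes, "walltime_min": minutes, "ranks": nodes * cores_per_node}

        # Hypothetical snapshot: (idle nodes, minutes until they are needed elsewhere).
        snapshot = [(312, 45), (1200, 20), (80, 180)]
        print(shape_job(snapshot, cores_per_node=16, max_nodes=600, min_minutes=30))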

  5. Naval Prototype Optical Interferometer (NPOI)

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Used for astrometry and astronomical imaging, the Naval Prototype Optical Interferometer (NPOI) is a distributed aperture optical telescope. It is operated...

  6. Mobile prototyping with Axure 7

    CERN Document Server

    Hacker, Will

    2013-01-01

    This book is a step-by-step tutorial which includes hands-on examples and downloadable Axure files to get you started with mobile prototyping immediately. You will learn how to develop an application from scratch, and will be guided through each and every step.If you are a mobile-centric developer/designer, or someone who would like to take their Axure prototyping skills to the next level and start designing and testing mobile prototypes, this book is ideal for you. You should be familiar with prototyping and Axure specifically, before you read this book.

  7. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs
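
    The direct time integration on space grids referred to here is the finite-difference time-domain (FDTD) leap-frog update of Maxwell's curl equations. A one-dimensional sketch in normalized units is shown below; production radar-cross-section codes are three-dimensional, include material models and absorbing boundaries, and are far more elaborate.

        # Hedged 1-D FDTD sketch (normalized units, free space): leap-frog update of
        # Ez and Hy on a staggered grid with an additive Gaussian source; illustrative only.
        import numpy as np

        nx, nsteps = 400, 600
        ez = np.zeros(nx)
        hy = np.zeros(nx)

        for n in range(nsteps):
            # Update the magnetic field from the spatial difference of E (half step on the staggered grid).
            hy[:-1] += ez[1:] - ez[:-1]
            # Update the electric field from the spatial difference of H.
            ez[1:] += hy[1:] - hy[:-1]
            # Gaussian pulse injected at the centre of the grid.
            ez[nx // 2] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)

        print("field energy proxy:", float(np.sum(ez**2 + hy**2)))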

  8. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  9. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  10. Window prototypes during the project

    DEFF Research Database (Denmark)

    Schultz, Jørgen Munthe

    1996-01-01

    The conditions for the PASSYS test and the results of the measurements on one of the aerogel window prototypes are described.

  11. Rapid prototyping: een veelbelovende methode

    NARCIS (Netherlands)

    Haverman, T.M.; Karagozoglu, K.H.; Prins, H.; Schulten, E.A.J.M.; Forouzanfar, T.

    2013-01-01

    Rapid prototyping is a method which makes it possible to produce a three-dimensional model based on two-dimensional imaging. Various rapid prototyping methods are available for modelling, such as stereolithography, selective laser sintering, direct laser metal sintering, two-photon polymerization,

  12. Role model and prototype matching

    DEFF Research Database (Denmark)

    Lykkegaard, Eva; Ulriksen, Lars

    2016-01-01

    Students' meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-model meetings to be extended to a more complex three-way matching process between students' self-perceptions, prototype...

  13. Virtual Prototyping at CERN

    Science.gov (United States)

    Gennaro, Silvano De

    The VENUS (Virtual Environment Navigation in the Underground Sites) project is probably the largest Virtual Reality application to engineering design in the world. VENUS is just over one year old and offers a fully immersive and stereoscopic "flythru" of the LHC pits for the proposed experiments, including the experimental area equipment and the surface models that are being prepared for a territorial impact study. VENUS' Virtual Prototypes are an ideal replacement for the wooden models traditionally built for past CERN machines, as they are generated directly from the EUCLID CAD files: they are totally reliable, they can be updated in a matter of minutes, and they allow designers to explore them from inside, at a one-to-one scale. Navigation can be performed on the computer screen, on a stereoscopic large projection screen, or in immersive conditions, with a helmet and 3D mouse. By using specialised collision detection software, the computer can find optimal paths to lower each detector part into the pits and position it at its destination, letting us visualize the whole assembly process. During construction, these paths can be fed to a robot controller, which can operate the bridge cranes and build LHC almost without human intervention. VENUS is currently developing a multiplatform VR browser that will let the whole HEP community access LHC's Virtual Prototypes over the web. Many interesting things took place during the conference on Virtual Reality. For more information please refer to the Virtual Reality section.

  14. UA1 prototype detector

    CERN Multimedia

    1980-01-01

    Prototype of UA1 central detector inside a plexi tube. The UA1 experiment ran at CERN's Super Proton Synchrotron and made the Nobel Prize winning discovery of W and Z particles in 1983. The UA1 central detector was crucial to understanding the complex topology of proton-antiproton events. It played a most important role in identifying a handful of Ws and Zs among billions of collisions. The detector was essentially a wire chamber - a 6-chamber cylindrical assembly 5.8 m long and 2.3 m in diameter, the largest imaging drift chamber of its day. It recorded the tracks of charged particles curving in a 0.7 Tesla magnetic field, measuring their momentum, the sign of their electric charge and their rate of energy loss (dE/dx). Atoms in the argon-ethane gas mixture filling the chambers were ionised by the passage of charged particles. The electrons which were released drifted along an electric field shaped by field wires and were collected on sense wires. The geometrical arrangement of the 17000 field wires and 6...

  15. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved with the proposed services are presented. The assessment of environmental noise threats includes the creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept for estimating the source model parameters from the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to update noise maps automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presenting the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  16. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  17. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused...

  18. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  19. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. For running large physical simulations powerful computers are obligatory, effectively splitting the thesis in two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  20. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  1. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
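
    Of the reordering methods listed above, spectral bisection is the easiest to sketch: ranks are split by the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the communication graph's Laplacian), which tends to keep heavily communicating ranks on the same side of the cut. The communication matrix below is synthetic; the actual tools handle full node layouts and recursive mappings.

        # Hedged sketch of spectral bisection on a synthetic MPI communication matrix:
        # ranks are split by the sign of the Fiedler vector, keeping chatty pairs together.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        comm = rng.integers(0, 5, size=(n, n)).astype(float)
        comm[:4, :4] += 50          # pretend ranks 0-3 exchange much more data
        comm[4:, 4:] += 50          # ... and ranks 4-7 likewise
        comm = (comm + comm.T) / 2  # symmetrize and zero the diagonal
        np.fill_diagonal(comm, 0)

        laplacian = np.diag(comm.sum(axis=1)) - comm
        eigvals, eigvecs = np.linalg.eigh(laplacian)
        fiedler = eigvecs[:, 1]     # eigenvector of the second-smallest eigenvalue
        group_a = np.where(fiedler < 0)[0]
        group_b = np.where(fiedler >= 0)[0]
        print("partition:", group_a.tolist(), "|", group_b.tolist())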

  2. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently, the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume-based Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
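
    The fragmentation indicator mentioned above has a simple closed form. The sketch below computes a generic volume-based Herfindahl-Hirschman Index from per-venue traded volumes; it is an illustration of the textbook definition, not necessarily the exact variant used in the study.

      def herfindahl_hirschman_index(volumes):
          """Volume-based HHI: the sum of squared market shares.

          volumes: traded volume per venue (or market segment) in a time window.
          Returns a value in (0, 1]; values near 1 indicate concentration,
          values near 1/N indicate a fragmented market."""
          total = float(sum(volumes))
          if total == 0.0:
              raise ValueError("no volume in this window")
          return sum((v / total) ** 2 for v in volumes)

      # Example: volume split across four trading venues in one time bucket.
      print(herfindahl_hirschman_index([5_000, 3_000, 1_500, 500]))   # about 0.37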

  3. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to leap far beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability, through multiple MATLAB licenses, to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  4. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  5. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
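
    The core pattern such a solver builds on can be sketched compactly. The following is not the authors' MSTC implementation, but a minimal distributed k-means update in which each MPI rank accumulates per-cluster sums for its local chunk of observations and a global reduction updates the centroids (assumes mpi4py and NumPy; all names are illustrative).

      import numpy as np
      from mpi4py import MPI

      def kmeans_step(local_points, centroids, comm=MPI.COMM_WORLD):
          """One distributed k-means update: assign local points to the nearest
          centroid, then combine per-cluster sums and counts across all ranks."""
          k, dim = centroids.shape
          # Distance from every local point to every centroid.
          d = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
          labels = d.argmin(axis=1)

          local_sums = np.zeros((k, dim))
          local_counts = np.zeros(k)
          for c in range(k):
              mask = labels == c
              local_sums[c] = local_points[mask].sum(axis=0)
              local_counts[c] = mask.sum()

          global_sums = np.empty_like(local_sums)
          global_counts = np.empty_like(local_counts)
          comm.Allreduce(local_sums, global_sums, op=MPI.SUM)
          comm.Allreduce(local_counts, global_counts, op=MPI.SUM)

          new_centroids = centroids.copy()
          nonempty = global_counts > 0
          new_centroids[nonempty] = global_sums[nonempty] / global_counts[nonempty][:, None]
          return new_centroids, labels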

  6. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  7. Prototype moving-ring reactor

    International Nuclear Information System (INIS)

    Smith, A.C. Jr.; Ashworth, C.P.; Abreu, K.E.

    1981-01-01

    The objective of this work was to design a prototype fusion reactor based on fusion plasmas confined as 'Compact Toruses'. Six major criteria guided the prototype design. The prototype must: (1) produce net electricity decisively (P_net > 70% of P_gross), with P_net approximately 100 MW(e); (2) have small physical size (low project cost) but commercial plant; (3) have all features required of commercial plants; (4) avoid unreasonable extrapolation of technology; (5) minimize nuclear issues substantially, i.e. accident and waste issues of public concern; and (6) be modular (to permit repetitive fabrication of parts) and be maintainable with low occupational radiological exposures

  8. Learning Axure RP interactive prototypes

    CERN Document Server

    Krahenbuhl, John Henry

    2015-01-01

    If you are a user experience professional, designer, information architect, or business analyst who wants to gain interactive prototyping skills with Axure, then this book is ideal for you. Some familiarity with Axure is preferred but not essential.

  9. Architectural Prototyping in Industrial Practice

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2008-01-01

    Architectural prototyping is the process of using executable code to investigate stakeholders’ software architecture concerns with respect to a system under development. Previous work has established this as a useful and cost-effective way of exploration and learning of the design space of a system..., in addressing issues regarding quality attributes, in addressing architectural risks, and in addressing the problem of knowledge transfer and conformance. Little work has been reported so far on the actual industrial use of architectural prototyping. In this paper, we report from an ethnographical study... and focus group involving architects from four companies in which we have focused on architectural prototypes. Our findings conclude that architectural prototypes play an important role in resolving problems experimentally, but less so in exploring alternative solutions. Furthermore, architectural...

  10. Experimentation with PEC channel prototype

    International Nuclear Information System (INIS)

    Caponetti, R.; Iacovelli, M.

    1984-01-01

    Experimentation on prototypes of PEC components is presently being carried out at Casaccia CRE. This report shows the results of the first cycle of experimentation of the central channel, concerning the aspects of sodium removal after experimentation

  11. Tangiplay: prototyping tangible electronic games

    OpenAIRE

    Boileau, Jason

    2010-01-01

    Tangible electronic games currently exist in research laboratories around the world but have yet to transition to the commercial sector. The development process of a tangible electronic game is one of the factors preventing progression, as it requires much time and money. Prototyping tools for tangible hardware and software development are becoming more available but are targeted to programmers and technically trained developers. Paper prototyping board and video games is a proven and rapid m...

  12. Fast-prototyping of VLSI

    International Nuclear Information System (INIS)

    Saucier, G.; Read, E.

    1987-01-01

    Fast-prototyping will be a reality in the very near future if both straightforward design methods and fast manufacturing facilities are available. This book focuses, first, on the motivation for fast-prototyping. Economic aspects and market considerations are analysed by European and Japanese companies. In the second chapter, new design methods are identified, mainly for full custom circuits. Of course, silicon compilers play a key role and the introduction of artificial intelligence techniques sheds a new light on the subject. At present, fast-prototyping on gate arrays or on standard cells is the most conventional technique and the third chapter updates the state-of-the art in this area. The fourth chapter concentrates specifically on the e-beam direct-writing for submicron IC technologies. In the fifth chapter, a strategic point in fast-prototyping, namely the test problem is addressed. The design for testability and the interface to the test equipment are mandatory to fulfill the test requirement for fast-prototyping. Finally, the last chapter deals with the subject of education when many people complain about the lack of use of fast-prototyping in higher education for VLSI

  13. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
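
    A toy version of the described pipeline (discrete wavelet transform followed by vector quantization of the coefficients) can be put together from standard libraries. The sketch below uses PyWavelets and SciPy's k-means in place of the application-specific quantizer design of the paper; the block size, codebook size, and all names are illustrative assumptions.

      import numpy as np
      import pywt
      from scipy.cluster.vq import kmeans2

      def compress_field(field, wavelet="db2", level=2, block=4, codebook_size=64):
          """Sketch of wavelet-transform + vector-quantization compression of a
          2-D field (e.g. one horizontal slice of model output)."""
          # 1. Subband decomposition.
          coeffs = pywt.wavedec2(field, wavelet, level=level)
          flat, slices = pywt.coeffs_to_array(coeffs)

          # 2. Group coefficients into fixed-length vectors.
          values = flat.ravel()
          pad = (-values.size) % block
          vectors = np.pad(values, (0, pad)).reshape(-1, block)

          # 3. Vector quantization: learn a codebook, keep only codeword indices.
          codebook, indices = kmeans2(vectors, codebook_size, minit="++")
          return codebook, indices, slices, flat.shape, pad

      def decompress_field(codebook, indices, slices, shape, pad, wavelet="db2"):
          values = codebook[indices].ravel()
          flat = values[:values.size - pad].reshape(shape)
          coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec2")
          return pywt.waverec2(coeffs, wavelet)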

  14. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  15. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985; a 64-node, 1 GF machine completed in August 1987; and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N_1 x N_2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  16. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience related to the implementation of MRBT, an atmospheric spreading model for short-duration releases, is reported. This model was implemented on a QUADRICS-Q1 supercomputer. First, a description of the MRBT model is given. It is an analytical model to study the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as a simulator in an emergency management system is considered
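
    The Gaussian solution referred to above has a standard closed form. The sketch below evaluates a textbook Gaussian puff for an instantaneous release advected by a uniform wind; it is a generic formulation with an arbitrary linear growth of the dispersion widths, not the MRBT parameterization, and all symbols and values are illustrative.

      import numpy as np

      def gaussian_puff(q, x, y, z, t, u=2.0, growth=(0.3, 0.3, 0.15)):
          """Concentration (kg/m^3) from an instantaneous release of mass q (kg)
          at the origin, advected by a uniform wind u (m/s) along x.  Dispersion
          widths grow linearly with travel time (a crude parameterization);
          ground reflection is ignored."""
          ax, ay, az = growth
          sx, sy, sz = ax * u * t, ay * u * t, az * u * t
          norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
          return norm * np.exp(-((x - u * t) ** 2) / (2.0 * sx ** 2)
                               - (y ** 2) / (2.0 * sy ** 2)
                               - (z ** 2) / (2.0 * sz ** 2))

      # Concentration 1 km downwind, on the plume axis, 500 s after the release.
      print(gaussian_puff(q=1.0, x=1000.0, y=0.0, z=0.0, t=500.0))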

  17. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes- and reactive Euler solvers that has been developed on vector- and parallel supercomputers at the research center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As a sample of proof tests, the special tools have been tested for specific tasks, based on the comparison of experimental and numerical results, which are in reasonable agreement. (author)

  18. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  19. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with Navier-Stokes computations widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations, CFD, aeroelastic, and controls coupling for flutter suppression and active control, and the development of a computational electromagnetics technology based on CFD methods. Attention is given to computational challenges standing in the way of establishing a computational environment encompassing many technologies. 40 refs

  20. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
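
    The staggered conjugate gradient mentioned above is, at its core, the standard CG iteration for a Hermitian positive-definite operator. The sketch below shows that kernel for a generic matrix-vector product; it is a plain NumPy illustration, not MILC, QPhiX, or QUDA code.

      import numpy as np

      def conjugate_gradient(apply_a, b, tol=1e-8, max_iter=1000):
          """Solve A x = b for a Hermitian positive-definite A, given only the
          matrix-vector product apply_a(x); this is the kernel behind solvers
          such as the staggered CG."""
          x = np.zeros_like(b)
          r = b - apply_a(x)
          p = r.copy()
          rr = np.vdot(r, r).real
          b_norm = np.linalg.norm(b)
          for _ in range(max_iter):
              ap = apply_a(p)
              alpha = rr / np.vdot(p, ap).real
              x = x + alpha * p
              r = r - alpha * ap
              rr_new = np.vdot(r, r).real
              if np.sqrt(rr_new) < tol * b_norm:
                  break
              p = r + (rr_new / rr) * p
              rr = rr_new
          return x

      # Small self-check with a random symmetric positive-definite matrix.
      rng = np.random.default_rng(0)
      m = rng.standard_normal((50, 50))
      a = m @ m.T + 50.0 * np.eye(50)
      b = rng.standard_normal(50)
      x = conjugate_gradient(lambda v: a @ v, b)
      print(np.linalg.norm(a @ x - b))   # small residual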

  1. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  2. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak-performance. For sparse matrices storage and operations can be saved by operating only ... and storing only nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is preferable to sparse matrix technique when the matrices are not large, because the high computational speed compensates fully the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...
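
    The trade-off described above is easy to try directly: the same overdetermined system can be solved with a dense direct method or with a sparse iterative method, and timing the two shows which regime each is suited to. The sketch below is a generic NumPy/SciPy illustration, not the block algorithm of the paper; the sizes and density are arbitrary.

      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(0)
      m, n, density = 2000, 500, 0.01

      a_sparse = sparse.random(m, n, density=density, format="csr", random_state=0)
      b = rng.standard_normal(m)

      # Sparse technique: iterative LSQR that touches only the nonzero elements.
      x_sparse = lsqr(a_sparse, b)[0]

      # Dense technique: treat the matrix as dense and use a direct solver.
      x_dense = np.linalg.lstsq(a_sparse.toarray(), b, rcond=None)[0]

      # Both solve the same least-squares problem, so the solutions agree closely.
      print(np.linalg.norm(x_sparse - x_dense))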

  3. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, to make FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan, and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
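
    The MPI virtual-topology mechanism referred to above can be shown in a minimal mpi4py form: ranks are arranged on a two-dimensional process grid and each rank obtains its neighbors for halo exchange. This is a generic sketch of the MPI Cartesian-topology API, not the optimized topology of the paper.

      from mpi4py import MPI

      comm = MPI.COMM_WORLD

      # Factor the job size into a 2-D process grid and build a Cartesian topology.
      dims = MPI.Compute_dims(comm.Get_size(), [0, 0])
      cart = comm.Create_cart(dims, periods=[False, False], reorder=True)

      # Each rank learns its grid coordinates and its halo-exchange partners
      # (MPI.PROC_NULL is returned at the domain edges).
      coords = cart.Get_coords(cart.Get_rank())
      left, right = cart.Shift(0, 1)
      down, up = cart.Shift(1, 1)

      print(f"rank {cart.Get_rank()} at {coords}: "
            f"x-neighbours {left}/{right}, y-neighbours {down}/{up}")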

  4. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  5. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  6. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    In present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnect is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed memory parallel computer systems. Needless to say, software highly tuned towards new architectures like many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver targeted at the next-generation supercomputer system, the so-called 'K computer'. We have developed two versions, the standard version (eigen s) and the enhanced-performance version (eigen sx), both developed on the T2K cluster system housed at the University of Tokyo. Eigen s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and a flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), showing a performance of 3.0 TFLOPS for a two-hundred-thousand-dimensional matrix. The enhanced version, eigen sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY representation. Even though this version is still at a test stage, it reaches 4.7 TFLOPS for a matrix of the same dimension as used with eigen s. (author)
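
    The algorithmic pipeline described (tridiagonalization, a tridiagonal eigensolve such as divide and conquer, and back-transformation) can be mimicked on a single node with SciPy, which is occasionally useful for checking a distributed solver on a small matrix. This is only an illustration of the three stages, not the eigen s or eigen sx code.

      import numpy as np
      from scipy.linalg import hessenberg, eigh_tridiagonal

      def three_stage_eigh(a):
          """Symmetric eigensolve in the three stages used by dense eigensolvers:
          (1) orthogonal reduction to tridiagonal form, (2) eigensolve of the
          tridiagonal matrix, (3) back-transformation of the eigenvectors."""
          # Stage 1: A = Q T Q^T; for symmetric A the Hessenberg form is tridiagonal.
          t, q = hessenberg(a, calc_q=True)
          d, e = np.diag(t), np.diag(t, k=1)
          # Stage 2: eigenpairs of the tridiagonal matrix.
          w, v_tri = eigh_tridiagonal(d, e)
          # Stage 3: back-transform to eigenvectors of the original matrix.
          return w, q @ v_tri

      rng = np.random.default_rng(1)
      m = rng.standard_normal((6, 6))
      a = (m + m.T) / 2.0
      w, v = three_stage_eigh(a)
      print(np.allclose(a @ v, v * w))   # True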

  7. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  8. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  9. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
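
    A minimal serial analogue of the iterative finite-difference approach described above (regular Cartesian grid, stencil updates, convergence monitored through the residual) is sketched below; the actual solvers are GPU-accelerated and MPI-parallel, and the damped Jacobi iteration, grid size, and tolerance used here are illustrative choices only.

      import numpy as np

      def poisson_jacobi(f, h, tol=1e-6, max_iter=100_000):
          """Solve laplacian(u) = f on a regular grid with zero Dirichlet
          boundaries, using damped Jacobi sweeps and monitoring the residual."""
          u = np.zeros_like(f)
          for it in range(max_iter):
              # Residual of the 5-point stencil on the interior points.
              res = ((u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                      - 4.0 * u[1:-1, 1:-1]) / h**2 - f[1:-1, 1:-1])
              u[1:-1, 1:-1] += 0.9 * (h**2 / 4.0) * res
              if np.linalg.norm(res) * h < tol:
                  return u, it
          return u, max_iter

      n = 65
      h = 1.0 / (n - 1)
      x = np.linspace(0.0, 1.0, n)
      exact = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
      f = -2.0 * np.pi**2 * exact                  # so that laplacian(exact) = f
      u, iters = poisson_jacobi(f, h)
      print(iters, np.abs(u - exact).max())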

  10. Prototypes in engineering design: Definitions and strategies

    DEFF Research Database (Denmark)

    Jensen, Lasse Skovgaard; Özkil, Ali Gürcan; Mortensen, Niels Henrik

    2016-01-01

    By reviewing the literature, we investigate types, purposes and definitions of prototypes. There is no overarching definition of a prototype, but we identify five categories of prototypes in the literature. We further synthesize and reference previous work to create an overview of aspects in prototyping...

  11. Prototyping in theory and in practice

    DEFF Research Database (Denmark)

    Yu, Fei; Brem, Alexander; Pasinell, Michele

    2018-01-01

    and functions of a prototype and needed to meet specific goals in order to push the process forward. Designers, on the other hand, used prototypes to investigate the design space for new possibilities, and were more open to a variety of prototyping materials and tools, especially for low-fidelity prototypes...

  12. Rapid Prototyping of Formally Modelled Distributed Systems

    OpenAIRE

    Buchs, Didier; Buffo, Mathieu; Titsworth, Frances M.

    1999-01-01

    This paper presents various kinds of prototypes, used in the prototyping of formally modelled distributed systems. It presents the notions of prototyping techniques and prototype evolution, and shows how to relate them to the software life-cycle. It is illustrated through the use of the formal modelling language for distributed systems CO-OPN/2.

  13. Towards an Operational Framework for Architectural Prototyping

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak

    2005-01-01

    We use a case study in architectural prototyping as input for presenting a first, tentative, framework describing key concepts and their relationships in architectural prototyping processes.

  14. Engineering prototypes for theta-pinch devices

    International Nuclear Information System (INIS)

    Hansborough, L.D.; Hammer, C.F.; Hanks, K.W.; McDonald, T.E.; Nunnally, W.C.

    1975-01-01

    Past, present, and future engineering prototypes for theta-pinch plasma-physics devices at Los Alamos Scientific Laboratory are discussed. Engineering prototypes are designed to test and evaluate all components under system conditions expected on actual plasma-physics experimental devices. The importance of engineering prototype development increases as the size and complexity of the plasma-physics device increases. Past experiences with the Scyllac prototype and the Staged Theta-Pinch prototype are discussed and evaluated. The designs of the proposed Staged Scyllac prototype and of the Large Staged Scyllac implosion prototype assembly are discussed

  15. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase III of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase II Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase III effort the system was tested on a component, subsystem, and system level. Volume IV provides the Operating and Maintenance Manual for the Prototypical Rod Consolidation System that was installed at the Cold Test Facility. This document, Book 1 of Volume IV, discusses: Process overview functional descriptions; Control system descriptions; Support system descriptions; Maintenance system descriptions; and Process equipment descriptions

  16. Science with the ASTRI prototype

    International Nuclear Information System (INIS)

    Sartore, Nicola

    2013-01-01

    ASTRI (Astrofisica a Specchi con Tecnologia Replicante Italiana) is a “Flagship Project” financed by the Italian Ministry of Instruction, University and Research and led by the Italian National Institute of Astrophysics. It represents the Italian proposal for the development of the Small Size Telescope system of the Cherenkov Telescope Array, the next generation observatory for Very High Energy gamma-rays (20 GeV - 100 TeV). The ASTRI end-to-end prototype will be installed at Serra La Nave (Catania, Italy) and it will see first light at the beginning of 2014. We describe the expected performance of the prototype on a few selected test cases in the northern hemisphere. The aim of the prototype is to probe the technological solutions and the nominal performance of the telescope's various subsystems

  17. Flight Telerobotic Servicer prototype simulator

    Science.gov (United States)

    Schein, Rob; Krauze, Linda; Hartley, Craig; Dickenson, Alan; Lavecchia, Tom; Working, Bob

    A prototype simulator for the Flight Telerobotic Servicer (FTS) system is described for use in the design development of the FTS, emphasizing the hand controller and user interface. The simulator utilizes a graphics workstation based on rapid prototyping tools for systems analyses of the use of the user interface and the hand controller. Kinematic modeling, manipulator-control algorithms, and communications programs are contained in the software for the simulator. The hardwired FTS panels and operator interface for use on the STS Orbiter are represented graphically, and the simulated controls function as the final FTS system configuration does. The robotic arm moves based on the user hand-controller interface, and the joint angles and other data are given on the prototype of the user interface. This graphics simulation tool provides the means for familiarizing crewmembers with the FTS system operation, displays, and controls.

  18. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase III of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase II Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase III effort the system was tested on a component, subsystem, and system level. Volume IV provides the Operating and Maintenance Manual for the Prototypical Rod Consolidation System that was installed at the Cold Test Facility. This document, Book 4 of Volume IV, discusses: Off-normal operating and recovery procedures; Emergency response procedures; Troubleshooting procedures; and Preventive maintenance procedures

  19. Axure RP 6 Prototyping Essentials

    CERN Document Server

    Schwartz, Ezra

    2012-01-01

    Axure RP 6 Prototyping Essentials is a detailed, practical primer on the leading rapid prototyping tool. Short on jargon and high on concepts, real-life scenarios and step-by-step guidance through hands-on examples, this book will show you how to integrate Axure into your UX workflow. This book is written for UX practitioners, business analysts, product managers, and anyone else who is involved in UX projects. The book assumes that you have no or very little familiarity with Axure. It will help you if you are evaluating the tool for an upcoming project or are required to quickly get up to spee

  20. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
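
    The cross-correlation at the heart of the mapping can be written compactly. The sketch below correlates two preprocessed noise traces in the frequency domain and returns the lag of the correlation maximum; it is a generic NumPy/SciPy illustration, not the production code running at CSCS.

      import numpy as np
      from scipy.signal import correlate, correlation_lags

      def noise_cross_correlation(trace_a, trace_b, dt):
          """Cross-correlate two (already preprocessed) noise records sampled at
          interval dt; returns the lag axis in seconds and the correlation.
          In ambient-noise work this estimate is related to the Green's
          function between the two recording stations."""
          a = (trace_a - trace_a.mean()) / trace_a.std()
          b = (trace_b - trace_b.mean()) / trace_b.std()
          cc = correlate(a, b, mode="full", method="fft") / len(a)
          lags = correlation_lags(len(a), len(b), mode="full") * dt
          return lags, cc

      # Synthetic example: trace b is a delayed, noisy copy of trace a.
      rng = np.random.default_rng(0)
      a = rng.standard_normal(5000)
      b = np.roll(a, 120) + 0.5 * rng.standard_normal(5000)
      lags, cc = noise_cross_correlation(a, b, dt=0.01)
      print(lags[np.argmax(cc)])   # close to -1.2 s: b lags a by 120 samples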

  1. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  2. NMS Prototype development final report

    International Nuclear Information System (INIS)

    Lepetich, J.E.

    1993-01-01

    The program for development of the NMS prototype for LAMPF consisted of five tasks: crystal procurement specification, inspection/evaluation of CsI crystals, design/fabrication of the crystal housing, design/fabrication of PMT shields, and packaging of the crystals in the housing

  3. EUSO-TA prototype telescope

    Energy Technology Data Exchange (ETDEWEB)

    Bisconti, Francesca, E-mail: francesca.bisconti@kit.edu

    2016-07-11

    EUSO-TA is one of the prototypes developed for the JEM-EUSO project, a space-based large field-of-view telescope to observe the fluorescence light emitted by cosmic ray air showers in the atmosphere. EUSO-TA is a ground-based prototype located at the Telescope Array (TA) site in Utah, USA, where an Electron Light Source and a Central Laser Facility are installed. The purpose of the EUSO-TA project is to calibrate the prototype with the TA fluorescence detector in the presence of well-known light sources and cosmic ray air showers. In 2015, the detector started the first measurements, and tests using the mentioned light sources have been performed successfully. A first cosmic ray candidate has been observed, as well as stars of different magnitude and color index. Since Silicon Photo-Multipliers (SiPMs) are very promising for next-generation fluorescence telescopes, they are under consideration for the realization of a new prototype of the EUSO Photo Detector Module (PDM). The response of this sensor type is under investigation through simulations and laboratory experimentation.

  4. The OPAL vertex detector prototype

    International Nuclear Information System (INIS)

    Roney, J.M.; Armitage, J.C.; Carnegie, R.K.; Giles, G.L.; Hemingway, R.J.; McPherson, A.C.; Pinfold, J.L.; Waterhouse, J.; Godfrey, L.; Hargrove, C.K.

    1989-01-01

    The prototype test results of a high resolution charged particle tracking detector are reported. The detector is designed to measure vertex topologies of particles produced in the e+e- collisions of the OPAL experiment at LEP. The OPAL vertex detector is a 1 m long, 0.46 m diameter cylindrical drift chamber consisting of an axial and stereo layer each of which is divided into 36 jet cells. A prototype chamber containing four axial and two stereo cells was studied using a pion test beam at CERN. The studies examined the prototype under a variety of operating conditions. An r-Φ resolution of 60 μm was obtained when the chamber was operated with argon (50%)-ethane (50%) at 3.75 bar, and when CO2 (80%)-isobutane (20%) at 2.5 bar was used a 25 μm resolution was achieved. A z measurement using end-to-end time difference has a resolution of 3.5 cm. The details of these prototype studies are discussed in this paper. (orig.)

  5. Rapid Prototyping Enters Mainstream Manufacturing.

    Science.gov (United States)

    Winek, Gary

    1996-01-01

    Explains rapid prototyping, a process that uses computer-assisted design files to create a three-dimensional object automatically, speeding the industrial design process. Five commercially available systems and two emerging types--the 3-D printing process and repetitive masking and depositing--are described. (SK)

  6. Encapsulation of polymer photovoltaic prototypes

    DEFF Research Database (Denmark)

    Krebs, Frederik C

    2006-01-01

    A simple and efficient method for the encapsulation of polymer and organic photovoltaic prototypes is presented. The method employs device preparation on glass substrates with subsequent sealing using glass fiber reinforced thermosetting epoxy (prepreg) against a back plate. The method allows...

  7. EUSO-TA prototype telescope

    Science.gov (United States)

    Bisconti, Francesca; JEM-EUSO Collaboration

    2016-07-01

    EUSO-TA is one of the prototypes developed for the JEM-EUSO project, a space-based large field-of-view telescope to observe the fluorescence light emitted by cosmic ray air showers in the atmosphere. EUSO-TA is a ground-based prototype located at the Telescope Array (TA) site in Utah, USA, where an Electron Light Source and a Central Laser Facility are installed. The purpose of the EUSO-TA project is to calibrate the prototype with the TA fluorescence detector in the presence of well-known light sources and cosmic ray air showers. In 2015, the detector started the first measurements, and tests using the mentioned light sources have been performed successfully. A first cosmic ray candidate has been observed, as well as stars of different magnitude and color index. Since Silicon Photo-Multipliers (SiPMs) are very promising for next-generation fluorescence telescopes, they are under consideration for the realization of a new prototype of the EUSO Photo Detector Module (PDM). The response of this sensor type is under investigation through simulations and laboratory experimentation.

  8. Facial Prototype Formation in Children.

    Science.gov (United States)

    Inn, Donald; And Others

    This study examined memory representation as it is exhibited in young children's formation of facial prototypes. In the first part of the study, researchers constructed images of faces using an Identikit that provided the features of hair, eyes, mouth, nose, and chin. Images were varied systematically. A series of these images, called exemplar…

  9. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  10. Prototype diagnosis of psychiatric syndromes

    Science.gov (United States)

    WESTEN, DREW

    2012-01-01

    The method of diagnosing patients used since the early 1980s in psychiatry, which involves evaluating each of several hundred symptoms for their presence or absence and then applying idiosyncratic rules for combining them for each of several hundred disorders, has led to great advances in research over the last 30 years. However, its problems have become increasingly apparent, particularly for clinical practice. An alternative approach, designed to maximize clinical utility, is prototype matching. Instead of counting symptoms of a disorder and determining whether they cross an arbitrary cutoff, the task of the diagnostician is to gauge the extent to which a patient’s clinical presentation matches a paragraph-length description of the disorder using a simple 5-point scale, from 1 (“little or no match”) to 5 (“very good match”). The result is both a dimensional diagnosis that captures the extent to which the patient “has” the disorder and a categorical diagnosis, with ratings of 4 and 5 corresponding to presence of the disorder and a rating of 3 indicating “subthreshold” or “clinically significant features”. The disorders and criteria woven into the prototypes can be identified empirically, so that the prototypes are both scientifically grounded and clinically useful. Prototype diagnosis has a number of advantages: it better captures the way humans naturally classify novel and complex stimuli; is clinically helpful, reliable, and easy to use in everyday practice; facilitates both dimensional and categorical diagnosis and dramatically reduces the number of categories required for classification; allows for clinically richer, empirically derived, and culturally relevant classification; reduces the gap between research criteria and clinical knowledge, by allowing clinicians in training to learn a small set of standardized prototypes and to develop richer mental representations of the disorders over time through clinical experience; and can help

  11. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  12. Prototype Effect and the Persuasiveness of Generalizations.

    Science.gov (United States)

    Dahlman, Christian; Sarwar, Farhan; Bååth, Rasmus; Wahlberg, Lena; Sikström, Sverker

    An argument that makes use of a generalization activates the prototype for the category used in the generalization. We conducted two experiments that investigated how the activation of the prototype affects the persuasiveness of the argument. The results of the experiments suggest that the features of the prototype overshadow and partly overwrite the actual facts of the case. The case is, to some extent, judged as if it had the features of the prototype instead of the features it actually has. This prototype effect increases the persuasiveness of the argument in situations where the audience finds the judgment more warranted for the prototype than for the actual case (positive prototype effect), but decreases persuasiveness in situations where the audience finds the judgment less warranted for the prototype than for the actual case (negative prototype effect).

  13. Supporting Active User Involvment in Prototyping

    DEFF Research Database (Denmark)

    Grønbæk, Kaj

    1990-01-01

    The term prototyping has in recent years become a buzzword in both research and practice of system design due to a number of claimed advantages of prototyping techniques over traditional specification techniques. In particular it is often stated that prototyping facilitates the users' involvement...... in the development process. But prototyping does not automatically imply active user involvement! Thus a cooperative prototyping approach aiming at involving users actively and creatively in system design is proposed in this paper. The key point of the approach is to involve users in activities that closely couple...... development of prototypes to early evaluation of prototypes in envisioned use situations. Having users involved in such activities creates new requirements for tool support. Tools that support direct manipulation of prototypes and simulation of behaviour have shown promise for cooperative prototyping...

  14. Prototyping of user interfaces for mobile applications

    CERN Document Server

    Bähr, Benjamin

    2017-01-01

    This book investigates processes for the prototyping of user interfaces for mobile apps, and describes the development of new concepts and tools that can improve prototype-driven app development in the early stages. It presents the development and evaluation of a new requirements catalogue for prototyping mobile app tools that identifies the most important criteria such tools should meet at different prototype-development stages. This catalogue is not just a good point of orientation for designing new prototyping approaches, but also provides a set of metrics for comparing the performance of alternative prototyping tools. In addition, the book discusses the development of Blended Prototyping, a new approach for prototyping user interfaces for mobile applications in the early and middle development stages, and presents the results of an evaluation of its performance, showing that it provides a tool for teamwork-oriented, creative prototyping of mobile apps in the early design stages.

  15. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice has been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped in BGL in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and a 30-year-long dream for lattice QCD.
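
    As a hedged illustration (not the BlueGene/L code), the two global sums the abstract refers to are the reductions inside each conjugate gradient iteration. A generic distributed CG step, sketched here with mpi4py and a hypothetical local operator apply_D, shows where they occur.

      # Hedged sketch: one distributed CG iteration, showing the two global sums.
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD

      def global_dot(a, b):
          # local dot product followed by a global sum over all ranks
          return comm.allreduce(float(np.dot(a, b)), op=MPI.SUM)

      def cg_iteration(apply_D, x, r, p, rr_old):
          # apply_D: local action of the (Dirac-like) operator, including halo exchange
          Dp = apply_D(p)
          alpha = rr_old / global_dot(p, Dp)      # global sum no. 1
          x = x + alpha * p
          r = r - alpha * Dp
          rr_new = global_dot(r, r)               # global sum no. 2
          p = r + (rr_new / rr_old) * p
          return x, r, p, rr_new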

  16. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  17. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/Watt/s at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
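
    For reference, the benchmark potential named above is the standard Lennard-Jones pair interaction. The sketch below is a generic NumPy version of that formula, not the Cell-accelerated SPaSM implementation.

      # Standard Lennard-Jones pair interaction (generic illustration, not SPaSM).
      import numpy as np

      def lj_energy_force(r, epsilon=1.0, sigma=1.0):
          """U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6); force magnitude is -dU/dr."""
          sr6 = (sigma / r) ** 6
          sr12 = sr6 ** 2
          energy = 4.0 * epsilon * (sr12 - sr6)
          force = 24.0 * epsilon * (2.0 * sr12 - sr6) / r
          return energy, force

      print(lj_energy_force(np.array([1.0, 1.5, 2.5])))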

  18. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed-memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed with Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that could not be aligned on a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  19. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed, and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and NVIDIA graphics processing units, respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the way towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to that of the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).
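
    As a hedged sketch of the global spiking list idea (not the ODLM code; names are hypothetical), each rank can collect its locally firing neurons and share the full list with an allgather so every computing unit applies the influence of all spikes in the current step.

      # Hedged sketch of a "global spiking list" update step with mpi4py.
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD

      def step(potentials, threshold, weights, local_offset):
          # indices (global numbering) of this rank's neurons that fired
          fired_local = np.nonzero(potentials >= threshold)[0] + local_offset
          # global spiking list for this time step, visible to every rank
          spike_list = np.concatenate(comm.allgather(fired_local))
          # every rank applies the influence of all spikes to its local neurons
          for j in spike_list:
              potentials += weights[:, j]
          potentials[fired_local - local_offset] = 0.0  # reset neurons that fired
          return potentials, spike_list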

  20. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas; including the important effects of self-absorption of line-radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message passing paradigm, and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours
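
    The expected factor-of-three speedup from a 70% parallelizable fraction follows from Amdahl's law. As a quick hedged check (my arithmetic, not part of the record):

      # Amdahl's law check of the quoted expectation (illustrative arithmetic only).
      def amdahl_speedup(parallel_fraction, n_procs):
          return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_procs)

      p = 0.70
      print(amdahl_speedup(p, 16))        # ~2.9x on 16 processors
      print(1.0 / (1.0 - p))              # ~3.3x upper bound for many processors
      print(30 / amdahl_speedup(p, 16))   # ~10 hours, matching the abstract's estimate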

  1. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity as to which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The proposed strategy consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product, linear combinations of vectors, and dot products. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
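
    As a hedged sketch of the algebraic operational idea (generic, not the paper's code), an explicit update step can be written using only the three named kernels, here with SciPy standing in for the portable back-end implementations.

      # Hedged sketch: one explicit update built only from SpMV, axpy, and dot product.
      import numpy as np
      import scipy.sparse as sp

      def explicit_step(A, u, dt):
          """A: sparse operator (e.g., a discrete diffusion matrix), u: state vector."""
          Au = A @ u                          # kernel 1: sparse matrix-vector product
          u_new = u + dt * Au                 # kernel 2: linear combination of vectors
          residual = np.sqrt(np.dot(Au, Au))  # kernel 3: dot product (for monitoring)
          return u_new, residual

      n = 1000
      A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
      u = np.random.rand(n)
      u, res = explicit_step(A, u, dt=1e-3)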

  2. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  3. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources-the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be-that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  4. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
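
    The abstract notes that PETSc manages the distributed solvers and data structures. As a hedged, generic illustration only (not PFLOTRAN's actual code or input format), a minimal petsc4py linear solve with a domain-decomposition-friendly preconditioner looks roughly like this.

      # Hedged, generic petsc4py sketch (not PFLOTRAN code): PETSc handles the
      # distributed matrix, vectors, and Krylov solver.
      from petsc4py import PETSc

      n = 100
      A = PETSc.Mat().createAIJ([n, n])
      A.setUp()
      rstart, rend = A.getOwnershipRange()
      for i in range(rstart, rend):           # simple 1-D Laplacian stencil as a stand-in
          A.setValue(i, i, 2.0)
          if i > 0:
              A.setValue(i, i - 1, -1.0)
          if i < n - 1:
              A.setValue(i, i + 1, -1.0)
      A.assemble()

      x, b = A.createVecs()
      b.set(1.0)
      ksp = PETSc.KSP().create()
      ksp.setOperators(A)
      ksp.setType("cg")
      ksp.getPC().setType("bjacobi")          # block Jacobi fits domain decomposition
      ksp.solve(b, x)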

  5. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.
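
    As a hedged sketch of the conservative synchronization idea the benchmark targets (generic, not the paper's simulator; names are hypothetical), a logical process may only handle events up to a safe time determined by its input channels' clocks plus their lookahead.

      # Hedged sketch of conservative (lookahead-based) synchronization in PDES.
      import heapq

      def safe_time(channel_clocks, lookahead):
          # earliest time any neighbour could still send an event to this LP
          return min(clock + lookahead[ch] for ch, clock in channel_clocks.items())

      def process_safe_events(event_queue, channel_clocks, lookahead, handle):
          limit = safe_time(channel_clocks, lookahead)
          while event_queue and event_queue[0][0] <= limit:
              timestamp, event = heapq.heappop(event_queue)
              handle(timestamp, event)
          return limit  # beyond this point the LP must wait (e.g., for null messages)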

  6. Design in action: From prototyping by demonstration to cooperative prototyping

    DEFF Research Database (Denmark)

    Bødker, Susanne; Grønbæk, Kaj

    1991-01-01

    ... the development of any computer-based system will have to proceed in a cycle from design to experience and back again. It is impossible to anticipate all of the relevant breakdowns and their domains. They emerge gradually in practice. (Winograd and Flores, 1986, p. 171.) Some time ago we worked with a group of dental assistants, designing a prototype case record system to explore the possibility of using computer support in public dental clinics. ...

  7. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase 3 of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase 2 Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase 3 effort the system was tested at the component, subsystem, and system level. Volume 1 discusses the PRCDP Phase 3 Test Program that was conducted by the HALLIBURTON NUS Environmental Corporation under contract AC07-86ID12651 with the United States Department of Energy. This document, Volume 1, Book 2, discusses the following topics: the Fuel Rod Extraction System Test Results and Analysis Reports, and the Clamping Table Test Results and Analysis Reports.

  8. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase 3 of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase 2 Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase 3 effort the system was tested at the component, subsystem, and system level. Volume 1 discusses the PRCDP Phase 3 Test Program that was conducted by the HALLIBURTON NUS Environmental Corporation under contract AC07-86ID12651 with the United States Department of Energy. This document, Volume 1, Book 1, discusses the following topics: the background of the project; test program description; summary of tests and test results; problem evaluation; functional requirements confirmation; recommendations; and completed test documentation for tests performed in Phase 3.

  9. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase 3 of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase 2 Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase 3 effort the system was tested at the component, subsystem, and system level. Volume 1 discusses the PRCDP Phase 3 Test Program that was conducted by the HALLIBURTON NUS Environmental Corporation under contract AC07-86ID12651 with the United States Department of Energy. This document, Volume 1, Book 9, discusses the following topics: the Integrated System Normal Operations Test Results and Analysis Report; the Integrated System Off-Normal Operations Test Results and Analysis Report; and the Integrated System Maintenance Operations Test Results and Analysis Report.

  10. Prototype of sun projector device

    Science.gov (United States)

    Ihsan; Dermawan, B.

    2016-11-01

    One way to introduce astronomy to the public, including students, is through solar observation. The widely used devices for this purpose are the coelostat and the heliostat. Besides using a filter attached to a device such as a telescope, the safest approach is to observe the Sun indirectly. The main principle of the indirect method is deflecting the sunlight and projecting an image of the Sun on a screen. We design and build a simple and low-cost astronomical device, serving as a supplement to increase public service, especially for solar observation. Without using any digital or intricate supporting equipment, people can watch and relish the image of the Sun in comfortable conditions, i.e., in a sheltered or shady place. Here we describe the design and features of our prototype of the device, which, of course, still has some limitations. In the future, this prototype can be improved for more efficient and useful applications.

  11. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase 3 of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase 2 Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase 3 effort the system was tested at the component, subsystem, and system level. Volume 1 discusses the PRCDP Phase 3 Test Program that was conducted by the HALLIBURTON NUS Environmental Corporation under contract AC07-86ID12651 with the United States Department of Energy. This document, Volume 1, Book 8, discusses the Control System SOT Test Results and Analysis Report; it is a continuation of Book 7.

  12. Prototype and proposed ISABELLE dipoles

    International Nuclear Information System (INIS)

    McInturff, A.D.; Sampson, W.B.; Robins, K.E.; Dahl, P.F.; Damm, R.

    1977-01-01

    Data are presented on the latest dipole prototypes to update the operational parameters possible for ISABELLE. This data base will constantly expand until the start of construction of the storage rings. The data will include field quality, stray field magnitudes, quench temperature and propagation times, protection capabilities singly and in multiple units, maximum central fields obtained and training behavior. Performance of the dipoles versus temperature and mode of refrigeration will be discussed. The single layer cosine theta turns distribution coils' parameters are better than those required for the operation of the 200 x 200 GeV version of ISABELLE. The double layer prototype has exceeded the magnetic field performance and two dimensional quality of field needed for the 400 x 400 GeV version of ISABELLE

  13. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase 3 of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase 2 Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase 3 effort the system was tested at the component, subsystem, and system level. Volume 1 discusses the PRCDP Phase 3 Test Program that was conducted by the HALLIBURTON NUS Environmental Corporation under contract AC07-86ID12651 with the United States Department of Energy. This document, Volume 1, Book 3, discusses the following topics: the Downender Test Results and Analysis Report; the NFBC Canister Upender Test Results and Analysis Report; the Fuel Assembly Handling Fixture Test Results and Analysis Report; and the Fuel Canister Upender Test Results and Analysis Report.

  14. Rapid mask prototyping for microfluidics.

    Science.gov (United States)

    Maisonneuve, B G C; Honegger, T; Cordeiro, J; Lecarme, O; Thiry, T; Fuard, D; Berton, K; Picard, E; Zelsmann, M; Peyrade, D

    2016-03-01

    With the rise of microfluidics for the past decade, there has come an ever more pressing need for a low-cost and rapid prototyping technology, especially for research and education purposes. In this article, we report a rapid prototyping process of chromed masks for various microfluidic applications. The process takes place out of a clean room, uses a commercially available video-projector, and can be completed in less than half an hour. We quantify the ranges of fields of view and of resolutions accessible through this video-projection system and report the fabrication of critical microfluidic components (junctions, straight channels, and curved channels). To exemplify the process, three common devices are produced using this method: a droplet generation device, a gradient generation device, and a neuro-engineering oriented device. The neuro-engineering oriented device is a compartmentalized microfluidic chip, and therefore, required the production and the precise alignment of two different masks.

  15. Prototyping the PANDA Barrel DIRC

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, C., E-mail: C.Schwarz@gsi.de [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt (Germany); Kalicy, G.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Hohler, R.; Kumawat, H.; Lehmann, D.; Lewandowski, B.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwiening, J.; Traxler, M.; Zühlsdorf, M. [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt (Germany); Dodokhov, V.Kh. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Britting, A.; Eyrich, W.; Lehmann, A. [Friedrich Alexander-University of Erlangen-Nuremberg, Erlangen (Germany); and others

    2014-12-01

    The design of the Barrel DIRC detector for the future PANDA experiment at FAIR contains several important improvements compared to the successful BABAR DIRC, such as focusing and fast timing. To test those improvements as well as other design options, a prototype was built and successfully tested in 2012 with particle beams at CERN. The prototype comprises a radiator bar, focusing lens, mirror, and a prism-shaped expansion volume made of synthetic fused silica. An array of micro-channel plate photomultiplier tubes measures the location and arrival time of the Cherenkov photons with sub-nanosecond resolution. The development of a fast reconstruction algorithm made it possible to tune construction details of the detector setup using test beam data and Monte Carlo simulations.

  16. Customer-experienced rapid prototyping

    Science.gov (United States)

    Zhang, Lijuan; Zhang, Fu; Li, Anbo

    2008-12-01

    In order to describe GIS requirements accurately and understand them quickly, this article integrates the ideas of QFD (Quality Function Deployment) and UML (Unified Modeling Language), analyzes the deficiencies of the prototype development model, and proposes Customer-Experienced Rapid Prototyping (CE-RP), describing its process and framework in detail from the perspective of the characteristics of modern GIS. The CE-RP is mainly composed of Customer Tool-Sets (CTS), Developer Tool-Sets (DTS), and a Barrier-Free Semantic Interpreter (BF-SI), and is performed by two roles: customer and developer. The main purpose of the CE-RP is to produce unified and authorized requirements data models shared between the customer and the software developer.

  17. DOE's annealing prototype demonstration projects

    International Nuclear Information System (INIS)

    Warren, J.; Nakos, J.; Rochau, G.

    1997-01-01

    One of the challenges U.S. utilities face in addressing technical issues associated with the aging of nuclear power plants is the long-term effect of plant operation on reactor pressure vessels (RPVs). As a nuclear plant operates, its RPV is exposed to neutrons. For certain plants, this neutron exposure can cause embrittlement of some of the RPV welds which can shorten the useful life of the RPV. This RPV embrittlement issue has the potential to affect the continued operation of a number of operating U.S. pressurized water reactor (PWR) plants. However, RPV material properties affected by long-term irradiation are recoverable through a thermal annealing treatment of the RPV. Although a dozen Russian-designed RPVs and several U.S. military vessels have been successfully annealed, U.S. utilities have stated that a successful annealing demonstration of a U.S. RPV is a prerequisite for annealing a licensed U.S. nuclear power plant. In May 1995, the Department of Energy's Sandia National Laboratories awarded two cost-shared contracts to evaluate the feasibility of annealing U.S. licensed plants by conducting an anneal of an installed RPV using two different heating technologies. The contracts were awarded to the American Society of Mechanical Engineers (ASME) Center for Research and Technology Development (CRTD) and MPR Associates (MPR). The ASME team completed its annealing prototype demonstration in July 1996, using an indirect gas furnace at the uncompleted Public Service of Indiana's Marble Hill nuclear power plant. The MPR team's annealing prototype demonstration was scheduled to be completed in early 1997, using a direct heat electrical furnace at the uncompleted Consumers Power Company's nuclear power plant at Midland, Michigan. This paper describes the Department's annealing prototype demonstration goals and objectives; the tasks, deliverables, and results to date for each annealing prototype demonstration; and the remaining annealing technology challenges

  18. Encapsulation of polymer photovoltaic prototypes

    Energy Technology Data Exchange (ETDEWEB)

    Krebs, Frederik C. [The Danish Polymer Centre, RISOE National Laboratory, P.O. Box 49, DK-4000 Roskilde (Denmark)

    2006-12-15

    A simple and efficient method for the encapsulation of polymer and organic photovoltaic prototypes is presented. The method employs device preparation on glass substrates with subsequent sealing using glass fiber reinforced thermosetting epoxy (prepreg) against a back plate. The method allows for transporting oxygen and water sensitive devices outside a glove box environment after sealing and enables sharing of devices between research groups such that efficiency and stability can be evaluated in different laboratories. (author)

  19. Yucca Mountain project prototype testing

    International Nuclear Information System (INIS)

    Hughes, W.T.; Girdley, W.A.

    1990-01-01

    The U.S. DOE is responsible for characterizing the Yucca Mountain site in Nevada to determine its suitability for development as a geologic repository to isolate high-level nuclear waste for at least 10,000 years. This unprecedented task relies in part on measurements made with relatively new methods or applications, such as dry coring and overcoring for studies to be conducted from the land surface and in an underground facility. The Yucca Mountain Project has, since 1988, implemented a program of equipment development and methods development for a broad spectrum of hydrologic, geologic, rock mechanics, and thermomechanical tests planned for use in an Exploratory Shaft during site characterization at the Yucca Mountain site. A second major program was fielded beginning in April 1989 to develop and test methods and equipment for surface drilling to obtain core samples from depth using only air as a circulating medium. The third major area of prototype testing has been during the ongoing development of the Instrumentation/ Data Acquisition System (IDAS), designed to collect and monitor data from down-hole instrumentation in the unsaturated zone, and store and transmit the data to a central archiving computer. Future prototype work is planned for several programs including the application of vertical seismic profiling methods and flume design to characterizing the geology at Yucca Mountain. The major objectives of this prototype testing are to assure that planned Site Characterization testing can be carried out effectively at Yucca Mountain, both in the Exploratory Shaft Facility (ESF), and from the surface, and to avoid potential major failures or delays that could result from the need to re-design testing concepts or equipment. This paper will describe the scope of the Yucca Mountain Project prototype testing programs and summarize results to date. 3 figs

  20. Prototype Morphing Fan Nozzle Demonstrated

    Science.gov (United States)

    Lee, Ho-Jun; Song, Gang-Bing

    2004-01-01

    Ongoing research in NASA Glenn Research Center's Structural Mechanics and Dynamics Branch to develop smart materials technologies for aeropropulsion structural components has resulted in the design of the prototype morphing fan nozzle shown in the photograph. This prototype exploits the potential of smart materials to significantly improve the performance of existing aircraft engines by introducing new inherent capabilities for shape control, vibration damping, noise reduction, health monitoring, and flow manipulation. The novel design employs two different smart materials, a shape-memory alloy and magnetorheological fluids, to reduce the nozzle area by up to 30 percent. The prototype of the variable-area fan nozzle implements an overlapping spring leaf assembly to simplify the initial design and to provide ease of structural control. A single bundle of shape memory alloy wire actuators is used to reduce the nozzle geometry. The nozzle is subsequently held in the reduced-area configuration by using magnetorheological fluid brakes. This prototype uses the inherent advantages of shape memory alloys in providing large induced strains and of magnetorheological fluids in generating large resistive forces. In addition, the spring leaf design also functions as a return spring, once the magnetorheological fluid brakes are released, to help force the shape memory alloy wires to return to their original position. A computerized real-time control system uses the derivative-gain and proportional-gain algorithms to operate the system. This design represents a novel approach to the active control of high-bypass-ratio turbofan engines. Researchers have estimated that such engines will reduce thrust specific fuel consumption by 9 percent over that of fixed-geometry fan nozzles. This research was conducted under a cooperative agreement (NCC3-839) at the University of Akron.
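
    The abstract mentions a real-time controller using proportional-gain and derivative-gain algorithms. A generic PD-control sketch is shown below purely for illustration; it is not the actual nozzle controller, and the gains are placeholders.

      # Generic PD-control sketch (illustrative only; gains are placeholders).
      def pd_control(setpoint, measured, previous_error, dt, kp=2.0, kd=0.5):
          """Return the actuator command and the error to carry into the next cycle."""
          error = setpoint - measured
          derivative = (error - previous_error) / dt
          command = kp * error + kd * derivative
          return command, error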

  1. Using prototyping in software development

    OpenAIRE

    Šinkovec, Miha

    2010-01-01

    Today, business systems change faster than the conventional cascade (waterfall) life cycle can keep up with. We can therefore conclude that today's way of building software will no longer serve as the answer in an age of ever-changing user requirements. Neither increased performance nor higher productivity will reduce the problem. An appropriate solution to this problem is prototyping. Instead of building and developing the whole system, we build a module that can...

  2. Iteration and Prototyping in Creating Technical Specifications.

    Science.gov (United States)

    Flynt, John P.

    1994-01-01

    Claims that the development process for computer software can be greatly aided by the writers of specifications if they employ basic iteration and prototyping techniques. Asserts that computer software configuration management practices provide ready models for iteration and prototyping. (HB)

  3. Printing of Titanium implant prototype

    International Nuclear Information System (INIS)

    Wiria, Florencia Edith; Shyan, John Yong Ming; Lim, Poon Nian; Wen, Francis Goh Chung; Yeo, Jin Fei; Cao, Tong

    2010-01-01

    A dental implant plays an important role as a conduit through which force and stress flow from the tooth to the related bone. In the load sharing between an implant and its related bone, the amount of stress carried by each is directly related to its stiffness, or modulus. Hence, it is a crucial issue for the implant to have mechanical properties, in particular modulus, that match those of the related bone. Titanium is a metallic material with good biocompatibility and corrosion resistance. While the modulus of the bulk material is still higher than that of bone, it is the lowest among the commonly used metallic implant materials, such as stainless steel or cobalt alloy. It is therefore possible to further reduce the modulus of pure Titanium by engineering its processing method to obtain a porous structure. In this project, a porous Titanium implant prototype is fabricated using 3-dimensional printing. This technique allows flexible design customization, which is beneficial for implant fabrication since tailoring the implant size and shape helps ensure that the implant fits the patient well. The fabricated Titanium prototype had a modulus of 4.8-13.2 GPa, which is in the range of natural bone modulus. The compressive strength achieved was between 167 and 455 MPa. A subsequent cell culture study indicated that the porous Titanium prototype had good biocompatibility and is suitable for bone cell attachment and proliferation.

  4. Majorana Thermosyphon Prototype Experimental Results

    International Nuclear Information System (INIS)

    Fast, James E.; Reid, Douglas J.; Aguayo Navarrete, Estanislao

    2010-01-01

    The Majorana demonstrator will operate at liquid Nitrogen temperatures to ensure optimal spectrometric performance of its High Purity Germanium (HPGe) detector modules. In order to transfer the heat load of the detector module, the Majorana demonstrator requires a cooling system that will maintain a stable liquid nitrogen temperature. This cooling system is required to transport the heat from the detector chamber outside the shield. One approach is to use the two phase liquid-gas equilibrium to ensure constant temperature. This cooling technique is used in a thermosyphon. The thermosyphon can be designed so the vaporization/condensing process transfers heat through the shield while maintaining a stable operating temperature. A prototype of such system has been built at PNNL. This document presents the experimental results of the prototype and evaluates the heat transfer performance of the system. The cool down time, temperature gradient in the thermosyphon, and heat transfer analysis are studied in this document with different heat load applied to the prototype.

  5. Prototype effect and the persuasiveness of generalizations

    OpenAIRE

    Dahlman, Christian; Sarwar, Farhan; Bååth, Rasmus; Wahlberg, Lena; Sikström, Sverker

    2015-01-01

    An argument that makes use of a generalization activates the prototype for the category used in the generalization. We conducted two experiments that investigated how the activation of the prototype affects the persuasiveness of the argument. The results of the experiments suggest that the features of the prototype overshadow and partly overwrite the actual facts of the case. The case is, to some extent, judged as if it had the features of the prototype instead of the features it actually ...

  6. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, from which the models are run, can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  7. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
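
    As a hedged sketch of the load-balancing rule described above (illustrative only; numbers and names are not from the record), CPUs can be allocated to each nested layer in proportion to its grid-point count, and each layer then split among its CPUs by 1-D domain decomposition.

      # Hedged sketch: proportional CPU allocation per nested layer, then 1-D decomposition.
      def allocate_cpus(grid_points_per_layer, total_cpus):
          total = sum(grid_points_per_layer)
          return [max(1, round(total_cpus * g / total)) for g in grid_points_per_layer]

      def decompose_1d(ny, n_cpus):
          # split ny rows of a layer into n_cpus contiguous strips
          base, extra = divmod(ny, n_cpus)
          strips, start = [], 0
          for rank in range(n_cpus):
              rows = base + (1 if rank < extra else 0)
              strips.append((start, start + rows))
              start += rows
          return strips

      print(allocate_cpus([1_000_000, 250_000, 60_000], total_cpus=64))  # e.g. [49, 12, 3]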

  8. Implicit face prototype learning from geometric information.

    Science.gov (United States)

    Or, Charles C-F; Wilson, Hugh R

    2013-04-19

    There is evidence that humans implicitly learn an average or prototype of previously studied faces, as the unseen face prototype is falsely recognized as having been learned (Solso & McCarthy, 1981). Here we investigated the extent and nature of face prototype formation where observers' memory was tested after they studied synthetic faces defined purely in geometric terms in a multidimensional face space. We found a strong prototype effect: The basic results showed that the unseen prototype averaged from the studied faces was falsely identified as learned at a rate of 86.3%, whereas individual studied faces were identified correctly 66.3% of the time and the distractors were incorrectly identified as having been learned only 32.4% of the time. This prototype learning lasted at least 1 week. Face prototype learning occurred even when the studied faces were further from the unseen prototype than the median variation in the population. Prototype memory formation was evident in addition to memory formation of studied face exemplars as demonstrated in our models. Additional studies showed that the prototype effect can be generalized across viewpoints, and head shape and internal features separately contribute to prototype formation. Thus, implicit face prototype extraction in a multidimensional space is a very general aspect of geometric face learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
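
    A purely illustrative sketch of the geometric idea (not the study's model): in a multidimensional face space, the implicitly learned prototype is the mean of the studied exemplar vectors, and a probe face that falls close to that mean may be falsely judged as previously learned. All values below are synthetic.

      # Illustrative sketch of prototype extraction as averaging in a face space.
      import numpy as np

      rng = np.random.default_rng(0)
      prototype_true = rng.normal(size=20)                          # hypothetical face vector
      exemplars = prototype_true + 0.5 * rng.normal(size=(8, 20))   # studied faces

      learned_prototype = exemplars.mean(axis=0)                    # implicitly extracted prototype

      def judged_as_old(probe, criterion=2.0):
          return np.linalg.norm(probe - learned_prototype) < criterion

      print(judged_as_old(prototype_true))        # the unseen prototype tends to be "recognized"
      print(judged_as_old(rng.normal(size=20)))   # a random distractor tends not to be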

  9. The Scintillator Tile Hadronic Calorimeter Prototype

    International Nuclear Information System (INIS)

    Rusinov, V.

    2006-01-01

    A high granularity scintillator hadronic calorimeter prototype is described. The calorimeter is based on a novel photodetector, the Silicon Photomultiplier (SiPM). The main parameters of the SiPM are discussed, as well as readout cell construction and optimization. Experience with the production and testing of a small prototype is described. A new 8k-channel prototype is now being manufactured.

  10. Rapid Prototyping: An Alternative Instructional Design Strategy.

    Science.gov (United States)

    Tripp, Steven D.; Bichelmeyer, Barbara

    1990-01-01

    Discusses the nature of instructional design and describes rapid prototyping as a feasible model for instructional system design (ISD). The use of prototyping in software engineering is described, similarities between software design and instructional design are discussed, and an example is given which uses rapid prototyping in designing a…

  11. Project management strategies for prototyping breakdowns

    DEFF Research Database (Denmark)

    Granlien, Maren Sander; Pries-Heje, Jan; Baskerville, Richard

    2009-01-01

    , managing the explorative and iterative aspects of prototyping projects is not a trivial task. We examine the managerial challenges in a small scale prototyping project in the Danish healthcare sector where a prototype breakdown and project escalation occurs. From this study we derive a framework...... of strategies for coping with escalation in troubled prototyping projects; the framework is based on project management triangle theory and is useful when considering how to manage prototype breakdown and escalation. All strategies were applied in the project case at different points in time. The strategies led...

  12. Pleiades and OCO-2: Using Supercomputing Resources to Process OCO-2 Science Data

    Science.gov (United States)

    LaHaye, Nick

    2012-01-01

    For a period of ten weeks I had the opportunity to assist with research for the OCO-2 project in the Science Data Operations System team. This research involved writing a prototype interface that would serve as a model for the system implemented for the project's operations, provided that, when tested, the system worked properly and up to the team's standards. This paper gives the details of the research done and its results.

  13. Results from the FDIRC prototype

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, D.A., E-mail: roberts@umd.edu [University of Maryland, College Park, MD 20742 (United States); Arnaud, N. [Laboratoire de l’Accélérateur Linéaire, Centre Scientifique d’Orsay, F-91898 Orsay Cedex (France); Dey, B. [University of California, Riverside, CA 92521 (United States); Borsato, M. [Laboratoire de l’Accélérateur Linéaire, Centre Scientifique d’Orsay, F-91898 Orsay Cedex (France); Leith, D.W.G.S.; Nishimura, K.; Ratcliff, B.N. [SLAC, Stanford University, Palo Alto, CA 94309 (United States); Varner, G. [University of Hawaii, Honolulu, HI 96822 (United States); Va’vra, J. [SLAC, Stanford University, Palo Alto, CA 94309 (United States)

    2014-12-01

    We present results from a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC). This detector was designed as a prototype of the particle identification system for the SuperB experiment, and comprises 1/12 of the SuperB barrel azimuthal coverage with partial electronics implementation. The prototype was tested in the SLAC Cosmic Ray Telescope (CRT), which provides 3-D muon tracking with an angular resolution of ∼1.5 mrad, track position resolution of 5–6 mm, start time resolution of 70 ps, and a muon low-energy cutoff of ∼2 GeV provided by an iron range stack. The quartz focusing photon camera couples to a full-size BaBar DIRC bar box and is read out by 12 Hamamatsu H8500 MaPMTs providing 768 pixels. We used IRS2 waveform digitizing electronics to read out the MaPMTs. We present several results from our ongoing development activities that demonstrate that the new optics design works very well, including: (a) single photon Cherenkov angle resolutions with and without chromatic corrections, (b) the S/N ratio between the Cherenkov peak and the background, which consists primarily of ambiguities in possible photon paths to a given pixel, (c) dTOP = TOP_measured – TOP_expected resolutions, and (d) performance of the detector in the presence of high-rate backgrounds. We also describe data analysis methods and point out limits of the present performance. - Highlights: • We present results from a novel Cherenkov imaging detector called the Focusing DIRC (FDIRC). • The prototype was tested in the SLAC Cosmic Ray Telescope (CRT), which provides 3-D muon tracking. • We present several results from our ongoing development activities that demonstrate that the new optics design works very well. • We describe data analysis methods and point out limits of the present performance.

  14. Digital Prototyping of Milk Products

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Nielsen, Otto Højager Attermann; Skytte, Jacob Lercke

    2012-01-01

    reflectance measurements can be used for more extensive validation and for gathering data that can be used to extend our current model such that it can also predict how the optical properties develop during fermentation or acidification of milk to yogurt. A well-established way of measuring optical properties...... prototyping of milk products such that it can also predict how the optical properties develop during gelation of milk to yogurt. The influence of the colloidal aggregation on the optical properties is described by the static structure factor. As our method is noninvasive, we can use our setup for monitoring...

  15. Mechanical Prototyping and Manufacturing Internship

    Science.gov (United States)

    Grenfell, Peter

    2016-01-01

    The internship was located at the Johnson Space Center (JSC) Innovation Design Center (IDC), which is a facility where the JSC workforce can meet and conduct hands-on innovative design, fabrication, evaluation, and testing of ideas and concepts relevant to NASA's mission. The tasks of the internship included mechanical prototyping design and manufacturing projects in service of research and development as well as assisting the users of the IDC in completing their manufacturing projects. The first project was to manufacture hatch mechanisms for a team in the Systems Engineering and Project Advancement Program (SETMAP) hexacopter competition. These mechanisms were intended to improve the performance of the servomotors and offer an access point that would also seal to prevent cross-contamination. I also assisted other teams as they were constructing and modifying their hexacopters. The success of this competition demonstrated a proof of concept for aerial reconnaissance and sample return to be potentially used in future NASA missions. I also worked with Dr. Kumar Krishen to prototype an improved thermos and a novel, portable solar array. Computer-aided design (CAD) software was used to model the parts for both of these projects. Then, 3D printing as well as conventional techniques were used to produce the parts. These prototypes were then subjected to trials to determine the success of the designs. The solar array is intended to work in a cluster that is easy to set up and take down and doesn't require powered servomechanisms. It could be used terrestrially in areas not serviced by power grids. Both projects improve planetary exploration capabilities to future astronauts. Other projects included manufacturing custom rail brackets for EG-2, assisting engineers working on underwater instrument and tool cases for the NEEMO project, and helping to create mock-up parts for Space Center Houston. The use of the IDC enabled efficient completion of these projects at

  16. Prototype system of secure VOD

    Science.gov (United States)

    Minemura, Harumi; Yamaguchi, Tomohisa

    1997-12-01

    Secure digital content delivery systems aim to realize copyright protection and charging mechanisms, providing a secure delivery service for digital content. Encrypted content delivery and history (log) management are the means to accomplish this purpose. Our final target is to realize a video-on-demand (VOD) system that can prevent illegal usage of video data and manage user history data, achieving a secure video delivery system on the Internet or an intranet. So far, mainly targeting client-server systems connected over an enterprise LAN, we have implemented and evaluated a prototype system based on an investigation of delivery methods for encrypted video content.

  17. CERN LHC dipole prototype success

    International Nuclear Information System (INIS)

    Anon.

    1994-01-01

    In a crash programme, the first prototype superconducting dipole magnet for CERN's LHC proton-proton collider was successfully powered for the first time at CERN on 14 April, eventually reaching 9 T, above the 8.65 T nominal LHC field, before quenching for the third time. The next stage is to install the delicate measuring system for making comprehensive magnetic field maps in the 10 m long, 50 mm diameter twin apertures of the magnet. These measurements will check that the required LHC field quality has been achieved at both the nominal and injection fields

  18. Prototype plutonium-storage monitor

    International Nuclear Information System (INIS)

    Bliss, M.; Craig, R.A.; Sunberg, D.S.; Warner, R.A.

    1996-01-01

    Pacific Northwest National Laboratory (PNNL) has fabricated cerium-activated lithium silicate scintillating fibers via a hot-downdraw process. These fibers typically have an operational transmission length (1/e length) of greater than 2 meters. This permits the fabrication of devices that, hitherto, were not possible to consider. A prototype neutron monitor for scrap Pu-storage containers was fabricated and tested for 70 days, taking data with a variety of sources in a high-background environment. These data and their implications in the context of a storage-monitor application are discussed

  19. FY97 ICCS prototype specification

    International Nuclear Information System (INIS)

    Woodruff, J.

    1997-01-01

    The ICCS software team will implement and test two iterations of their software product during FY97. This document specifies the products to be delivered in that first prototype and projects the direction that the second prototype will take. Detailed specification of the later iteration will be written when the results of the first iteration are complete. The selection of frameworks to be implemented early is made on the basis of risk analysis from the point of view of future development in the ICCS project. The prototype will address risks in integration of object-oriented components, in refining our development process, and in emulation testing for FEP devices. This document is a specification that identifies products and processes to undertake for resolving these risks. The goals of this activity are to exercise our development process at a modest scale and to probe our architecture plan for fundamental limits and failure modes. The product of the iterations will be the framework software which will be useful in future ICCS code. Thus the FY97 products are intended for internal usage by the ICCS team and for demonstration to the FEP software developers of the strategy for integrating supervisory software with FEP computers. This will be the first of several expected iterations of the software development process and the performance measurements that ICCS will demonstrate, intended to support confidence in our ability to meet project RAM goals. The design of the application software is being carried out in a separate WBS 1.5.2 activity. The design activity has as its FY97 product a series of Software Design Documents that will specify the functionality of the controls software of ICCS. During the testing of this year's prototypes, the application functionality needed for test will be provided by sample maintenance controls. These are early precursors of controls that can be used for low-level device control. Since the devices under test will be represented by

  20. Rapid prototyping of robotic platforms

    CSIR Research Space (South Africa)

    De Ronde, Willis

    2016-11-01

    ... of thickness up to 200 mm can be cut to create prototype chassis/bodies or even the final product. One of the few limitations is the cutting of certain laminated materials, as this tends to produce delaminated cutting edges or even fractures in the case... mine inspection robot (Shongololo). Shongololo’s frame is made from engineering plastics while the chassis of Dassie was made from aluminium and cut using abrasive waterjet machining. The advantage of using abrasive waterjet machining is the speed...

  1. On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach

    Science.gov (United States)

    Liu, Zheng; Xue, Kaiping; Hong, Peilin

    The peer-assisted streaming paradigm has recently been widely employed to distribute live video data on the Internet. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, caused by the handshaking process of advertising buffer-map messages, sending request messages, and scheduling the data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation, the results of which show that mesh-push outperforms pull scheduling in streaming delay while achieving a comparable delivery ratio.
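
    A minimal sketch of the sender-side scheduling idea: through the token protocol the sender keeps a record of which blocks each neighbour already holds and only pushes blocks a neighbour is still missing. The greedy rarest-first choice below merely stands in for the paper's min-cost-flow formulation, and all names are illustrative.

        # Sketch of sender-side ("mesh-push") block scheduling.
        # Greedy rarest-first assignment; the paper derives the optimal
        # schedule from a min-cost flow model, which this only approximates.
        from collections import defaultdict

        def schedule_push(have, tokens, upload_slots):
            """have: set of block ids held by this sender.
            tokens: dict peer -> set of blocks that peer already holds
                    (learned via the token protocol, so a block is never
                     pushed twice to the same peer).
            upload_slots: number of (peer, block) pushes per round."""
            counts = defaultdict(int)          # how many neighbours hold each block
            for blocks in tokens.values():
                for b in blocks:
                    counts[b] += 1
            assignments = []
            for peer, peer_has in tokens.items():
                missing = sorted(have - peer_has, key=lambda b: counts[b])
                if missing and len(assignments) < upload_slots:
                    assignments.append((peer, missing[0]))   # push rarest missing block
            return assignments

        # Sender holds blocks 1-4, two neighbours, two upload slots per round.
        print(schedule_push({1, 2, 3, 4}, {"peerA": {1, 2}, "peerB": {1}}, 2))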

  2. AmbientDB: relational query processing in a P2P network

    NARCIS (Netherlands)

    P.A. Boncz (Peter); C. Treijtel

    2003-01-01

    textabstractA new generation of applications running on a network of nodes, that share data on an ad-hoc basis, will benefit from data management services including powerful querying facilities. In this paper, we introduce the goals, assumptions and architecture of AmbientDB, a new peer-to-peer

  4. ChordMR: A P2P-based Job Management Scheme in Cloud

    OpenAIRE

    Jiagao Wu; Hang Yuan; Ying He; Zhiqiang Zou

    2014-01-01

    MapReduce is a programming model and an associated implementation for processing large data sets in parallel, widely used in Cloud computing environments. However, the traditional MapReduce system is based on a centralized master-slave structure, and as the number of submitted MapReduce jobs and the system scale increase, the master node becomes the bottleneck of the system. To address this problem, we have proposed a new MapReduce system named ChordMR, which is designed to use a...
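
    The decentralised job management in such a design rests on Chord-style consistent hashing: each job identifier is hashed onto the same ring as the node identifiers and is handled by its successor node, so no central master is needed. The sketch below shows only that assignment step (hypothetical names; it is not ChordMR code).

        # Chord-style consistent hashing for assigning jobs to nodes.
        import hashlib
        from bisect import bisect_right

        M = 2 ** 32  # size of the identifier ring

        def chord_id(key: str) -> int:
            return int(hashlib.sha1(key.encode()).hexdigest(), 16) % M

        def successor(node_ids, key_id):
            """First node id clockwise from key_id on the ring."""
            i = bisect_right(node_ids, key_id)
            return node_ids[i % len(node_ids)]

        nodes = sorted(chord_id(f"node-{n}") for n in range(8))
        print("job-42 is managed by node id", successor(nodes, chord_id("job-42")))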

  5. Design and Evaluation of IP Header Compression for Cellular-Controlled P2P Networks

    DEFF Research Database (Denmark)

    Madsen, T.K.; Zhang, Qi; Fitzek, F.H.P.

    2007-01-01

    In this paper we advocate exploiting terminal cooperation to stabilize IP communication using header compression. The terminal cooperation is based on direct communication between terminals using short-range communication while simultaneously being connected to the cellular service access point. ... The short-range link is then used to provide first-aid information to heal the decompressor state of the neighboring node in case of a packet loss on the cellular link. IP header compression schemes are used to increase the spectral and power efficiency, losing robustness of the communication compared...
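
    The cooperative "first aid" idea can be sketched roughly as follows: a compressor sends a full header occasionally and deltas otherwise, and when a loss on the cellular link invalidates the decompressor context, a neighbour reached over the short-range link supplies its own copy of the reference header. The field names and the delta scheme below are illustrative; real schemes such as ROHC are considerably more elaborate.

        # Toy context-based header compression with cooperative repair.
        def compress(header, context):
            if context is None:
                return ("FULL", dict(header))
            return ("DELTA", {k: v for k, v in header.items() if context.get(k) != v})

        def decompress(packet, context, neighbour_context=None):
            kind, fields = packet
            if kind == "FULL":
                return dict(fields)
            if context is None:               # context lost on the cellular link
                context = neighbour_context   # "first aid" over the short-range link
            repaired = dict(context)
            repaired.update(fields)
            return repaired

        ctx = {"src": "10.0.0.1", "dst": "10.0.0.2", "seq": 7}
        pkt = compress({"src": "10.0.0.1", "dst": "10.0.0.2", "seq": 8}, ctx)
        print(decompress(pkt, None, neighbour_context=ctx))  # healed despite context loss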

  6. AGNO: An Adaptive Group Communication Scheme for Unstructured P2P Networks

    National Research Council Canada - National Science Library

    Tsoumakos, Dimitrios; Roussopoulos, Nick

    2004-01-01

    .... Utilizing search indices, together with a small number of soft-state shortcuts, AGNO achieves effective and bandwidth-efficient content dissemination, without the cost and restrictions of a membership protocol or a DHT...

  7. Survey: Research on QoS of P2P Reliable Streaming Media

    OpenAIRE

    Xiaofeng Xiong; Jiajia Song; Guangxue Yue; Jiansheng Liu; Linquan Xie

    2011-01-01

    Streaming media applications have become one of the main services over the Internet. As streaming media have special attributes, it is very important to ensure and improve the quality of service in large-scale streaming media systems. Based on the development of media streaming, this survey compares and analyzes the typical flow-service strategies of media streaming systems, and summarizes the features and shortcomings of different systems. Moreover, it takes the reputation evaluation, node selection, strategy of copy ...

  8. Using Wi-Fi to Save Energy via P2P Remote Execution

    DEFF Research Database (Denmark)

    Kristensen, Mads Darø; Bouvin, Niels Olof

    2010-01-01

    tasks. This paper presents energy measurements of a modern mobile computing device, showing that utilising the relatively large CPU of such a device is very expensive - even more so than using Wi-Fi. Experiments performed with the Scavenger cyber foraging system are presented; a system enabling easy...

  9. Cellular Controlled Short-Range Communication for Cooperative P2P Networking

    DEFF Research Database (Denmark)

    Fitzek, Frank H. P.; Katz, Marcos; Zhang, Qi

    2009-01-01

    This article advocates a novel communication architecture and associated collaborative framework for future wireless communication systems. In contrast to the dominating cellular architecture and the upcoming peer-to-peer architecture, the new approach envisions a cellular controlled short-range communication network among cooperating mobile and wireless devices. The role of the mobile device will change, from being an agnostic entity with respect to the surrounding world to a cognitive device. This cognitive device is capable of being aware of the neighboring devices as well as of the possibility to establish cooperation with them. The novel architecture together with several possible cooperative strategies will bring clear benefits for the network and service providers, mobile device manufacturers and also end users.

  10. Ontology driven framework for multimedia information retrieval in P2P network

    CERN Document Server

    Sokhn, Maria

    During the last decade we have witnessed an exponential growth of digital documents and multimedia resources, including a vast amount of video resources. Videos are becoming one of the most popular media thanks to the rich audio, visual and textual content they may convey. The recent technological advances have made this large amount of multimedia resources available to users in a variety of areas, including the academic and scientific realms. However, without adequate techniques for effective content based multimedia retrieval, this large and valuable body of data is barely accessible and remains in effect unusable. This thesis explores semantic approaches to content based management browsing and visualization of the multimedia resources generated for and during scientific conferences. Indeed, a so-called semantic gap exists between the explicit knowledge representation required by users who search the multimedia resources and the implicit knowledge conveyed within a conference life cycle. The aim of this wo...

  11. The Sharing Economy and Collaborative Finance: the Case of P2p Lending in Vietnam

    OpenAIRE

    Uyen, Tran Dinh; Ha, Hoang

    2017-01-01

    Peer-to-peer Online Lending (P2PO) has received increasing attention over the last years, not only because of its disruptive nature and its disintermediation of nearly all major banking functions, but also because of its rapid growth and expanding breadth of services. This model offers a new way of investing in addition to investing in traditional channels such as banking or financial company. The transaction process is done online, the personal information and terms of mobilization are compl...

  12. FCJ-119 Subjectivity in the Ecologies of P2P Production

    Directory of Open Access Journals (Sweden)

    Phoebe Moore

    2011-01-01

    Free (Libre)/Open Source Software (FLOSS) is an open, evolutionary arena in which hundreds and sometimes thousands of users voluntarily explore and design code, spot bugs in code, make contributions to the code, release software, create artwork, and develop licenses in a fashion that is becoming increasingly prevalent in the otherwise hugely monopolised software market. This ‘computerisation movement’ emerged as a challenge to the monopolisation of the software market by such mammoth firms as Microsoft and IBM, and is portrayed as being revolutionary (Elliot and Scacchi, 2004; DiBona, Ockman, and Stone, 1999; Kling and Iacono, 1988). Its ‘ultimate goal’ is ‘to provide free software to do all of the jobs computer users want to do and thus make proprietary software obsolete’ (Free Software Foundation, 2005). However, if it is to succeed in bringing about a new social order (Kling and Iacono, 1988), this movement must be re-evaluated from a critical standpoint through a look into the practices of knowledge production based on radical licenses for property sharing and development such as the General Public Licence (GPL) and the emerging subjectivities of participants. Free Software may be viewed as a social movement while Open Source is perhaps a development methodology, but it is not always necessary to isolate analysis to one or the other, firstly due to the extensive overlap in software communities, and secondly because their rhizomatic roots emerge from a shared intellectual and moral response to the exploitation of markets by powerful firms (see Elliot and Scacchi, 2004). Here, I query whether the activities of collaborative software producers as well as hardware production communities such as those found in FabLabs, which release playbots and other blueprints for machine replications as well as agricultural and construction initiatives, can indeed be perceived as revolutionary due to their subversive work and production methods. The recursive communities (Kelty 2006; Powell 2008) that develop around these practices are linked, with shared practices, goals and self-perceptions. People’s emerging subjectivities are the most important dimension of such radical production ecologies, because they reflect both the immaterial and material dimensions of the inherently political projects involved.

  13. Nature-Inspired Dissemination of Information in P2P Networks

    Science.gov (United States)

    Guéret, Christophe

    After having first been used as a means to publish content, the Web is now widely used as a social tool for sharing information. It is an easy task to subscribe to a social network, join one of the Web-based communities according to some personal interests and start to share content with all the people who do the same. It is easy once you solve two basic problems: select the network to join (go to hi5, facebook, myspace,…? join all of them?) and find/pick the right communities (i.e., find a strict label to match non-strict centers of interest). An error of judgement would result in receiving too much useless or non-relevant information. This chapter provides a study on the dissemination of information within groups of people and aims at answering one question: can we find an effortless way of sharing information on the Web? Ideally, such a solution would require neither the definition of a profile nor the selection of communities to join. Publishing information should also not be the result of an active decision but be performed in an automatic way. A nature-inspired framework is introduced as an answer to this question. This framework features artificial ants taking care of the dissemination of information items within the network. Centers of interest of the users are reflected by artificial pheromones laid down on connections between peers. Another part of the framework uses those pheromone trails to detect shared interests and creates communities.
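
    The pheromone mechanism can be sketched in a few lines: links accumulate pheromone when forwarded items turn out to match a neighbour's interests, all trails slowly evaporate, and forwarding is biased toward the strongest trails. Parameter names and values below are illustrative only.

        # Toy pheromone-guided forwarding between peers.
        import random

        class Link:
            def __init__(self):
                self.pheromone = 1.0

        def reinforce(link, reward=1.0):
            link.pheromone += reward            # the item was relevant to that neighbour

        def evaporate(links, rho=0.1):
            for link in links.values():
                link.pheromone *= (1.0 - rho)   # old interests slowly fade

        def choose_next_peer(links):
            """Pick a neighbour with probability proportional to its pheromone."""
            peers = list(links)
            weights = [links[p].pheromone for p in peers]
            return random.choices(peers, weights=weights, k=1)[0]

        links = {"peerA": Link(), "peerB": Link(), "peerC": Link()}
        reinforce(links["peerB"], reward=3.0)   # peerB liked recently forwarded items
        evaporate(links)
        print(choose_next_peer(links))          # forwards toward peerB most often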

  14. Algorithmic PON/P2P FTTH Access Network Design for CAPEX Minimization

    DEFF Research Database (Denmark)

    Papaefthimiou, Kostantinos; Tefera, Yonas; Mihylov, Dimitar

    2013-01-01

    Due to the emergence of high bandwidth-requiring services, telecommunication operators (telcos) are called to upgrade their fixed access network. In order to keep up with the competition, they must consider different optical access network solutions, with Fiber To The Home (FTTH) as the prevailing one. It provides an obvious advantage for the end users in terms of high achievable data rates. On the other hand, the high initial deployment cost required remains the heaviest impediment. The main goal of this paper is to study different approaches when designing a fiber access network. More...

  15. Peers or Professionals? The P2P-Economy and Competition Law

    NARCIS (Netherlands)

    Ranchordás, Sofia

    2017-01-01

    For almost a decade, digital peer-to-peer initiatives (eg, Uber, Airbnb) have been disrupting the traditional economy by offering informal, diverse, convenient and affordable services to consumers. However, more recently, the peer-to-peer economy has become increasingly professionalised. Service

  16. Zorilla: A P2P Middleware for Real-World Distributed Systems

    NARCIS (Netherlands)

    Drost, N.; van Nieuwpoort, R.V.; Maassen, J.; Seinstra, F.J.; Bal, H.E.

    2011-01-01

    The inherent complex nature of current distributed computing architectures hinders the widespread adoption of these systems for mainstream use. In general, users have access to a highly heterogeneous set of compute resources, which may include clusters, grids, desktop grids, clouds, and other

  17. www.p2p.edu: Rip, Mix & Burn Your Education.

    Science.gov (United States)

    Gillespie, Thom

    2001-01-01

    Discusses peer-to-peer technology, which allows transferring files from one hard drive to another. Topics include the client/server model for education; the Napster client/server model; Gnutella; Freenet and other projects to allow the free exchange of information without censorship; bandwidth problems; copyright issues; metadata; and the United…

  18. Microscopic model accounting for 2p2h configurations in magic nuclei

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.

    1983-01-01

    A model accounting for the 2p2h configurations in magic nuclei is described in the framework of the Green function formalism. The model is formulated in the lowest order in the phonon production amplitude, so that the series are expansions not over pure 2p2h configurations, but over configurations of the type "1p1h+phonon". Equations are obtained for the vertex and the density matrix, as well as an expression for the transition probabilities, that are extensions of the corresponding results of the theory of finite Fermi systems, or of the random-phase approximation, to the case where the "1p1h+phonon" configurations are taken into account. Corrections to the one-particle phenomenological basis which arise when complicated configurations are taken into account are obtained. Comparison with other approaches using phonons has shown that they are particular cases of the described model

  19. Effective nucleon-nucleon t matrix in the (p,2p) reaction

    International Nuclear Information System (INIS)

    Kudo, Y.; Kanayama, N.; Wakasugi, T.

    1989-01-01

    The cross sections and the analyzing powers for the ^40Ca(p→,2p) reactions at E_p = 76.1, 101.3, and 200 MeV are calculated in the distorted-wave impulse approximation using the Love-Franey effective nucleon-nucleon interaction. It is shown that the calculated individual contributions of the central, spin-orbit, and tensor parts in the Love-Franey interaction to the cross sections and the analyzing powers strongly depend on the incident proton energies. The spectroscopic factors extracted are consistent with the other reaction studies
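
    For reference, the analyzing power quoted for such polarized-beam (p,2p) measurements is the standard spin-up/spin-down cross-section asymmetry; schematically (this is the textbook definition, not an expression taken from the paper),

        A_y(\theta) = \frac{1}{P}\,\frac{\sigma^{\uparrow}(\theta) - \sigma^{\downarrow}(\theta)}{\sigma^{\uparrow}(\theta) + \sigma^{\downarrow}(\theta)},

    where P is the beam polarization and \sigma^{\uparrow}, \sigma^{\downarrow} are the cross sections measured with the beam spin up and down.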

  20. Trust Management in P2P systems using Standard TuLiP

    NARCIS (Netherlands)

    Czenko, M.R.; Doumen, J.M.; Etalle, Sandro

    2008-01-01

    In this paper we introduce Standard TuLiP - a new logic based Trust Management system. In Standard TuLiP, security decisions are based on security credentials, which can be issued by different entities and stored at different locations. Standard TuLiP directly supports the distributed credential

  1. Trust management in P2P systems using standard TuLiP

    NARCIS (Netherlands)

    Czenko, M.; Doumen, J.M.; Etalle, S.; Karabulut, Y.; Mitchell, J.C.; Herrmann, P.; Jensen, C.D.

    2008-01-01

    In this paper we introduce Standard TuLiP - a new logic based Trust Management system. In Standard TuLiP, security decisions are based on security credentials, which can be issued by different entities and stored at different locations. Standard TuLiP directly supports the distributed credential

  3. Trust Management in P2P systems using Standard TuLiP

    OpenAIRE

    Czenko, M.R.; Doumen, J.M.; Etalle, Sandro

    2008-01-01

    In this paper we introduce Standard TuLiP - a new logic based Trust Management system. In Standard TuLiP, security decisions are based on security credentials, which can be issued by different entities and stored at different locations. Standard TuLiP directly supports the distributed credential storage by providing a sound and complete Lookup and Inference AlgoRithm (LIAR). In this paper we focus on (a) the language of Standard TuLiP and (b) on the practical considerations which arise when d...

  4. Low-friction nanojoint prototype

    Science.gov (United States)

    Vlassov, Sergei; Oras, Sven; Antsov, Mikk; Butikova, Jelena; Lõhmus, Rünno; Polyakov, Boris

    2018-05-01

    High surface energy of individual nanostructures leads to high adhesion and static friction that can completely hinder the operation of nanoscale systems with movable parts. For instance, silver or gold nanowires cannot be moved on a silicon substrate without plastic deformation. In this paper, we experimentally demonstrate an operational prototype of a low-friction nanojoint. The movable part of the prototype is made either from a gold or a silver nano-pin produced by laser-induced partial melting of silver and gold nanowires, resulting in the formation of rounded bulbs on their ends. The nano-pin is then manipulated into an inverted pyramid (i-pyramid) specially etched in a Si wafer. Due to the small contact area, the nano-pin can be repeatedly tilted inside an i-pyramid as a rigid object without noticeable deformation. At the same time, in the absence of external force the nanojoint is stable and preserves its position and tilt angle. Experiments are performed inside a scanning electron microscope and are supported by finite element method simulations.

  5. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase 3 of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase 2 Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase 3 effort the system was tested on a component, subsystem, and system level. This Volume 1 discusses the PRCDP Phase 3 Test Program that was conducted by the HALLIBURTON NUS Environmental Corporation under contract AC07-86ID12651 with the United States Department of Energy. This document, Volume 1, Book 4, discusses the following topics: Rod Compaction/Loading System Test Results and Analysis Report; Waste Collection System Test Results and Analysis Report; Waste Container Transfer Fixture Test Results and Analysis Report; Staging and Cutting Table Test Results and Analysis Report; and Upper Cutting System Test Results and Analysis Report

  6. Prototypical Rod Consolidation Demonstration Project

    International Nuclear Information System (INIS)

    1993-05-01

    The objective of Phase 3 of the Prototypical Rod Consolidation Demonstration Project (PRCDP) was to procure, fabricate, assemble, and test the Prototypical Rod Consolidation System as described in the NUS Phase 2 Final Design Report. This effort required providing the materials, components, and fabricated parts which make up all of the system equipment. In addition, it included the assembly, installation, and setup of this equipment at the Cold Test Facility. During the Phase 3 effort the system was tested on a component, subsystem, and system level. This Volume 1 discusses the PRCDP Phase 3 Test Program that was conducted by the HALLIBURTON NUS Environmental Corporation under contract AC07-86ID12651 with the United States Department of Energy. This document, Volume 1, Book 5, discusses the following topics: Lower Cutting System Test Results and Analysis Report; NFBC Loading System Test Results and Analysis Report; Robotic Bridge Transporter Test Results and Analysis Report; RM-10A Remotec Manipulator Test Results and Analysis Report; and Manipulator Transporter Test Results and Analysis Report

  7. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models at a large scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputers potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists and is runtime efficient at the same time poses several challenges for the developers of such a framework. For example, optimisations can be performed on individual operations or the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) the parallelisation of about 50 of these building blocks using
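
    The building-block idea can be illustrated with a deliberately simplified sketch: the modeller calls a single operation such as a local (moving-window) average, and the framework transparently splits the raster into bands and processes them in parallel. This is plain NumPy with row-band tiling, not the PCRaster implementation, and the halo handling at band boundaries is only approximated.

        # Simplified parallel map-algebra building block (illustrative only).
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def _average_band(band):
            # 3x3 moving-window mean on one row band; edge padding stands in
            # for the halo-row exchange a real framework would perform.
            padded = np.pad(band, 1, mode="edge")
            out = np.zeros(band.shape, dtype=float)
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    out += padded[1 + di:1 + di + band.shape[0],
                                  1 + dj:1 + dj + band.shape[1]]
            return out / 9.0

        def local_average(raster, workers=4):
            bands = np.array_split(raster, workers, axis=0)
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return np.vstack(list(pool.map(_average_band, bands)))

        if __name__ == "__main__":
            dem = np.random.rand(1000, 1000)    # stand-in for an elevation raster
            print(local_average(dem).shape)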

  8. UbiWorld: An environment integrating virtual reality, supercomputing, and design

    Energy Technology Data Exchange (ETDEWEB)

    Disz, T.; Papka, M.E.; Stevens, R. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    UbiWorld is a concept being developed by the Futures Laboratory group at Argonne National Laboratory that ties together the notion of ubiquitous computing (Ubicomp) with that of using virtual reality for rapid prototyping. The goal is to develop an environment where one can explore Ubicomp-type concepts without having to build real Ubicomp hardware. The basic notion is to extend object models in a virtual world by using distributed wide area heterogeneous computing technology to provide complex networking and processing capabilities to virtual reality objects.

  9. A visualization environment for supercomputing-based applications in computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  10. Prototype moving-ring reactor

    International Nuclear Information System (INIS)

    Smith, A.C. Jr.; Ashworth, C.P.; Abreu, K.E.

    1982-01-01

    We have completed a design of the Prototype Moving-Ring Reactor. The fusion fuel is confined in current-carrying rings of magnetically-field-reversed plasma (Compact Toroids). The plasma rings, formed by a coaxial plasma gun, undergo adiabatic magnetic compression to ignition temperature while they are being injected into the reactor's burner section. The cylindrical burner chamber is divided into three burn stations. Separator coils and a slight axial guide field gradient are used to shuttle the ignited toroids rapidly from one burn station to the next, pausing for 1/3 of the total burn time at each station. D-T-3He ice pellets refuel the rings at a rate which maintains constant radiated power

  11. LEP vacuum chamber, early prototype

    CERN Multimedia

    CERN PhotoLab

    1978-01-01

    The structure of LEP, with long bending magnets and little access to the vacuum chamber between them, required distributed pumping. This is an early prototype for the LEP vacuum chamber, made from extruded aluminium. The main opening is for the beam. The small channel to the right is for cooling water, to carry away the heat deposited by the synchrotron radiation from the beam. The 4 slots in the channel to the left house the strip-shaped ion-getter pumps (see 7810255). The ion-getter pumps depended on the magnetic field of the bending magnets, too low at injection energy for the pumps to function well. Also, a different design was required outside the bending magnets. This design was therefore abandoned, in favour of a thermal getter pump (see 8301153 and 8305170).

  12. Prototype international quality assurance program

    International Nuclear Information System (INIS)

    Broadway, J.A.; Chambless, D.A.; Sapozhnikov, Yu.A.; Kalmykov, S.N.

    1998-01-01

    The international community presently lacks the ability to determine the quality and credibility of environmental measurements that is required to make sound decisions in matters related to international security, public health, and investment-related considerations. The ultimate goal of the work described in this article is to develop a credible information base, including measurement capability, for the determination of environmental contamination and the potential for proliferation of material components of chemical or nuclear weapons. This study compared the accuracy obtained by six Russian and six U.S. laboratories for samples representative of classes of trace metals, dioxins/furans, and radioactive substances. The results obtained in this work indicate that current estimates of laboratory accuracy are likely overly optimistic. The weaknesses discovered by this prototype U.S.-Russia study also exist within the broader international community of laboratories. Further work is proposed to address the urgent need for the international community to improve performance evaluations for analytical measurements. (author)

  13. Prototype of an industrial electron accelerator

    International Nuclear Information System (INIS)

    Lopez, V.H.; Valdovinos, A.M.

    1992-01-01

    The interest of Mexican industry in the use of irradiation processes has increased in recent years. Examples include the irradiation of combustion gases (elimination of NO_x and SO_2) and polymer cross-linking, among others. At present, at least twelve enterprises, which have already been contacted by electron accelerator suppliers from foreign countries, require such equipment immediately. The first step of the project consisted in identifying the type of electron accelerator that can be constructed in Mexico with the largest possible number of locally acquired pieces of equipment, instruments, components and materials, and that is useful for the largest number of users. The characteristics of the accelerator prototype are: a transformer-type accelerator with multiple insulated secondaries and rectifier circuits, with a potential of 0.8 MV. The second step consisted of an economic study that demonstrated the economic feasibility of its construction. (Author)

  14. Hadron therapy information sharing prototype

    CERN Document Server

    Roman, Faustin Laurentiu; Kanellopoulos, Vassiliki; Amoros, Gabriel; Davies, Jim; Dosanjh, Manjit; Jena, Raj; Kirkby, Norman; Peach, Ken; Salt, Jose

    2013-01-01

    The European PARTNER project developed a prototypical system for sharing hadron therapy data. This system allows doctors and patients to record and report treatment-related events during and after hadron therapy. It presents doctors and statisticians with an integrated view of adverse events across institutions, using open-source components for data federation, semantics, and analysis. There is a particular emphasis upon semantic consistency, achieved through intelligent, annotated form designs. The system as presented is ready for use in a clinical setting, and amenable to further customization. The essential contribution of the work reported here lies in the novel data integration and reporting methods, as well as the approach to software sustainability achieved through the use of community-supported open-source components.

  15. PEP-II prototype klystron

    International Nuclear Information System (INIS)

    Fowkes, W.R.; Caryotakis, G.; Lee, T.G.; Pearson, C.; Wright, E.L.

    1993-04-01

    A 540-kW continuous-wave (cw) klystron operating at 476 MHz was developed for use as a power source for testing PEP-II rf accelerating cavities and rf windows. It also serves as a prototype for a 1.2 MW cw klystron presently being developed as a potential rf source for asymmetric colliding ring use. The design incorporates the concepts and many of the parts used in the original 353 MHz PEP klystron developed sixteen years ago. The superior computer simulation codes available today result in improved performance, with the cavity frequencies, drift lengths, and output circuit optimized for the higher frequency. The design and operating results of this tube are described with particular emphasis on the factors which affect efficiency and stability

  16. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    The software 'Xray-SKIF' for the simulation of X-rays in refractive structures by the Monte Carlo method using the supercomputer SKIF BSU has been developed. The program generates a large number of rays propagated from a source to the refractive structure. The ray trajectory is calculated under the assumption of geometrical optics. Absorption is calculated for each ray inside the refractive structure. Dynamic arrays are used to store the calculated ray parameters, which allows the X-ray field distributions to be reconstructed very quickly for different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: the simulation of 10^8 X-rays on the supercomputer takes 3 hours on 1 processor and 6 minutes on 30 processors, respectively. 10^9 X-rays were calculated by the software 'Xray-SKIF', which allows the X-ray field after the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
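
    The computation described is embarrassingly parallel: each processor traces an independent batch of rays and the partial results are combined at the end. The sketch below captures only that structure, with a single homogeneous slab standing in for the refractive structure; the attenuation coefficient and thickness are made-up values.

        # Toy parallel Monte Carlo transmission through an absorbing slab.
        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        MU = 50.0         # attenuation coefficient, 1/cm (illustrative)
        THICKNESS = 0.01  # slab thickness, cm (illustrative)

        def trace_batch(args):
            n_rays, seed = args
            rng = np.random.default_rng(seed)
            # A ray is transmitted if its sampled free path exceeds the thickness.
            free_paths = rng.exponential(1.0 / MU, size=n_rays)
            return int(np.count_nonzero(free_paths > THICKNESS))

        def run(total_rays=10**6, workers=4):
            per_worker = total_rays // workers
            jobs = [(per_worker, seed) for seed in range(workers)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                transmitted = sum(pool.map(trace_batch, jobs))
            return transmitted / (per_worker * workers)

        if __name__ == "__main__":
            print("transmitted fraction:", run())   # should approach exp(-MU*THICKNESS)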

  17. The Yucca Mountain Project Prototype Testing Program

    International Nuclear Information System (INIS)

    1989-10-01

    The Yucca Mountain Project is conducting a Prototype Testing Program to ensure that the Exploratory Shaft Facility (ESF) tests can be completed in the time available and to develop instruments, equipment, and procedures so the ESF tests can collect reliable and representative site characterization data. This report summarizes the prototype tests and their status and location and emphasizes prototype ESF and surface tests, which are required in the early stages of the ESF site characterization tests. 14 figs

  18. Test case preparation using a prototype

    OpenAIRE

    Treharne, Helen; Draper, J.; Schneider, Steve A.

    1998-01-01

    This paper reports on the preparation of test cases using a prototype within the context of a formal development. It describes an approach to building a prototype using an example. It discusses how a prototype contributes to the testing activity as part of a lifecycle based on the use of formal methods. The results of applying the approach to an embedded avionics case study are also presented.

  19. A prototype for JDEM science data processing

    International Nuclear Information System (INIS)

    Gottschalk, Erik E

    2011-01-01

    Fermilab is developing a prototype science data processing and data quality monitoring system for dark energy science. The purpose of the prototype is to demonstrate distributed data processing capabilities for astrophysics applications, and to evaluate candidate technologies for trade-off studies. We present the architecture and technical aspects of the prototype, including an open source scientific execution and application development framework, distributed data processing, and publish/subscribe message passing for quality control.
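
    The publish/subscribe pattern mentioned for quality control can be reduced to a minimal in-process sketch: monitors subscribe to topics and processing stages publish quality metrics to them. Topic and field names below are hypothetical; the prototype's actual messaging middleware is not specified in the record.

        # Minimal in-process publish/subscribe for data-quality messages.
        from collections import defaultdict

        class Broker:
            def __init__(self):
                self.subscribers = defaultdict(list)
            def subscribe(self, topic, callback):
                self.subscribers[topic].append(callback)
            def publish(self, topic, message):
                for callback in self.subscribers[topic]:
                    callback(message)

        broker = Broker()
        broker.subscribe("dq.image", lambda m: print("QC monitor received:", m))
        broker.publish("dq.image", {"exposure": 1234, "bad_pixels": 17})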

  20. Rapid prototyping using CBCT: an initial experience

    International Nuclear Information System (INIS)

    Yovchev, D.; Deliverska, E.; Indjova, J.; Ugrinov, R.

    2011-01-01

    This report presents a case of fibrous dysplasia in the left lower jaw of a 12-year-old girl, scanned with CBCT (cone-beam computed tomography). On the basis of the CBCT scan, a model of the affected jaw was produced using a rapid-prototyping three-dimensional printer. The case demonstrates the possibility of obtaining a prototype from CBCT data. Prototypes can be used to support diagnosis, planning, training (of students and postgraduates) and obtaining informed consent from the patient.