WorldWideScience

Sample records for llnl hpc linux

  1. Linux bible

    CERN Document Server

    Negus, Christopher

    2015-01-01

    The industry favorite Linux guide, updated for Red Hat Enterprise Linux 7 and the cloud. Linux Bible, 9th Edition is the ultimate hands-on Linux user guide, whether you're a true beginner or a more advanced user navigating recent changes. This updated ninth edition covers the latest versions of Red Hat Enterprise Linux 7 (RHEL 7), Fedora 21, and Ubuntu 14.04 LTS, and includes new information on cloud computing and development with guidance on OpenStack and CloudForms. With a focus on RHEL 7, this practical guide gets you up to speed quickly on the new enhancements for enterprise-quality file s

  2. Linux Essentials

    CERN Document Server

    Smith, Roderick W

    2012-01-01

    A unique, full-color introduction to Linux fundamentals Serving as a low-cost, secure alternative to expensive operating systems, Linux is a UNIX-based, open source operating system. Full-color and concise, this beginner's guide takes a learning-by-doing approach to understanding the essentials of Linux. Each chapter begins by clearly identifying what you will learn in the chapter, followed by a straightforward discussion of concepts that leads you right into hands-on tutorials. Chapters conclude with additional exercises and review questions, allowing you to reinforce and measure your underst

  3. Leveraging HPC resources for High Energy Physics

    International Nuclear Information System (INIS)

    O'Brien, B; Washbrook, A; Walker, R

    2014-01-01

    High Performance Computing (HPC) supercomputers provide unprecedented computing power for a diverse range of scientific applications. The most powerful supercomputers now deliver petaflop peak performance with the expectation of 'exascale' technologies available in the next five years. More recent HPC facilities use x86-based architectures managed by Linux-based operating systems which could potentially allow unmodified HEP software to be run on supercomputers. There is now a renewed interest from both the LHC experiments and the HPC community to accommodate data analysis and event simulation production on HPC facilities. This study provides an outline of the challenges faced when incorporating HPC resources for HEP software by using the HECToR supercomputer as a demonstrator.

  4. Running Linux

    CERN Document Server

    Dalheimer, Matthias Kalle

    2006-01-01

    The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.

  5. Connecting to HPC VPN | High-Performance Computing | NREL

    Science.gov (United States)

    …visualization, and file transfers. NREL users logging in to Peregrine: use SSH to log in to the system. Your login and password will match your NREL network account login/password. From OS X or Linux, open a terminal… The login for the Windows HPC Cluster will match your NREL Active Directory login/password that you use to…
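
    The login step above is a single SSH command. A minimal sketch in Python, assuming OpenSSH is installed; the host name below is a placeholder assumption, not the documented NREL endpoint:

        # Minimal sketch of the SSH login step described above. The host name
        # is a placeholder assumption; use the login node published by NREL.
        import subprocess

        USER = "jdoe"                      # your NREL network account login
        HOST = "peregrine.hpc.nrel.gov"    # hypothetical login node name

        # Equivalent to typing `ssh jdoe@peregrine.hpc.nrel.gov` in a terminal;
        # ssh itself prompts for the password interactively.
        subprocess.run(["ssh", f"{USER}@{HOST}"], check=False)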

  6. Linux System Administration

    CERN Document Server

    Adelstein, Tom

    2007-01-01

    If you're an experienced system administrator looking to acquire Linux skills, or a seasoned Linux user facing a new challenge, Linux System Administration offers practical knowledge for managing a complete range of Linux systems and servers. The book summarizes the steps you need to build everything from standalone SOHO hubs, web servers, and LAN servers to load-balanced clusters and servers consolidated through virtualization. Along the way, you'll learn about all of the tools you need to set up and maintain these working environments. Linux is now a standard corporate platform with user

  7. Linux Desktop Pocket Guide

    CERN Document Server

    Brickner, David

    2005-01-01

    While Mac OS X garners all the praise from pundits, and Windows XP attracts all the viruses, Linux is quietly being installed on millions of desktops every year. For programmers and system administrators, business users, and educators, desktop Linux is a breath of fresh air and a needed alternative to other operating systems. The Linux Desktop Pocket Guide is your introduction to using Linux on five of the most popular distributions: Fedora, Gentoo, Mandriva, SUSE, and Ubuntu. Despite what you may have heard, using Linux is not all that hard. Firefox and Konqueror can handle all your web bro

  8. Beginning Ubuntu Linux

    CERN Document Server

    Raggi, Emilio; Channelle, Andy; Parsons, Trevor; Van Vugt, Sander

    2010-01-01

    Ubuntu Linux is the fastest growing Linux-based operating system, and Beginning Ubuntu Linux, Fifth Edition teaches all of us - including those who have never used Linux - how to use it productively, whether you come from Windows or the Mac or the world of open source. Beginning Ubuntu Linux, Fifth Edition shows you how to take advantage of the newest Ubuntu release, Lucid Lynx. Based on the best-selling previous edition, Emilio Raggi maintains a fine balance between teaching Ubuntu and introducing new features. Whether you aim to use it in the home or in the office, you'll be introduced to th

  9. HPC: Rent or Buy

    Science.gov (United States)

    Fredette, Michelle

    2012-01-01

    "Rent or buy?" is a question people ask about everything from housing to textbooks. It is also a question universities must consider when it comes to high-performance computing (HPC). With the advent of Amazon's Elastic Compute Cloud (EC2), Microsoft Windows HPC Server, Rackspace's OpenStack, and other cloud-based services, researchers now have…

  10. Linux utilities cookbook

    CERN Document Server

    Lewis, James Kent

    2013-01-01

    A cookbook-style guide packed with examples and illustrations, it offers organized learning through recipes and step-by-step instructions. The book is designed so that you can pick exactly what you need, when you need it. Written for anyone who would like to become familiar with Linux, this book is perfect for those migrating from Windows to Linux; it will save you time and money by showing you exactly how and where to begin working with Linux and troubleshooting in easy steps.

  11. Linux Networking Cookbook

    CERN Document Server

    Schroder, Carla

    2008-01-01

    If you want a book that lays out the steps for specific Linux networking tasks, one that clearly explains the commands and configurations, this is the book for you. Linux Networking Cookbook is a soup-to-nuts collection of recipes that covers everything you need to know to perform your job as a Linux network administrator. You'll dive straight into the gnarly hands-on work of building and maintaining a computer network

  12. Pro Linux System Administration

    CERN Document Server

    Turnbull, James

    2009-01-01

    We can all be Linux experts, provided we invest the time in learning the craft of Linux administration. Pro Linux System Administration makes it easy for small to medium-sized businesses to enter the world of zero-cost software running on Linux and covers all the distros you might want to use, including Red Hat, Ubuntu, Debian, and CentOS. Authors and systems infrastructure experts James Turnbull, Peter Lieverdink, and Dennis Matotek take a layered, component-based approach to open source business systems, while training system administrators as the builders of business infrastructure. If

  13. Ubuntu Linux toolbox

    CERN Document Server

    Negus, Christopher

    2012-01-01

    This bestseller from Linux guru Chris Negus is packed with an array of new and revised material. As a longstanding bestseller, Ubuntu Linux Toolbox has taught you how to get the most out of Ubuntu, the world's most popular Linux distribution. With this eagerly anticipated new edition, Christopher Negus returns with a host of new and expanded coverage on tools for managing file systems, ways to connect to networks, techniques for securing Ubuntu systems, and a look at the latest Long Term Support (LTS) release of Ubuntu, all aimed at getting you up and running with Ubuntu Linux quickly.

  14. LLNL 1981: technical horizons

    International Nuclear Information System (INIS)

    1981-07-01

    Research programs at LLNL for 1981 are described in broad terms. In his annual State of the Laboratory address, Director Roger Batzel projected a $481 million operating budget for fiscal year 1982, up nearly 13% from last year. In projects for the Department of Energy and the Department of Defense, the Laboratory applies its technical facilities and capabilities to nuclear weapons design and development and other areas of defense research that include inertial confinement fusion, nonnuclear ordnance, and particle-beam technology. LLNL is also applying its unique experience and capabilities to a variety of projects that will help the nation meet its energy needs in an environmentally acceptable manner. A sampling of recent achievements by LLNL support organizations indicates their diversity.

  15. Minimalist's linux cluster

    International Nuclear Information System (INIS)

    Choi, Chang-Yeong; Kim, Jeong-Hyun; Kim, Seyong

    2004-01-01

    Using barebone PC components and NICs, we construct a Linux cluster with a 2-dimensional mesh structure. This cluster has a smaller footprint, is less expensive, and uses less power than a conventional Linux cluster. Here, we report our experience in building such a machine and discuss our current lattice project on the machine.

  16. The LLNL AMS facility

    International Nuclear Information System (INIS)

    Roberts, M.L.; Bench, G.S.; Brown, T.A.

    1996-05-01

    The AMS facility at Lawrence Livermore National Laboratory (LLNL) routinely measures the isotopes ³H, ⁷Be, ¹⁰Be, ¹⁴C, ²⁶Al, ³⁶Cl, ⁴¹Ca, ⁵⁹,⁶³Ni, and ¹²⁹I. During the past two years, over 30,000 research samples have been measured. Of these samples, approximately 30% were for ¹⁴C bioscience tracer studies, 45% were ¹⁴C samples for archaeology and the geosciences, and the other isotopes constitute the remaining 25%. During the past two years at LLNL, a significant amount of work has gone into the development of the Projectile X-ray AMS (PXAMS) technique. PXAMS uses induced characteristic x-rays to discriminate against competing atomic isobars. PXAMS has been most fully developed for ⁶³Ni but shows promise for the measurement of several other long-lived isotopes. During the past year LLNL has also conducted an ¹²⁹I interlaboratory comparison exercise. Recent hardware changes at the LLNL AMS facility include the installation and testing of a new thermal emission ion source, a new multianode gas ionization detector for general AMS use, re-alignment of the vacuum tank of the first of the two magnets that make up the high-energy spectrometer, and a new cryo-vacuum system for the AMS ion source. In addition, they have begun design studies and carried out tests for a new high-resolution injector and a new beamline for heavy-element AMS

  17. STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC

    Science.gov (United States)

    Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark

    2017-10-01

    As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a need. Linux containers may very well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach vast elastic resources on such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow at the Cori HPC system at NERSC. STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, which was carried over ESnet after optimizing end points, and 2) scalable deployment of a conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.

  18. Linux Server Security

    CERN Document Server

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  19. Kali Linux CTF blueprints

    CERN Document Server

    Buchanan, Cameron

    2014-01-01

    Taking a highly practical approach and a playful tone, Kali Linux CTF Blueprints provides step-by-step guides to setting up vulnerabilities, in-depth guidance to exploiting them, and a variety of advice and ideas for building and customising your own challenges. If you are a penetration testing team leader or individual who wishes to challenge yourself or your friends in the creation of penetration testing assault courses, this is the book for you. The book assumes a basic level of penetration skills and familiarity with the Kali Linux operating system.

  20. Linux Security Cookbook

    CERN Document Server

    Barrett, Daniel J; Byrnes, Robert G

    2003-01-01

    Computer security is an ongoing process, a relentless contest between system administrators and intruders. A good administrator needs to stay one step ahead of any adversaries, which often involves a continuing process of education. If you're grounded in the basics of security, however, you won't necessarily want a complete treatise on the subject each time you pick up a book. Sometimes you want to get straight to the point. That's exactly what the new Linux Security Cookbook does. Rather than provide a total security solution for Linux computers, the authors present a series of easy-to-fol

  1. Kali Linux social engineering

    CERN Document Server

    Singh, Rahul

    2013-01-01

    This book is a practical, hands-on guide to learning and performing SET attacks with multiple examples. Kali Linux Social Engineering is for penetration testers who want to use BackTrack in order to test for social engineering vulnerabilities or for those who wish to master the art of social engineering attacks.

  2. Faults in Linux

    DEFF Research Database (Denmark)

    Palix, Nicolas Jean-Michel; Thomas, Gaël; Saha, Suman

    2011-01-01

    In 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still a major problem? To answer these questions, we have transported the experiments of Chou et al. to Linux versions 2.6.0 to 2.6.33, released between late 2003 and early 2010. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been...
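
    The study's headline metric is fault density by directory. A toy recomputation in Python with invented counts (not the paper's data) shows how one directory's rate is compared against the rest of the tree:

        # Toy illustration of the fault-density comparison described above.
        # All counts are invented for illustration; they are not study data.
        fault_counts = {"drivers": 700, "fs": 120, "net": 90, "kernel": 40}
        line_counts = {"drivers": 4_000_000, "fs": 900_000,
                       "net": 700_000, "kernel": 300_000}

        def faults_per_kloc(directory):
            # Faults per thousand lines of code in one directory.
            return 1000.0 * fault_counts[directory] / line_counts[directory]

        rest = [d for d in fault_counts if d != "drivers"]
        rest_rate = (1000.0 * sum(fault_counts[d] for d in rest)
                     / sum(line_counts[d] for d in rest))
        ratio = faults_per_kloc("drivers") / rest_rate
        print(f"drivers: {faults_per_kloc('drivers'):.3f} faults/kLOC, "
              f"{ratio:.1f}x the rate of the rest of the tree")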

  3. Embedded Linux in het onderwijs [Embedded Linux in education]

    NARCIS (Netherlands)

    Dr Ruud Ermers

    2008-01-01

    Embedded Linux is being adopted as an embedded operating system at more and more large companies. Within the Technical Computer Science (Technische Informatica) programme of Fontys Hogeschool ICT, Embedded Linux has been introduced in cooperation with the research group (lectoraat) Architecture of Embedded Systems. As a field of study, Embedded Linux is

  4. Communication to Linux users

    CERN Multimedia

    IT Department

    We would like to inform you that the aging “phone” Linux command will stop working: On lxplus on 30 November 2009, On lxbatch on 4 January 2010, and is replaced by the new “phonebook” command, currently available on SLC4 and SLC5 Linux. As the new “phonebook” command has different syntax and output formats from the “phone” command, please update and test all scripts currently using “phone” before the above dates. You can refer to the article published on the IT Service Status Board, under the Service Changes section. Please send any comments to it-dep-phonebook-feedback@cern.ch Best regards, IT-UDS User Support Section
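
    The announcement asks users to find and update every script that still calls the old command. A minimal sketch of that audit, assuming your scripts live under ~/bin (adjust the path as needed):

        # Minimal sketch: find scripts that still call the deprecated `phone`
        # command so they can be ported to `phonebook` before the cutoff dates.
        # The search directory is an assumption; point it at your own scripts.
        import pathlib
        import re

        SCRIPT_DIR = pathlib.Path.home() / "bin"
        PATTERN = re.compile(r"\bphone\b")   # word boundary, so `phonebook`
                                             # itself is not flagged

        for path in SCRIPT_DIR.rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")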

  5. HPC Annual Report 2017

    Energy Technology Data Exchange (ETDEWEB)

    Dennig, Yasmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-10-01

    Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraFLOP barrier—propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing (HPC) capabilities to gain a tremendous competitive edge in the marketplace. As part of our continuing commitment to providing modern computing infrastructure and systems in support of Sandia missions, we made a major investment in expanding Building 725 to serve as the new home of HPC systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) Prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment and application code performance.

  6. Kali Linux cookbook

    CERN Document Server

    Pritchett, Willie

    2013-01-01

    A practical, cookbook-style guide with numerous chapters and recipes explaining penetration testing. The cookbook-style recipes allow you to go directly to your topic of interest if you are an expert, using this book as a reference, or to follow topics throughout a chapter to gain in-depth knowledge if you are a beginner. This book is ideal for anyone who wants to get up to speed with Kali Linux. It would also be an ideal book to use as a reference for seasoned penetration testers.

  7. Programming Models in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Shipman, Galen M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-13

    These are the slides for a presentation on programming models in HPC, at the Los Alamos National Laboratory's Parallel Computing Summer School. The following topics are covered: Flynn's Taxonomy of computer architectures; single instruction single data; single instruction multiple data; multiple instruction multiple data; address space organization; definition of Trinity (Intel Xeon-Phi is a MIMD architecture); single program multiple data; multiple program multiple data; ExMatEx workflow overview; definition of a programming model, programming languages, runtime systems; programming model and environments; MPI (Message Passing Interface); OpenMP; Kokkos (Performance Portable Thread-Parallel Programming Model); Kokkos abstractions, patterns, policies, and spaces; RAJA, a systematic approach to node-level portability and tuning; overview of the Legion Programming Model; mapping tasks and data to hardware resources; interoperability: supporting task-level models; Legion S3D execution and performance details; workflow, integration of external resources into the programming model.

  8. HPC4Energy Final Report : GE Energy

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Steven G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Van Zandt, Devin T. [GE Energy Consulting, Schenectady, NY (United States); Thomas, Brian [GE Energy Consulting, Schenectady, NY (United States); Mahmood, Sajjad [GE Energy Consulting, Schenectady, NY (United States); Woodward, Carol S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-02-25

    Power System planning tools are being used today to simulate systems that are far larger and more complex than just a few years ago. Advances in renewable technologies and more pervasive control technology are driving planning engineers to analyze an increasing number of scenarios and system models with much more detailed network representations. Although the speed of individual CPU’s has increased roughly according to Moore’s Law, the requirements for advanced models, increased system sizes, and larger sensitivities have outstripped CPU performance. This computational dilemma has reached a critical point and the industry needs to develop the technology to accurately model the power system of the future. The hpc4energy incubator program provided a unique opportunity to leverage the HPC resources available to LLNL and the power systems domain expertise of GE Energy to enhance the GE Concorda PSLF software. Well over 500 users worldwide, including all of the major California electric utilities, rely on Concorda PSLF software for their power flow and dynamics. This pilot project demonstrated that the GE Concorda PSLF software can perform contingency analysis in a massively parallel environment to significantly reduce the time to results. An analysis with 4,127 contingencies that would take 24 days on a single core was reduced to 24 minutes when run on 4,217 cores. A secondary goal of this project was to develop and test modeling techniques that will expand the computational capability of PSLF to efficiently deal with systems sizes greater than 150,000 buses. Toward this goal the matrix reordering implementation time was sped up 9.5 times by optimizing the code and introducing threading.
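
    The quoted numbers imply a speedup of (24 days × 1,440 min/day) / 24 min = 1,440× on 4,217 cores, roughly a third of ideal scaling. Contingency screening parallelizes this well because each case is an independent solve; a sketch of that pattern in Python, with a stand-in for the actual PSLF power-flow solve:

        # Sketch of the embarrassingly parallel pattern behind contingency
        # analysis: each contingency is an independent power-flow solve, so
        # the case list can be farmed out across cores. evaluate() is a
        # stand-in; it does not model the real PSLF solver.
        from multiprocessing import Pool

        def evaluate(contingency_id):
            # Pretend to solve the case with element `contingency_id` outaged.
            converged = (contingency_id % 97) != 0   # fake result
            return contingency_id, converged

        if __name__ == "__main__":
            cases = range(4127)              # contingency count cited above
            with Pool() as pool:             # one worker per local core
                results = pool.map(evaluate, cases)
            failures = [cid for cid, ok in results if not ok]
            print(f"{len(failures)} of {len(cases)} contingencies flagged")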

  9. Simplifying the Access to HPC Resources by Integrating them in the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-06-22

    The computing landscape of KAUST is increasing in complexity. Researchers have access to the 9th fastest supercomputer in the world (Shaheen II) and several other HPC clusters. They work on local Windows, Mac, or Linux workstations. In order to facilitate the access of the HPC systems, we have developed interfaces to several research applications that automate input data transfer, job submission and retrieval of results. The user now submits his jobs to the cluster from within the application GUI on his workstation, and does not have to physically go onto the cluster anymore.
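
    A minimal sketch of the three automated steps (stage input, submit, retrieve), assuming SSH access and a SLURM-style scheduler; the host name, paths, and job script are placeholder assumptions, not the actual KAUST setup:

        # Sketch of the transfer/submit/retrieve automation described above.
        # Host, paths, and job script are hypothetical; the scheduler is
        # assumed SLURM-like (sbatch prints "Submitted batch job <id>").
        import subprocess

        HOST = "hpc.example.kaust.edu.sa"     # placeholder login node
        REMOTE_DIR = "/scratch/jdoe/run42"    # placeholder working directory

        def run(cmd):
            return subprocess.run(cmd, check=True,
                                  capture_output=True, text=True).stdout

        run(["scp", "input.dat", f"{HOST}:{REMOTE_DIR}/"])             # 1. stage input
        out = run(["ssh", HOST, f"cd {REMOTE_DIR} && sbatch job.sh"])  # 2. submit
        job_id = out.split()[-1]
        print("submitted job", job_id)
        run(["scp", f"{HOST}:{REMOTE_DIR}/results.out", "."])          # 3. retrieve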

  10. ATLAS computing on CSCS HPC

    Science.gov (United States)

    Filipcic, A.; Haug, S.; Hostettler, M.; Walker, R.; Weber, M.

    2015-12-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was the highest ranked European system on TOP500 in 2014, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, a partial GPU acceleration of the Geant4 detector simulations has been implemented.

  11. ATLAS computing on CSCS HPC

    CERN Document Server

    Hostettler, Michael Artur; The ATLAS collaboration; Haug, Sigve; Walker, Rodney; Weber, Michele

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations has been implemented to justify the allocation request for this machine.

  12. ATLAS computing on CSCS HPC

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Weber, Michele; Walker, Rodney; Hostettler, Michael Artur

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Further, some GPU acceleration of the Geant4 detector simulations was implemented to justify the allocation request for this machine.

  13. Diskless Linux Cluster How-To

    National Research Council Canada - National Science Library

    Shumaker, Justin L

    2005-01-01

    Diskless Linux clustering is not yet a turn-key solution. The process of configuring a cluster of diskless Linux machines requires many modifications to the stock Linux operating system before they can boot cleanly...

  14. 2014 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, Barbara [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Our commitment is to support you through delivery of an IT environment that provides mission value by transforming the way you use, protect, and access information. We approach this through technical innovation, risk management, and relationships with our workforce, Laboratories leadership, and policy makers nationwide. This second edition of our HPC Annual Report continues our commitment to communicate the details and impact of Sandia’s large-scale computing resources that support the programs associated with our diverse mission areas. A key tenet of our approach is to work with our mission partners to understand and anticipate their requirements and formulate an investment strategy that is aligned with those Laboratories priorities. In doing this, our investments include not only expanding the resources available for scientific computing and modeling and simulation, but also acquiring large-scale systems for data analytics, cloud computing, and Emulytics. We are also investigating new computer architectures in our advanced systems test bed to guide future platform designs and prepare for changes in our code development models. Our initial investments in large-scale institutional platforms that are optimized for Informatics and Emulytics work are serving a diverse customer base. We anticipate continued growth and expansion of these resources in the coming years as the use of these analytic techniques expands across our mission space. If your program could benefit from an investment in innovative systems, please work through your Program Management Unit’s Mission Computing Council representatives to engage our teams.

  15. Lightweight HPC beam OMEGA

    Science.gov (United States)

    Sýkora, Michal; Jedlinský, Petr; Komanec, Jan

    2017-09-01

    In the design and construction of precast bridge structures, a general goal is to achieve the maximum possible span length. Often, the weight of individual beams makes them difficult to handle, which may be a limiting factor in achieving the desired span. The design of the OMEGA beam aims to solve part of these problems. It is a thin-walled shell made of prestressed high-performance concrete (HPC) in the shape of an inverted Ω character. The concrete shell with prestressed strands is fitted with a non-stressed tendon already in the casting yard and is more easily transported and installed on the site. The shells are subsequently completed with mild steel reinforcement and cores are cast in situ together with the deck. The OMEGA beams can also be used as an alternative to steel-concrete composite bridges. Due to the higher production complexity, the OMEGA beam can hardly substitute completely for conventional prestressed beams like T or PETRA, but it can be a useful alternative for specific construction needs.

  16. LLNL NESHAPs, 1993 annual report

    International Nuclear Information System (INIS)

    Harrach, R.J.; Surano, K.A.; Biermann, A.H.; Gouveia, F.J.; Fields, B.C.; Tate, P.J.

    1994-06-01

    The standard defined in NESHAPs, 40 CFR Part 61.92, limits the emission of radionuclides to the ambient air from DOE facilities to those that would cause any member of the public to receive in any year an effective dose equivalent of 10 mrem. In August 1993 DOE and EPA signed a Federal Facility Compliance Agreement which established a schedule of work for LLNL to perform to demonstrate compliance with NESHAPs, 40 CFR Part 61, Subpart H. The progress in LLNL's NESHAPs program - evaluations of all emission points for the Livermore site and Site 300, of collective EDEs for populations within 80 km of each site, status in regard to continuous monitoring requirements and periodic confirmatory measurements, improvements in the sampling and monitoring systems and progress on a NESHAPs quality assurance program - is described in this annual report. In April 1994 the EPA notified DOE and LLNL that all requirements of the FFCA had been met, and that LLNL was in compliance with the NESHAPs regulations.

  17. LLNL NESHAPs 2014 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bertoldo, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gallegos, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); MacQueen, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wegrecki, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-01

    Lawrence Livermore National Security, LLC operates facilities at Lawrence Livermore National Laboratory (LLNL) where radionuclides are handled and stored. These facilities are subject to the U.S. Environmental Protection Agency (EPA) National Emission Standards for Hazardous Air Pollutants (NESHAPs) in Code of Federal Regulations (CFR) Title 40, Part 61, Subpart H, which regulates radionuclide emissions to air from Department of Energy (DOE) facilities. Specifically, NESHAPs limits the emission of radionuclides to the ambient air to levels resulting in an annual effective dose equivalent of 10 mrem (100 μSv) to any member of the public. Using measured and calculated emissions, and building-specific and common parameters, LLNL personnel applied the EPA-approved computer code, CAP88-PC, Version 4.0.1.17, to calculate the dose to the maximally exposed individual member of the public for the Livermore Site and Site 300.

  18. LLNL Chemical Kinetics Modeling Group

    Energy Technology Data Exchange (ETDEWEB)

    Pitz, W J; Westbrook, C K; Mehl, M; Herbinet, O; Curran, H J; Silke, E J

    2008-09-24

    The LLNL chemical kinetics modeling group has been responsible for much progress in the development of chemical kinetic models for practical fuels. The group began its work in the early 1970s, developing chemical kinetic models for methane, ethane, ethanol and halogenated inhibitors. Most recently, it has been developing chemical kinetic models for large n-alkanes, cycloalkanes, hexenes, and large methyl esters. These component models are needed to represent gasoline, diesel, jet, and oil-sand-derived fuels.

  19. LLNL pure positron plasma program

    International Nuclear Information System (INIS)

    Hartley, J.H.; Beck, B.R.; Cowan, T.E.; Howell, R.H.; McDonald, J.L.; Rohatgi, R.R.; Fajans, J.; Gopalan, R.

    1995-01-01

    Assembly and initial testing of the Positron Time-of-Flight Trap at the Lawrence Livermore National Laboratory (LLNL) Intense Pulsed Positron Facility has been completed. The goal of the project is to accumulate a high-density positron plasma in only a few seconds, in order to facilitate studies that may require destructive diagnostics. To date, densities of at least 6 × 10⁶ positrons per cm³ have been achieved.

  20. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-01-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). The Waste Minimization Policy field has undergone continuous changes since its formal inception in the 1984 HSWA legislation. The first LLNL WMPP, Revision A, is dated March 1985. A series of informal revisions were made on approximately a semi-annual basis. This Revision 2 is the third formal issuance of the WMPP document. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally-harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this new policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. In response to these policies, DOE has revised and issued implementation guidance for DOE Order 5400.1, Waste Minimization Plan and Waste Reduction reporting of DOE Hazardous, Radioactive, and Radioactive Mixed Wastes, final draft January 1990. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements. 3 figs., 4 tabs

  1. Bringing ATLAS production to HPC resources. A case study with SuperMuc and Hydra

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter; Walker, Rodney [LMU Muenchen (Germany); Kennedy, John; Mazzaferro, Luca [RZG Garching (Germany); Kluth, Stefan [Max-Planck-Institut fuer Physik, Muenchen (Germany); Collaboration: ATLAS-Collaboration

    2015-07-01

    The possible usage of Supercomputer systems or HPC resources by ATLAS is now becoming viable due to the changing nature of these systems and it is also very attractive due to the need for increasing amounts of simulated data. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. The corresponding need for simulated data might potentially exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This contribution presents the results of two projects undertaken by LMU/LRZ and MPP/RZG to use the supercomputer facilities SuperMuc (LRZ) and Hydra (RZG). Both are Linux based supercomputers in the 100 k CPU-core category. The integration of such HPC resources into the ATLAS production system poses many challenges. Firstly, established techniques and features of standard WLCG operation are prohibited or much restricted on HPC systems, e.g. Grid middleware, software installation, outside connectivity, etc. Secondly, efficient use of available resources requires massive multi-core jobs, back-fill submission and check-pointing. We discuss the customization of these components and the strategies for HPC usage as well as possibilities for future directions.

  2. Status of LLNL granite projects

    International Nuclear Information System (INIS)

    Ramspott, L.D.

    1980-01-01

    The status of LLNL Projects dealing with nuclear waste disposal in granitic rocks is reviewed. This review covers work done subsequent to the June 1979 Workshop on Thermomechanical Modeling for a Hardrock Waste Repository and is prepared for the July 1980 Workshop on Thermomechanical-Hydrochemical Modeling for a Hardrock Waste Repository. Topics reviewed include laboratory determination of thermal, mechanical, and transport properties of rocks at conditions simulating a deep geologic repository, and field testing at the Climax granitic stock at the USDOE Nevada Test Site

  3. Linux all-in-one for dummies

    CERN Document Server

    Dulaney, Emmett

    2014-01-01

    Eight minibooks in one volume cover every important aspect of Linux and everything you need to know to pass level-1 certification. Linux All-in-One For Dummies explains everything you need to get up and running with the popular Linux operating system. Written in the friendly and accessible For Dummies style, the book is ideal for new and intermediate Linux users, as well as anyone studying for level-1 Linux certification. The eight minibooks inside cover the basics of Linux, interacting with it, networking issues, Internet services, administration, security, scripting, and level-1 certification. C

  4. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-05-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). New legislation at the federal level is being introduced. Passage will result in new EPA regulations and also DOE orders. At the state level, the Hazardous Waste Reduction and Management Review Act of 1989 was signed by the Governor. DHS is currently promulgating regulations to implement the new law. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally-harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements.

  5. 2016 LLNL Nuclear Forensics Summer Program

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-15

    The Lawrence Livermore National Laboratory (LLNL) Nuclear Forensics Summer Program is designed to give graduate students an opportunity to come to LLNL for 8–10 weeks for a hands-on research experience. Students conduct research under the supervision of a staff scientist, attend a weekly lecture series, interact with other students, and present their work in poster format at the end of the program. Students also have the opportunity to meet staff scientists one-on-one, participate in LLNL facility tours (e.g., the National Ignition Facility and Center for Accelerator Mass Spectrometry), and gain a better understanding of the various science programs at LLNL.

  6. 2017 LLNL Nuclear Forensics Summer Internship Program

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-13

    The Lawrence Livermore National Laboratory (LLNL) Nuclear Forensics Summer Internship Program (NFSIP) is designed to give graduate students an opportunity to come to LLNL for 8-10 weeks of hands-on research. Students conduct research under the supervision of a staff scientist, attend a weekly lecture series, interact with other students, and present their work in poster format at the end of the program. Students can also meet staff scientists one-on-one, participate in LLNL facility tours (e.g., the National Ignition Facility and Center for Accelerator Mass Spectrometry), and gain a better understanding of the various science programs at LLNL.

  7. 2016 LLNL Nuclear Forensics Summer Program

    International Nuclear Information System (INIS)

    Zavarin, Mavrik

    2016-01-01

    The Lawrence Livermore National Laboratory (LLNL) Nuclear Forensics Summer Program is designed to give graduate students an opportunity to come to LLNL for 8-10 weeks for a hands-on research experience. Students conduct research under the supervision of a staff scientist, attend a weekly lecture series, interact with other students, and present their work in poster format at the end of the program. Students also have the opportunity to meet staff scientists one-on-one, participate in LLNL facility tours (e.g., the National Ignition Facility and Center for Accelerator Mass Spectrometry), and gain a better understanding of the various science programs at LLNL.

  8. Strengthening LLNL Missions through Laboratory Directed Research and Development in High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Willis, D. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-01

    High performance computing (HPC) has been a defining strength of Lawrence Livermore National Laboratory (LLNL) since its founding. Livermore scientists have designed and used some of the world’s most powerful computers to drive breakthroughs in nearly every mission area. Today, the Laboratory is recognized as a world leader in the application of HPC to complex science, technology, and engineering challenges. Most importantly, HPC has been integral to the National Nuclear Security Administration’s (NNSA’s) Stockpile Stewardship Program—designed to ensure the safety, security, and reliability of our nuclear deterrent without nuclear testing. A critical factor behind Lawrence Livermore’s preeminence in HPC is the ongoing investments made by the Laboratory Directed Research and Development (LDRD) Program in cutting-edge concepts to enable efficient utilization of these powerful machines. Congress established the LDRD Program in 1991 to maintain the technical vitality of the Department of Energy (DOE) national laboratories. Since then, LDRD has been, and continues to be, an essential tool for exploring anticipated needs that lie beyond the planning horizon of our programs and for attracting the next generation of talented visionaries. Through LDRD, Livermore researchers can examine future challenges, propose and explore innovative solutions, and deliver creative approaches to support our missions. The present scientific and technical strengths of the Laboratory are, in large part, a product of past LDRD investments in HPC. Here, we provide seven examples of LDRD projects from the past decade that have played a critical role in building LLNL’s HPC, computer science, mathematics, and data science research capabilities, and describe how they have impacted LLNL’s mission.

  9. Analyzing Security-Enhanced Linux Policy Specifications

    National Research Council Canada - National Science Library

    Archer, Myla

    2003-01-01

    NSA's Security-Enhanced (SE) Linux enhances Linux by providing a specification language for security policies and a Flask-like architecture with a security server for enforcing policies defined in the language...

  10. DOD HPC Insights. Spring 2012

    Science.gov (United States)

    2012-04-01

    petascale and exascale HPC concepts has led to new research thrusts including power efficiency. Now, power efficiency is an important area of expertise... exascale supercomputers. MHPCC is also working on the generation side of the energy equation. We have deployed a 100 KW research solar array... exascale supercomputers. Within the HPCMP, energy costs take an increasing amount of the limited budget that could be better used for service

  11. HPC's Pivot to Data

    Energy Technology Data Exchange (ETDEWEB)

    Parete-Koon, Suzanne [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Caldwell, Blake A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Canon, Richard Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Sciences Network (ESnet); Hick, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Hill, Jason J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Layton, Chris [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Pelfrey, Daniel S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Shipman, Galen M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Nam, Hai Ah [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Zurawski, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Sciences Network (ESnet)

    2014-05-03

    Computer centers such as NERSC and OLCF have traditionally focused on delivering computational capability that enables breakthrough innovation in a wide range of science domains. Accessing that computational power has required services and tools to move the data from input and output to computation and storage. A "pivot to data" is occurring in HPC. Data transfer tools and services that were previously peripheral are becoming integral to scientific workflows. Emerging requirements from high-bandwidth detectors, high-throughput screening techniques, highly concurrent simulations, increased focus on uncertainty quantification, and an emerging open-data policy posture toward published research are among the data-drivers shaping the networks, file systems, databases, and overall compute and data environment. In this paper we explain the pivot to data in HPC through user requirements and the changing resources provided by HPC, with particular focus on data movement. For WAN data transfers we present the results of a study of network performance between centers.

  12. Laser wakefields at UCLA and LLNL

    International Nuclear Information System (INIS)

    Mori, W.B.; Clayton, C.E.; Joshi, C.; Dawson, J.M.; Decker, C.B.; Marsh, K.; Katsouleas, T.; Darrow, C.B.; Wilks, S.C.

    1991-01-01

    The authors report on recent progress at UCLA and LLNL on the nonlinear laser wakefield scheme. They find advantages to operating in the limit where the laser pulse is narrow enough to expel all the plasma electrons from the focal region. A description of the experimental program for the new short pulse 10 TW laser facility at LLNL is also presented

  13. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    Science.gov (United States)

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and subsequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistic packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to conduct non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4-15.9 times faster, while Unphased jobs performed 1.1-18.6 times faster compared to the accumulated computation duration. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
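
    The scale of "exhaustive" here is easy to see: 26 loci give 25 + 24 + … + 1 = 325 consecutive windows of length two or more, before combinational windows are even counted. A Python sketch of the enumeration with one Grid Engine job per window (the run_fbat.sh wrapper script is hypothetical):

        # Sketch: enumerate every consecutive window over 26 loci and submit
        # one Grid Engine job per window. run_fbat.sh is a hypothetical
        # wrapper that would invoke FBAT or Unphased on the marker subset.
        import subprocess

        N_LOCI = 26
        windows = [(start, start + width)
                   for width in range(2, N_LOCI + 1)
                   for start in range(N_LOCI - width + 1)]
        print(len(windows), "consecutive windows")   # 325

        for start, end in windows:
            markers = ",".join(f"M{i}" for i in range(start, end))
            subprocess.run(["qsub", "-N", f"win_{start}_{end}",
                            "run_fbat.sh", markers], check=False)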

  14. Super computer made with Linux cluster

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Oh, Yeong Eun; Kim, Jeong Seok

    2002-01-01

    This book consists of twelve chapters, which introduce supercomputers made with Linux clusters. The contents of this book are: Linux clusters; the principles of clustering; the design of a Linux cluster; Linux fundamentals; building up a terminal server and clients; a Beowulf cluster with Debian GNU/Linux; a cluster system with Red Hat; monitoring systems; application programming with MPI, including set-up and installation; application programming with PVM, including PVM programming and XPVM; application programming with OpenPBS, including composition, installation, and set-up; and GRID, including the GRID system, GSI, GRAM, MDS, and the installation and use of its toolkit.

  15. Linux Command Line and Shell Scripting Bible

    CERN Document Server

    Blum, Richard

    2011-01-01

    The authoritative guide to Linux command line and shell scripting, completely updated and revised [it's not a guide to Linux as a whole, just to scripting]. The Linux command line allows you to type specific Linux commands directly to the system so that you can easily manipulate files and query system resources, thereby permitting you to automate commonly used functions and even schedule those programs to run automatically. This new edition is packed with new and revised content, reflecting the many changes to new Linux versions, including coverage of alternative shells to the default bash shel

  16. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    Mercier, Michael; Glesser, David; Georgiou, Yiannis; Richard, Olivier

    2017-01-01

    Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of their core concepts' differences. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...
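
    A toy illustration of those scheduling holes (all numbers invented): rigid HPC reservations leave idle cores in each interval, and that slack is what a malleable Big Data workload could absorb:

        # Toy illustration of scheduling "holes": rigid HPC jobs reserve fixed
        # core counts over fixed intervals, leaving idle cores that malleable
        # Big Data tasks could soak up. All numbers are invented.
        TOTAL_CORES = 1024
        # (start_tick, end_tick, cores) reservations for rigid HPC jobs
        hpc_jobs = [(0, 4, 600), (1, 3, 300), (4, 6, 900)]

        for tick in range(6):
            used = sum(c for s, e, c in hpc_jobs if s <= tick < e)
            idle = TOTAL_CORES - used
            print(f"t={tick}: {used} cores busy, {idle} idle "
                  f"-> available to Big Data tasks")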

  17. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    OpenAIRE

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-01-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase was f...

  18. Membangun Sistem Linux Mandrake Minimal Menggunakan Inisial Disk Ram [Building a Minimal Mandrake Linux System Using an Initial RAM Disk]

    OpenAIRE

    Wagito, Wagito

    2006-01-01

    A minimal Linux system is commonly used for special systems like routers, gateways, Linux installers, and diskless Linux systems. A minimal Linux system is a Linux system that uses only a few of Linux's capabilities. Mandrake Linux, as one Linux distribution, is able to provide a minimal Linux system. RAM is a computer resource that is especially used as main memory. A part of RAM's function can be changed into a disk, called a RAM disk. This RAM disk can be used to run the Linux system. This ...
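
    The RAM-disk mechanism can be demonstrated on any modern Linux system by mounting tmpfs, a related RAM-backed mechanism (the paper's boot-time initial RAM disk is built differently, but both live in RAM). A sketch, which needs root:

        # Sketch: create a RAM-backed disk by mounting tmpfs (requires root).
        # tmpfs is a convenient stand-in here; a boot-time initial RAM disk
        # (initrd) is built differently, but both hold a filesystem in RAM.
        import subprocess

        MOUNT_POINT = "/mnt/ramdisk"
        subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
        subprocess.run(["mount", "-t", "tmpfs", "-o", "size=64m",
                        "tmpfs", MOUNT_POINT], check=True)
        # Files written under /mnt/ramdisk now live in RAM and vanish on
        # unmount, the property a minimal RAM-resident system exploits.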

  19. MEMBANGUN SISTEM LINUX MANDRAKE MINIMAL MENGGUNAKAN INISIAL DISK RAM [Building a Minimal Mandrake Linux System Using an Initial RAM Disk]

    OpenAIRE

    Wagito, Wagito

    2009-01-01

    A minimal Linux system is commonly used for special systems like routers, gateways, Linux installers, and diskless Linux systems. A minimal Linux system is a Linux system that uses only a few of Linux's capabilities. Mandrake Linux, as one Linux distribution, is able to provide a minimal Linux system. RAM is a computer resource that is especially used as main memory. A part of RAM's function can be changed into a disk, called a RAM disk. This RAM disk can be used to run the Linux syste...

  20. HPC Test Results Analysis with Splunk

    Energy Technology Data Exchange (ETDEWEB)

    Green, Jennifer Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-04-21

    This PowerPoint presentation details Los Alamos National Laboratory’s (LANL) outstanding computing division. LANL’s high performance computing (HPC) division aims at having the first platform large and fast enough to accommodate resolved 3D calculations for full-scale end-to-end calculations. Strategies for managing LANL’s HPC division are also discussed.

  1. Fire science at LLNL: A review

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, H.K. (ed.)

    1990-03-01

    This fire sciences report from LLNL includes topics on: fire spread in trailer complexes, properties of welding blankets, validation of sprinkler systems, fire and smoke detectors, fire modeling, and other fire engineering and safety issues. (JEF)

  2. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  3. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been ported to the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, can be applied on its own to whole-NSSS system analysis. The 3-D module, developed from COBRA-TF, can instead be applied to the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that can be incorporated for more detailed analysis. These separate code units include containment analysis modules and a 3-D kinetics module, which can optionally be invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, can be utilized for the analysis of plant containment phenomena coupled with the nuclear reactor system; the mass and energy interaction during a hypothetical coolant leakage accident can thereby be analyzed in a more realistic manner. In a similar way, the 3-D kinetics module can be incorporated to simulate three-dimensional reactor kinetic behavior instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, is however not adequate for PC cluster systems where multiple CPUs are available; when parallelism is eventually incorporated into the MARS code, MS Windows is not considered an optimal platform. The Linux environment, on the other hand, is generally adopted as the preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features were removed from the code. Since the coupled code module CONTAIN originally takes the form of a dynamic link library (DLL) on Windows, a similar adaptation method

  4. The Linux operating system: An introduction

    Science.gov (United States)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  5. Kali Linux assuring security by penetration testing

    CERN Document Server

    Ali, Shakeel; Allen, Lee

    2014-01-01

    Written as an interactive tutorial, this book covers the core of Kali Linux with real-world examples and step-by-step instructions to provide professional guidelines and recommendations for you. The book is designed in a simple and intuitive manner that allows you to explore the whole Kali Linux testing process or study parts of it individually.If you are an IT security professional who has a basic knowledge of Unix/Linux operating systems, including an awareness of information security factors, and want to use Kali Linux for penetration testing, then this book is for you.

  6. LLNL's Regional Seismic Discrimination Research

    International Nuclear Information System (INIS)

    Hanley, W; Mayeda, K; Myers, S; Pasyanos, M; Rodgers, A; Sicherman, A; Walter, W

    1999-01-01

    As part of the Department of Energy's research and development effort to improve the monitoring capability of the planned Comprehensive Nuclear-Test-Ban Treaty international monitoring system, Lawrence Livermore National Laboratory (LLNL) is testing and calibrating regional seismic discrimination algorithms in the Middle East, North Africa and the Western Former Soviet Union. The calibration process consists of a number of steps: (1) populating the database with independently identified regional events; (2) developing regional boundaries and pre-identifying severe regional phase blockage zones; (3) measuring and calibrating coda based magnitude scales; (4a) measuring regional amplitudes and making magnitude and distance amplitude corrections (MDAC); (4b) applying the DOE modified kriging methodology to MDAC results using the regionalized background model; (5) determining the thresholds of detectability of regional phases as a function of phase type and frequency; (6) evaluating regional phase discriminant performance both singly and in combination; (7) combining steps 1-6 to create a calibrated discrimination surface for each station; (8) assessing progress and iterating. We have now developed this calibration procedure to the point where it is fairly straightforward to apply earthquake-explosion discrimination in regions with ample empirical data. Several of the steps outlined above are discussed in greater detail in other DOE papers in this volume or in recent publications. Here we emphasize the results of the above process: station correction surfaces and their improvement to discrimination results compared with simpler calibration methods. Some of the outstanding discrimination research issues involve cases in which there is little or no empirical data. For example, in many cases there is no regional nuclear explosion data at IMS stations or nearby surrogates. We have taken two approaches to this problem, first finding and using mining explosion data when available, and

  7. LLNL Site 200 Risk Management Plan

    International Nuclear Information System (INIS)

    Pinkston, D.; Johnson, M.

    2008-01-01

    It is Lawrence Livermore National Laboratory's (LLNL) policy to perform work in a manner that protects the health and safety of employees and the public, preserves the quality of the environment, and prevents property damage, using the Integrated Safety Management System. The environment, safety, and health are to take priority in the planning and execution of work activities at the Laboratory. Furthermore, it is the policy of LLNL to comply with applicable ES&H laws, regulations, and requirements (LLNL Environment, Safety and Health Manual, Document 1.2, ES&H Policies of LLNL). The program and policies that improve LLNL's ability to prevent or mitigate accidental releases are described in the LLNL Environment, Health, and Safety Manual that is available to the public. The laboratory uses an emergency management system known as the Incident Command System, in accordance with the California Standardized Emergency Management System (SEMS), to respond to Operational Emergencies and to mitigate consequences resulting from them. Operational Emergencies are defined as unplanned, significant events or conditions that require time-urgent response from outside the immediate area of the incident and that could seriously impact the safety or security of the public, LLNL's employees, its facilities, or the environment. The Emergency Plan contains LLNL's Operational Emergency response policies, commitments, and institutional responsibilities for managing and recovering from emergencies. It is not possible to list in the Emergency Plan all events that could occur during any given emergency situation. However, a combination of hazard assessments, an effective Emergency Plan, and Emergency Plan Implementing Procedures (EPIPs) can provide the framework for responses to postulated emergency situations. Revision 7, 2004 of the above mentioned LLNL Emergency Plan is available to the public. The most recent revision of the LLNL Emergency Plan LLNL-AM-402556, Revision 11, March

  8. Kali Linux wireless penetration testing essentials

    CERN Document Server

    Alamanni, Marco

    2015-01-01

    This book is targeted at information security professionals, penetration testers and network/system administrators who want to get started with wireless penetration testing. No prior experience with Kali Linux and wireless penetration testing is required, but familiarity with Linux and basic networking concepts is recommended.

  9. FY16 LLNL Omega Experimental Programs

    International Nuclear Information System (INIS)

    Heeter, R. F.; Ali, S. J.; Benstead, J.; Celliers, P. M.; Coppari, F.; Eggert, J.; Erskine, D.; Panella, A. F.; Fratanduono, D. E.; Hua, R.; Huntington, C. M.; Jarrott, L. C.; Jiang, S.; Kraus, R. G.; Lazicki, A. E.; LePape, S.; Martinez, D. A.; McNaney, J. M.; Millot, M. A.; Moody, J.; Pak, A. E.; Park, H. S.; Ping, Y.; Pollock, B. B.; Rinderknecht, H.; Ross, J. S.; Rubery, M.; Sio, H.; Smith, R. F.; Swadling, G. F.; Wehrenberg, C. E.; Collins, G. W.; Landen, O. L.; Wan, A.; Hsing, W.

    2016-01-01

    In FY16, LLNL's High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall, these LLNL programs led 430 target shots in FY16, with 304 shots using just the OMEGA laser system, and 126 shots using just the EP laser system. Approximately 21% of the total number of shots (77 OMEGA shots and 14 EP shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 79% (227 OMEGA shots and 112 EP shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports. In addition to these experiments, LLNL Principal Investigators led a variety of Laboratory Basic Science campaigns using OMEGA and EP, including 81 target shots using just OMEGA and 42 shots using just EP. The highlights of these are also summarized, following the ICF and HED campaigns. Overall, LLNL PIs led a total of 553 shots at LLE in FY 2016. In addition, LLNL PIs also supported 57 NLUF shots on Omega and 31 NLUF shots on EP, in collaboration with the academic community.

  10. Modeling Security-Enhanced Linux Policy Specifications for Analysis (Preprint)

    National Research Council Canada - National Science Library

    Archer, Myla; Leonard, Elizabeth; Pradella, Matteo

    2003-01-01

    Security-Enhanced (SE) Linux is a modification of Linux initially released by NSA in January 2001 that provides a language for specifying Linux security policies and, as in the Flask architecture, a security server...

  11. Linux command line and shell scripting bible

    CERN Document Server

    Blum, Richard

    2014-01-01

    Talk directly to your system for a faster workflow with automation capability Linux Command Line and Shell Scripting Bible is your essential Linux guide. With detailed instruction and abundant examples, this book teaches you how to bypass the graphical interface and communicate directly with your computer, saving time and expanding capability. This third edition incorporates thirty pages of new functional examples that are fully updated to align with the latest Linux features. Beginning with command line fundamentals, the book moves into shell scripting and shows you the practical application

  12. The Research on Linux Memory Forensics

    Science.gov (United States)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on the operating system API, but instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, it proposes a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data, and is compatible with multiple versions of the Linux kernel.
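
    A toy sketch of the offset-driven analysis described above is given below: it walks the kernel's circular task list in a raw physical-memory image and prints each PID and command name. All offsets are hypothetical placeholders (in practice they would be recovered from the kernel's ELF/DWARF debugging information, which is the paper's point), and the dump is assumed to be identity-mapped so that pointer values can be used as file offsets, purely to keep the sketch short:

        #include <stdio.h>
        #include <stdint.h>

        /* HYPOTHETICAL values -- in practice recovered from the debug info
           of the exact kernel build under analysis. */
        #define INIT_TASK_OFF  0x01a0e940L  /* file offset of init_task            */
        #define TASKS_NEXT_OFF 0x3a0L       /* offsetof(struct task_struct, tasks) */
        #define PID_OFF        0x4e8L       /* offsetof(struct task_struct, pid)   */
        #define COMM_OFF       0x738L       /* offsetof(struct task_struct, comm)  */

        static int read_at(FILE *f, long off, void *dst, size_t n)
        {
            return fseek(f, off, SEEK_SET) == 0 && fread(dst, n, 1, f) == 1;
        }

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <memory-image>\n", argv[0]);
                return 1;
            }
            FILE *f = fopen(argv[1], "rb");
            if (!f) { perror("fopen"); return 1; }

            long task = INIT_TASK_OFF;
            do {
                int32_t pid;
                char comm[16] = { 0 };
                uint64_t next;
                if (!read_at(f, task + PID_OFF, &pid, sizeof pid) ||
                    !read_at(f, task + COMM_OFF, comm, sizeof comm - 1) ||
                    !read_at(f, task + TASKS_NEXT_OFF, &next, sizeof next))
                    break;
                printf("%6d  %s\n", pid, comm);
                /* 'next' points at the next task's list node; rebase to the
                   start of its enclosing task_struct. */
                task = (long)next - TASKS_NEXT_OFF;
            } while (task != INIT_TASK_OFF);   /* the list is circular */

            fclose(f);
            return 0;
        }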

  13. Understanding Collateral Evolution in Linux Device Drivers

    DEFF Research Database (Denmark)

    Padioleau, Yoann; Lawall, Julia Laetitia; Muller, Gilles

    2006-01-01

    With no tools to help in this process, collateral evolution is thus time consuming and error prone. In this paper, we present a qualitative and quantitative assessment of collateral evolution in Linux device driver code. We provide a taxonomy of evolutions and collateral evolutions, and use an automated patch-analysis tool that we have developed to measure the number of evolutions and collateral evolutions that affect device drivers between Linux versions 2.2 and 2.6. In particular, we find that from one version of Linux to the next, collateral evolutions can account for up to 35% of the lines modified in such code.

  14. Web penetration testing with Kali Linux

    CERN Document Server

    Muniz, Joseph

    2013-01-01

    Web Penetration Testing with Kali Linux contains various penetration testing methods using BackTrack that will be used by the reader. It contains clear step-by-step instructions with lots of screenshots, and is written in an easy-to-understand language which further simplifies things for the user. "Web Penetration Testing with Kali Linux" is ideal for anyone who is interested in learning how to become a penetration tester. It will also help users who are new to Kali Linux and want to learn the features and differences in Kali versus BackTrack, and seasoned penetration testers

  15. The Linux command line a complete introduction

    CERN Document Server

    Shotts, William E

    2012-01-01

    You've experienced the shiny, point-and-click surface of your Linux computer—now dive below and explore its depths with the power of the command line. The Linux Command Line takes you from your very first terminal keystrokes to writing full programs in Bash, the most popular Linux shell. Along the way you'll learn the timeless skills handed down by generations of gray-bearded, mouse-shunning gurus: file navigation, environment configuration, command chaining, pattern matching with regular expressions, and more.

  16. LPI Linux Certification in a Nutshell

    CERN Document Server

    Haeder, Adam; Pessanha, Bruno; Stanger, James

    2010-01-01

    Linux deployment continues to increase, and so does the demand for qualified and certified Linux system administrators. If you're seeking a job-based certification from the Linux Professional Institute (LPI), this updated guide will help you prepare for the technically challenging LPIC Level 1 Exams 101 and 102. The third edition of this book is a meticulously researched reference to these exams, written by trainers who work closely with LPI. You'll find an overview of each exam, a summary of the core skills you need, review questions and exercises, as well as a study guide, a practice test,

  17. FY16 LLNL Omega Experimental Programs

    Energy Technology Data Exchange (ETDEWEB)

    Heeter, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ali, S. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benstead, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, P. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Coppari, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Eggert, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Erskine, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Panella, A. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fratanduono, D. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hua, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Huntington, C. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jarrott, L. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jiang, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kraus, R. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lazicki, A. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); LePape, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martinez, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McNaney, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Millot, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pak, A. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Park, H. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ping, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pollock, B. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rinderknecht, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ross, J. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rubery, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sio, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Swadling, G. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wehrenberg, C. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Collins, G. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Landen, O. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hsing, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-01

    In FY16, LLNL’s High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall, these LLNL programs led 430 target shots in FY16, with 304 shots using just the OMEGA laser system, and 126 shots using just the EP laser system. Approximately 21% of the total number of shots (77 OMEGA shots and 14 EP shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 79% (227 OMEGA shots and 112 EP shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports. In addition to these experiments, LLNL Principal Investigators led a variety of Laboratory Basic Science campaigns using OMEGA and EP, including 81 target shots using just OMEGA and 42 shots using just EP. The highlights of these are also summarized, following the ICF and HED campaigns. Overall, LLNL PIs led a total of 553 shots at LLE in FY 2016. In addition, LLNL PIs also supported 57 NLUF shots on Omega and 31 NLUF shots on EP, in collaboration with the academic community.

  18. Kali Linux wireless penetration testing beginner's guide

    CERN Document Server

    Ramachandran, Vivek

    2015-01-01

    If you are a security professional, pentester, or anyone interested in getting to grips with wireless penetration testing, this is the book for you. Some familiarity with Kali Linux and wireless concepts is beneficial.

  19. Two-factor Authorization in Linux

    Directory of Open Access Journals (Sweden)

    L. S. Nosov

    2010-03-01

    Full Text Available The implementation of identification and authentication in the Linux OS, based on an external USB device and on a program realized as a PAM module, is considered, using as an example the answering of a control question (an enigma).
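
    A minimal sketch of the PAM-module side of such a scheme is shown below (the module name, prompt, and hard-coded expected answer are invented placeholders; a real module would obtain the expected answer from secure storage or from the USB token). It could be built with gcc -fPIC -shared -o pam_enigma.so pam_enigma.c and referenced from the PAM configuration of a service:

        /* pam_enigma.c - sketch of a control-question second factor. */
        #define PAM_SM_AUTH
        #include <stdlib.h>
        #include <string.h>
        #include <security/pam_modules.h>
        #include <security/pam_ext.h>

        int pam_sm_authenticate(pam_handle_t *pamh, int flags,
                                int argc, const char **argv)
        {
            char *resp = NULL;
            /* Ask the control question via the application's conversation
               function; echo is suppressed as for a password. */
            if (pam_prompt(pamh, PAM_PROMPT_ECHO_OFF, &resp,
                           "Control question - first pet's name: ") != PAM_SUCCESS
                || resp == NULL)
                return PAM_AUTH_ERR;

            int ok = (strcmp(resp, "rex") == 0);   /* placeholder answer */
            free(resp);
            return ok ? PAM_SUCCESS : PAM_AUTH_ERR;
        }

        int pam_sm_setcred(pam_handle_t *pamh, int flags,
                           int argc, const char **argv)
        {
            return PAM_SUCCESS;   /* nothing to do in this sketch */
        }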

  20. The Linux farm at the RCF

    International Nuclear Information System (INIS)

    Chan, A.W.; Hogue, R.W.; Throwe, T.G.; Yanuklis, T.A.

    2001-01-01

    A description of the Linux Farm at the RHIC Computing Facility (RCF) is presented. The RCF is a dedicated data processing facility for RHIC, which became operational in the summer of 2000 at Brookhaven National Laboratory

  1. Tuning Linux to meet real time requirements

    Science.gov (United States)

    Herbel, Richard S.; Le, Dang N.

    2007-04-01

    There is a desire to use Linux in military systems. Customers are requesting contractors to use open source to the maximum possible extent in contracts. Linux is probably the best choice of operating system to meet this need. It is widely used. It is free. It is royalty free, and, best of all, it is completely open source. However, there is a problem. Linux was not originally built to be a real time operating system. There are many places where interrupts can and will be blocked for an indeterminate amount of time. There have been several attempts to bridge this gap. One of them is RTLinux, which attempts to build a microkernel underneath Linux. The microkernel handles all interrupts and then passes them up to the Linux operating system. This does ensure good interrupt latency; however, it is not free [1]. Another is RTAI, which provides a similar type of interface; however, the PowerPC platform, which is widely used in the real-time embedded community, was stated as "recovering" [2]. Thus this is not suited for military usage. This paper provides a method for tuning a standard Linux kernel so it can meet the real time requirements of an embedded system.
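
    As a generic illustration of the user-space side of such tuning (not code from the paper), a latency-sensitive task on a tuned kernel would typically place itself in the SCHED_FIFO scheduling class and lock its pages into RAM so that paging cannot add latency; this requires root or CAP_SYS_NICE:

        #include <stdio.h>
        #include <sched.h>
        #include <sys/mman.h>

        int main(void)
        {
            struct sched_param sp = { .sched_priority = 80 };  /* range 1..99 */

            /* Move this process into the fixed-priority FIFO class. */
            if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
                perror("sched_setscheduler");

            /* Lock current and future pages so page faults cannot stall us. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
                perror("mlockall");

            /* ... time-critical work goes here ... */
            return 0;
        }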

  2. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    CERN Document Server

    Kennedy, John; The ATLAS collaboration; Mazzaferro, Luca; Walker, Rodney

    2015-01-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems, to a more generic Linux type platform. This change means that the deployment of non HPC specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. This high luminosity phase will be accompanied by a need for increasing amounts of simulated data which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP and RZG to provide access to...

  3. Big Data and HPC: A Happy Marriage

    KAUST Repository

    Mehmood, Rashid

    2016-01-25

    International Data Corporation (IDC) defines Big Data technologies as “a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data produced every day, by enabling high velocity capture, discovery, and/or analysis”. High Performance Computing (HPC) most generally refers to “the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business”. Big data platforms are built primarily considering the economics and capacity of the system for dealing with the 4V characteristics of data. HPC traditionally has been more focussed on the speed of digesting (computing) the data. For these reasons, the two domains (HPC and Big Data) have developed their own paradigms and technologies. However, recently, these two have grown fond of each other. HPC technologies are needed by Big Data to deal with the ever increasing Vs of data in order to forecast and extract insights from existing and new domains, faster, and with greater accuracy. Increasingly more data is being produced by scientific experiments from areas such as bioscience, physics, and climate, and therefore, HPC needs to adopt data-driven paradigms. Moreover, there are synergies between them with unimaginable potential for developing new computing paradigms, solving long-standing grand challenges, and making new explorations and discoveries. Therefore, they must get married to each other. In this talk, we will trace the HPC and big data landscapes through time including their respective technologies, paradigms and major applications areas. Subsequently, we will present the factors that are driving the convergence of the two technologies, the synergies between them, as well as the benefits of their convergence to the biosciences field. The opportunities and challenges of the

  4. LLNL NESHAPs 2015 Annual Report - June 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, K. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gallegos, G. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); MacQueen, D. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wegrecki, A. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-06-01

    Lawrence Livermore National Security, LLC operates facilities at Lawrence Livermore National Laboratory (LLNL) in which radionuclides are handled and stored. These facilities are subject to the U.S. Environmental Protection Agency (EPA) National Emission Standards for Hazardous Air Pollutants (NESHAPs) in Code of Federal Regulations (CFR) Title 40, Part 61, Subpart H, which regulates radionuclide emissions to air from Department of Energy (DOE) facilities. Specifically, NESHAPs limits the emission of radionuclides to the ambient air to levels resulting in an annual effective dose equivalent of 10 mrem (100 μSv) to any member of the public. Using measured and calculated emissions, and building-specific and common parameters, LLNL personnel applied the EPA-approved computer code, CAP88-PC, Version 4.0.1.17, to calculate the dose to the maximally exposed individual member of the public for the Livermore Site and Site 300.

  5. High intensity positron program at LLNL

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Howell, R.; Stoeffl, W.; Carter, D.

    1999-01-01

    Lawrence Livermore National Laboratory (LLNL) is the home of the world's highest current beam of keV positrons. The potential for establishing a national center for materials analysis using positron annihilation techniques around this capability is being actively pursued. The high LLNL beam current will enable investigations in several new areas. We are developing a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with submicron resolution. Below we summarize the important design features of this microprobe. Several experimental end stations will be available that can utilize the high current beam with a time distribution determined by the electron linac pulse structure, quasi-continuous, or bunched at 20 MHz, and can operate in an electrostatic or (and) magnetostatic environment. Some of the planned early experiments are: two-dimensional angular correlation of annihilation radiation of thin films and buried interfaces, positron diffraction holography, positron induced desorption, and positron induced Auger spectroscopy

  6. High intensity positron program at LLNL

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Howell, R.H.; Stoeffl, W.

    1998-01-01

    Lawrence Livermore National Laboratory (LLNL) is the home of the world's highest current beam of keV positrons. The potential for establishing a national center for materials analysis using positron annihilation techniques around this capability is being actively pursued. The high LLNL beam current will enable investigations in several new areas. We are developing a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with submicron resolution. Below we summarize the important design features of this microprobe. Several experimental end stations will be available that can utilize the high current beam with a time distribution determined by the electron linac pulse structure, quasi-continuous, or bunched at 20 MHz, and can operate in an electrostatic or (and) magnetostatic environment. Some of the planned early experiments are: two-dimensional angular correlation of annihilation radiation of thin films and buried interfaces, positron diffraction holography, positron induced desorption, and positron induced Auger spectra

  7. LLNL high-field coil program

    International Nuclear Information System (INIS)

    Miller, J.R.

    1986-01-01

    An overview is presented of the LLNL High-Field Superconducting Magnet Development Program wherein the technology is being developed for producing fields in the range of 15 T and higher for both mirror and tokamak applications. Applications requiring less field will also benefit from this program. In addition, recent results on the thermomechanical performance of cable-in-conduit conductor systems are presented and their importance to high-field coil design discussed

  8. LIFTERS-hyperspectral imaging at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Fields, D. [Lawrence Livermore National Lab., CA (United States); Bennett, C.; Carter, M.

    1994-11-15

    LIFTIRS, the Livermore Imaging Fourier Transform InfraRed Spectrometer, recently developed at LLNL, is an instrument which enables extremely efficient collection and analysis of hyperspectral imaging data. LIFTIRS produces a spatial format of 128x128 pixels, with spectral resolution arbitrarily variable up to a maximum of 0.25 inverse centimeters. Time resolution and spectral resolution can be traded off for each other with great flexibility. We will discuss recent measurements made with this instrument, and present typical images and spectra.

  9. Probabilistic Seismic Hazards Update for LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Menchawi, O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fernandez, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-30

    Fugro Consultants, Inc. (FCL) completed the Probabilistic Seismic Hazard Analysis (PSHA) performed for Building 332 at the Lawrence Livermore National Laboratory (LLNL), near Livermore, CA. The study performed for the LLNL site includes a comprehensive review of recent information relevant to the LLNL regional tectonic setting and regional seismic sources in the vicinity of the site and development of seismic wave transmission characteristics. The Seismic Source Characterization (SSC), documented in Project Report No. 2259-PR-02 (FCL, 2015b), and Ground Motion Characterization (GMC), documented in Project Report No. 2259-PR-06 (FCL, 2015a) were developed in accordance with ANS/ANSI 2.29- 2008 Level 2 PSHA guidelines. The ANS/ANSI 2.29-2008 Level 2 PSHA framework is documented in Project Report No. 2259-PR-05 (FCL, 2016a). The Hazard Input Document (HID) for input into the PSHA developed from the SSC and GMC is presented in Project Report No. 2259-PR-04 (FCL, 2016b). The site characterization used as input for development of the idealized site profiles including epistemic uncertainty and aleatory variability is presented in Project Report No. 2259-PR-03 (FCL, 2015c). The PSHA results are documented in Project Report No. 2259-PR-07 (FCL, 2016c).

  10. Project Final Report: HPC-Colony II

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Terry R [ORNL; Kale, Laxmikant V [University of Illinois, Urbana-Champaign; Moreira, Jose [IBM T. J. Watson Research Center

    2013-11-01

    This report recounts the HPC Colony II Project which was a computer science effort funded by DOE's Advanced Scientific Computing Research office. The project included researchers from ORNL, IBM, and the University of Illinois at Urbana-Champaign. The topic of the effort was adaptive system software for extreme scale parallel machines. A description of findings is included.

  11. Charliecloud: Unprivileged containers for user-defined software stacks in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Priedhorsky, Reid [Los Alamos National Laboratory; Randles, Timothy C. [Los Alamos National Laboratory

    2016-08-09

    Supercomputing centers are seeing increasing demand for user-defined software stacks (UDSS), instead of or in addition to the stack provided by the center. These UDSS support user needs such as complex dependencies or build requirements, externally required configurations, portability, and consistency. The challenge for centers is to provide these services in a usable manner while minimizing the risks: security, support burden, missing functionality, and performance. We present Charliecloud, which uses the Linux user and mount namespaces to run industry-standard Docker containers with no privileged operations or daemons on center resources. Our simple approach avoids most security risks while maintaining access to the performance and functionality already on offer, doing so in less than 500 lines of code. Charliecloud promises to bring an industry-standard UDSS user workflow to existing, minimally altered HPC resources.
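
    The kernel mechanism named in the abstract can be sketched in a few lines of C. The sketch below is a generic illustration of unprivileged user plus mount namespaces, not Charliecloud's actual code: the process unshares both namespaces, maps its real UID to root inside the new user namespace, and could then set up the container image's mounts without any privileged operations:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <sched.h>
        #include <unistd.h>
        #include <fcntl.h>

        static void write_file(const char *path, const char *buf)
        {
            int fd = open(path, O_WRONLY);
            if (fd >= 0) { write(fd, buf, strlen(buf)); close(fd); }
        }

        int main(void)
        {
            uid_t uid = getuid();
            char map[64];

            /* Enter fresh user and mount namespaces, unprivileged. */
            if (unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0) {
                perror("unshare");
                return 1;
            }
            /* Map our real UID to root inside the new user namespace. */
            write_file("/proc/self/setgroups", "deny");
            snprintf(map, sizeof map, "0 %d 1", uid);
            write_file("/proc/self/uid_map", map);

            /* A container runtime would now mount the image's root
               filesystem; here we just show the namespace is live. */
            execlp("id", "id", (char *)NULL);
            perror("execlp");
            return 1;
        }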

  12. Replacing OSE with Real Time capable Linux

    OpenAIRE

    Boman, Simon; Rutgersson, Olof

    2009-01-01

    For many years OSE has been a commonly used operating system, with real-time extensions and enhancements, in embedded systems. But in the last decades, Linux has grown and become a competitor to the common operating systems and, in recent years, even to operating systems with real-time extensions. With this in mind, ÅF was interested in replacing the quite expensive OSE with some distribution of the open-source-based Linux on a PowerPC MPC8360. Therefore, our purpose with this thesis is to implement Linu...

  13. Research of Performance Linux Kernel File Systems

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

    Full Text Available The article describes the most common Linux kernel file systems. The research was carried out on a typical personal workstation running GNU/Linux, the characteristics of which are given in the article. The software needed to measure file system performance was installed on this machine. Based on the results, conclusions are drawn and recommendations are proposed for the use of the file systems, identifying the best ways to store data.
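
    In the spirit of that study, a crude sequential-write benchmark is sketched below (the file name, block size, and data volume are arbitrary choices, not parameters from the article); running it on the same hardware with the test file placed on ext4, XFS, Btrfs, and so on yields one comparable throughput figure per file system:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <time.h>

        int main(void)
        {
            enum { BLOCK = 1 << 20, COUNT = 256 };   /* 256 x 1 MB = 256 MB */
            char *buf = malloc(BLOCK);
            memset(buf, 0xAB, BLOCK);

            int fd = open("bench.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < COUNT; i++)
                if (write(fd, buf, BLOCK) != BLOCK) { perror("write"); return 1; }
            fsync(fd);                 /* include the flush in the timing */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.1f MB/s\n", COUNT / s);
            close(fd);
            free(buf);
            return 0;
        }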

  14. Mastering Kali Linux for advanced penetration testing

    CERN Document Server

    Beggs, Robert W

    2014-01-01

    This book provides an overview of the kill chain approach to penetration testing, and then focuses on using Kali Linux to provide examples of how this methodology is applied in the real world. After describing the underlying concepts, step-by-step examples are provided that use selected tools to demonstrate the techniques. If you are an IT professional or a security consultant who wants to maximize the success of your network testing using some of the advanced features of Kali Linux, then this book is for you. This book will teach you how to become an expert in the pre-engagement, management,

  15. Python for Unix and Linux system administration

    CERN Document Server

    Gift, Noah

    2007-01-01

    Python is an ideal language for solving problems, especially in Linux and Unix networks. With this pragmatic book, administrators can review various tasks that often occur in the management of these systems, and learn how Python can provide a more efficient and less painful way to handle them. Each chapter in Python for Unix and Linux System Administration presents a particular administrative issue, such as concurrency or data backup, and presents Python solutions through hands-on examples. Once you finish this book, you'll be able to develop your own set of command-line utilities with Pytho

  16. Embedded Linux platform for data acquisition systems

    International Nuclear Information System (INIS)

    Patel, Jigneshkumar J.; Reddy, Nagaraj; Kumari, Praveena; Rajpal, Rachana; Pujara, Harshad; Jha, R.; Kalappurakkal, Praveen

    2014-01-01

    Highlights: • The design and development of a data acquisition system on an FPGA-based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA-based systems. • Hardware logic IP core and its Linux device driver development for the external peripheral, to interface it with the FPGA-based system. - Abstract: This scalable hardware–software system is designed and developed to explore the emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using an Open Source Embedded Linux operating system on a programmable hardware platform such as an FPGA. The idea was to identify a platform which is customizable, flexible and scalable enough to support the data acquisition system requirements. To do this, we have selected an FPGA-based reconfigurable and scalable hardware platform to design the system, with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high speed data transactions. The proposed hardware–software platform using an FPGA and Embedded Linux OS offers a single-chip solution with a processor and peripherals such as an ADC interface controller, a Gigabit Ethernet controller, and a memory controller, amongst other peripherals. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507, which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used the Linux Kernel version 2.6.34 with BSP support for the ML507 platform, downloaded from the Xilinx [1] GIT server. The cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx
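
    A bare-bones sketch of the kind of Linux character-device driver mentioned in the highlights is given below. The device name and the stubbed register read are invented for illustration, and the sketch targets the current in-tree kernel API rather than the 2.6.34 kernel named in the abstract:

        /* adc_chardev.c - illustrative skeleton only; "adc" and the stubbed
           sample value are hypothetical, not taken from the cited work. */
        #include <linux/module.h>
        #include <linux/fs.h>
        #include <linux/uaccess.h>

        static int major;

        static ssize_t adc_read(struct file *f, char __user *buf,
                                size_t len, loff_t *off)
        {
            u16 sample = 0x0123;   /* placeholder: a real driver would read
                                      the FPGA ADC register here */
            if (len < sizeof(sample))
                return -EINVAL;
            if (copy_to_user(buf, &sample, sizeof(sample)))
                return -EFAULT;
            return sizeof(sample);
        }

        static const struct file_operations adc_fops = {
            .owner = THIS_MODULE,
            .read  = adc_read,
        };

        static int __init adc_init(void)
        {
            /* Let the kernel pick a free major number for device "adc". */
            major = register_chrdev(0, "adc", &adc_fops);
            return (major < 0) ? major : 0;
        }

        static void __exit adc_exit(void)
        {
            unregister_chrdev(major, "adc");
        }

        module_init(adc_init);
        module_exit(adc_exit);
        MODULE_LICENSE("GPL");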

  17. Linux software for large topology optimization problems

    DEFF Research Database (Denmark)

    While COMSOL is an evolving product which allows a parallel solution of the PDE, it lacks the important feature that the matrix-generation part of the computations is localized to each processor. This is well known to be critical for obtaining a useful speedup on a Linux cluster, and it motivates the search for a COMSOL-like package for large topology optimization problems. One candidate for such software, developed for Linux by Sandia Nat'l Lab in the USA, is the Sundance system. Sundance also uses a symbolic representation of the PDE, and a scalable numerical solution is achieved by employing the underlying Trilinos

  18. Embedded Linux platform for data acquisition systems

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Jigneshkumar J., E-mail: jjp@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Reddy, Nagaraj, E-mail: nagaraj.reddy@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India); Kumari, Praveena, E-mail: praveena@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Rajpal, Rachana, E-mail: rachana@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Pujara, Harshad, E-mail: pujara@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Jha, R., E-mail: rjha@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Kalappurakkal, Praveen, E-mail: praveen.k@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India)

    2014-05-15

    Highlights: • The design and development of a data acquisition system on an FPGA-based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA-based systems. • Hardware logic IP core and its Linux device driver development for the external peripheral, to interface it with the FPGA-based system. - Abstract: This scalable hardware–software system is designed and developed to explore the emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using an Open Source Embedded Linux operating system on a programmable hardware platform such as an FPGA. The idea was to identify a platform which is customizable, flexible and scalable enough to support the data acquisition system requirements. To do this, we have selected an FPGA-based reconfigurable and scalable hardware platform to design the system, with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high speed data transactions. The proposed hardware–software platform using an FPGA and Embedded Linux OS offers a single-chip solution with a processor and peripherals such as an ADC interface controller, a Gigabit Ethernet controller, and a memory controller, amongst other peripherals. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507, which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used the Linux Kernel version 2.6.34 with BSP support for the ML507 platform, downloaded from the Xilinx [1] GIT server. The cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx

  19. Linux malware incident response an excerpt from malware forensic field guide for Linux systems

    CERN Document Server

    Malin, Cameron H; Aquilina, James M

    2013-01-01

    Linux Malware Incident Response is a "first look" at the Malware Forensics Field Guide for Linux Systems, exhibiting the first steps in investigating Linux-based incidents. The Syngress Digital Forensics Field Guides series includes companions for any digital and computer forensic investigator and analyst. Each book is a "toolkit" with checklists for specific tasks, case studies of difficult situations, and expert analyst tips. This compendium of tools for computer forensics analysts and investigators is presented in a succinct outline format with cross-references to suppleme

  20. Using Linux PCs in DAQ applications

    CERN Document Server

    Ünel, G; Beck, H P; Cetin, S A; Conka, T; Crone, G J; Fernandes, A; Francis, D; Joosb, M; Lehmann, G; López, J; Mailov, A A; Mapelli, Livio P; Mornacchi, Giuseppe; Niculescu, M; Petersen, J; Tremblet, L J; Veneziano, Stefano; Wildish, T; Yasu, Y

    2000-01-01

    ATLAS Data Acquisition/Event Filter "-1" (DAQ/EF1) project provides the opportunity to explore the use of commodity hardware (PCs) and Open Source Software (Linux) in DAQ applications. In DAQ/EF-1 there is an element called the LDAQ which is responsible for providing local run-control, error-handling and reporting for a number of read-out modules in front end crates. This element is also responsible for providing event data for monitoring and for the interface with the global control and monitoring system (Back-End). We present the results of an evaluation of the Linux operating system made in the context of DAQ/EF-1, where there are no strong real-time requirements. We also report on our experience in implementing the LDAQ on a VMEbus based PC (the VMIVME-7587) and a desktop PC linked to VMEbus with a Bit3 interface, both running Linux. We then present the problems encountered during the integration with VMEbus, the status of the LDAQ implementation and draw some conclusions on the use of Linux in DAQ applica...

  1. Embedded Linux projects using Yocto project cookbook

    CERN Document Server

    González, Alex

    2015-01-01

    If you are an embedded developer learning about embedded Linux with some experience with the Yocto project, this book is the ideal way to become proficient and broaden your knowledge with examples that are immediately applicable to your embedded developments. Experienced embedded Yocto developers will find new insight into working methodologies and ARM specific development competence.

  2. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  3. Linux Incident Response Volatile Data Analysis Framework

    Science.gov (United States)

    McFadden, Matthew

    2013-01-01

    Cyber incident response is an increasingly emphasized subject area in cybersecurity and information technology, driven by the growing need to protect data. Due to ongoing threats, cybersecurity imposes many challenges and requires new investigative response techniques. In this study a Linux Incident Response Framework is designed for collecting volatile data…
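
    One early step of such a framework can be sketched as a small collector of volatile /proc artifacts (the artifact list below is an illustrative assumption; a real responder would also capture process, user, and network state, hash each artifact, and write everything to trusted media):

        #include <stdio.h>

        /* Dump one pseudo-file to stdout with a header marking its origin. */
        static void dump(const char *path)
        {
            char line[512];
            FILE *f = fopen(path, "r");
            if (!f) { perror(path); return; }
            printf("=== %s ===\n", path);
            while (fgets(line, sizeof line, f))
                fputs(line, stdout);
            fclose(f);
        }

        int main(void)
        {
            /* Volatile artifacts that change or vanish quickly. */
            const char *artifacts[] = {
                "/proc/uptime", "/proc/loadavg",
                "/proc/modules", "/proc/net/tcp",
            };
            for (unsigned i = 0; i < sizeof artifacts / sizeof artifacts[0]; i++)
                dump(artifacts[i]);
            return 0;
        }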

  4. The LLNL portable tritium processing system

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The end of the Cold War significantly reduced the need for facilities to handle radioactive materials for the US nuclear weapons program. The LLNL Tritium Facility was among those slated for decommissioning. The plans for the facility have since been reversed, and it remains open. Nevertheless, in the early 1990s, the cleanup (the Tritium Inventory Removal Project) was undertaken. However, removing the inventory of tritium within the facility and cleaning up any pockets of high-level residual contamination required that we design a system adequate to the task and meeting today's stringent standards of worker and environmental protection. In collaboration with Sandia National Laboratory and EG&G Mound Applied Technologies, we fabricated a three-module Portable Tritium Processing System (PTPS) that meets current glovebox standards, is operated from a portable console, and is movable from laboratory to laboratory for performing the basic tritium processing operations: pumping and gas transfer, gas analysis, and gas-phase tritium scrubbing. The Tritium Inventory Removal Project is now in its final year, and the portable system continues to be the workhorse. To meet a strong demand for tritium services, the LLNL Tritium Facility will be reconfigured to provide state-of-the-art tritium and radioactive decontamination research and development. The PTPS will play a key role in this new facility

  5. LLNL Livermore site Groundwater Surveillance Plan

    International Nuclear Information System (INIS)

    1992-04-01

    Department of Energy (DOE) Order 5400.1 establishes environmental protection program requirements, authorities, and responsibilities for DOE operations to assure compliance with federal, state, and local environmental protection laws and regulations; Federal Executive Orders; and internal DOE policies. The DOE Order contains requirements and guidance for environmental monitoring programs, the objectives of which are to demonstrate compliance with legal and regulatory requirements imposed by federal, state, and local agencies; confirm adherence to DOE environmental protection policies; and support environmental management decisions. The environmental monitoring programs consist of two major activities: (1) measurement and monitoring of effluents from DOE operations, and (2) surveillance through measurement, monitoring, and calculation of the effects of those operations on the environment and public health. The latter concern, that of assessing the effects, if any, of Lawrence Livermore National Laboratory (LLNL) operations and activities on on-site and off-site surface waters and groundwaters, is addressed by an Environmental Surveillance Program being developed by LLNL. The Groundwater Surveillance Plan presented here has been developed on a site-specific basis, taking into consideration facility characteristics, applicable regulations, hazard potential, quantities and concentrations of materials released, the extent and use of local water resources, and specific local public interest and concerns.

  6. High intensity positron program at LLNL

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Howell, R.; Stoeffl, W.; Carter, D.

    1999-01-01

    Lawrence Livermore National Laboratory (LLNL) is the home of the world's highest current beam of keV positrons. The potential for establishing a national center for materials analysis using positron annihilation techniques around this capability is being actively pursued. The high LLNL beam current will enable investigations in several new areas. We are developing a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with submicron resolution. Below we summarize the important design features of this microprobe. Several experimental end stations will be available that can utilize the high current beam with a time distribution determined by the electron linac pulse structure, quasi-continuous, or bunched at 20 MHz, and can operate in an electrostatic or (and) magnetostatic environment. Some of the planned early experiments are: two-dimensional angular correlation of annihilation radiation of thin films and buried interfaces, positron diffraction holography, positron induced desorption, and positron induced Auger spectroscopy. copyright 1999 American Institute of Physics

  7. HPC Access Using KVM over IP

    Science.gov (United States)

    2007-06-08

    Lightwave VDE/200 KVM-over-Fiber (Keyboard, Video and Mouse) devices installed throughout the TARDEC campus. Implementation of this system required...development effort through the pursuit of an Army-funded Phase-II Small Business Innovative Research (SBIR) effort with IP Video Systems (formerly known as...visualization capabilities of a DoD High-Performance Computing facility, many advanced features are necessary. TARDEC-HPC's SBIR with IP Video Systems

  8. Superiority of CT imaging reconstruction on Linux OS

    International Nuclear Information System (INIS)

    Lin Shaochun; Yan Xufeng; Wu Tengfang; Luo Xiaomei; Cai Huasong

    2010-01-01

    Objective: To compare the speed of CT reconstruction using the Linux and Windows OS. Methods: A Shepp-Logan head phantom at different pixel sizes was projected to obtain sinograms, which were then reconstructed using the inverse Fourier transform, filtered back projection, and the Radon transform on both the Linux and Windows OS. Results: CT image reconstruction under the Linux operating system was significantly faster and more efficient than under Windows. Conclusion: CT image reconstruction using the Linux operating system is more efficient. (authors)

  9. Enforcing the use of API functions in Linux code

    DEFF Research Database (Denmark)

    Lawall, Julia; Muller, Gilles; Palix, Nicolas Jean-Michel

    2009-01-01

    In the Linux kernel source tree, header files typically define many small functions that have a simple behavior but are critical to ensure readability, correctness, and maintainability. We have observed, however, that some Linux code does not use these functions systematically. In this paper, we ... in the header file include/linux/usb.h.
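
    The pattern at issue can be illustrated with a self-contained mock (the struct layouts below are simplified stand-ins, not the real kernel definitions): small header-defined accessors such as usb_get_intfdata in include/linux/usb.h should be called instead of dereferencing the structures by hand:

        #include <stdio.h>

        /* Simplified stand-ins for the kernel structures. */
        struct device        { void *driver_data; };
        struct usb_interface { struct device dev; };

        static void *dev_get_drvdata(const struct device *dev)
        {
            return dev->driver_data;
        }

        /* The kind of small API function the paper is about. */
        static void *usb_get_intfdata(struct usb_interface *intf)
        {
            return dev_get_drvdata(&intf->dev);
        }

        int main(void)
        {
            struct usb_interface intf = { { "driver state" } };
            /* Preferred: go through the accessor... */
            printf("%s\n", (char *)usb_get_intfdata(&intf));
            /* ...rather than reaching into the structure directly,
               as in (char *)intf.dev.driver_data */
            return 0;
        }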

  10. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    Science.gov (United States)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed — meaning cooling by appropriate technology — with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features — dense packing, distributed storage and high performance interconnect — represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  11. Real Time Linux - The RTOS for Astronomy?

    Science.gov (United States)

    Daly, P. N.

    The BoF was attended by about 30 participants, and a free CD of real time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables of standard Linux and real time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running at > 30 kHz, 486-based oneshot tasks running at ~10 kHz, and periodic timer tasks running in excess of 90 kHz with average zero jitter, peaking to ~13 μs (UP) and ~30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and offers a fully functioning workstation coexisting with hard real time performance. The counterweights (the negatives) of the lack of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access, and the danger of ignorance of real time programming issues were also discussed. See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads.
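
    The periodic-task pattern benchmarked above can be reproduced in spirit on a stock kernel with plain POSIX calls (this is not the RTL/RTAI API; the 10 kHz period and iteration count are arbitrary examples). Sleeping to an absolute deadline keeps the period from drifting:

        #include <stdio.h>
        #include <time.h>

        #define PERIOD_NS 100000L   /* 100 us, i.e. a 10 kHz periodic task */

        int main(void)
        {
            struct timespec next;
            clock_gettime(CLOCK_MONOTONIC, &next);

            for (int i = 0; i < 100000; i++) {
                /* Advance the absolute deadline by one period. */
                next.tv_nsec += PERIOD_NS;
                if (next.tv_nsec >= 1000000000L) {   /* normalize the timespec */
                    next.tv_nsec -= 1000000000L;
                    next.tv_sec += 1;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                /* ... sample or compute here ... */
            }
            return 0;
        }

    Combined with the SCHED_FIFO and mlockall() setup sketched earlier, this is the standard soft real time baseline against which RTL and RTAI were compared.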

  12. Proposed LLNL electron beam ion trap

    International Nuclear Information System (INIS)

    Marrs, R.E.; Egan, P.O.; Proctor, I.; Levine, M.A.; Hansen, L.; Kajiyama, Y.; Wolgast, R.

    1985-01-01

    The interaction of energetic electrons with highly charged ions is of great importance to several research fields such as astrophysics, laser fusion and magnetic fusion. In spite of this importance there are almost no measurements of electron interaction cross sections for ions more than a few times ionized. To address this problem an electron beam ion trap (EBIT) is being developed at LLNL. The device is essentially an EBIS except that it is not intended as a source of extracted ions. Instead, the (variable-energy) electron beam interacting with the confined ions will be used to obtain measurements of ionization cross sections, dielectronic recombination cross sections, radiative recombination cross sections, energy levels and oscillator strengths. Charge-exchange recombination cross sections with neutral gases could also be measured. The goal is to produce and study elements in many different charge states up to He-like xenon and Ne-like uranium. 5 refs., 2 figs

  13. FY14 LLNL OMEGA Experimental Programs

    Energy Technology Data Exchange (ETDEWEB)

    Heeter, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fournier, K. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Baker, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barrios, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bernstein, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chen, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Coppari, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fratanduono, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Johnson, M. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Huntington, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jenei, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kraus, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ma, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martinez, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McNabb, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Millot, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moore, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nagel, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Park, H. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Patel, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Perez, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ping, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pollock, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ross, J. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rygg, J. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zylstra, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Collins, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Landen, O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hsing, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-10-13

    In FY14, LLNL’s High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall these LLNL programs led 324 target shots in FY14, with 246 shots using just the OMEGA laser system, 62 shots using just the EP laser system, and 16 Joint shots using OMEGA and EP together. Approximately 31% of the total number of shots (62 OMEGA shots, 42 EP shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 69% (200 OMEGA shots and 36 EP shots, including the 16 Joint shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports.

  14. FY15 LLNL OMEGA Experimental Programs

    Energy Technology Data Exchange (ETDEWEB)

    Heeter, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Baker, K. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barrios, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Beckwith, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Casey, D. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, P. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chen, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Coppari, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fournier, K. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fratanduono, D. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Frenje, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Huntington, C. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kraus, R. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lazicki, A. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martinez, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McNaney, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Millot, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pak, A. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Park, H. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ping, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pollock, B. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wehrenberg, C. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Widmann, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Collins, G. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Landen, O. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hsing, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-04

    In FY15, LLNL’s High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall these LLNL programs led 468 target shots in FY15, with 315 shots using just the OMEGA laser system, 145 shots using just the EP laser system, and 8 Joint shots using OMEGA and EP together. Approximately 25% of the total number of shots (56 OMEGA shots and 67 EP shots, including the 8 Joint shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 75% (267 OMEGA shots and 86 EP shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports.

  15. Preparing a scientific manuscript in Linux: Today's possibilities and limitations.

    Science.gov (United States)

    Tchantchaleishvili, Vakhtang; Schmitto, Jan D

    2011-10-22

    An increasing number of scientists are enthusiastic about using free, open source software for their research. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow them to prepare a submission-ready scientific manuscript without the need for proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes the key steps for preparing a publication-ready scientific manuscript on a Linux-based operating system and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux.

  16. Distributed MDSplus database performance with Linux clusters

    International Nuclear Information System (INIS)

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff is investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and in more efficient use of network resources, resulting in improved support of the data analysis needs of the scientific staff.

  17. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov (United States)

    The WinHPC system comprises a head node, a login node (WinHPC02) and worker/compute nodes. The head node acts as the file, DNS, and license server. The login node, WinHPC02, is where users connect to access the cluster. Node 03 has dual Intel Xeon E5530 processors and runs Windows Server 2008 R2 HPC Edition.

  18. UNIX and Linux system administration handbook

    CERN Document Server

    Nemeth, Evi; Hein, Trent R; Whaley, Ben; Mackin, Dan; Garnett, James; Branca, Fabrizio; Mouat, Adrian

    2018-01-01

    Now fully updated for today’s Linux distributions and cloud environments, it details best practices for every facet of system administration, including storage management, network design and administration, web hosting and scale-out, automation, configuration management, performance analysis, virtualization, DNS, security, management of IT service organizations, and much more. For modern system and network administrators, this edition contains indispensable new coverage of cloud deployments, continuous delivery, Docker and other containerization solutions, and much more.

  19. IP Security for Linux

    OpenAIRE

    Parthey, Mirko

    2001-01-01

    Using the Internet for security-critical applications requires cryptographic protection mechanisms, for which IP Security (IPsec) defines suitable protocols. This paper gives an overview of IPsec and examines an IPsec implementation for Linux (FreeS/WAN) with regard to extensibility and practical usability.

  20. Compilation of LLNL CUP-2 Data

    Energy Technology Data Exchange (ETDEWEB)

    Eppich, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kips, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lindvall, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-07-31

    The CUP-2 uranium ore concentrate (UOC) standard reference material, a powder, was produced at the Blind River uranium refinery of Eldorado Resources Ltd. in Canada in 1986. This material was produced as part of a joint effort by the Canadian Certified Reference Materials Project and the Canadian Uranium Producers Metallurgical Committee to develop a certified reference material for uranium concentration and the concentration of several impurity constituents. This standard was developed to satisfy the requirements of the UOC mining and milling industry, and was characterized with this purpose in mind. To produce CUP-2, approximately 25 kg of UOC derived from the Blind River uranium refinery was blended, homogenized, and assessed for homogeneity by X-ray fluorescence (XRF) analysis. The homogenized material was then packaged into bottles, containing 50 g of material each, and distributed for analysis to laboratories in 1986. The CUP-2 UOC standard was characterized by an interlaboratory analysis program involving eight member laboratories, six commercial laboratories, and three additional volunteer laboratories. Each laboratory provided five replicate results on up to 17 analytes, including total uranium concentration and moisture content. The selection of analytical technique was left to each participating laboratory. Uranium was reported on an “as-received” basis; all other analytes (besides moisture content) were reported on a “dry-weight” basis. A bottle of 25 g of the CUP-2 UOC standard as described above was purchased by LLNL and characterized by the LLNL Nuclear Forensics Group. Non-destructive and destructive analytical techniques were applied to the UOC sample. Information obtained from short-term techniques such as photography, gamma spectrometry, and scanning electron microscopy was used to guide the performance of longer-term techniques such as ICP-MS. Some techniques, such as XRF and ICP-MS, provided complementary types of data. The results...

  1. Delivering LHC software to HPC compute elements

    CERN Document Server

    Blomer, Jakob; Hardi, Nikola; Popescu, Radu

    2017-01-01

    In recent years, there has been growing interest in improving the utilization of supercomputers by running applications of experiments at the Large Hadron Collider (LHC) at CERN when idle cores cannot be assigned to traditional HPC jobs. At the same time, the upcoming LHC machine and detector upgrades will produce some 60 times higher data rates and challenge LHC experiments to use so far untapped compute resources. LHC experiment applications are tailored to run on high-throughput computing resources and have a different anatomy than HPC applications. LHC applications comprise a core framework that allows hundreds of researchers to plug in their specific algorithms. The software stacks easily accumulate to many gigabytes for a single release. New releases are often produced on a daily basis. To facilitate the distribution of these software stacks to world-wide distributed computing resources, LHC experiments use a purpose-built, global, POSIX file system, the CernVM File System. CernVM-FS pre-processes dat...

  2. The new LLNL AMS sample changer

    International Nuclear Information System (INIS)

    Roberts, M.L.; Norman, P.J.; Garibaldi, J.L.; Hornady, R.S.

    1993-01-01

    The Center for Accelerator Mass Spectrometry at LLNL has installed a new 64-position AMS sample changer on our spectrometer. This new sample changer can be controlled manually by an operator or automatically by the AMS data acquisition computer. Automatic control of the sample changer by the data acquisition system is a necessary step towards unattended AMS operation in our laboratory. The sample changer uses a fiber optic shaft encoder for rough rotational indexing of the sample wheel and a series of sequenced pneumatic cylinders for final mechanical indexing of the wheel and insertion and retraction of samples. Transit time from sample to sample varies from 4 s to 19 s, depending on the distance moved. Final sample location can be set to within 50 microns on the x and y axes and within 100 microns on the z axis. Changing sample wheels on the new sample changer is also easier and faster than was possible on our previous sample changer and does not require the use of any tools.

  3. WinHPC System Policies | High-Performance Computing | NREL

    Science.gov (United States)

    The WinHPC login node (WinHPC02) is intended to allow users with approved access to connect to the cluster; applications may also be run from the login node. There is a single login node for this system, so any applications...

  4. Cleaning up a GNU/Linux operating system

    OpenAIRE

    Oblak , Denis

    2018-01-01

    The aim of the thesis is to develop an application for cleaning up the Linux operating system that is able to function on most distributions. The theoretical part discusses cleaning of the Linux operating system, which frees up disk space and improves performance. Cleaning techniques and existing tools for Linux are systematically reviewed and presented. The following part examines cleaning of the Windows and MacOS operating systems. The thesis also compares all...

  5. Abstract of talk for Silicon Valley Linux Users Group

    Science.gov (United States)

    Clanton, Sam

    2003-01-01

    The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; and work in the Neuroengineering Lab with Code IC, including an introduction to the extension of the human senses project, the advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, goals of the project, slides of people with Neuroscan caps on, and the progress that has been made and how Linux has helped.

  6. Nuclear physics and heavy element research at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Stoyer, M A; Ahle, L E; Becker, J A; Bernstein, L A; Bleuel, D L; Burke, J T; Dashdorj, D; Henderson, R A; Hurst, A M; Kenneally, J M; Lesher, S R; Moody, K J; Nelson, S L; Norman, E B; Pedretti, M; Scielzo, N D; Shaughnessy, D A; Sheets, S A; Stoeffl, W; Stoyer, N J; Wiedeking, M; Wilk, P A; Wu, C Y

    2009-05-11

    This paper highlights some of the current basic nuclear physics research at Lawrence Livermore National Laboratory (LLNL). The work at LLNL concentrates on investigating nuclei at the extremes. The Experimental Nuclear Physics Group performs research to improve our understanding of nuclei, nuclear reactions, nuclear decay processes and nuclear astrophysics; an expertise utilized for important laboratory national security programs and for world-class peer-reviewed basic research.

  7. Development of positron diffraction and holography at LLNL

    International Nuclear Information System (INIS)

    Hamza, A.; Asoka-Kumar, P.; Stoeffl, W.; Howell, R.; Miller, D.; Denison, A.

    2003-01-01

    A low-energy positron diffraction and holography spectrometer is currently being constructed at the Lawrence Livermore National Laboratory (LLNL) to study surfaces and adsorbed structures. This instrument will operate in conjunction with the LLNL intense positron beam produced by the 100 MeV LINAC, allowing data to be acquired in minutes rather than days. Positron diffraction possesses certain advantages over electron diffraction, which are discussed. Details of the instrument, based on that of low-energy electron diffraction, are described.

  8. HPC CLOUD APPLIED TO LATTICE OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-03-18

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  9. HPC Cloud Applied To Lattice Optimization

    International Nuclear Information System (INIS)

    Sun, Changchun; Nishimura, Hiroshi; James, Susan; Song, Kai; Muriki, Krishna; Qin, Yong

    2011-01-01

    As Cloud services gain in popularity for enterprise use, vendors are now turning their focus towards providing cloud services suitable for scientific computing. Recently, Amazon Elastic Compute Cloud (EC2) introduced the new Cluster Compute Instances (CCI), a new instance type specifically designed for High Performance Computing (HPC) applications. At Berkeley Lab, the physicists at the Advanced Light Source (ALS) have been running Lattice Optimization on a local cluster, but the queue wait time and the flexibility to request compute resources when needed are not ideal for rapid development work. To explore alternatives, for the first time we investigate running the Lattice Optimization application on Amazon's new CCI to demonstrate the feasibility and trade-offs of using public cloud services for science.

  10. Measuring performance of Linux hypervisors

    International Nuclear Information System (INIS)

    Chierici, A.; Veraldi, R.; Salomoni, D.

    2009-01-01

    Virtualisation is now a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way people make computations and implement services. Recently, all major software producers (e.g., Microsoft and Red Hat) developed or acquired virtualisation technologies. Our institute (http://www.CNAF.INFN.it) is a Tier 1 for experiments carried out at the Large Hadron Collider at CERN (http://lhc.web.CERN.ch/lhc/) and is experiencing several benefits from virtualisation technologies, like improved fault tolerance, efficient hardware resource usage and increased security. Currently, the virtualisation solution we have adopted is Xen, which is well supported by the Scientific Linux distribution, widely used by the High-Energy Physics (HEP) community. Since Scientific Linux is based on Red Hat ES, we felt the need to investigate performance and usability differences with the new KVM technology, recently acquired by Red Hat. The case study of this work is the Tier 2 site for the LHCb experiment hosted at our institute; all major grid elements for this Tier 2 run smoothly on Xen virtual machines. We will investigate the impact on performance and stability that a migration to KVM would entail on the Tier 2 site, as well as the effort required by a system administrator to deploy the migration.

  11. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  12. The clinical phenotype of hereditary versus sporadic prostate cancer: HPC definition revisited

    NARCIS (Netherlands)

    Cremers, R.G.H.M.; Aben, K.K.H.; Oort, I.M. van; Sedelaar, J.P.M.; Vasen, H.F.A.; Vermeulen, S.H.; Kiemeney, L.A.L.M.

    2016-01-01

    BACKGROUND: The definition of hereditary prostate cancer (HPC) is based on family history and age at onset. Intuitively, HPC is a serious subtype of prostate cancer but there are only limited data on the clinical phenotype of HPC. Here, we aimed to compare the prognosis of HPC to the sporadic form

  13. LLNL/YMP Waste Container Fabrication and Closure Project

    International Nuclear Information System (INIS)

    1990-10-01

    The Department of Energy's Office of Civilian Radioactive Waste Management (OCRWM) Program is studying Yucca Mountain, Nevada as a suitable site for the first US high-level nuclear waste repository. Lawrence Livermore National Laboratory (LLNL) has the responsibility for designing and developing the waste package for the permanent storage of high-level nuclear waste. This report is a summary of the technical activities for the LLNL/YMP Nuclear Waste Disposal Container Fabrication and Closure Development Project. Candidate welding closure processes were identified in the Phase 1 report. This report discusses Phase 2. Phase 2 of this effort involved laboratory studies to determine the optimum fabrication and closure processes. Because of budget limitations, LLNL narrowed the materials for evaluation in Phase 2 from the original six to four: Alloy 825, CDA 715, CDA 102 (or CDA 122) and CDA 952. Phase 2 studies focused on evaluation of candidate material in conjunction with fabrication and closure processes

  14. Diversification and strategic management of LLNL's R&D portfolio

    International Nuclear Information System (INIS)

    Glinsky, M.E.

    1994-12-01

    Strategic management of LLNL's research effort is addressed. A general framework is established by presenting the McKinsey/BCG Matrix Analysis as it applies to the research portfolio. The framework is used to establish the need for the diversification into new attractive areas of research and for the improvement of the market position of existing research in those attractive areas. With the need for such diversification established, attention is turned to optimizing it. There are limited resources available. It is concluded that LLNL should diversify into only a few areas and try to obtain full market share as soon as possible

  15. Thermochemical hydrogen production studies at LLNL: a status report

    International Nuclear Information System (INIS)

    Krikorian, O.H.

    1982-01-01

    Currently, studies are underway at the Lawrence Livermore National Laboratory (LLNL) on thermochemical hydrogen production based on magnetic fusion energy (MFE) and solar central receivers as heat sources. These areas of study were described earlier at the previous IEA Annex I Hydrogen Workshop (Juelich, West Germany, September 23-25, 1981), and a brief update will be given here. Some basic research has also been underway at LLNL on the electrolysis of water from fused phosphate salts, but there are no current results in that area, and the work is being terminated

  16. Fedora Bible 2010 Edition Featuring Fedora Linux 12

    CERN Document Server

    Negus, Christopher

    2010-01-01

    The perfect companion for mastering the latest version of Fedora. As a free, open source Linux operating system sponsored by Red Hat, Fedora can either be a stepping stone to Enterprise or used as a viable operating system for those looking for frequent updates. Written by veteran authors of perennial bestsellers, this book serves as an ideal companion for Linux users and offers a thorough look at the basics of the new Fedora 12. Step-by-step instructions make the Linux installation simple while clear explanations walk you through best practices for taking advantage of the desktop interface...

  17. Web application security analysis using the Kali Linux operating system

    OpenAIRE

    BABINCEV IVAN M.; VULETIC DEJAN V.

    2016-01-01

    The Kali Linux operating system is described, as well as its purpose and capabilities. The groups of tools that Kali Linux contains are listed, together with how they function, as is the possibility of installing and using tools that are not an integral part of Kali. The final part shows practical testing of web applications using tools from the Kali Linux operating system. The paper thus shows a part of the possibilities of this operating system in analysing web applications ...

  18. A Quantitative Analysis of Variability Warnings in Linux

    DEFF Research Database (Denmark)

    Melo, Jean; Flesborg, Elvis; Brabrand, Claus

    2015-01-01

    In order to get insight into challenges with quality in highly-configurable software, we analyze one of the largest open source projects, the Linux kernel, and quantify basic properties of configuration-related warnings. We automatically analyze more than 20 thousand valid and distinct random configurations, in a computation that lasted more than a month. We count and classify a total of 400,000 warnings to get an insight in the distribution of warning types, and the location of the warnings. We run both on a stable and unstable version of the Linux kernel. The results show that Linux contains...

  19. Rebootless Linux Kernel Patching with Ksplice Uptrack at BNL

    International Nuclear Information System (INIS)

    Hollowell, Christopher; Pryor, James; Smith, Jason

    2012-01-01

    Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2,000 hosts running Scientific Linux and Red Hat Enterprise Linux. The use of this software has minimized downtime, and increased our security posture. In this paper, we provide an overview of Ksplice's rebootless kernel patch creation/insertion mechanism, and our experiences with Uptrack.

  20. Shell Scripting Expert Recipes for Linux, Bash and more

    CERN Document Server

    Parker, Steve

    2011-01-01

    A compendium of shell scripting recipes that can immediately be used, adjusted, and applied The shell is the primary way of communicating with the Unix and Linux systems, providing a direct way to program by automating simple-to-intermediate tasks. With this book, Linux expert Steve Parker shares a collection of shell scripting recipes that can be used as is or easily modified for a variety of environments or situations. The book covers shell programming, with a focus on Linux and the Bash shell; it provides credible, real-world relevance, as well as providing the flexible tools to get started

  1. Pro Oracle database 11g RAC on Linux

    CERN Document Server

    Shaw, Steve

    2010-01-01

    Pro Oracle Database 11g RAC on Linux provides full-life-cycle guidance on implementing Oracle Real Application Clusters in a Linux environment. Real Application Clusters, commonly abbreviated as RAC, is Oracle's industry-leading architecture for scalable and fault-tolerant databases. RAC allows you to scale up and down by simply adding and subtracting inexpensive Linux servers. Redundancy provided by those multiple, inexpensive servers is the basis for the failover and other fault-tolerance features that RAC provides. Written by authors well-known for their talent with RAC, Pro Oracle Database

  2. Using HPC within an operational forecasting configuration

    Science.gov (United States)

    Jagers, H. R. A.; Genseberger, M.; van den Broek, M. A. F. H.

    2012-04-01

    Various natural disasters are caused by high-intensity events, for example: extreme rainfall can in a short time cause major damage in river catchments, storms can cause havoc in coastal areas. To assist emergency response teams in operational decisions, it's important to have reliable information and predictions as soon as possible. This starts before the event by providing early warnings about imminent risks and estimated probabilities of possible scenarios. In the context of various applications worldwide, Deltares has developed an open and highly configurable forecasting and early warning system: Delft-FEWS. Finding the right balance between simulation time (and hence prediction lead time) and simulation accuracy and detail is challenging. Model resolution may be crucial to capture certain critical physical processes. Uncertainty in forcing conditions may require running large ensembles of models; data assimilation techniques may require additional ensembles and repeated simulations. The computational demand is steadily increasing and data streams become bigger. Using HPC resources is a logical step; in different settings Delft-FEWS has been configured to take advantage of distributed computational resources available to improve and accelerate the forecasting process (e.g. Montanari et al, 2006). We will illustrate the system by means of a couple of practical applications including the real-time dynamic forecasting of wind driven waves, flow of water, and wave overtopping at dikes of Lake IJssel and neighboring lakes in the center of The Netherlands. Montanari et al., 2006. Development of an ensemble flood forecasting system for the Po river basin, First MAP D-PHASE Scientific Meeting, 6-8 November 2006, Vienna, Austria.

  3. Climate tools in mainstream Linux distributions

    Science.gov (United States)

    McKinstry, Alastair

    2015-04-01

    Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that are normally ignorable. How tools are hardened, and what changes are required to handle security upgrades, are described. Secondly, enabling libraries and components (e.g. Python modules) to be integrated requires planning by their writers: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.

  4. Infecting Windows, Linux & Mac in one go

    CERN Multimedia

    Computer Security Team

    2012-01-01

    Still love bashing on Windows as you believe it is an insecure operating system? Hold on a second! Just recently, a vulnerability has been published for Java 7.   It affects Windows/Linux PCs and Macs, Internet Explorer, Safari and Firefox. In fact, it affects all computers that have enabled the Java 7 plug-in in their browser (Java 6 and earlier is not affected). Once you visit a malicious website (and there are plenty already out in the wild), your computer is infected… That's "Game Over" for you.      And this is not the first time. For a while now, attackers have not been targeting the operating system itself, but rather aiming at vulnerabilities inherent in e.g. your Acrobat Reader, Adobe Flash or Java programmes. All these are standard plug-ins added into your favourite web browser which make your web-surfing comfortable (or impossible when you un-install them). A single compromised web-site, however, is sufficient to prob...

  5. BSD Portals for LINUX 2.0

    Science.gov (United States)

    McNab, A. David; Woo, Alex (Technical Monitor)

    1999-01-01

    Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open(2) requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open(2) is performed. The resulting file descriptor is passed back to the kernel, which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open(2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets, as sketched below. This paper describes the implementation of portals for LINUX 2.0.
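    As a usage illustration of that connection service: a client simply open(2)s a path beneath the portal mount point and gets back a connected socket. The sketch below assumes the conventional /p mount point and a tcp/<host>/<port> naming scheme; an actual daemon configuration may differ:

        /* Sketch of using a portal-based connection service: an open(2) on a
         * path under the portal mount point returns a connected socket.
         * The /p mount point and tcp/<host>/<port> layout are assumptions. */
        #include <stdio.h>
        #include <unistd.h>
        #include <fcntl.h>

        int main(void)
        {
                /* The portal daemon, not this process, performs the connect(). */
                int fd = open("/p/tcp/example.org/80", O_RDWR);
                if (fd < 0) {
                        perror("portal open");
                        return 1;
                }
                const char req[] = "HEAD / HTTP/1.0\r\n\r\n";
                write(fd, req, sizeof(req) - 1);   /* use the fd like any socket */
                char buf[256];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n > 0)
                        fwrite(buf, 1, (size_t)n, stdout);
                close(fd);
                return 0;
        }

    The calling process needs no socket code at all, which is exactly the complexity-hiding effect described above.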

  6. Evolution of Linux operating system network

    Science.gov (United States)

    Xiao, Guanping; Zheng, Zheng; Wang, Haoqin

    2017-01-01

    Linux operating system (LOS) is a sophisticated man-made system and one of the most ubiquitous operating systems. However, there is little research on the structure and functionality evolution of LOS from the perspective of networks. In this paper, we investigate the evolution of the LOS network. 62 major releases of LOS, ranging from version 1.0 to 4.1, are modeled as directed networks in which functions are denoted by nodes and function calls by edges. It is found that the size of the LOS network grows almost linearly, while the clustering coefficient monotonically decays. The degree distributions are almost the same: the out-degree follows an exponential distribution while both in-degree and undirected degree follow power-law distributions. We further explore the functionality evolution of the LOS network. The evolution of functional modules can be described as a sequence of seven kinds of events (changes) succeeding each other: continuing, growth, contraction, birth, splitting, death and merging. A statistical analysis of these events in the top 4 largest components (i.e., arch, drivers, fs and net) shows that continuing, growth and contraction events account for more than 95% of all events. Our work exemplifies a better understanding and description of the dynamics of LOS evolution.

  7. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    Science.gov (United States)

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for bioinformatics have paved the way for a portable, platform-independent bioinformatics workbench. However, most existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and lack data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery builds on an advanced, customizable configuration of Fedora, with data persistence accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  8. Real-time data collection in Linux: a case study.

    Science.gov (United States)

    Finney, S A

    2001-05-01

    Multiuser UNIX-like operating systems such as Linux are often considered unsuitable for real-time data collection because of the potential for indeterminate timing latencies resulting from preemptive scheduling. In this paper, Linux is shown to be fully adequate for precisely controlled programming with millisecond resolution or better. The Linux system calls that subserve such timing control are described and tested and then utilized in a MIDI-based program for tapping and music performance experiments. The timing of this program, including data input and output, is shown to be accurate at the millisecond level. This demonstrates that Linux, with proper programming, is suitable for real-time experiment software. In addition, the detailed description and test of both the operating system facilities and the application program itself may serve as a model for publicly documenting programming methods and software performance on other operating systems.
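    For concreteness, the kind of timing control the paper describes can be sketched with standard POSIX calls. The snippet below uses SCHED_FIFO scheduling and absolute-time clock_nanosleep, modern equivalents of the 2.x-era facilities tested in the paper; it is an illustration, not the program from the study:

        /* Sketch of millisecond-accurate periodic scheduling on Linux using
         * standard POSIX calls (illustrative; not the code from the paper). */
        #include <stdio.h>
        #include <time.h>
        #include <sched.h>

        int main(void)
        {
                /* Request a real-time scheduling class to reduce preemption
                 * latency (needs root or CAP_SYS_NICE; otherwise this call
                 * fails and the loop still runs, just with ordinary latency). */
                struct sched_param sp = { .sched_priority = 50 };
                sched_setscheduler(0, SCHED_FIFO, &sp);

                struct timespec next;
                clock_gettime(CLOCK_MONOTONIC, &next);

                for (int tick = 0; tick < 1000; tick++) {   /* 1 ms period */
                        next.tv_nsec += 1000000;
                        if (next.tv_nsec >= 1000000000) {
                                next.tv_nsec -= 1000000000;
                                next.tv_sec++;
                        }
                        /* Absolute sleep avoids drift accumulating per tick. */
                        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                        /* ... collect or emit one sample here ... */
                }
                return 0;
        }

    Comparing the requested wake-up times against clock_gettime() readings inside the loop is the simplest way to measure the latencies the paper reports.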

  9. Lecture 11: SystemTap: Patching the Linux kernel on the fly

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    The presentation will describe the usage of SystemTap in the CERN Scientific Linux environment. SystemTap is a tool that allows developers and administrators to write and reuse simple scripts to deeply examine the activities of a live Linux system. We will go through the life cycle of a SystemTap module: creation, packaging, deployment. It will focus on how we used it recently at CERN as a workaround to patch a 0-day. Thomas Oulevey is a member of the IT department at CERN, where he is an active member of the Linux team, which supports 9’000 servers, 3’000 desktop systems and more than 5’000 active users. Before CERN he worked at the former astrophysics department of CERN, now called the European Southern Observatory, based in Chile, where he maintained the core telescope Linux systems and monitoring infrastructure.

  10. Linux containers networking: performance and scalability of kernel modules

    NARCIS (Netherlands)

    Claassen, J.; Koning, R.; Grosso, P.; Oktug Badonnel, S.; Ulema, M.; Cavdar, C.; Zambenedetti Granville, L.; dos Santos, C.R.P.

    2016-01-01

    Linux container virtualisation is gaining momentum as lightweight technology to support cloud and distributed computing. Applications relying on container architectures might at times rely on inter-container communication, and container networking solutions are emerging to address this need.

  11. After the first five years: central Linux support at DESY

    International Nuclear Information System (INIS)

    Knut Woller; Thorsten Kleinwort; Peter Jung

    2001-01-01

    The authors will describe how Linux is embedded into DESY's unix computing, their support concept and policies, tools used and developed, and the challenges which they are facing now that the number of supported PCs is rapidly approaching one thousand

  12. On methods to increase the security of the Linux kernel

    International Nuclear Information System (INIS)

    Matvejchikov, I.V.

    2014-01-01

    Methods to increase the security of the Linux kernel through the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described.
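    One standard mechanism for such imposed protection tools is the kernel's Linux Security Module (LSM) hook interface. The sketch below registers a single toy file_open hook; it follows the API roughly as it stood in 5.x kernels (LSM_HOOK_INIT, security_add_hooks, DEFINE_LSM), and since these signatures have shifted between kernel versions it should be read as illustrative only:

        /* Illustrative LSM registration, ~5.x-era API; hook signatures and
         * registration macros have changed repeatedly between versions. */
        #include <linux/lsm_hooks.h>
        #include <linux/errno.h>
        #include <linux/fs.h>
        #include <linux/string.h>

        /* Toy policy: refuse to open any file literally named "forbidden". */
        static int demo_file_open(struct file *file)
        {
                if (!strcmp((const char *)file->f_path.dentry->d_name.name,
                            "forbidden"))
                        return -EPERM;
                return 0;
        }

        static struct security_hook_list demo_hooks[] = {
                LSM_HOOK_INIT(file_open, demo_file_open),
        };

        static int __init demo_lsm_init(void)
        {
                /* Register the hook list under the LSM name "demo". */
                security_add_hooks(demo_hooks, ARRAY_SIZE(demo_hooks), "demo");
                return 0;
        }

        /* LSMs are built into the kernel and enabled at boot, not loaded as
         * ordinary modules. */
        DEFINE_LSM(demo) = {
                .name = "demo",
                .init = demo_lsm_init,
        };

    Every subsequent open(2) in the system then passes through demo_file_open before the kernel completes the operation, which is the "imposed" character of such protection tools.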

  13. Linux, OpenBSD, and Talisker: A Comparative Complexity Analysis

    National Research Council Canada - National Science Library

    Smith, Kevin

    2002-01-01

    ... Rigorous engineering principles are applicable across a broad range of systems. The purpose of this study is to analyze and compare three operating systems, including two general-purpose operating systems (Linux and OpenBSD...

  14. Linux conquers the computing world / Scott Handy; interviewed by Kristjan Otsmann

    Index Scriptorium Estoniae

    Handy, Scott

    2000-01-01

    S. Handy, marketing director for Linux solutions in IBM's software group, predicts that within three to four years the free Linux operating system will run on as many computers as the Windows operating system.

  15. Linux, distributed as free software, is only now reaching the masses / Erik Aru

    Index Scriptorium Estoniae

    Aru, Erik

    2004-01-01

    The free Linux operating system is finding ever wider use around the world. Reliability and resistance to viruses are considered the operating system's strengths. Supplements: Toshiba's battle over the future of DVD. For a response, see Maaleht, 16 Dec., p. 12.

  16. Spill exercise 1980: an LLNL emergency training exercise

    International Nuclear Information System (INIS)

    Morse, J.L.; Gibson, T.A.; Vance, W.F.

    1981-01-01

    An emergency training exercise at Lawrence Livermore National Laboratory (LLNL) demonstrated that off-hours emergency personnel can respond promptly and effectively to an emergency situation involving radiation, hazardous chemicals, and injured persons. The exercise simulated an explosion in a chemistry laboratory and a subsequent toxic-gas release.

  17. Capabilities required to conduct the LLNL plutonium mission

    International Nuclear Information System (INIS)

    Kass, J.; Bish, W.; Copeland, A.; West, J.; Sack, S.; Myers, B.

    1991-01-01

    This report outlines the LLNL plutonium related mission anticipated over the next decade and defines the capabilities required to meet that mission wherever the Plutonium Facility is located. If plutonium work is relocated to a place where the facility is shared, then some capabilities can be commonly used by the sharing parties. However, it is essential that LLNL independently control about 20000 sq ft of net lab space, filled with LLNL controlled equipment, and staffed by LLNL employees. It is estimated that the cost to construct this facility should range from $140M to $200M. Purchase and installation of equipment to replace that already in Bldg 332 along with additional equipment identified as being needed to meet the mission for the next ten to fifteen years, is estimated to cost $118M. About $29M of the equipment could be shared. The Hardened Engineering Test Building (HETB) with its additional 8000 sq ft of unique test capability must also be replaced. The fully equipped replacement cost is estimated to be about $10M. About 40000 sq ft of setup and support space are needed along with office and related facilities for a 130 person resident staff. The setup space is estimated to cost $8M. The annual cost of a 130 person resident staff (100 programmatic and 30 facility operation) is estimated to be $20M

  18. Proceedings of the LLNL Technical Women's Symposium

    Energy Technology Data Exchange (ETDEWEB)

    von Holtz, E. [ed.]

    1993-12-31

    This report documents events of the LLNL Technical Women's Symposium. Topics include: future of computer systems, environmental technology, defense and space, Nova Inertial Confinement Fusion Target Physics, technical communication, tools and techniques for biology in the 1990s, automation and robotics, software applications, materials science, atomic vapor laser isotope separation, technical communication, technology transfer, and professional development workshops.

  19. Proceedings of the LLNL technical women's symposium

    Energy Technology Data Exchange (ETDEWEB)

    von Holtz, E. [ed.]

    1994-12-31

    Women from institutions such as LLNL, LBL, Sandia, and SLAC presented papers at this conference. The papers deal with many aspects of global security, global ecology, and bioscience; they also reflect the challenges faced in improving business practices, communicating effectively, and expanding collaborations in the industrial world. Approximately 87 "abstracts" are included in six sessions; more are included in the addendum.

  20. The design and implementation of the LLNL gigabit testbed

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, D. [Lawrence Livermore National Labs., CA (United States)

    1994-12-01

    This paper will look at the design and implementation of the LLNL Gigabit testbed (LGTB), where various high-speed networking products can be tested in one environment. The paper will discuss the philosophy behind the design of, and the need for, the testbed, the tests that are performed in the testbed, and the tools used to implement those tests.

  1. LLNL X-ray Calibration and Standards Laboratory

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    The LLNL X-ray Calibration and Standards Laboratory is a unique facility for developing and calibrating x-ray sources, detectors, and materials, and for conducting x-ray physics research in support of our weapon and fusion-energy programs

  2. Assessment of VME-PCI Interfaces with Linux Drivers

    CERN Document Server

    Schossmater, K; CERN. Geneva

    2000-01-01

    This report summarises the performance measurements and experiences recorded by testing three commercial VME-PCI interfaces with their Linux drivers. These interfaces are manufactured by Wiener, National Instruments and SBS Bit 3. The C programs developed read/write VME memory in different transfer modes via these interfaces. A dual-processor HP Kayak XA-s workstation was used, running the CERN-certified Red Hat Linux 6.1.
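    To give a flavour of what such benchmark programs look like, the sketch below times single-cycle 32-bit reads through a memory-mapped VME window. The device node path and window size are hypothetical, since each vendor's Linux driver exposes VME windows in its own way:

        /* Sketch of timing 32-bit reads through a memory-mapped VME window.
         * The device node path and window size are hypothetical; each
         * vendor's Linux driver exposes VME windows differently. */
        #include <stdio.h>
        #include <stdint.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/mman.h>
        #include <time.h>

        #define WINDOW_SIZE (64 * 1024)

        int main(void)
        {
                int fd = open("/dev/vme_a24d32", O_RDWR);   /* hypothetical node */
                if (fd < 0) { perror("open"); return 1; }

                volatile uint32_t *win = mmap(NULL, WINDOW_SIZE,
                                              PROT_READ | PROT_WRITE,
                                              MAP_SHARED, fd, 0);
                if (win == MAP_FAILED) { perror("mmap"); return 1; }

                struct timespec t0, t1;
                uint32_t sink = 0;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (size_t i = 0; i < WINDOW_SIZE / 4; i++)
                        sink ^= win[i];             /* one VME read cycle each */
                clock_gettime(CLOCK_MONOTONIC, &t1);

                double s = (t1.tv_sec - t0.tv_sec)
                         + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                printf("read %d bytes in %.6f s (%.2f MB/s), checksum %08x\n",
                       WINDOW_SIZE, s, WINDOW_SIZE / s / 1e6, sink);
                munmap((void *)win, WINDOW_SIZE);
                close(fd);
                return 0;
        }

    Repeating the loop with the driver configured for different transfer modes (e.g. single-cycle versus block transfer, where supported) yields the kind of comparison tables the report describes.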

  3. Research on application of VME based embedded linux

    International Nuclear Information System (INIS)

    Ji Xiaolu; Ye Mei; Zhu Kejun; Li Xiaonan; Wang Yifang

    2010-01-01

    This paper describes the feasibility and realization of using embedded Linux in the DAQ readout system of a high-energy physics experiment. In the first part, the hardware and software framework is introduced. Emphasis is then placed on the key technologies used during system realization. The development is based on the VME bus and the vme_universe driver. Finally, the test results are presented: the readout system works well under an embedded Linux OS. (authors)

  4. Argonne National Lab gets Linux network teraflop cluster

    CERN Multimedia

    2003-01-01

    "Linux NetworX, Salt Lake City, Utah, has delivered an Evolocity II (E2) Linux cluster to Argonne National Laboratory that is capable of performing more than one trillion calculations per second (1 teraFLOP). The cluster, named "Jazz" by Argonne, is designed to provide optimum performance for multiple disciplines such as chemistry, physics and reactor engineering and will be used by the entire scientific community at the Lab" (1 page).

  5. ATLAS utilisation of the Czech national HPC center

    CERN Document Server

    Svatos, Michal; The ATLAS collaboration

    2018-01-01

    The Czech national HPC center IT4Innovations located in Ostrava provides two HPC systems, Anselm and Salomon. The Salomon HPC is amongst the hundred most powerful supercomputers on Earth since its commissioning in 2015. Both clusters were tested for usage by the ATLAS experiment for running simulation jobs. Several thousand core hours were allocated to the project for tests, but the main aim is to use free resources waiting for large parallel jobs of other users. Multiple strategies for ATLAS job execution were tested on the Salomon and Anselm HPCs. The solution described herein is based on the ATLAS experience with other HPC sites. ARC Compute Element (ARC-CE) installed at the grid site in Prague is used for job submission to Salomon. The ATLAS production system submits jobs to the ARC-CE via ARC Control Tower (aCT). The ARC-CE processes job requirements from aCT and creates a script for a batch system which is then executed via ssh. Sshfs is used to share scripts and input files between the site and the HPC...

  6. Study of ageing side effects in the DELPHI HPC calorimeter

    CERN Document Server

    Bonivento, W

    1997-01-01

    The readout proportional chambers of the HPC electromagnetic calorimeter in the DELPHI experiment are affected by significant ageing. In order to study the long-term behaviour of the calorimeter, one HPC module was extracted from DELPHI in 1992 and brought to a test area, where it was artificially aged over a period of two years; an ageing level exceeding the one expected for the HPC at the end of the LEP era was reached. During this period the performance of the module was periodically tested by means of dedicated beam tests, whose results are discussed in this paper. These show that ageing has no significant effect on the response linearity or on the energy resolution for electromagnetic showers, once the analog response loss is compensated for by increasing the chamber gain through the anode voltage.

  7. Modular HPC I/O characterization with Darshan

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, Shane; Carns, Philip; Harms, Kevin; Ross, Robert; Lockwood, Glenn K.; Wright, Nicholas J.

    2016-11-13

    Contemporary high-performance computing (HPC) applications encompass a broad range of distinct I/O strategies and are often executed on a number of different compute platforms in their lifetime. These large-scale HPC platforms employ increasingly complex I/O subsystems to provide a suitable level of I/O performance to applications. Tuning I/O workloads for such a system is nontrivial, and the results generally are not portable to other HPC systems. I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem, providing a limited perspective on I/O performance. The increasing diversity of scientific applications and computing platforms calls for greater flexibility and scope in I/O characterization.
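    Because Darshan instruments applications transparently (typically via link-time wrappers or by preloading its runtime library), no source changes are required; an ordinary I/O loop like the hedged sketch below is enough to produce a per-job characterization log when executed under Darshan. The file name and sizes are arbitrary:

        /* A deliberately ordinary POSIX I/O loop: when run under Darshan
         * (linked against its wrappers or with its runtime preloaded),
         * accesses like these are counted and summarized in the per-job log
         * without any changes to this source. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <fcntl.h>
        #include <unistd.h>

        int main(void)
        {
                const size_t block = 1 << 20;           /* 1 MiB writes */
                char *buf = calloc(1, block);
                int fd = open("checkpoint.dat",
                              O_CREAT | O_WRONLY | O_TRUNC, 0644);
                if (fd < 0 || !buf) { perror("setup"); return 1; }

                for (int i = 0; i < 128; i++)           /* 128 MiB total */
                        if (write(fd, buf, block) != (ssize_t)block) {
                                perror("write");
                                return 1;
                        }

                close(fd);          /* counters are flushed at program exit */
                free(buf);
                return 0;
        }

    The resulting log records, for example, access sizes, operation counts and time spent in I/O, which is the per-component perspective the paper argues should be broadened.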

  8. Special Issue on Automatic Application Tuning for HPC Architectures

    Directory of Open Access Journals (Sweden)

    Siegfried Benkner

    2014-01-01

    High Performance Computing architectures have become incredibly complex and exploiting their full potential is becoming more and more challenging. As a consequence, automatic performance tuning (autotuning) of HPC applications is of growing interest and many research groups around the world are currently involved. Autotuning is still a rapidly evolving research field with many different approaches being taken. This special issue features selected papers presented at the Dagstuhl seminar on “Automatic Application Tuning for HPC Architectures” in October 2013, which brought together researchers from the areas of autotuning and performance analysis in order to exchange ideas and steer future collaborations.

  9. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    International Nuclear Information System (INIS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Quast, Günter; Janczyk, Michael; Von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-01-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare...

  10. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  11. Hazardous-waste analysis plan for LLNL operations

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R.S.

    1982-02-12

    The Lawrence Livermore National Laboratory is involved in many facets of research ranging from nuclear weapons research to advanced biomedical studies. Approximately 80% of all programs at LLNL generate hazardous waste in one form or another. Aside from producing waste from industrial-type operations (oils, solvents, bottom sludges, etc.), many unique and toxic wastes are generated, such as phosgene, dioxin (TCDD), radioactive wastes and high explosives. Any successful waste management program must address the following: proper identification of the waste, safe handling procedures, and proper storage containers and areas. This section of the Waste Management Plan will address methodologies used for the analysis of hazardous waste. In addition to the wastes defined in 40 CFR 261, LLNL and Site 300 also generate radioactive waste not specifically covered by RCRA. However, for completeness, the Waste Analysis Plan will address all hazardous waste.

  12. LLNL Mercury Project Trinity Open Science Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Brantley, Patrick [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dawson, Shawn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McKinley, Scott [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); O'Brien, Matt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Peters, Doug [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pozulp, Mike [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Becker, Greg [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-04-20

    The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and for more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine, as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
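
    A generic illustration of a convergence study with Monte Carlo particle count (not Mercury's physics or code): a toy transmission estimate through a slab, whose statistical error should shrink roughly as 1/sqrt(N).

        # Generic Monte Carlo convergence check: estimate a transmission
        # probability at increasing particle counts. Purely illustrative
        # physics (exponential attenuation), not Mercury's model.
        import numpy as np

        rng = np.random.default_rng(42)
        SIGMA_T, SLAB = 1.0, 2.0             # total cross section, slab thickness

        def transmit_fraction(n_particles):
            # Distance to first collision ~ Exp(SIGMA_T); transmitted if > SLAB.
            d = rng.exponential(1.0 / SIGMA_T, n_particles)
            return (d > SLAB).mean()

        exact = np.exp(-SIGMA_T * SLAB)
        for n in (10**3, 10**4, 10**5, 10**6, 10**7):
            est = transmit_fraction(n)
            print(f"N={n:>9,d}  estimate={est:.5f}  |error|={abs(est - exact):.2e}")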

  13. Hazardous-waste analysis plan for LLNL operations

    International Nuclear Information System (INIS)

    Roberts, R.S.

    1982-01-01

    The Lawrence Livermore National Laboratory is involved in many facets of research ranging from nuclear weapons research to advanced biomedical studies. Approximately 80% of all programs at LLNL generate hazardous waste in one form or another. Aside from producing waste from industrial-type operations (oils, solvents, bottom sludges, etc.), many unique and toxic wastes are generated, such as phosgene, dioxin (TCDD), radioactive wastes and high explosives. Any successful waste management program must address the following: proper identification of the waste, safe handling procedures, and proper storage containers and areas. This section of the Waste Management Plan will address methodologies used for the analysis of hazardous waste. In addition to the wastes defined in 40 CFR 261, LLNL and Site 300 also generate radioactive waste not specifically covered by RCRA. However, for completeness, the Waste Analysis Plan will address all hazardous waste

  14. Lawrence Livermore National Laboratory (LLNL) Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    Heckman, R.A.; Tang, W.R.

    1989-01-01

    This Program Plan document describes the background of the Waste Minimization field at Lawrence Livermore National Laboratory (LLNL) and refers to the significant studies that have impacted legislative efforts at both the federal and state levels. A short history of formal LLNL waste minimization efforts is provided. Also included are general findings from analysis of work to date, with emphasis on source reduction findings. A short summary is provided of current regulations and probable future legislation which may impact waste minimization methodology. The LLNL Waste Minimization Program Plan is designed to be dynamic and flexible so as to meet current regulations, and yet is able to respond to an ever-changing regulatory environment. 19 refs., 12 figs., 8 tabs

  15. Seismic evaluation of the LLNL plutonium facility (Building 332)

    International Nuclear Information System (INIS)

    Hall, W.J.; Sozen, M.A.

    1982-03-01

    The expected performance of the Lawrence Livermore National Laboratory (LLNL) Plutonium Facility (Building 332) subjected to earthquake ground motion has been evaluated. Anticipated behavior of the building, glove boxes, ventilation system and other systems critical for containment of plutonium is described for three severe postulated earthquake excitations. Based upon this evaluation, some damage to the building, glove boxes and ventilation system would be expected but no collapse of any structure is anticipated as a result of the postulated earthquake ground motions

  16. Probabilistic Seismic Hazards Update for LLNL: PSHA Results Report

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, Alfredo [Fugro Consultants, Inc., Houston, TX (United States); Altekruse, Jason [Fugro Consultants, Inc., Houston, TX (United States); Menchawi, Osman El [Fugro Consultants, Inc., Houston, TX (United States)

    2016-03-11

    This report presents the Probabilistic Seismic Hazard Analysis (PSHA) performed for Building 332 at the Lawrence Livermore National Laboratory (LLNL), near Livermore, CA by Fugro Consultants, Inc. (FCL). This report is specific to Building 332 only and not to other portions of the Laboratory. The study performed for the LLNL site includes a comprehensive review of recent information relevant to the LLNL regional tectonic setting and regional seismic sources in the vicinity of the site and development of seismic wave transmission characteristics. The Seismic Source Characterization (SSC), documented in Project Report No. 2259-PR-02 (FCL, 2015a), and Ground Motion Characterization (GMC), documented in Project Report No. 2259-PR-06 (FCL, 2015c) were developed in accordance with ANS/ANSI 2.29-2008 Level 2 PSHA guidelines. The ANS/ANSI 2.29-2008 Level 2 PSHA framework is documented in Project Report No. 2259-PR-05 (FCL, 2016a). The Hazard Input Document (HID) for input into the PSHA developed from the SSC is presented in Project Report No. 2259-PR-04 (FCL, 2016b). The site characterization used as input for development of the idealized site profiles including epistemic uncertainty and aleatory variability is presented in Project Report No. 2259-PR-03 (FCL, 2015b).

  17. GAMA-LLNL Alpine Basin Special Study: Scope of Work

    Energy Technology Data Exchange (ETDEWEB)

    Singleton, M J; Visser, A; Esser, B K; Moran, J E

    2011-12-12

    For this task LLNL will examine the vulnerability of drinking water supplies in foothills and higher elevation areas to climate change impacts on recharge. Recharge locations and vulnerability will be determined through examination of groundwater ages and noble gas recharge temperatures in high elevation basins. LLNL will determine whether short residence times are common in one or more subalpine basins. LLNL will measure groundwater ages, recharge temperatures, hydrogen and oxygen isotopes, major anions and carbon isotope compositions on up to 60 samples from monitoring wells and production wells in these basins. In addition, a small number of carbon isotope analyses will be performed on surface water samples. The deliverable for this task will be a technical report that provides the measured data and an interpretation of the data from one or more subalpine basins. Data interpretation will: (1) Consider climate change impacts to recharge and its impact on water quality; (2) Determine primary recharge locations and their vulnerability to climate change; and (3) Delineate the most vulnerable areas and describe the likely impacts to recharge.

  18. Optical packet switching in HPC : an analysis of applications performance

    NARCIS (Netherlands)

    Meyer, Hugo; Sancho, Jose Carlos; Mrdakovic, Milica; Miao, Wang; Calabretta, Nicola

    2018-01-01

    Optical Packet Switches (OPS) could provide the low-latency transmissions needed in today's large data centers. OPS can deliver lower latency and higher bandwidth than traditional electrical switches. These features are needed for parallel High Performance Computing (HPC) applications. For this

  19. Fire performance of basalt FRP mesh reinforced HPC thin plates

    DEFF Research Database (Denmark)

    Hulin, Thomas; Hodicky, Kamil; Schmidt, Jacob Wittrup

    2013-01-01

    An experimental program was carried out to investigate the influence of basalt FRP (BFRP) reinforcing mesh on the fire behaviour of thin high performance concrete (HPC) plates applied to sandwich elements. Samples with BFRP mesh were compared to samples with no mesh, samples with steel mesh...

  20. Evaluation of mosix-Linux farm performances in GRID environment

    International Nuclear Information System (INIS)

    Barone, F.; Rosa, M.de; Rosa, R.de.; Eleuteri, A.; Esposito, R.; Mastroserio, P.; Milano, L.; Taurino, F.; Tortone, G.

    2001-01-01

    The MOSIX extensions to the Linux Operating System allow the creation of high-performance Linux farms and an excellent integration of the several CPUs of the farm, whose computational power can be further increased and made more effective by networking them within the GRID environment. Following this strategy, the authors started to perform computational tests using two independent farms within the GRID environment. In particular, the authors performed a preliminary evaluation of the distributed computing efficiency with a MOSIX Linux farm in the simulation of gravitational waves data analysis from coalescing binaries. For this task, two different techniques were compared: the classical matched filters technique and one of its possible evolutions, based on a global optimisation technique
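
    A toy version of the matched filter technique named above, assuming white noise and a single known template; real gravitational-wave searches whiten the data and sweep whole template banks, none of which is shown here.

        # Minimal matched filter: correlate data with a known template via FFT
        # and look for the peak. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        n, t0 = 4096, 1500
        template = np.sin(2 * np.pi * 0.05 * np.arange(256)) * np.hanning(256)

        data = rng.normal(0.0, 1.0, n)        # detector noise (white, for simplicity)
        data[t0:t0 + 256] += template         # buried signal

        # Circular cross-correlation = IFFT( FFT(data) * conj(FFT(template)) )
        pad = np.zeros(n)
        pad[:256] = template
        snr = np.fft.ifft(np.fft.fft(data) * np.conj(np.fft.fft(pad))).real
        snr /= np.sqrt((template ** 2).sum())  # normalize to unit noise std

        print("peak at sample", int(np.argmax(snr)), "(true offset:", t0, ")")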

  1. Analysis of Linux kernel as a complex network

    International Nuclear Information System (INIS)

    Gao, Yichao; Zheng, Zheng; Qin, Fangyun

    2014-01-01

    An operating system (OS) acts as an intermediary between software and hardware in computer-based systems. In this paper, we analyze the core of the typical Linux OS, the Linux kernel, as a complex network to investigate its underlying design principles. It is found that the Linux Kernel Network (LKN) is a directed network whose out-degree follows an exponential distribution while the in-degree follows a power-law distribution. The correlation between topology and functions is also explored, by which we find that LKN is a highly modularized network with 12 key communities. Moreover, we investigate the robustness of LKN under random failures and intentional attacks. The result shows that the failure of the large in-degree nodes providing basic services does more damage to the whole system. Our work may shed some light on the design of complex software systems
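
    The following sketch reproduces the flavor of this analysis with networkx on a randomly generated directed graph standing in for the kernel call graph; the actual LKN data is not available here.

        # Degree distributions plus robustness under an "intentional attack"
        # on high in-degree nodes, on a random stand-in graph.
        import networkx as nx

        G = nx.gnp_random_graph(2000, 0.003, seed=1, directed=True)

        in_deg = [d for _, d in G.in_degree()]
        out_deg = [d for _, d in G.out_degree()]
        print("max in-degree:", max(in_deg), " max out-degree:", max(out_deg))

        def giant_component_size(g):
            return max(len(c) for c in nx.weakly_connected_components(g))

        # Attack the nodes the rest of the graph depends on most.
        attack = sorted(G.nodes, key=lambda v: G.in_degree(v), reverse=True)[:50]
        H = G.copy()
        H.remove_nodes_from(attack)
        print("giant component: before", giant_component_size(G),
              "after attack", giant_component_size(H))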

  2. A General Purpose High Performance Linux Installation Infrastructure

    International Nuclear Information System (INIS)

    Wachsmann, Alf

    2002-01-01

    As Linux clusters grow ever more numerous and ever larger, the question arises of how to install them. This paper addresses that question by proposing a solution using only standard software components. The installation infrastructure scales well to a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus it is not designed for cluster installations in particular but is nevertheless highly performant. The proposed infrastructure uses PXE as the network boot component on the nodes, DHCP and TFTP servers to deliver IP addresses and a bootloader to all nodes, and kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks we encountered
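
    A hedged sketch of one small piece of such an infrastructure: generating per-node PXELINUX configuration files that point each machine at its kickstart file. The paths, NFS server address, and kernel arguments are assumptions, not SLAC's actual settings.

        # Generate per-node PXELINUX configs that chain into a kickstart
        # install over NFS. All paths and arguments are illustrative.
        from pathlib import Path

        TFTP_ROOT = Path("/tftpboot/pxelinux.cfg")     # assumed TFTP layout
        ENTRY = """DEFAULT install
        LABEL install
          KERNEL vmlinuz
          APPEND initrd=initrd.img ks=nfs:{server}:/ks/{node}.cfg ksdevice=eth0
        """

        def mac_to_cfg_name(mac):
            # pxelinux looks up per-host configs as "01-" + MAC with dashes
            return "01-" + mac.lower().replace(":", "-")

        def write_node_config(node, mac, nfs_server="10.0.0.1"):
            TFTP_ROOT.mkdir(parents=True, exist_ok=True)
            cfg = TFTP_ROOT / mac_to_cfg_name(mac)
            cfg.write_text(ENTRY.format(server=nfs_server, node=node))

        for i, mac in enumerate(["00:11:22:33:44:55", "00:11:22:33:44:56"]):
            write_node_config(f"node{i:03d}", mac)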

  3. MARIANE: MApReduce Implementation Adapted for HPC Environments

    Energy Technology Data Exchange (ETDEWEB)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan; Ramakrishnan, Lavanya

    2011-07-06

    MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging the inherent functions of distributed file systems, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
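
    In the spirit of the shared-file-system design described above (but far simpler than MARIANE itself), a word-count MapReduce sketch whose intermediate data lives on an assumed globally mounted POSIX directory:

        # Minimal MapReduce-style word count relying only on a shared POSIX
        # file system for intermediates; /shared/* paths are assumptions.
        import collections, glob, os
        from multiprocessing import Pool

        SHARED = "/shared/mr-tmp"            # assumed globally mounted directory

        def map_task(path):
            counts = collections.Counter()
            with open(path) as f:
                for line in f:
                    counts.update(line.split())
            out = os.path.join(SHARED, "map-" + os.path.basename(path))
            with open(out, "w") as f:        # intermediates go to the shared FS,
                for w, c in counts.items():  # no HDFS-style private storage layer
                    f.write(f"{w}\t{c}\n")

        def reduce_all():
            total = collections.Counter()
            for part in glob.glob(os.path.join(SHARED, "map-*")):
                with open(part) as f:
                    for line in f:
                        w, c = line.rsplit("\t", 1)
                        total[w] += int(c)
            return total

        if __name__ == "__main__":
            os.makedirs(SHARED, exist_ok=True)
            with Pool() as p:
                p.map(map_task, glob.glob("/shared/input/*.txt"))
            print(reduce_all().most_common(10))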

  4. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations
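
    For readers unfamiliar with the underlying machinery, this toy sequential discrete-event loop conveys only the event-driven core of such simulators; CODES/ROSS add detailed flit-level torus and dragonfly models and optimistic parallel event scheduling, none of which is shown here.

        # Toy sequential discrete-event simulation of packets hopping around a
        # ring network. All constants are made up for illustration.
        import heapq, random

        N_NODES, LINK_DELAY = 8, 10.0        # made-up hop delay in ns
        events = []                          # heap of (time, seq, packet_id, node)
        random.seed(1)

        def schedule(t, pid, node, _seq=[0]):
            _seq[0] += 1                     # tie-breaker for equal timestamps
            heapq.heappush(events, (t, _seq[0], pid, node))

        for pid in range(4):                 # inject four packets at t=0 on node 0
            schedule(0.0, pid, 0)
        dest = {pid: random.randrange(1, N_NODES) for pid in range(4)}

        while events:
            t, _, pid, node = heapq.heappop(events)
            if node == dest[pid]:
                print(f"packet {pid} delivered to node {node} at t={t:.1f} ns")
            else:
                schedule(t + LINK_DELAY, pid, (node + 1) % N_NODES)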

  5. Servidor Linux para conexiones seguras de una LAN a Internet

    OpenAIRE

    Escartín Vigo, José Antonio

    2005-01-01

    This document describes the implementation of a GNU/Linux server, as well as specifying and resolving the main problems that an administrator encounters when putting a server into operation. The reader will learn how to configure a GNU/Linux server, with a description of the main services used to share files, web pages, mail, and others covered later on. The Webmin configuration tool, detailed in one of the final chapters, is indepe...

  6. Design and Implementation of Linux Access Control Model

    Institute of Scientific and Technical Information of China (English)

    Wei Xiaomeng; Wu Yongbin; Zhuo Jingchuan; Wang Jianyun; Haliqian Mayibula

    2017-01-01

    In this paper, the design and implementation of an access control model for the Linux system are discussed in detail. The design is based on the RBAC model, combined with the inherent characteristics of the Linux system, and support for process and role transition is added. The core idea of the model is that files are divided into different categories, and the access authority for each category is distributed among several roles. Roles are then assigned to users of the system, and a user's role can transition from one to another by running an executable file.
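
    A toy model of the described design, with invented categories, roles, and transition rules; the paper's actual kernel-level implementation is of course not Python.

        # File categories, per-category authority granted to roles, and a role
        # transition triggered by executing a file. Illustrative only.
        CATEGORY_OF = {"/etc/shadow": "system",
                       "/home/alice/notes": "user",
                       "/usr/bin/admin-shell": "admin_tools"}
        ROLE_RIGHTS = {"staff": {"user": {"read", "write"}, "admin_tools": {"exec"}},
                       "admin": {"user": {"read"}, "system": {"read", "write"},
                                 "admin_tools": {"read", "exec"}}}
        TRANSITIONS = {("staff", "/usr/bin/admin-shell"): "admin"}

        class Subject:
            def __init__(self, user, role):
                self.user, self.role = user, role
            def may(self, path, op):
                cat = CATEGORY_OF.get(path, "user")
                return op in ROLE_RIGHTS.get(self.role, {}).get(cat, set())
            def execute(self, path):
                if self.may(path, "exec"):   # role transition rides on the exec
                    self.role = TRANSITIONS.get((self.role, path), self.role)

        s = Subject("alice", "staff")
        print(s.may("/etc/shadow", "read"))          # False: staff lacks 'system'
        s.execute("/usr/bin/admin-shell")            # transition staff -> admin
        print(s.role, s.may("/etc/shadow", "read"))  # admin True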

  7. Using Vega Linux Cluster at Reactor Physics Dept

    International Nuclear Information System (INIS)

    Zefran, B.; Jeraj, R.; Skvarc, J.; Glumac, B.

    1999-01-01

    Experience using a Linux-based cluster for reactor physics calculations is presented in this paper. Special attention is paid to the MCNP code in this environment and to practical guidelines on how to prepare and use the parallel version of the code. Our results of a time comparison study are presented for two sets of inputs. The results are promising, and the speedup factor achieved on the Linux cluster agrees with previous tests on other parallel systems. We also tested tools for parallelization of other programs used at our Dept. (author)

  8. First experiences with large SAN storage and Linux

    International Nuclear Information System (INIS)

    Wezel, Jos van; Marten, Holger; Verstege, Bernhard; Jaeger, Axel

    2004-01-01

    The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing. The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs. This article describes the design, implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes. Presented are some throughput measurements of one of the largest Linux-based parallel storage systems in the world

  9. Teaching Hands-On Linux Host Computer Security

    Science.gov (United States)

    Shumba, Rose

    2006-01-01

    In the summer of 2003, a project to augment and improve the teaching of information assurance courses was started at IUP. Thus far, ten hands-on exercises have been developed. The exercises described in this article, and presented in the appendix, are based on actions required to secure a Linux host. Publicly available resources were used to…

  10. Drowning in PC Management: Could a Linux Solution Save Us?

    Science.gov (United States)

    Peters, Kathleen A.

    2004-01-01

    Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…

  11. Fedora Linux A Complete Guide to Red Hat's Community Distribution

    CERN Document Server

    Tyler, Chris

    2009-01-01

    Whether you are running the stable version of Fedora Core or bleeding-edge Rawhide releases, this book has something for every level of user. The modular, lab-based approach not only shows you how things work--but also explains why--and provides you with the answers you need to get up and running with Fedora Linux.

  12. Linux Adventures on a Laptop. Computers in Small Libraries

    Science.gov (United States)

    Roberts, Gary

    2005-01-01

    This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…

  13. LHC@home online tutorial for Linux users - recording

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    A step-by-step online tutorial for LHC@home by Karolina Bozek. It contains detailed instructions for Linux users on how to join this volunteer computing project. This 5-minute recording is linked from http://lhcathome.web.cern.ch/join-us. Click here to see the commands to copy/paste for installing BOINC and VirtualBox.

  14. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users then can select what

  15. Argonne Natl Lab receives TeraFLOP Cluster Linux NetworX

    CERN Multimedia

    2002-01-01

    " Linux NetworX announced today it has delivered an Evolocity II (E2) Linux cluster to Argonne National Laboratory that is capable of performing more than one trillion calculations per second (1 teraFLOP)" (1/2 page).

  16. Remote Boot of a Diskless Linux Client for Operating System Integrity

    National Research Council Canada - National Science Library

    Allen, Bruce

    2002-01-01

    .... The diskless Linux client is organized to provide read-write files over NFS at home, read-only files over NFS for accessing bulky immutable utilities, and some volatile RAM disk files to allow the Linux Kernel to boot...

  17. Description and application of the AERIN Code at LLNL

    International Nuclear Information System (INIS)

    King, W.C.

    1986-01-01

    The AERIN code was written at the Lawrence Livermore National Laboratory in 1976 to compute the organ burdens and absorbed dose resulting from a chronic or acute inhalation of transuranic isotopes. The code was revised in 1982 to reflect the concepts of ICRP-30. This paper describes the AERIN code and how it has been used at LLNL to study more than 80 cases of internal deposition and obtain estimates of internal dose. The computed committed organ doses are compared with ICRP-30 values. The benefits of using the code are described. 3 refs., 3 figs., 6 tabs

  18. Final report on the LLNL compact torus acceleration project

    International Nuclear Information System (INIS)

    Eddleman, J.; Hammer, J.; Hartman, C.; McLean, H.; Molvik, A.

    1995-01-01

    In this report, we summarize recent work at LLNL on the compact torus (CT) acceleration project. The CT accelerator is a novel technique for projecting plasmas to high velocities and reaching high energy density states. The accelerator exploits magnetic confinement in the CT to stably transport plasma over large distances and to directed kinetic energies large in comparison with the CT internal and magnetic energy. Applications range from heating and fueling magnetic fusion devices, generation of intense pulses of x-rays or neutrons for weapons effects and high energy-density fusion concepts

  19. Easy Access to HPC Resources through the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-11-01

    The computing environment at the King Abdullah University of Science and Technology (KAUST) is growing in size and complexity. KAUST hosts the tenth fastest supercomputer in the world (Shaheen II) and several HPC clusters. Researchers can be inhibited by the complexity, as they need to learn new languages and execute many tasks in order to access the HPC clusters and the supercomputer. In order to simplify the access, we have developed an interface between the applications and the clusters and supercomputer that automates the transfer of input data and job submission and also the retrieval of results to the researcher’s local workstation. The innovation is that the user now submits his jobs from within the application GUI on his workstation, and does not have to directly log into the clusters or supercomputer anymore. This article details the solution and its benefits to the researchers.
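
    The kind of automation such a GUI layer performs might look like the following paramiko sketch; the hostname, paths, and the SLURM-style sbatch command are assumptions, as the article does not name the scheduler.

        # Push input, submit a job, pull results back over SSH/SFTP.
        import os
        import paramiko

        def submit_and_fetch(host, user, key, local_in, remote_dir, local_out):
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=user,
                           key_filename=os.path.expanduser(key))
            sftp = client.open_sftp()
            sftp.put(local_in, f"{remote_dir}/input.dat")       # stage input
            _, out, _ = client.exec_command(f"cd {remote_dir} && sbatch job.sh")
            print("scheduler said:", out.read().decode().strip())
            # ...poll for completion, then retrieve results:
            sftp.get(f"{remote_dir}/results.dat", local_out)
            sftp.close()
            client.close()

        submit_and_fetch("hpc.example.edu", "researcher", "~/.ssh/id_rsa",
                         "input.dat", "/scratch/researcher/run1", "results.dat")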

  20. Feasibility study of BES data processing and physics analysis on a PC/Linux platform

    International Nuclear Information System (INIS)

    Rong Gang; He Kanglin; Zhao Jiawei; Heng Yuekun; Zhang Chun

    1999-01-01

    The authors report a feasibility study of off-line BES data processing (data reconstruction and detector simulation) on a PC/Linux platform and an application of the PC/Linux system in D/Ds physics analysis. The authors compared the results obtained from PC/Linux with those from an HP workstation. It shows that the PC/Linux platform can perform BES offline data analysis as well as a UNIX workstation does, while being more powerful and economical

  1. Behavior of HPC with Fly Ash after Elevated Temperature

    OpenAIRE

    Shang, Huai-Shuai; Yi, Ting-Hua

    2013-01-01

    For use in fire resistance calculations, the relevant thermal properties of high-performance concrete (HPC) with fly ash were determined through an experimental study. These properties included compressive strength, cubic compressive strength, cleavage strength, flexural strength, and the ultrasonic velocity at various temperatures (20, 100, 200, 300, 400 and 500°C) for high-performance concrete. The effect of temperature on compressive strength, cubic compressive strength, cleavage strength,...

  2. Users and Programmers Guide for HPC Platforms in CIEMAT

    International Nuclear Information System (INIS)

    Munoz Roldan, A.

    2003-01-01

    This Technical Report presents a description of the High Performance Computing platforms available to researchers in CIEMAT, dedicated mainly to scientific computing. It targets users and programmers and aims to help in the process of developing new code and porting code across platforms. A brief review is also presented of the historical evolution of the field of HPC, i.e., the programming paradigms and underlying architectures. (Author) 32 refs.

  3. Trends in Data Locality Abstractions for HPC Systems

    KAUST Repository

    Unat, Didem; Dubey, Anshu; Hoefler, Torsten; Shalf, John; Abraham, Mark; Bianco, Mauro; Chamberlain, Bradford L.; Cledat, Romain; Edwards, H. Carter; Finkel, Hal; Fuerlinger, Karl; Hannig, Frank; Jeannot, Emmanuel; Kamil, Amir; Keasler, Jeff; Kelly, Paul H J; Leung, Vitus; Ltaief, Hatem; Maruyama, Naoya; Newburn, Chris J.; Pericas, Miquel

    2017-01-01

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. This paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.

  4. Trends in Data Locality Abstractions for HPC Systems

    KAUST Repository

    Unat, Didem

    2017-05-12

    The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. This paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.

  5. A Novel Approach to Semantic and Coreference Annotation at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Firpo, M

    2005-02-04

    A case is made for the importance of high-quality semantic and coreference annotation. The challenges of providing such annotation are described. Asperger's Syndrome is introduced, and connections are drawn between the needs of text annotation and the abilities of persons with Asperger's Syndrome to meet those needs. Finally, a pilot program is recommended wherein semantic annotation is performed by people with Asperger's Syndrome. The primary points embodied in this paper are as follows: (1) Document annotation is essential to the Natural Language Processing (NLP) projects at Lawrence Livermore National Laboratory (LLNL); (2) LLNL does not currently have a system in place to meet its need for text annotation; (3) Text annotation is challenging for a variety of reasons, many related to its very rote nature; (4) Persons with Asperger's Syndrome are particularly skilled at rote verbal tasks, and behavioral experts agree that they would excel at text annotation; and (6) A pilot study is recommended in which two to three people with Asperger's Syndrome annotate documents, and then the quality and throughput of their work is evaluated relative to that of their neuro-typical peers.

  6. LLNL (Lawrence Livermore National Laboratory) research on cold fusion

    Energy Technology Data Exchange (ETDEWEB)

    Thomassen, K I; Holzrichter, J F [eds.]

    1989-09-14

    With the appearance of reports on "Cold Fusion," scientists at the Lawrence Livermore National Laboratory (LLNL) began a series of increasingly sophisticated experiments and calculations to explain these phenomena. These experiments can be categorized as follows: (a) simple experiments to replicate the Utah results, (b) more sophisticated experiments to place lower bounds on the generation of heat and production of nuclear products, (c) a collaboration with Texas A&M University to analyze electrodes and electrolytes for fusion by-products in a cell producing 10% excess heat (we found no by-products), and (d) attempts to replicate the Frascati experiment that first found neutron bursts when high-pressure deuterium gas in a cylinder with Ti chips was temperature-cycled. We failed in categories (a) and (b) to replicate either the Pons/Fleischmann or the Jones phenomena. We have seen phenomena similar to the Frascati results, (d) but these low-level burst signals may not be coming from neutrons generated in the Ti chips. Summaries of our experiments are described in Section II, as is a theoretical effort based on cosmic ray muons to describe low-level neutron production. Details of the experimental groups' work are contained in the six appendices. At LLNL, independent teams were spontaneously formed in response to the early announcements on cold fusion. This report's format follows this organization.

  7. Joint FAM/Line Management Assessment Report on LLNL Machine Guarding Safety Program

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, J. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-07-19

    The LLNL Safety Program for Machine Guarding is implemented to comply with requirements in the ES&H Manual Document 11.2, "Hazards-General and Miscellaneous," Section 13 Machine Guarding (Rev 18, issued Dec. 15, 2015). The primary goal of this LLNL Safety Program is to ensure that LLNL operations involving machine guarding are managed so that workers, equipment and government property are adequately protected. This means that all such operations are planned and approved using the Integrated Safety Management System to provide the most cost effective and safest means available to support the LLNL mission.

  8. O Linux e a perspectiva da dádiva

    Directory of Open Access Journals (Sweden)

    Renata Apgaua

    2004-06-01

    Full Text Available The aim of this work is to analyze the emergence and consolidation of the Linux operating system in a context marked by the hegemony of commercial operating systems, with Windows/Microsoft as the paradigmatic example. The creator of Linux chose to open its source code and offer it free of charge on the Internet. Since then, people from various parts of the world have participated in its development. This study therefore seeks to analyze the characteristics of this space of sociability, where exchanges point to a logic other than that of the market. The proposal of understanding the social ties of the Linux universe from the perspective of the gift leads to another discussion that will also deserve attention in this study, namely: the contemporary relevance of the gift. Re-readings of Mauss, by Godbout and Caillé, indicate that the gift, in its "system of transformations", is present in contemporary societies, but not only in the social interstices, as Mauss himself asserted.

  9. Installing, Running and Maintaining Large Linux Clusters at CERN

    CERN Document Server

    Bahyl, V; van Eldik, Jan; Fuchs, Ulrich; Kleinwort, Thorsten; Murth, Martin; Smith, Tim; Bahyl, Vladimir; Chardi, Benjamin; Eldik, Jan van; Fuchs, Ulrich; Kleinwort, Thorsten; Murth, Martin; Smith, Tim

    2003-01-01

    Having built up Linux clusters to more than 1000 nodes over the past five years, we already have practical experience confronting some of the LHC scale computing challenges: scalability, automation, hardware diversity, security, and rolling OS upgrades. This paper describes the tools and processes we have implemented, working in close collaboration with the EDG project [1], especially with the WP4 subtask, to improve the manageability of our clusters, in particular in the areas of system installation, configuration, and monitoring. In addition to the purely technical issues, providing shared interactive and batch services which can adapt to meet the diverse and changing requirements of our users is a significant challenge. We describe the developments and tuning that we have introduced on our LSF based systems to maximise both responsiveness to users and overall system utilisation. Finally, this paper will describe the problems we are facing in enlarging our heterogeneous Linux clusters, the progress we have ...

  10. Operational Numerical Weather Prediction systems based on Linux cluster architectures

    International Nuclear Information System (INIS)

    Pasqui, M.; Baldi, M.; Gozzini, B.; Maracchi, G.; Giuliani, G.; Montagnani, S.

    2005-01-01

    Progress in weather forecasting and atmospheric science has always been closely linked to improvements in computing technology. In order to have more accurate weather forecasts and climate predictions, more powerful computing resources are needed, in addition to more complex and better-performing numerical models. To meet such large computing demands, powerful workstations or massively parallel systems have been used. In the last few years, parallel architectures based on the Linux operating system have been introduced and become popular, representing true high-performance, low-cost systems. In this work the Linux cluster experience gained at the Laboratory for Meteorology and Environmental Analysis (LaMMA-CNR-IBIMET) is described, and tips and performance analyses are presented

  11. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    Science.gov (United States)

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  12. Channel Bonding in Linux Ethernet Environment using Regular Switching Hub

    Directory of Open Access Journals (Sweden)

    Chih-wen Hsueh

    2004-06-01

    Full Text Available Bandwidth plays an important role in the quality of service of most network systems. Many technologies have been developed to increase host bandwidth in a LAN environment. Most of them need special hardware support, such as a switching hub that supports the IEEE Link Aggregation standard. In this paper, we propose a Linux solution to increase the bandwidth between hosts with multiple network adapters connected to a regular switching hub. The approach is implemented as two Linux kernel modules in a LAN environment, without modification to the hardware or operating systems of the host machines. Packets are dispatched to the bonding network adapters for transmission. The proposed approach is backward compatible, flexible, and transparent to users, and only one IP address is needed for multiple bonded network adapters. Evaluation experiments on TCP and UDP transmission show bandwidth gains proportional to the number of network adapters. It is suitable for large-scale LAN systems with high bandwidth requirements, such as clustering systems.

  13. ISAC EPICS on Linux: the march of the penguins

    International Nuclear Information System (INIS)

    Richards, J.; Nussbaumer, R.; Rapaz, S.; Waters, G.

    2012-01-01

    The DC linear accelerators of the ISAC radioactive beam facility at TRIUMF do not impose rigorous timing constraints on the control system. Therefore a real-time operating system is not essential for device control. The ISAC Control System is completing a move to the use of the Linux operating system for hosting all EPICS IOCs. The IOC platforms include GE-Fanuc VME-based CPUs for control of most optics and diagnostics, rack-mounted servers for supervising PLCs, small desktop PCs for GPIB and RS232 instruments, as well as embedded ARM processors controlling CAN-bus devices that provide a suitcase-sized control system. This article focuses on the experience of creating a customized Linux distribution for front-end IOC deployment. Rationale, a road-map of the process, and efficiency advantages in personnel training and system management realized by using a single OS will be discussed. (authors)

  14. A embedded Linux system based on PowerPC

    International Nuclear Information System (INIS)

    Ye Mei; Zhao Jingwei; Chu Yuanping

    2006-01-01

    The authors introduce an embedded Linux system based on PowerPC, as well as the basic method of building such a system. The goal of the system is to provide a test system for VMEbus devices. It can also be used to set up small data acquisition and control systems. Two types of compiler are provided by the development system, according to the features of the system and the PowerPC. At the beginning of the article some typical embedded operating systems are introduced and the features of the different systems are compared. Then the method of building an embedded Linux system, as well as the key techniques involved, is discussed in detail. Finally a successful read-write example is given, based on the test system. (authors)

  15. LightNVM: The Linux Open-Channel SSD Subsystem

    DEFF Research Database (Denmark)

    Bjørling, Matias; Gonzalez, Javier; Bonnet, Philippe

    2017-01-01

    resource utilization. We propose that SSD management trade-offs should be handled through Open-Channel SSDs, a new class of SSDs that give hosts control over their internals. We present our experience building LightNVM, the Linux Open-Channel SSD subsystem. We introduce a new Physical Page Address I...... to limit read latency variability and that it can be customized to achieve predictable I/O latencies....

  16. Diversifying the Department of Defense Network Enterprise with Linux

    Science.gov (United States)

    2010-03-01

    protection of DoD infrastructure. In the competitive marketplace, strategy is defined as a firm's theory on how it gains high levels of performance... practice of discontinuing support to legacy systems. Microsoft also needs to convey it was in the user's best interest to upgrade the operating... stockholders, Microsoft acknowledged recent notable competitors in the marketplace threatening their long-time monopolistic enterprise. Linux (a popular

  17. Developing and Benchmarking Native Linux Applications on Android

    Science.gov (United States)

    Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin

    Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open source platform Android, presented by the Open Handset Alliance (OHA), whose members include Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, although third parties are intended to develop only Java applications at the moment.

  18. A camac data acquisition system based on PC-Linux

    International Nuclear Information System (INIS)

    Ribas, R.V.

    2002-01-01

    A multi-parametric data acquisition system for nuclear physics experiments using CAMAC instrumentation on a personal computer with the Linux operating system is described. The system is very reliable and inexpensive, and is capable of handling event rates up to 4-6 k events/s. In the present version, the maximum number of parameters to be acquired is limited only by the number of CAMAC modules that can be fitted in one CAMAC crate

  19. The Fifth Workshop on HPC Best Practices: File Systems and Archives

    Energy Technology Data Exchange (ETDEWEB)

    Hick, Jason; Hules, John; Uselton, Andrew

    2011-11-30

    The workshop on High Performance Computing (HPC) Best Practices on File Systems and Archives was the fifth in a series sponsored jointly by the Department Of Energy (DOE) Office of Science and DOE National Nuclear Security Administration. The workshop gathered technical and management experts for operations of HPC file systems and archives from around the world. Attendees identified and discussed best practices in use at their facilities, and documented findings for the DOE and HPC community in this report.

  20. The performance analysis of linux networking - packet receiving

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and end systems should be able to provide the capabilities to support high bandwidth, sustained, end-to-end data transmission. Recent trends in technology are showing that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, instead of the network, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process. Key factors that affect Linux systems network performance are analyzed.
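
    As a rough illustration of why protocol-processing speed matters, this toy single-queue simulation (not the authors' mathematical model) shows packets dropping at a NIC ring buffer once arrivals outpace the rate at which the kernel drains it; all rates and sizes are made up.

        # Packets drop when the ring buffer is full and service lags arrivals.
        import random

        random.seed(7)
        RING_SIZE, SIM_STEPS = 256, 100_000
        ARRIVAL_P, SERVICE_P = 0.50, 0.45    # arrivals slightly outpace service

        queue = dropped = received = 0
        for _ in range(SIM_STEPS):
            if random.random() < ARRIVAL_P:  # NIC DMA-writes a packet descriptor
                if queue < RING_SIZE:
                    queue += 1
                else:
                    dropped += 1             # ring full: packet silently lost
            if queue and random.random() < SERVICE_P:
                queue -= 1                   # softirq moves it up the stack
                received += 1

        print(f"received={received} dropped={dropped} "
              f"drop rate={dropped / (received + dropped):.1%}")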

  1. OMICRON, LLNL ENDL Charged Particle Data Library Processing

    International Nuclear Information System (INIS)

    Mengoni, A.; Panini, G.C.

    2002-01-01

    1 - Description of program or function: The program has been designed to read the Evaluated Charged Particle Library (ECPL) of the LLNL Evaluated Nuclear Data Library (ENDL) and generate output in various forms: interpreted listing, ENDF format and graphs. 2 - Method of solution: A file containing ECPL in card image transmittal format is scanned to retrieve the requested reactions from the requested materials; in addition selections can be made by data type or incident particle. 3 - Restrictions on the complexity of the problem: The Reaction Property Designator I determines the type of data in the ENDL library (e.g. cross sections, angular distributions, Maxwellian averages, etc.); the program does not take into account the data for I=3,4 (energy-angle-distributions) since there are no data in the current ECPL version

  2. Release isentrope measurements with the LLNL electric gun

    Energy Technology Data Exchange (ETDEWEB)

    Gathers, G.R.; Osher, J.E.; Chau, H.H.; Weingart, R.C.; Lee, C.G.; Diaz, E.

    1987-06-01

    The liquid-vapor coexistence boundary is not well known for most metals because the extreme conditions near the critical point create severe experimental difficulties. The isentropes passing through the liquid-vapor region typically begin from rather large pressures on the Hugoniot. We are attempting to use the high velocities achievable with the Lawrence Livermore National Laboratory (LLNL) electric gun to obtain these extreme states in aluminum and measure the release isentropes by releasing into a series of calibrated standards with known Hugoniots. To achieve the large pressure drops needed to explore the liquid-vapor region, we use argon gas, for which Hugoniots have been calculated using the ACTEX code, as one of the release materials.

  3. Results of LLNL investigation of NYCT data sets

    International Nuclear Information System (INIS)

    Sale, K; Harrison, M; Guo, M; Groza, M

    2007-01-01

    Upon examination we have concluded that none of the alarms indicate the presence of a real threat. A brief history and results from our examination of the NYCT ASP occupancy data sets dated from 2007-05-14 19:11:07 to 2007-06-20 15:46:15 are presented in this letter report. When the ASP data collection campaign at NYCT was completed, rather than being shut down, the Canberra ASP annunciator box was unplugged leaving the data acquisition system running. By the time it was discovered that the ASP was still acquiring data about 15,000 occupancies had been recorded. Among these were about 500 alarms (classified by the ASP analysis system as either Threat Alarms or Suspect Alarms). At your request, these alarms have been investigated. Our conclusion is that none of the alarm data sets indicate the presence of a real threat (within statistics). The data sets (ICD1 and ICD2 files with concurrent JPEG pictures) were delivered to LLNL on a removable hard drive labeled FOUO. The contents of the data disk amounted to 53.39 GB of data requiring over two days for the standard LLNL virus checking software to scan before work could really get started. Our first step was to walk through the directory structure of the disk and create a database of occupancies. For each occupancy, the database was populated with the occupancy date and time, occupancy number, file path to the ICD1 data and the alarm ('No Alarm', 'Suspect Alarm' or 'Threat Alarm') from the ICD2 file along with some other incidental data. In an attempt to get a global understanding of what was going on, we investigated the occupancy information. The occupancy date/time and alarm type were binned into one-hour counts. These data are shown in Figures 1 and 2
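
    The one-hour binning described above can be illustrated with a small pandas sketch; the column names and sample rows are invented.

        # Bin occupancy timestamps and alarm labels into one-hour counts.
        import pandas as pd

        occ = pd.DataFrame({
            "time":  pd.to_datetime(["2007-05-14 19:11:07", "2007-05-14 19:40:02",
                                     "2007-05-14 20:05:33", "2007-05-14 20:59:10"]),
            "alarm": ["No Alarm", "Suspect Alarm", "No Alarm", "Threat Alarm"],
        })

        hourly = (occ.groupby([pd.Grouper(key="time", freq="1H"), "alarm"])
                     .size().unstack(fill_value=0))
        print(hourly)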

  4. Continuous Security and Configuration Monitoring of HPC Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Lomeli, H. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bertsch, A. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fox, D. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-08

    Continuous security and configuration monitoring of information systems has been a time-consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours, rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC clusters. In conjunction with other configuration management systems, the reporting tool is intended to provide continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool, the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking
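
    A minimal sketch of a per-node collection agent in this spirit, emitting key=value pairs that an indexer such as Splunk could consume; the specific checks and commands are illustrative assumptions, not the actual LLNL agent.

        # Gather a few compliance facts and print them as one parseable line.
        import platform, socket, subprocess, time

        def check(cmd):
            try:
                return subprocess.run(cmd, shell=True, capture_output=True,
                                      text=True, timeout=10).stdout.strip()
            except subprocess.SubprocessError:
                return "ERROR"

        facts = {
            "host": socket.gethostname(),
            "kernel": platform.release(),
            "time": int(time.time()),
            "selinux": check("getenforce"),
            "pending_updates": check("yum -q check-update | wc -l"),
        }
        # One line per report, trivially indexable as key=value pairs:
        print(" ".join(f"{k}={v}" for k, v in facts.items()))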

  5. ENHANCING PERFORMANCE OF AN HPC CLUSTER BY ADOPTING NONDEDICATED NODES

    OpenAIRE

    Pil Seong Park

    2015-01-01

    Personal-sized HPC clusters are widely used in many small labs because they are cost-effective and easy to build. Instead of adding costly new nodes to old clusters, we may try to make use of some servers’ idle times by including them, working independently, on the same LAN, especially during the night. However, such extension across a firewall raises not only security problems with NFS but also a load-balancing problem caused by heterogeneity. In this paper, we propose a meth...

  6. Linux real-time framework for fusion devices

    Energy Technology Data Exchange (ETDEWEB)

    Neto, Andre [Associacao Euratom-IST, Instituto de Plasmas e Fusao Nuclear, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: andre.neto@cfn.ist.utl.pt; Sartori, Filippo; Piccolo, Fabio [Euratom-UKAEA, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Barbalace, Antonio [Euratom-ENEA Association, Consorzio RFX, 35127 Padova (Italy); Vitelli, Riccardo [Dipartimento di Informatica, Sistemi e Produzione, Universita di Roma, Tor Vergata, Via del Politecnico 1-00133, Roma (Italy); Fernandes, Horacio [Associacao Euratom-IST, Instituto de Plasmas e Fusao Nuclear, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)

    2009-06-15

    A new framework for the development and execution of real-time codes is currently being developed and commissioned at JET. The foundations of the system are Linux, the Real Time Application Interface (RTAI) and a wise exploitation of the new i386 multi-core processor technology. The driving motivation was the need to find a real-time operating system for the i386 platform able to satisfy JET Vertical Stabilisation Enhancement project requirements: 50 μs cycle time. Even if the initial choice was the VxWorks operating system, it was decided to explore an open source alternative, mostly because of the costs involved in the commercial product. The work started with the definition of a precise set of requirements and milestones to achieve: Linux distribution and kernel versions to be used for the real-time operating system; complete characterization of the Linux/RTAI real-time capabilities; exploitation of the multi-core technology; implementation of all the required and missing features; commissioning of the system. Latency and jitter measurements were compared for Linux and RTAI in both user and kernel-space. The best results were attained using the RTAI kernel solution where the time to reschedule a real-time task after an external interrupt is 2.35 ± 0.35 μs. In order to run the real-time codes in the kernel-space, a solution to provide user-space functionalities to the kernel modules had to be designed. This novel work provided the most common functions from the standard C library and transparent interaction with files and sockets to the kernel real-time modules. Kernel C++ support was also tested, further developed and integrated in the framework. The work has produced very convincing results so far: complete isolation of the processors assigned to real-time from the Linux non real-time activities, high level of stability over several days of benchmarking operations and values well below 3 μs for task rescheduling after external interrupt. From
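    For illustration only, here is a user-space sketch of the kind of latency/jitter measurement the abstract describes, done with ordinary Python timers rather than RTAI; the numbers it produces reflect a stock kernel and a timer-driven deadline, not the kernel-space, interrupt-driven 2.35 μs figure.

```python
# Sketch: measure wake-up latency/jitter of a periodic task in user space.
# This approximates the benchmarking methodology; a real RTAI test runs
# in kernel space and is triggered by an external interrupt.
import time, statistics

PERIOD_NS = 1_000_000          # 1 ms nominal cycle
lat = []
next_deadline = time.monotonic_ns() + PERIOD_NS
for _ in range(10_000):
    while time.monotonic_ns() < next_deadline:
        pass                   # busy-wait to avoid scheduler sleep granularity
    lat.append(time.monotonic_ns() - next_deadline)
    next_deadline += PERIOD_NS

print(f"mean {statistics.mean(lat)/1e3:.2f} us, "
      f"stdev {statistics.pstdev(lat)/1e3:.2f} us, max {max(lat)/1e3:.2f} us")
```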

  7. Linux real-time framework for fusion devices

    International Nuclear Information System (INIS)

    Neto, Andre; Sartori, Filippo; Piccolo, Fabio; Barbalace, Antonio; Vitelli, Riccardo; Fernandes, Horacio

    2009-01-01

    A new framework for the development and execution of real-time codes is currently being developed and commissioned at JET. The foundations of the system are Linux, the Real Time Application Interface (RTAI) and a wise exploitation of the new i386 multi-core processor technology. The driving motivation was the need to find a real-time operating system for the i386 platform able to satisfy JET Vertical Stabilisation Enhancement project requirements: 50 μs cycle time. Even if the initial choice was the VxWorks operating system, it was decided to explore an open source alternative, mostly because of the costs involved in the commercial product. The work started with the definition of a precise set of requirements and milestones to achieve: Linux distribution and kernel versions to be used for the real-time operating system; complete characterization of the Linux/RTAI real-time capabilities; exploitation of the multi-core technology; implementation of all the required and missing features; commissioning of the system. Latency and jitter measurements were compared for Linux and RTAI in both user and kernel-space. The best results were attained using the RTAI kernel solution where the time to reschedule a real-time task after an external interrupt is 2.35 ± 0.35 μs. In order to run the real-time codes in the kernel-space, a solution to provide user-space functionalities to the kernel modules had to be designed. This novel work provided the most common functions from the standard C library and transparent interaction with files and sockets to the kernel real-time modules. Kernel C++ support was also tested, further developed and integrated in the framework. The work has produced very convincing results so far: complete isolation of the processors assigned to real-time from the Linux non real-time activities, high level of stability over several days of benchmarking operations and values well below 3 μs for task rescheduling after external interrupt. From being the

  8. Perbandingan proxy pada linux dan windows untuk mempercepat browsing website

    Directory of Open Access Journals (Sweden)

    Dafwen Toresa

    2017-05-01

    Full Text Available At this time many organizations, whether educational, governmental, or private companies, try to limit their users' access to the internet because the available bandwidth begins to feel slow when many users browse the internet. Speeding up browsing access is a major concern, and proxy server technology is one way to address it. Deploying a proxy server requires choosing a server operating system, and it was not yet known on which operating system the tools perform best. It was therefore necessary to analyze proxy server performance on two different operating systems: Linux with the Squid tool and Windows with the Winroute tool. This study compares browsing speed from the client computers. The browser used on the client computers was Mozilla Firefox. The tests used two client computers, each performing five test runs of browsing the target web sites through the proxy server. From the tests performed, it was concluded that a proxy server on the Linux operating system with Squid delivers faster browsing, using the same web browser on different client computers, than a proxy server on the Windows operating system with Winroute. Keywords: Proxy, Bandwidth, Browsing, Squid, Winroute
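    A minimal sketch of the kind of client-side timing such a comparison needs, assuming hypothetical proxy addresses and test URLs; the study itself measured browsing in Mozilla Firefox rather than scripted requests.

```python
# Sketch: time page fetches through two proxies. Hosts, ports, and URLs
# are placeholders; the original study measured browsing in Firefox.
import time
import requests

PROXIES = {
    "linux-squid":      {"http": "http://192.0.2.10:3128"},
    "windows-winroute": {"http": "http://192.0.2.20:3128"},
}
URLS = ["http://example.com/", "http://example.org/"]

for name, proxy in PROXIES.items():
    times = []
    for url in URLS:
        for _ in range(5):               # five trials per URL, as in the study
            t0 = time.perf_counter()
            requests.get(url, proxies=proxy, timeout=30)
            times.append(time.perf_counter() - t0)
    print(f"{name}: mean fetch {sum(times)/len(times):.3f} s")
```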

  9. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2015-06-01

    examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully ... memory images and malware, this new series of reports will be directed at those who must analyse Linux malware-infected memory images. The skills ...

  10. Evaluation of LLNL BSL-3 Maximum Credible Event Potential Consequence to the General Population and Surrounding Environment

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-08-16

    The purpose of this evaluation is to establish reproducibility of the analysis and consequence results to the general population and surrounding environment in the LLNL Biosafety Level 3 Facility Environmental Assessment (LLNL 2008).

  11. Climate simulations and services on HPC, Cloud and Grid infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But, although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund—ERDF and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  12. OCCAM: a flexible, multi-purpose and extendable HPC cluster

    Science.gov (United States)

    Aldinucci, M.; Bagnasco, S.; Lusso, S.; Pasteris, P.; Rabellino, S.; Vallero, S.

    2017-10-01

    The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It is aimed at providing a flexible, reconfigurable and extendable infrastructure to cater to a wide range of different scientific computing use cases, including ones from solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to Cloud Computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affect methods and means to allocate, manage, optimize, bill, monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, etc. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance by some synthetic benchmark tools and a few realistic use-case tests.

  13. Simplifying the Development, Use and Sustainability of HPC Software

    Directory of Open Access Journals (Sweden)

    Jeremy Cohen

    2014-07-01

    Full Text Available Developing software to undertake complex, compute-intensive scientific processes requires a challenging combination of both specialist domain knowledge and software development skills to convert this knowledge into efficient code. As computational platforms become increasingly heterogeneous and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud computing become more widely accepted for high-performance computing (HPC), scientists require more support from computer scientists and resource providers to develop efficient code that offers long-term sustainability and makes optimal use of the resources available to them. As part of the libhpc stage 1 and 2 projects we are developing a framework to provide a richer means of job specification and efficient execution of complex scientific software on heterogeneous infrastructure. In this updated version of our submission to the WSSSPE13 workshop at SuperComputing 2013 we set out our approach to simplifying access to HPC applications and resources for end-users through the use of flexible and interchangeable software components and associated high-level functional-style operations. We believe this approach can support sustainability of scientific software and help to widen access to it.

  14. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  15. Self-service for software development projects and HPC activities

    International Nuclear Information System (INIS)

    Husejko, M; Høimyr, N; Gonzalez, A; Koloventzos, G; Asbury, D; Trzcinska, A; Agtzidis, I; Botrel, G; Otto, J

    2014-01-01

    This contribution describes how CERN has implemented several essential tools for agile software development processes, ranging from version control (Git) to issue tracking (Jira) and documentation (Wikis). Running such services in a large organisation like CERN requires many administrative actions both by users and service providers, such as creating software projects, managing access rights, users and groups, and performing tool-specific customisation. Dealing with these requests manually would be a time-consuming task. Another area of our CERN computing services that has required dedicated manual support has been clusters for specific user communities with special needs. Our aim is to move all our services to a layered approach, with server infrastructure running on the internal cloud computing infrastructure at CERN. This contribution illustrates how we plan to optimise the management of our of services by means of an end-user facing platform acting as a portal into all the related services for software projects, inspired by popular portals for open-source developments such as Sourceforge, GitHub and others. Furthermore, the contribution will discuss recent activities with tests and evaluations of High Performance Computing (HPC) applications on different hardware and software stacks, and plans to offer a dynamically scalable HPC service at CERN, based on affordable hardware.

  16. I/O load balancing for big data HPC applications

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Arnab K. [Virginia Polytechnic Institute and State University; Goyal, Arpit [Virginia Polytechnic Institute and State University; Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Butt, Ali R. [Virginia Tech, Blacksburg, VA; Brim, Michael J. [ORNL; Srinivasa, Sangeetha B. [Virginia Polytechnic Institute and State University

    2018-01-01

    High Performance Computing (HPC) big data problems require efficient distributed storage systems. However, at scale, such storage systems often experience load imbalance and resource contention due to two factors: the bursty nature of scientific application I/O; and the complex I/O path that is without centralized arbitration and control. For example, the extant Lustre parallel file system, which supports many HPC centers, comprises numerous components connected via custom network topologies, and serves varying demands of a large number of users and applications. Consequently, some storage servers can be more loaded than others, which creates bottlenecks and reduces overall application I/O performance. Existing solutions typically focus on per application load balancing, and thus are not as effective given their lack of a global view of the system. In this paper, we propose a data-driven approach to load balance the I/O servers at scale, targeted at Lustre deployments. To this end, we design a global mapper on the Lustre Metadata Server, which gathers runtime statistics from key storage components on the I/O path, and applies Markov chain modeling and a minimum-cost maximum-flow algorithm to decide where data should be placed. Evaluation using a realistic system simulator and a real setup shows that our approach yields better load balancing, which in turn can improve end-to-end performance.
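    A toy sketch of the flow-based placement idea, assuming made-up server loads and networkx's min-cost max-flow solver; the paper's actual model, built from Lustre runtime statistics and Markov-chain predictions, is far richer than this.

```python
# Sketch: choose where to place new data so that less-loaded I/O servers
# receive more of it, cast as a min-cost max-flow problem with networkx.
# Loads, capacities, and costs are invented for illustration.
import networkx as nx

loads = {"ost0": 80, "ost1": 20, "ost2": 50}   # percent utilization (assumed)
NEW_UNITS = 15                                  # placement units to distribute

G = nx.DiGraph()
G.add_edge("src", "hub", capacity=NEW_UNITS, weight=0)
for ost, load in loads.items():
    # Cheaper edges toward lightly loaded servers; capacity caps each server.
    G.add_edge("hub", ost, capacity=10, weight=load)
    G.add_edge(ost, "sink", capacity=10, weight=0)

flow = nx.max_flow_min_cost(G, "src", "sink")
for ost in loads:
    print(ost, "gets", flow["hub"][ost], "placement units")
# Expected: ost1 (lightest load) is filled first, then ost2; ost0 gets none.
```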

  17. Summary Statistics for Homemade 'Play Dough' -- Data Acquired at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, J S; Morales, K E; Whipple, R E; Huber, R D; Martz, A; Brown, W D; Smith, J A; Schneberk, D J; Martz, Jr., H E; White, III, W T

    2010-03-11

    Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a homemade Play Dough™-like material, designated as PDA. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2700 LMHU_D at 100kVp to a low of about 1200 LMHU_D at 300kVp. The standard deviation of each measurement is around 10% to 15% of the mean. The entropy covers the range from 6.0 to 7.4. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the detailed chemical composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 10. LLNL prepared about 50mL of the homemade 'Play Dough' in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference
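    A small sketch of first-order statistics via a Gaussian KDE, with synthetic values standing in for the central-core LAC voxels (scipy's gaussian_kde; the entropy here is a simple Monte Carlo estimate over the samples themselves):

```python
# Sketch: mean, standard deviation, and differential entropy of a voxel
# population using a Gaussian KDE. The data are synthetic stand-ins for
# the CT voxel values described in the report.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
lac = rng.normal(2700, 300, 5_000)       # synthetic LAC values

kde = gaussian_kde(lac)
print("mean ", lac.mean())
print("stdev", lac.std())
# Differential entropy H = -E[log p(x)], estimated over the samples.
print("entropy (nats)", -np.mean(np.log(kde(lac))))
```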

  18. LLNL/JNC repository collaboration interim progress report

    International Nuclear Information System (INIS)

    Bourcier, W.L.; Couch, R.G.; Gansemer, J.; Halsey, W.G.; Palmer, C.E.; Sinz, K.H.; Stout, R.B.; Wijesinghe, A.; Wolery, T.J.

    1999-01-01

    Under this Annex, a research program on near-field performance assessment related to the geological disposal of radioactive waste will be carried out at the Lawrence Livermore National Laboratory (LLNL) in close collaboration with the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC). This program will focus on activities that provide direct support for PNC's near-term and long-term needs and that will, in turn, utilize and further strengthen US capabilities for radioactive waste management. The work scope for two years will be designed based on PNC's priorities for its second progress report (the H12 report) of research and development for high-level radioactive waste disposal and on the interest and capabilities of LLNL. The work will focus on chemical modeling of the near-field environment and long-term mechanical modeling of the engineered barrier system as it evolves. Certain activities in this program will provide for a final iteration of analyses to provide additional technical basis prior to the year 2000 as determined in discussions with PNC's technical coordinator. The work for two years will include the following activities: Activity 1: Chemical Modeling of EBS Materials Interactions--Task 1.1 Chemical Modeling of Iron Effects on Borosilicate Glass Durability; and Task 1.2 Changes in Overpack and Bentonite Properties Due to Metal, Bentonite and Water Interactions. Activity 2: Thermodynamic Database Validation and Comparison--Task 2.1 Set up EQ3/6 to Run with the Pitzer-based PNC Thermodynamic Data Base; Task 2.2 Provide Expert Consultation on the Thermodynamic Data Base; and Task 2.3 Provide Analysis of Likely Solubility Controls on Selenium. Activity 3: Engineered Barrier Performance Assessment of the Unsaturated, Oxidizing Transient--Task 3.1 Apply YMIM to PNC Transient EBS Performance; Task 3.2 Demonstrate Methods for Modeling the Return to Reducing Conditions; and Task 3.3 Evaluate the Potential for Stress Corrosion

  19. Laboratorio de Seguridad Informática con Kali Linux

    OpenAIRE

    Gutiérrez Benito, Fernando

    2014-01-01

    An information security laboratory using the Kali Linux distribution, an operating system dedicated to computer security auditing. Tools specialized in the various fields of security are employed, such as nmap, Metasploit, w3af, John the Ripper and Aircrack-ng. The aim is for students to understand the need to create secure applications, and for the laboratory to serve as a foundation for those who wish to continue in the world of computer security. Grado en Ingeniería T...

  20. Documenting and automating collateral evolutions in Linux device drivers

    DEFF Research Database (Denmark)

    Padioleau, Yoann; Hansen, René Rydhof; Lawall, Julia

    2008-01-01

    ... Manually performing such collateral evolutions is time-consuming and unreliable, and has led to errors when modifications have not been done consistently. In this paper, we present an automatic program transformation tool, Coccinelle, for documenting and automating device driver collateral evolutions...... programmer. We have evaluated our approach on 62 representative collateral evolutions that were previously performed manually in Linux 2.5 and 2.6. On a test suite of over 5800 relevant driver files, the semantic patches for these collateral evolutions update over 93% of the files completely

  1. Millisecond accuracy video display using OpenGL under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
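    A rough sketch of the frame-timing test idea using pygame's OpenGL double-buffered mode: with vsync active, swap intervals should cluster at the monitor refresh period, and outliers reveal missed frames. Driver and compositor behavior vary, so treat this as illustrative rather than a port of the article's C program.

```python
# Sketch: measure buffer-swap intervals to verify frame-accurate display.
# On a vsynced display, intervals cluster at the refresh period (e.g. 10 ms
# at 100 Hz); a missed deadline shows up as a doubled interval.
import time
import pygame

pygame.init()
pygame.display.set_mode((640, 480), pygame.OPENGL | pygame.DOUBLEBUF)

intervals = []
last = time.perf_counter()
for _ in range(300):
    pygame.display.flip()            # blocks until the buffer swap (if vsynced)
    now = time.perf_counter()
    intervals.append((now - last) * 1000.0)
    last = now
pygame.quit()

print(f"median {sorted(intervals)[len(intervals)//2]:.2f} ms, "
      f"max {max(intervals):.2f} ms")
```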

  2. Construction of a Linux based chemical and biological information system.

    Science.gov (United States)

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a Web-based easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE and Tripos SYBYL for database management and the Zope application server for the web interface. We chose Linux as the main platform; however, almost every component can be used under various operating systems.

  3. The Case for A Hierarchal System Model for Linux Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M; Gorda, B

    2009-06-05

    The computer industry today is no longer driven, as it was in the 40s, 50s and 60s, by high-performance computing requirements. Rather, HPC systems, especially Leadership-class systems, sit on top of a pyramid investment model. Figure 1 shows a representative pyramid investment model for systems hardware. At the base of the pyramid is the huge investment (on the order of tens of billions of US dollars per year) in semiconductor fabrication and process technologies. These costs, which are approximately doubling with every generation, are funded from investments in multiple markets: enterprise, desktops, games, embedded and specialized devices. Over and above these base technology investments are investments for critical technology elements such as microprocessor, chipset and memory ASIC components. Investments for these components are spread across the same markets as the base semiconductor process investments. These second-tier investments are approximately half the size of the lower level of the pyramid. The next technology investment layer up, tier 3, is more focused on scalable computing systems such as those needed for HPC and other markets. These tier 3 technology elements include networking (SAN, WAN and LAN), interconnects and large scalable SMP designs. Above these, in tier 4, are the relatively small investments necessary to build very large, scalable high-end or Leadership-class systems. Primary among these are the specialized network designs of vertically integrated systems, etc.

  4. A PC-Linux-based data acquisition system for the STAR TOFp detector

    International Nuclear Information System (INIS)

    Liu Zhixu; Liu Feng; Zhang Bingyun

    2003-01-01

    Commodity hardware running the open source operating system Linux is playing various important roles in the field of high energy physics. This paper describes the PC-Linux-based data acquisition system of the STAR TOFp detector. It is based on conventional solutions, with front-end electronics made of NIM and CAMAC modules controlled by a PC running Linux. The system was commissioned into the STAR DAQ system and worked successfully in the second year of STAR physics runs

  5. Ubuntu Linux Toolbox 1000 + Commands for Ubuntu and Debian Power Users

    CERN Document Server

    Negus, Christopher

    2008-01-01

    In this handy, compact guide, you'll explore a ton of powerful Ubuntu Linux commands while you learn to use Ubuntu Linux as the experts do: from the command line. Try out more than 1,000 commands to find and get software, monitor system health and security, and access network resources. Then, apply the skills you learn from this book to use and administer desktops and servers running Ubuntu, Debian, and KNOPPIX or any other Linux distribution.

  6. LLNL medical and industrial laser isotope separation: large volume, low cost production through advanced laser technologies

    International Nuclear Information System (INIS)

    Comaskey, B.; Scheibner, K. F.; Shaw, M.; Wilder, J.

    1998-01-01

    The goal of this LDRD project was to demonstrate the technical and economical feasibility of applying laser isotope separation technology to the commercial enrichment (>1 kg/y) of stable isotopes. A successful demonstration would well position the laboratory to make a credible case for the creation of an ongoing medical and industrial isotope production and development program at LLNL. Such a program would establish LLNL as a center for advanced medical isotope production, successfully leveraging previous LLNL Research and Development hardware, facilities, and knowledge

  7. Test results from the LLNL 250 GHz CARM experiment

    International Nuclear Information System (INIS)

    Kulke, B.; Caplan, M.; Bubp, D.; Houck, T.; Rogers, D.; Trimble, D.; VanMaren, R.; Westenskow, G.; McDermott, D.B.; Luhmann, N.C. Jr.; Danly, B.

    1991-01-01

    The authors have completed the initial phase of a 250 GHz CARM experiment, driven by the 2 MeV, 1 kA, 30 ns induction linac at the LLNL ARC facility. A non-Brillouin, solid, electron beam is generated from a flux-threaded, thermionic cathode. As the beam traverses a 10 kG plateau produced by a superconducting magnet, ten percent of the beam energy is converted into rotational energy in a bifilar helix wiggler that produces a spiraling, 50 G, transverse magnetic field. The beam is then compressed to a 5 mm diameter as it drifts into a 30 kG plateau. For the present experiment, the CARM interaction region consisted of a single Bragg section resonator, followed by a smooth-bore amplifier section. Using high-pass filters, they have observed broadband output signals estimated to be at the several megawatt level in the range 140 to over 230 GHz. This is consistent with operation as a superradiant amplifier. Simultaneously, they also observed Ka-band power levels near 3 MW

  8. Test results from the LLNL 250 GHz CARM experiment

    International Nuclear Information System (INIS)

    Kulke, B.; Caplan, M.; Bubp, D.; Houck, T.; Rogers, D.; Trimble, D.; VanMaren, R.; Westenskow, G.; McDermott, D.B.; Luhmann, N.C. Jr.; Danly, B.

    1991-05-01

    We have completed the initial phase of a 250 GHz CARM experiment, driven by the 2 MeV, 1 kA, 30 ns induction linac at the LLNL ARC facility. A non-Brillouin, solid, electron beam is generated from a flux-threaded, thermionic cathode. As the beam traverses a 10 kG plateau produced by a superconducting magnet, ten percent of the beam energy is converted into rotational energy in a bifilar helix wiggler that produces a spiraling, 50 G, transverse magnetic field. The beam is then compressed to a 5 mm diameter as it drifts into a 30 kG plateau. For the present experiment, the CARM interaction region consisted of a single Bragg section resonator, followed by a smooth-bore amplifier section. Using high-pass filters, we have observed broadband output signals estimated to be at the several megawatt level in the range 140 to over 230 GHz. This is consistent with operation as a superradiant amplifier. Simultaneously, we also observed Ka-band power levels near 3 MW

  9. Net Weight Issue LLNL DOE-STD-3013 Containers

    International Nuclear Information System (INIS)

    Wilk, P

    2008-01-01

    The following position paper will describe DOE-STD-3013 container sets No.L000072 and No.L000076, and how they are compliant with DOE-STD-3013-2004. All masses of accountable nuclear materials are measured on LLNL certified balances maintained under an MC&A Program approved by DOE/NNSA LSO. All accountability balances are recalibrated annually and checked to be within calibration on each day that the balance is used for accountability purposes. A statistical analysis of the historical calibration checks from the last seven years indicates that the full-range Limit of Error (LoE, 95% confidence level) for the balance used to measure the mass of the contents of the above indicated 3013 containers is 0.185 g. If this error envelope, at the 95% confidence level, were to be used to generate an upper limit to the measured weight of containers No.L000072 and No.L000076, the error envelope would extend beyond the 5.0 kg 3013-standard limit on the package contents by less than 0.3 g. However, this is still well within the intended safety bounds of DOE-STD-3013-2004
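    As a rough illustration of the statistics involved, a 95% limit of error is commonly taken as about 1.96 standard deviations of the balance's historical calibration-check errors. The check values below are invented; only the 0.185 g result comes from the report, and the report's exact method is not specified here.

```python
# Sketch: a 95% limit of error (LoE) from historical calibration checks,
# using the common 1.96-sigma convention. The check errors below are
# invented; the report's actual analysis yielded LoE = 0.185 g.
import statistics

check_errors_g = [0.05, -0.08, 0.11, -0.03, 0.07, -0.09, 0.10, -0.06]
sigma = statistics.pstdev(check_errors_g)
loe = 1.96 * sigma
print(f"sigma = {sigma:.3f} g, LoE(95%) = {loe:.3f} g")
```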

  10. Innovative HPC architectures for the study of planetary plasma environments

    Science.gov (United States)

    Amaya, Jorge; Wolf, Anna; Lembège, Bertrand; Zitz, Anke; Alvarez, Damian; Lapenta, Giovanni

    2016-04-01

    DEEP-ER is a European Commission funded project that develops a new type of High Performance Computer architecture. The revolutionary system is currently used by KU Leuven to study the effects of the solar wind on the global environments of the Earth and Mercury. The new architecture combines the versatility of Intel Xeon computing nodes with the power of the upcoming Intel Xeon Phi accelerators. Contrary to classical heterogeneous HPC architectures, where it is customary to find CPUs and accelerators in the same computing nodes, in the DEEP-ER system CPU nodes are grouped together (Cluster) independently from the accelerator nodes (Booster). The system is equipped with a state-of-the-art interconnection network, highly scalable and fast I/O, and a fail-recovery resiliency system. The final objective of the project is to introduce a scalable system that can be used to create the next generation of exascale supercomputers. The code iPic3D from KU Leuven is being adapted to this new architecture. This particle-in-cell code can now compute the electromagnetic fields on the Cluster side while the particles are moved on the Booster side. Using fast and scalable Xeon Phi accelerators in the Booster, we can introduce many more particles per cell in the simulation than is possible in the current generation of HPC systems, allowing fully kinetic plasmas to be calculated with very low interpolation noise. The system will be used to perform fully kinetic, low-noise, 3D simulations of the interaction of the solar wind with the magnetospheres of the Earth and Mercury. Preliminary simulations have been performed at other HPC centers in order to compare the results across different systems. In this presentation we show the complexity of the plasma flow around the planets, including the development of hydrodynamic instabilities at the flanks, the presence of the collision-less shock, the magnetosheath, the magnetopause, reconnection zones, the formation of the

  11. SOFTICE: Facilitating both Adoption of Linux Undergraduate Operating Systems Laboratories and Students' Immersion in Kernel Code

    Directory of Open Access Journals (Sweden)

    Alessio Gaspar

    2007-06-01

    Full Text Available This paper discusses how Linux clustering and virtual machine technologies can improve undergraduate students' hands-on experience in operating systems laboratories. Like similar projects, SOFTICE relies on User Mode Linux (UML) to provide students with privileged access to a Linux system without creating security breaches on the hosting network. We extend such approaches in two aspects. First, we propose to facilitate adoption of Linux-based laboratories by using a load-balancing cluster made of recycled classroom PCs to remotely serve access to virtual machines. Secondly, we propose a new approach for students to interact with the kernel code.

  12. COMPOSE-HPC: A Transformational Approach to Exascale

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E [ORNL; Allan, Benjamin A. [Sandia National Laboratories (SNL); Armstrong, Robert C. [Sandia National Laboratories (SNL); Chavarria-Miranda, Daniel [Pacific Northwest National Laboratory (PNNL); Dahlgren, Tamara L. [Lawrence Livermore National Laboratory (LLNL); Elwasif, Wael R [ORNL; Epperly, Tom [Lawrence Livermore National Laboratory (LLNL); Foley, Samantha S [ORNL; Hulette, Geoffrey C. [Sandia National Laboratories (SNL); Krishnamoorthy, Sriram [Pacific Northwest National Laboratory (PNNL); Prantl, Adrian [Lawrence Livermore National Laboratory (LLNL); Panyala, Ajay [Louisiana State University; Sottile, Matthew [Galois, Inc.

    2012-04-01

    The goal of the COMPOSE-HPC project is to 'democratize' tools for automatic transformation of program source code so that it becomes tractable for the developers of scientific applications to create and use their own transformations reliably and safely. This paper describes our approach to this challenge, the creation of the KNOT tool chain, which includes tools for the creation of annotation languages to control the transformations (PAUL), to perform the transformations (ROTE), and optimization and code generation (BRAID), which can be used individually and in combination. We also provide examples of current and future uses of the KNOT tools, which include transforming code to use different programming models and environments, providing tests that can be used to detect errors in software or its execution, as well as composition of software written in different programming languages, or with different threading patterns.

  13. Utilizing HPC Network Technologies in High Energy Physics Experiments

    CERN Document Server

    AUTHOR|(CDS)2088631; The ATLAS collaboration

    2017-01-01

    Because of their performance characteristics, high-performance fabrics like InfiniBand or Omni-Path are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on applications in high-performance computing with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems a custom library, NetIO, is presented and compared against existing technologies. NetIO has a message queue-like interface which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable bac...
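    A bare-bones sketch of what a message-queue-like interface (as opposed to MPI's rank-based model) can look like, using plain TCP with length-prefixed messages; this is a generic illustration, not NetIO's actual API.

```python
# Sketch: a minimal message-queue-style interface over TCP with
# length-prefixed messages. Generic illustration only; NetIO's real
# API and backends are described in the paper, not reproduced here.
import socket
import struct

class MessageQueueSocket:
    def __init__(self, sock: socket.socket):
        self.sock = sock

    def send_msg(self, payload: bytes) -> None:
        # 4-byte big-endian length header, then the payload.
        self.sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_msg(self) -> bytes:
        (length,) = struct.unpack("!I", self._recv_exact(4))
        return self._recv_exact(length)

    def _recv_exact(self, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = self.sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed")
            buf += chunk
        return buf

if __name__ == "__main__":
    a, b = socket.socketpair()
    tx, rx = MessageQueueSocket(a), MessageQueueSocket(b)
    tx.send_msg(b"event fragment 42")
    print(rx.recv_msg())
```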

  14. Behavior of HPC with Fly Ash after Elevated Temperature

    Directory of Open Access Journals (Sweden)

    Huai-Shuai Shang

    2013-01-01

    Full Text Available For use in fire resistance calculations, the relevant thermal properties of high-performance concrete (HPC) with fly ash were determined through an experimental study. These properties included compressive strength, cubic compressive strength, cleavage strength, flexural strength, and ultrasonic velocity at various temperatures (20, 100, 200, 300, 400 and 500 °C) for high-performance concrete. The effect of temperature on the compressive strength, cubic compressive strength, cleavage strength, flexural strength, and ultrasonic velocity of the high-performance concrete with fly ash was discussed according to the experimental results. The change of surface characteristics with temperature was observed. The results can serve as a reference for the maintenance, design, and life prediction of high-performance concrete engineering, such as high-rise buildings, subjected to elevated temperatures.

  15. Behaviour of slag HPC submitted to immersion-drying cycles

    Directory of Open Access Journals (Sweden)

    Rabah Chaid

    2016-04-01

    Full Text Available This article is part of a summary of work developed in conjunction with the Laboratory of Civil Engineering and Mechanical Engineering at INSA Rennes and the Research Unit: Materials, Processes and Environment, University of Boumerdes. One of the objectives was to promote, through studies of variants, the use of local cementitious additions in the formulation of high-performance concretes (HPC). The binding contribution of mineral additions to the physical, mechanical and durability properties of concrete was evaluated with an experimental methodology designed to isolate their granular and pozzolanic effects. The results show that the contribution of the cement-slag couple to matrix intensification is higher than that obtained when the cement is not substituted with the addition. A significant improvement in concrete performance was therefore observed, despite the adverse action of immersion-drying cycles maintained for 365 days.

  16. GSI operation software: migration from OpenVMS TO Linux

    International Nuclear Information System (INIS)

    Huhmann, R.; Froehlich, G.; Juelicher, S.; Schaa, V.R.W.

    2012-01-01

    The current operation software at GSI, controlling the linac, beam transfer lines, synchrotron and storage ring, has been developed over a period of more than two decades using OpenVMS on Alpha-Workstations. The GSI accelerator facilities will serve as an injector chain for the new FAIR accelerator complex for which a control system is currently developed. To enable reuse and integration of parts of the distributed GSI software system, in particular the linac operation software, within the FAIR control system, the corresponding software components must be migrated to Linux. Inter-operability with FAIR controls applications is achieved by adding a generic middle-ware interface accessible from Java applications. For porting applications to Linux a set of libraries and tools has been developed covering the necessary OpenVMS system functionality. Currently, core applications and services are already ported or rewritten and functionally tested but not in operational usage. This paper presents the current status of the project and concepts for putting the migrated software into operation. (authors)

  17. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    International Nuclear Information System (INIS)

    Hargrove, Paul H; Duell, Jason C

    2006-01-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to "fault precursors" (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters
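    For orientation, a sketch of how BLCR's user-facing commands are typically driven. The command names (cr_run, cr_checkpoint, cr_restart) are BLCR's, but the exact flags and the default context-file naming shown here are assumptions that may differ by version.

```python
# Sketch: drive a BLCR checkpoint/restart cycle from Python. cr_run,
# cr_checkpoint, and cr_restart are BLCR's commands; the default
# context-file name used below is an assumption.
import subprocess, time

proc = subprocess.Popen(["cr_run", "./long_job"])   # run under BLCR control
time.sleep(60)                                      # let it make progress

subprocess.run(["cr_checkpoint", str(proc.pid)], check=True)
proc.terminate()                                    # preempt the job

# Later (possibly after a reboot), resume from the saved context file.
subprocess.run(["cr_restart", f"context.{proc.pid}"], check=True)
```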

  18. Implementing Journaling in a Linux Shared Disk File System

    Science.gov (United States)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  19. Migration of alcator C-Mod computer infrastructure to Linux

    International Nuclear Information System (INIS)

    Fredian, T.W.; Greenwald, M.; Stillerman, J.A.

    2004-01-01

    The Alcator C-Mod fusion experiment at MIT in Cambridge, Massachusetts has been operating for twelve years. The data handling for the experiment during most of this period was based on MDSplus running on a cluster of VAX and Alpha computers using the OpenVMS operating system. While the OpenVMS operating system provided a stable, reliable platform, support for the operating system and the software layered on it has deteriorated in recent years. With the advent of extremely powerful low-cost personal computers and the increasing popularity and robustness of the Linux operating system, a decision was made to migrate the data handling systems for C-Mod to a collection of PCs running Linux. This paper will describe the new system configuration, the effort involved in the migration from OpenVMS, the results of the first run campaign under the new configuration, and the impact the switch may have on the rest of the MDSplus community

  20. KNBD: A Remote Kernel Block Server for Linux

    Science.gov (United States)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  1. FIREWALL E SEGURANÇA DE SISTEMAS APLICADO AO LINUX

    Directory of Open Access Journals (Sweden)

    Rodrigo Ribeiro

    2017-04-01

    Full Text Available Given the evolution of the internet worldwide, it has become necessary to invest in information security; several important concepts concerning computer networks and their evolution point to the emergence of new vulnerabilities. The main objective of this work is to demonstrate that, through the use of free software such as Linux and its tools, it is possible to create a scenario that is secure against certain attacks, by means of tests in controlled environments using architectures exercised in real time, and to assess their potential for use against the author's own research on the subject. From this idea, it was possible to recognize the wide use of these security mechanisms, validating the effectiveness of the studied tools in mitigating attacks on computer networks. The defense systems of the Linux platform are extremely efficient and meet the goal of protecting a network from unauthorized access.
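    As a concrete illustration of the kind of Linux firewall hardening the article tests, a minimal default-deny iptables policy might look like the sketch below. The rule set is invented for illustration (it is not the article's tested configuration), must run as root, and modern distributions often front-end this with nftables or firewalld.

```python
# Sketch: apply a minimal default-deny IPv4 firewall with iptables.
# The rule set is illustrative, not the article's tested configuration.
import subprocess

RULES = [
    ["iptables", "-F"],                                    # flush old rules
    ["iptables", "-P", "INPUT", "DROP"],                   # default deny
    ["iptables", "-P", "FORWARD", "DROP"],
    ["iptables", "-P", "OUTPUT", "ACCEPT"],
    ["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"],
    ["iptables", "-A", "INPUT", "-m", "state",
     "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
    ["iptables", "-A", "INPUT", "-p", "tcp", "--dport", "22",
     "-j", "ACCEPT"],                                      # allow SSH
]
for rule in RULES:
    subprocess.run(rule, check=True)
```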

  2. End-to-end experiment management in HPC

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M [Los Alamos National Laboratory; Kroiss, Ryan R [Los Alamos National Laboratory; Torrez, Alfred [Los Alamos National Laboratory; Wingate, Meghan [Los Alamos National Laboratory

    2010-01-01

    Experiment management in any domain is challenging. There is a perpetual feedback loop cycling through planning, execution, measurement, and analysis. The lifetime of a particular experiment can be limited to a single cycle although many require myriad more cycles before definite results can be obtained. Within each cycle, a large number of subexperiments may be executed in order to measure the effects of one or more independent variables. Experiment management in high performance computing (HPC) follows this general pattern but also has three unique characteristics. One, computational science applications running on large supercomputers must deal with frequent platform failures which can interrupt, perturb, or terminate running experiments. Two, these applications typically run in parallel, using MPI as their communication medium. Three, there is typically a scheduling system (e.g. Condor, Moab, SGE, etc.) acting as a gate-keeper for the HPC resources. In this paper, we introduce LANL Experiment Management (LEM), an experiment management framework simplifying all four phases of experiment management. LEM simplifies experiment planning by allowing the user to describe their experimental goals without having to fully construct the individual parameters for each task. To simplify execution, LEM dispatches the subexperiments itself, thereby freeing the user from remembering the often arcane methods for interacting with the various scheduling systems. LEM provides transducers for experiments that automatically measure and record important information about each subexperiment; these transducers can easily be extended to collect additional measurements specific to each experiment. Finally, experiment analysis is simplified by providing a general database visualization framework that allows users to quickly and easily interact with their measured data.
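    A tiny sketch of the planning-and-dispatch idea: the user names independent variables and their levels, and the framework expands them into subexperiments and hands them to a scheduler. The plan, script name, and qsub-style submission below are illustrative assumptions, not LEM's actual interface.

```python
# Sketch: expand an experiment plan into subexperiments and dispatch them.
# The plan, command line, and scheduler call are illustrative assumptions.
import itertools
import subprocess

plan = {
    "nodes": [16, 32, 64],          # independent variables and their levels
    "stripe_kb": [64, 1024],
    "api": ["posix", "mpiio"],
}

keys = list(plan)
for values in itertools.product(*plan.values()):
    params = dict(zip(keys, values))
    cmd = ["qsub", "run_benchmark.sh"] + [f"{k}={v}" for k, v in params.items()]
    print("dispatch:", " ".join(cmd))
    # subprocess.run(cmd, check=True)   # uncomment on a system with a scheduler
```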

  3. Training the Masses - Web-based Laser Safety Training at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Sprague, D D

    2004-12-17

    The LLNL work smart standard requires us to provide ongoing laser safety training for a large number of persons on a three-year cycle. In order to meet the standard, it was necessary to find a cost- and performance-effective method to perform this training. This paper discusses the scope of the training problem, specific LLNL training needs, the various training methods used at LLNL, the advantages and disadvantages of these methods, and the rationale for selecting web-based laser safety training. The tools and costs involved in developing web-based training courses are also discussed, in addition to conclusions drawn from our training operating experience. The ILSC lecture presentation contains a short demonstration of the LLNL web-based laser safety-training course.

  4. LLNL Compliance Plan for TRUPACT-2 Authorized Methods for Payload Control

    International Nuclear Information System (INIS)

    1995-03-01

    This document describes payload control at LLNL to ensure that all shipments of CH-TRU waste in the TRUPACT-II (Transuranic Package Transporter-II) meet the requirements of the TRUPACT-II SARP (safety report for packaging). This document also provides specific instructions for the selection of authorized payloads once individual payload containers are qualified for transport. The physical assembly of the qualified payload and operating procedures for the use of the TRUPACT-II, including loading and unloading operations, are described in HWM Procedure No. 204, based on the information in the TRUPACT-II SARP. The LLNL TRAMPAC, along with the TRUPACT-II operating procedures contained in HWM Procedure No. 204, meet the documentation needs for the use of the TRUPACT-II at LLNL. Table 14-1 provides a summary of the LLNL waste generation and certification procedures as they relate to TRUPACT-II payload compliance

  5. Proposals for ORNL [Oak Ridge National Laboratory] support to Tiber LLNL [Lawrence Livermore National Laboratory]

    International Nuclear Information System (INIS)

    Berry, L.A.; Rosenthal, M.W.; Saltmarsh, M.J.; Shannon, T.E.; Sheffield, J.

    1987-01-01

    This document describes the interests and capabilities of Oak Ridge National Laboratory in their proposals to support the Lawrence Livermore National Laboratory (LLNL) Engineering Test Reactor (ETR) project. Five individual proposals are cataloged separately. (FI)

  6. LLNL Center of Excellence Work Items for Q9-Q10 period

    Energy Technology Data Exchange (ETDEWEB)

    Neely, J. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-02

    This work plan encompasses a slice of effort going on within the ASC program, and for projects utilizing COE vendor resources, describes work that will be performed by both LLNL staff and COE vendor staff collaboratively.

  7. CompTIA Linux+ Complete Study Guide (Exams LX0-101 and LX0-102)

    CERN Document Server

    Smith, Roderick W

    2010-01-01

    Prepare for CompTIA's Linux+ Exams. As the Linux server and desktop markets continue to grow, so does the need for qualified Linux administrators. CompTIA's Linux+ certification (Exams LX0-101 and LX0-102) includes the very latest enhancements to the popular open source operating system. This detailed guide not only covers all key exam topics—such as using Linux command-line tools, understanding the boot process and scripts, managing files and file systems, managing system security, and much more—it also builds your practical Linux skills with real-world examples. Inside, you'll find: Full co

  8. Review of LLNL Mixed Waste Streams for the Application of Potential Waste Reduction Controls

    International Nuclear Information System (INIS)

    Belue, A; Fischer, R P

    2007-01-01

    In July 2004, LLNL adopted the International Standard ISO 14001 as a Work Smart Standard in lieu of DOE Order 450.1. In support of this new requirement the Director issued a new environmental policy that was documented in Section 3.0 of Document 1.2, "ES&H Policies of LLNL", in the ES&H Manual. In recent years the Environmental Management System (EMS) process has become formalized as LLNL adopted ISO 14001 as part of the contract under which the laboratory is operated for the Department of Energy (DOE). On May 9, 2005, LLNL revised its Integrated Safety Management System Description to enhance existing environmental requirements to meet ISO 14001. Effective October 1, 2005, each new project or activity is required to be evaluated from an environmental aspect, particularly if a potential exists for significant environmental impacts. Authorizing organizations are required to consider the management of all environmental aspects, the applicable regulatory requirements, and reasonable actions that can be taken to reduce negative environmental impacts. During 2006, LLNL worked to implement the corrective actions addressing the deficiencies identified in the DOE/LSO audit. LLNL has begun to update the present EMS to meet the requirements of ISO 14001:2004. The EMS commits LLNL--and each employee--to responsible stewardship of all the environmental resources in our care. The generation of mixed radioactive waste was identified as a significant environmental aspect. Mixed waste, for the purposes of this report, is defined as waste materials containing both hazardous chemical and radioactive constituents. Significant environmental aspects require that an Environmental Management Plan (EMP) be developed. The objective of the EMP developed for mixed waste (EMP-005) is to evaluate options for reducing the amount of mixed waste generated. This document presents the findings of the evaluation of mixed waste generated at LLNL and a proposed plan for reduction

  9. The National Ignition Facility (NIF) and High Energy Density Science Research at LLNL (Briefing Charts)

    Science.gov (United States)

    2013-06-21

    The National Ignition Facility (NIF) and High Energy Density Science Research at LLNL. Presentation to the IEEE Pulsed Power and Plasma Science Conference; C. J. Keane, Director, NIF User Office, June 21, 2013.

  10. Linear collider research and development at SLAC, LBL and LLNL

    International Nuclear Information System (INIS)

    Mattison, T.S.

    1988-10-01

    The study of electron-positron (e⁺e⁻) annihilation in storage ring colliders has been very fruitful. It is by now well understood that the optimized cost and size of e⁺e⁻ storage rings scale as E_cm², due to the need to replace energy lost to synchrotron radiation in the ring bending magnets. Linear colliders, using the beams from linear accelerators, evade this scaling law. The study of e⁺e⁻ collisions at TeV energy will require linear colliders. The luminosity requirements for a TeV linear collider are set by the physics. Advanced accelerator research and development at SLAC is focused toward a TeV Linear Collider (TLC) of 0.5--1 TeV in the center of mass, with a luminosity of 10³³--10³⁴. The goal is a design for two linacs of less than 3 km each, each requiring less than 100 MW of power. With a 1 km final focus, the TLC could be fit on Stanford University land (although not entirely within the present SLAC site). The emphasis is on technologies feasible for a proposal to be framed in 1992. Linear collider development work is progressing on three fronts: delivering electrical energy to a beam, delivering a focused high quality beam, and system optimization. Sources of high peak microwave radio frequency (RF) power to drive the high gradient linacs are being developed in collaboration with Lawrence Berkeley Laboratory (LBL) and Lawrence Livermore National Laboratory (LLNL). Beam generation, beam dynamics and final focus work has been done at SLAC and in collaboration with KEK. Both the accelerator physics and the utilization of TeV linear colliders were topics at the 1988 Snowmass Summer Study. 14 refs., 4 figs., 1 tab
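    The E_cm² scaling quoted above follows from a standard cost-optimization argument for storage rings; a brief sketch of that textbook reasoning (not taken from this report; a and b are arbitrary cost coefficients):

    ```latex
    % Synchrotron energy loss per turn in a ring of bending radius \rho:
    U_0 \propto \frac{E^4}{\rho}
    % Total cost = tunnel/magnet cost (\propto \rho) + RF cost to replace U_0:
    C(\rho) = a\,\rho + b\,\frac{E^4}{\rho}
    % Minimizing over \rho:
    \frac{dC}{d\rho} = 0 \;\Rightarrow\; \rho_{\mathrm{opt}} \propto E^2,
    \qquad C_{\min} = 2\sqrt{ab}\,E^2 \propto E_{cm}^2
    ```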

  11. Progress in AMS measurements at the LLNL spectrometer

    International Nuclear Information System (INIS)

    Southon, J.R.; Vogel, J.S.; Trumbore, S.E.; Davis, J.C.; Roberts, M.L.; Caffee, M.; Finkel, R.; Proctor, I.D.; Heikkinen, D.W.; Berno, A.J.; Hornady, R.S.

    1991-06-01

    The AMS measurement program at LLNL began in earnest in late 1989, and has initially concentrated on ¹⁴C measurements for biomedical and geoscience applications. We have now begun measurements on ¹⁰Be and ³⁶Cl, are presently testing the spectrometer performance for ²⁶Al and ³H, and will begin tests on ⁷Be, ⁴¹Ca and ¹²⁹I within the next few months. Our laboratory has a strong biomedical AMS program of ¹⁴C tracer measurements involving large numbers of samples (sometimes hundreds in a single experiment) at ¹⁴C concentrations which are typically 0.5--5 times Modern, but are occasionally highly enriched. The sample preparation techniques required for high throughput and low cross-contamination for this work are discussed elsewhere. Similar demands are placed on the AMS measurement system, and in particular on the ion source. Modifications to our GIC 846 ion source, described below, allow us to run biomedical and geoscience or archaeological samples in the same source wheel with no adverse effects. The source has a capacity for 60 samples (about 45 unknowns) in a single wheel and provides currents of 30--60 μA of C⁻ from hydrogen-reduced graphite. These currents and sample capacity provide high throughput for both biomedical and other measurements: the AMS system can be started up, tuned, and a wheel of carbon samples measured to 1--1.5% in under a day, and 2 biomedical wheels can be measured per day without difficulty. We report on the present status of the Lawrence Livermore AMS spectrometer, including sample throughput and progress towards routine 1% measurement capability for ¹⁴C, first results on other isotopes, and experience with a multi-sample high-intensity ion source. 5 refs

  12. Challenges in biotechnology at LLNL: from genes to proteins; TOPICAL

    International Nuclear Information System (INIS)

    Albala, J S

    1999-01-01

    This effort has undertaken the task of developing a link between the genomics, DNA repair, and structural biology efforts within the Biology and Biotechnology Research Program (BBRP) at LLNL. Through the advent of the I.M.A.G.E. (Integrated Molecular Analysis of Genomes and their Expression) Consortium, a world-wide effort to catalog the largest public collection of genes, accepted and maintained within BBRP, it is now possible to systematically express the protein complement of these genes to further elucidate novel gene function and structure. The work has proceeded in four phases: (1) gene and system selection; (2) protein expression and purification; (3) structural analysis; and (4) biological integration. The proteins expressed have been those of high programmatic interest, in particular proteins involved in the maintenance of genome integrity, especially the repair of DNA damage, including ERCC1, ERCC4, XRCC2, XRCC3, XRCC9, HEX1, APN1, p53, RAD51B, RAD51C, and RAD51. Full-length cDNA cognates of selected genes were isolated and cloned into baculovirus-based expression vectors. The baculoviral expression system for protein over-expression is now well established in the Albala laboratory. Procedures have been successfully optimized for full-length cDNA cloning into expression vectors for protein expression from recombinant constructs, including the reagents, cell lines, and techniques necessary for expression of recombinant baculoviral constructs in Spodoptera frugiperda (Sf9) cells. The laboratory has also generated a high-throughput baculoviral expression paradigm for large-scale expression and purification of human recombinant proteins amenable to automation.

  13. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    International Nuclear Information System (INIS)

    Seager, M.

    2007-01-01

    However, given the growing need for 'capability' systems as well, the budget demands are extreme, and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  14. Enabling rootless Linux containers in multi-user environments. The udocker tool

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, Jorge; David, Mario; Alves, Luis; Martins, Jo ao; Pina, Jo ao [Laboratorio de Instrumentacao e Fisica Experimental de Particulas (LIP), Lisboa (Portugal); Bagnaschi, Emanuele [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Campos, Isabel; Lopez-Garcia, Alvaro; Orviz, Pablo [IFCA, Consejo Superior de Investigaciones Cientificas-CSIC, Santander (Spain)

    2017-11-15

    Containers are increasingly used as a means to distribute and run Linux services and applications. In this paper we describe the architectural design and implementation of udocker, a tool to execute Linux containers in user mode, and we describe a few practical applications for a range of scientific codes meeting different requirements: from single core execution to MPI parallel execution and execution on GPGPUs.

  15. Enabling rootless Linux containers in multi-user environments. The udocker tool

    International Nuclear Information System (INIS)

    Gomes, Jorge; David, Mario; Alves, Luis; Martins, Jo ao; Pina, Jo ao; Bagnaschi, Emanuele; Campos, Isabel; Lopez-Garcia, Alvaro; Orviz, Pablo

    2017-11-01

    Containers are increasingly used as a means to distribute and run Linux services and applications. In this paper we describe the architectural design and implementation of udocker, a tool to execute Linux containers in user mode, and we describe a few practical applications for a range of scientific codes meeting different requirements: from single core execution to MPI parallel execution and execution on GPGPUs.
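    To make the rootless workflow concrete, here is a minimal Python driver that shells out to the udocker command line (pull, create, and run are udocker subcommands; the image name, container name, and command below are placeholders):

    ```python
    import subprocess

    def udocker(*args):
        """Run a udocker CLI subcommand and fail loudly on error."""
        subprocess.run(["udocker", *args], check=True)

    # Pull an image, create a container from it, and run a command,
    # all without root privileges (the point of udocker).
    udocker("pull", "ubuntu:22.04")                  # placeholder image
    udocker("create", "--name=demo", "ubuntu:22.04") # placeholder container name
    udocker("run", "demo", "/bin/echo", "hello from a rootless container")
    ```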

  16. SmPL: A Domain-Specific Language for Specifying Collateral Evolutions in Linux Device Drivers

    DEFF Research Database (Denmark)

    Padioleau, Yoann; Lawall, Julia Laetitia; Muller, Gilles

    2007-01-01

    identifying the affected files and modifying all of the code fragments in these files that in some way depend on the changed interface. We have studied the collateral evolution problem in the context of Linux device drivers. Currently, collateral evolutions in Linux are mostly done manually using a text...

  17. Big Data demonstrator using Hadoop to build a Linux cluster for log data analysis using R

    DEFF Research Database (Denmark)

    Torbensen, Rune Sonnich; Top, Søren

    2017-01-01

    This article walks through the steps to create a Hadoop Linux cluster in the cloud and outlines how to analyze device log data via an example in the R programming language.

  18. Application of instrument platform based embedded Linux system on intelligent scaler

    International Nuclear Information System (INIS)

    Wang Jikun; Yang Run'an; Xia Minjian; Yang Zhijun; Li Lianfang; Yang Binhua

    2011-01-01

    An instrument platform based on an embedded Linux system and peripheral circuits is designed; by developing a Linux device driver and an application program based on Qt/Embedded, the various functions of the intelligent scaler are realized. The system architecture is reasonable, so stability, expansibility, and the level of integration are increased, and the development cycle is shortened greatly. (authors)

  19. Automating the Port of Linux to the VirtualLogix Hypervisor using Semantic Patches

    DEFF Research Database (Denmark)

    Armand, Francois; Muller, Gilles; Lawall, Julia Laetitia

    2008-01-01

    of Linux to the VLX hypervisor.  Coccinelle provides a notion of semantic patches, which are more abstract than standard patches, and thus are potentially applicable to a wider range of OS versions.  We have applied this approach in the context of Linux versions 2.6.13, 2.6.14, and 2.6.15, for the ARM...

  20. Linux thin-client conversion in a large cardiology practice: initial experience.

    Science.gov (United States)

    Echt, Martin P; Rosen, Jordan

    2004-01-01

    Capital Cardiology Associates (CCA) is a single-specialty cardiology practice with offices in New York and Massachusetts. In 2003, CCA converted its IT system from a Microsoft-based network to a Linux network employing Linux thin-client technology with overall positive outcomes.

  1. Energy efficient HPC on embedded SoCs : optimization techniques for mali GPU

    OpenAIRE

    Grasso, Ivan; Radojkovic, Petar; Rajovic, Nikola; Gelado Fernandez, Isaac; Ramírez Bellido, Alejandro

    2014-01-01

    A lot of effort from academia and industry has been invested in exploring the suitability of low-power embedded technologies for HPC. Although state-of-the-art embedded systems-on-chip (SoCs) inherently contain GPUs that could be used for HPC, their performance and energy capabilities have never been evaluated. Two reasons contribute to the above. Primarily, embedded GPUs until now, have not supported 64-bit floating point arithmetic - a requirement for HPC. Secondly, embedded GPUs did not pr...

  2. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    Science.gov (United States)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) designed by the conventional (ACI) method and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and better performance in terms of strength and durability as compared to the OPC.

  3. International Energy Agency's Heat Pump Centre (IEA-HPC) Annual National Team Working Group Meeting

    Science.gov (United States)

    Broders, M. A.

    1992-09-01

    The traveler, serving as Delegate from the United States Advanced Heat Pump National Team, participated in the activities of the fourth IEA-HPC National Team Working Group meeting. Highlights of this meeting included review and discussion of 1992 IEA-HPC activities and accomplishments, introduction of the Switzerland National Team, and development of the 1993 IEA-HPC work program. The traveler also gave a formal presentation about the Development and Activities of the IEA Advanced Heat Pump U.S. National Team.

  4. EMBEDDED LINUX BASED ALBUM BROWSER SYSTEM AT MUSIC STORES

    Directory of Open Access Journals (Sweden)

    Suryadiputra Liawatimena

    2009-01-01

    Full Text Available The goal of this research is the creation of an album browser system at a music store based on embedded Linux. It is expected that this system will help the promotion of the music store and make customer activity at the store simpler and easier. The system uses NFS for networking, a database system, ripping software, and GUI development. The research method is laboratory experimentation, testing the system's hardware using a TPC-57 (Touch Panel Computer, 5.7") with an SA2410 ARM-9 Medallion CPU module, and its software using Qtopia Core. The results: (1) the database query process works properly; (2) the audio data buffering process works properly. From these results it can be concluded that the system is ready to be implemented and used in music stores.

  5. Impact on TRMM Products of Conversion to Linux

    Science.gov (United States)

    Stocker, Erich Franz; Kwiatkowski, John

    2008-01-01

    In June 2008, TRMM data processing will be assumed by the Precipitation Processing System (PPS). This change will also mean a change in the hardware production environment, from an SGI 32-bit IRIX processing environment to a Linux (Beowulf) 64-bit processing environment. This change of platform and operating system addressing (32 to 64 bit) has some influence on data values in the TRMM data products. This paper describes the transition architecture and scheduling. It also provides an analysis of the nature of the product differences. It demonstrates that the differences are not scientifically significant and are generally not visible; however, they are not always identical with those the SGI would produce.
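    The kind of platform difference described here is ordinary floating-point behavior: changing precision or accumulation width perturbs results at rounding-error level without being scientifically meaningful. A small illustrative sketch with synthetic data (not TRMM code):

    ```python
    import numpy as np

    # The same sum computed two ways: single-precision accumulation
    # vs. double-precision accumulation.
    rng = np.random.default_rng(42)
    rain_rates = rng.random(1_000_000).astype(np.float32)  # synthetic data

    total_f32 = np.sum(rain_rates, dtype=np.float32)
    total_f64 = np.sum(rain_rates, dtype=np.float64)

    # The relative difference sits at rounding-error level: real but
    # not scientifically significant, like the TRMM product differences.
    print(abs(total_f64 - total_f32) / total_f64)
    ```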

  6. Improving the high performance concrete (HPC behaviour in high temperatures

    Directory of Open Access Journals (Sweden)

    Cattelan Antocheves De Lima, R.

    2003-12-01

    Full Text Available High performance concrete (HPC) is an interesting material that has long been attracting the interest of the scientific and technical community, due to the clear advantages obtained in terms of mechanical strength and durability. Given these better characteristics, HPC, in its various forms, has been gradually replacing normal strength concrete, especially in structures exposed to severe environments. However, the very dense microstructure and low permeability typical of HPC can result in explosive spalling under certain thermal and mechanical conditions, such as when concrete is subjected to rapid temperature rises during a fire. This behaviour is caused by the build-up of internal water pressure in the pore structure during heating, and by stresses originating from thermal deformation gradients. Although there are still a limited number of experimental programs in this area, some researchers have reported that the addition of polypropylene fibers to HPC is a suitable way to avoid explosive spalling under fire conditions. This change in behavior derives from the fact that polypropylene fibers melt at high temperatures and leave a pathway for heated gas to escape the concrete matrix, allowing the outward migration of water vapor and reducing internal pore pressure. The present research investigates the behavior of high performance concrete at high temperatures, especially when polypropylene fibers are added to the mix.

  7. Compact PCI/Linux platform in FTU slow control system

    International Nuclear Information System (INIS)

    Iannone, F.; Centioli, C.; Panella, M.; Mazza, G.; Vitale, V.; Wang, L.

    2004-01-01

    In large fusion experiments, such as tokamak devices, there is a common trend for slow control systems. Because of the complexity of the plants, the so-called 'Standard Model' (SM) of slow control has been adopted on several tokamak machines. This model is based on a three-level hierarchical control: 1) High-Level Control (HLC) with a supervisory function; 2) Medium-Level Control (MLC) to interface and concentrate I/O field equipment; 3) Low-Level Control (LLC) with hard real-time I/O functions, often managed by PLCs. The FTU (Frascati Tokamak Upgrade) control system, designed with SM concepts, has undergone several stages of development over its fifteen years of operation. The latest evolution was inevitable, due to the obsolescence of the MLC CPUs, based on VME-MOTOROLA 68030 with the OS9 operating system. A large amount of C code was developed for that platform to route the data flow from the LLC, which consists of 24 Westinghouse Numalogic PC-700 PLCs with about 8000 field-points, to the HLC, based on a commercial Object-Oriented Real-Time database on an Alpha/Compaq Tru64 platform. The authors therefore had to look for cost-effective solutions, and finally a CompactPCI-Intel x86 platform with the Linux operating system was chosen. A software port has been done, taking into account the differences between the OS9 and Linux operating systems in terms of inter-process/network communications and the multi-port serial I/O driver. This paper describes the hardware/software architecture of the new MLC system, emphasizing the reliability and low cost of the open source solutions. Moreover, the huge number of software packages available in the open source environment will assure less painful maintenance and will open the way to further improvements of the system itself. (authors)

  8. Perm State University HPC-hardware and software services: capabilities for aircraft engine aeroacoustics problems solving

    Science.gov (United States)

    Demenev, A. G.

    2018-02-01

    The present work analyzes the high-performance computing (HPC) infrastructure capabilities available at Perm State University for solving aircraft engine aeroacoustics problems. We explore the ability to develop new computational aeroacoustics methods/solvers for computer-aided engineering (CAE) systems to handle complicated industrial problems of engine noise prediction. Leading aircraft engine engineering companies, including "UEC-Aviadvigatel" JSC (our industrial partner in Perm, Russia), require such methods/solvers to optimize aircraft engine geometry for fan noise reduction. We analyzed Perm State University's HPC hardware resources and software services with a view to their efficient use. The results demonstrate that the Perm State University HPC infrastructure is mature enough to take on industrial-scale problems of developing a CAE system with HPC methods and CFD solvers.

  9. High Temperature Exposure of HPC – Experimental Analysis of Residual Properties and Thermal Response

    Directory of Open Access Journals (Sweden)

    Pavlík Zbyšek

    2016-01-01

    Full Text Available The effect of high temperature exposure on the properties of a newly designed High Performance Concrete (HPC) is studied in the paper. The HPC samples are exposed to temperatures of 200, 400, 600, 800, and 1000°C, respectively. Among the basic physical properties, bulk density, matrix density and total open porosity are measured. The mechanical resistance to disruptive temperature action is characterised by compressive strength, flexural strength and dynamic modulus of elasticity. To study the chemical and physical processes in HPC during its high-temperature exposure, Simultaneous Thermal Analysis (STA) is performed. The linear thermal expansion coefficient is determined as a function of temperature using thermodilatometry (TDA). In order to describe the changes in the microstructure of HPC induced by high-temperature loading, mercury intrusion porosimetry (MIP) measurement of pore size distribution is done. An increase in the total open porosity and a connected decrease in the mechanical parameters were identified for temperatures above 200°C.
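    For reference, the linear thermal expansion coefficient that the TDA measurement yields is the standard definition (a general textbook relation, not a formula quoted from this paper):

    ```latex
    % Relative length change per unit temperature, referred to initial length L_0:
    \alpha(T) = \frac{1}{L_0}\,\frac{dL}{dT}
    ```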

  10. Advanced High and Low Fidelity HPC Simulations of FCS Concept Designs for Dynamic Systems

    National Research Council Canada - National Science Library

    Sandhu, S. S; Kanapady, R; Tamma, K. K

    2004-01-01

    ...) resources of many Army initiatives. In this paper we present a new and advanced HPC-based rigid and flexible modeling and simulation technology capable of adaptive high/low fidelity modeling that is useful in the initial design concept...

  11. Accelerating Memory-Access-Limited HPC Applications via Novel Fast Data Compression, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — A fast-paced continual increase on the ratio of CPU to memory speed feeds an exponentially growing limitation for extracting performance from HPC systems. Breaking...

  12. Accelerating Memory-Access-Limited HPC Applications via Novel Fast Data Compression, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A fast-paced continual increase on the ratio of CPU to memory speed feeds an exponentially growing limitation for extracting performance from HPC systems. Ongoing...

  13. Modeling the Performance of Fast Mulipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems in particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.
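    The quoted core count is just the exascale arithmetic: assuming on the order of one flop per cycle per gigahertz core (a deliberately rough assumption),

    ```latex
    10^{9}\ \text{cores} \times 10^{9}\ \frac{\text{flop}}{\text{s}\cdot\text{core}}
      \;=\; 10^{18}\ \frac{\text{flop}}{\text{s}} \;=\; 1\ \text{exaflop/s}
    ```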

  14. A ``Cyber Wind Facility'' for HPC Wind Turbine Field Experiments

    Science.gov (United States)

    Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue

    2013-03-01

    The Penn State "Cyber Wind Facility" (CWF) is a high-fidelity multi-scale high performance computing (HPC) environment in which "cyber field experiments" are designed and "cyber data" collected from wind turbines operating within the atmospheric boundary layer (ABL) environment. Conceptually the "facility" is akin to a high-tech wind tunnel with a controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field and forced by true atmospheric turbulence with controlled stability state. The CWF is created from state-of-the-art high-accuracy geometry and grid design and numerical methods, and with high-resolution simulation strategies that blend unsteady RANS near the surface with high-fidelity large-eddy simulation (LES) in separated boundary layer, blade and rotor wake regions, embedded within high-resolution LES of the ABL. CWF experiments complement physical field facility experiments that can capture wider ranges of meteorological events, but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.

  15. A Distributed Python HPC Framework: ODIN, PyTrilinos, & Seamless

    Energy Technology Data Exchange (ETDEWEB)

    Grant, Robert [Enthought, Inc., Austin, TX (United States)

    2015-11-23

    Under this grant, three significant software packages were developed or improved, all with the goal of improving the ease-of-use of HPC libraries. The first component is a Python package, named DistArray (originally named Odin), that provides a high-level interface to distributed array computing. This interface is based on the popular and widely used NumPy package and is integrated with the IPython project for enhanced interactive parallel distributed computing. The second Python package is the Distributed Array Protocol (DAP) that enables separate distributed array libraries to share arrays efficiently without copying or sending messages. If a distributed array library supports the DAP, it is then automatically able to communicate with any other library that also supports the protocol. This protocol allows DistArray to communicate with the Trilinos library via PyTrilinos, which was also enhanced during this project. A third package, PyTrilinos, was extended to support distributed structured arrays (in addition to the unstructured arrays of its original design), allow more flexible distributed arrays (i.e., the restriction to double precision data was lifted), and implement the DAP. DAP support includes both exporting the protocol so that external packages can use distributed Trilinos data structures, and importing the protocol so that PyTrilinos can work with distributed data from external packages.
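    To make the protocol idea concrete, here is a minimal sketch of the pattern the DAP standardizes: a producer exposes its local block of a distributed array plus distribution metadata through a well-known method, and a consumer reconstructs a view from it without copying. The method name and metadata keys below are modeled on the DAP's published style but are illustrative, not a verbatim copy of the spec:

    ```python
    import numpy as np

    class BlockDistArray:
        """One process's local block of a 1-D block-distributed array."""
        def __init__(self, local_block, global_size, offset):
            self.local = np.asarray(local_block)
            self.global_size = global_size
            self.offset = offset  # global index of local element 0

        def __distarray__(self):
            # DAP-style export: a version tag, a buffer, and per-dimension
            # distribution metadata (keys illustrative, not the real spec).
            return {
                "__version__": "0.x",
                "buffer": self.local,
                "dim_data": ({"dist_type": "b",
                              "size": self.global_size,
                              "start": self.offset,
                              "stop": self.offset + len(self.local)},),
            }

    def consume(exported):
        """A consumer library reads the protocol dict; no copies required."""
        d = exported["dim_data"][0]
        view = np.asarray(exported["buffer"])
        print(f"rows {d['start']}..{d['stop']} of {d['size']}: {view}")

    part = BlockDistArray([1.0, 2.0, 3.0], global_size=9, offset=3)
    consume(part.__distarray__())
    ```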

  16. HPC Co-operation between industry and university

    International Nuclear Information System (INIS)

    Ruhle, R.

    2003-01-01

    The full text of publication follows. Some years ago industry and university were using the same kind of high performance computers. Therefore it seemed appropriate to run the systems in common. Achieved synergies are larger systems to have better capabilities, to share skills in operating and using the system and to have less operating cost because of larger scale of operations. An example for a business model which allows that kind of co-operation would be demonstrated. Recently more and more simulations especially in the automotive industry are using PC clusters. A small number of PC's are used for one simulation, but the cluster is used for a large number of simulations as a throughput device. These devices are easily installed on the department level and it is difficult to achieve better cost on a central site, mainly because of the cost of the network. This is in contrast to the scientific need which still needs capability computing. In the presentation, strategies will be discussed for which cooperation potential in HPC (high performance computing) still exists. These are: to install heterogeneous computer farms, which allow to use the best computer for each application, to improve the quality of large scale simulation models to be used in design calculations or to form expert teams from industry and university to solve difficult problems in industry applications. Some examples of this co-operation are shown

  17. Spark and HPC for High Energy Physics Data Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sehrish, Saba; Kowalkowski, Jim; Paterno, Marc

    2017-05-01

    A full High Energy Physics (HEP) data analysis is divided into multiple data reduction phases. Processing within these phases is extremely time consuming, therefore intermediate results are stored in files held in mass storage systems and referenced as part of large datasets. This processing model limits what can be done with interactive data analytics. Growth in the size and complexity of experimental datasets, along with emerging big data tools, is beginning to change the traditional ways of doing data analyses. Use of big data tools for HEP analysis looks promising, mainly because extremely large HEP datasets can be represented and held in memory across a system and accessed interactively by encoding an analysis using high-level programming abstractions. The mainstream tools, however, are not designed for scientific computing or for exploiting the available HPC platform features. We use an example from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in Geneva, Switzerland. The LHC is the highest energy particle collider in the world. Our use case focuses on searching for new types of elementary particles explaining Dark Matter in the universe. We use HDF5 as our input data format, and Spark to implement the use case. We show the benefits and limitations of using Spark with HDF5 on Edison at NERSC.
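    A minimal sketch of the pattern the paper describes: read event columns from an HDF5 file and apply an interactive-style selection with Spark's high-level API. The file name, dataset paths, and cut values are hypothetical placeholders; the libraries used (h5py, PySpark) are real:

    ```python
    import h5py
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hep-hdf5-demo").getOrCreate()

    # Load two event-level columns from an HDF5 file (path/names hypothetical).
    with h5py.File("events.h5", "r") as f:
        met = f["/events/met"][:]      # missing transverse energy
        njets = f["/events/njets"][:]

    # Represent events as a Spark DataFrame and apply a toy selection cut.
    df = spark.createDataFrame(
        [(float(m), int(n)) for m, n in zip(met, njets)], ["met", "njets"])
    candidates = df.filter((df.met > 200.0) & (df.njets >= 2))
    print(candidates.count())

    spark.stop()
    ```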

  18. HPC Colony II Consolidated Annual Report: July-2010 to June-2011

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Terry R [ORNL

    2011-06-01

    This report provides a brief progress synopsis of the HPC Colony II project for the period of July 2010 to June 2011. HPC Colony II is a 36-month project and this report covers project months 10 through 21. It includes a consolidated view of all partners (Oak Ridge National Laboratory, IBM, and the University of Illinois at Urbana-Champaign) as well as detail for Oak Ridge. Highlights are noted and fund status data (burn rates) are provided.

  19. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    OpenAIRE

    Netto, Marco A. S.; Calheiros, Rodrigo N.; Rodrigues, Eduardo R.; Cunha, Renato L. F.; Buyya, Rajkumar

    2017-01-01

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources---steady (and sensitive) workloads can run on on-pr...

  20. LDRD HPC4Energy Wrapup Report - LDRD 12-ERD-074

    Energy Technology Data Exchange (ETDEWEB)

    Dube, E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grosh, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-23

    High-performance computing and simulation has the potential to optimize production, distribution, and conversion of energy. Although a number of concepts have been discussed, a comprehensive research project to establish and quantify the effectiveness of computing and simulation at scale to core energy problems has not been conducted. We propose to perform the basic research to adapt existing high-performance computing tools and simulation approaches to two selected classes of problems common across the energy sector. The first, applying uncertainty quantification and contingency analysis techniques to energy optimization, allows us to assess the effectiveness of LLNL core competencies to problems such as grid optimization and building-system efficiency. The second, applying adaptive meshing and numerical analysis techniques to physical problems at fine scale, could allow immediate impacts in key areas such as efficient combustion and fracture and spallation. By creating an integrated project team with the necessary expertise, we can efficiently address these issues, delivering both near-term results as well as quantifying developments needed to address future energy challenges.

  1. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M

    2007-03-22

    However, given the growing need for 'capability' systems as well, the budget demands are extreme, and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  2. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Directory of Open Access Journals (Sweden)

    Daniel Pierre Bovet

    2008-01-01

    Full Text Available Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever-increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high-responsiveness, open-source hard real-time operating system for multiprocessor systems capable of providing high real-time performance while keeping the code simple and not impacting the performance of the rest of the system. Moreover, ASMP-LINUX does not require code changes or application recompiling/relinking. In order to assess the performance of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  3. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Directory of Open Access Journals (Sweden)

    Betti Emiliano

    2008-01-01

    Full Text Available Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever-increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high-responsiveness, open-source hard real-time operating system for multiprocessor systems capable of providing high real-time performance while keeping the code simple and not impacting the performance of the rest of the system. Moreover, ASMP-LINUX does not require code changes or application recompiling/relinking. In order to assess the performance of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  4. Cleaning the GNU/Linux Operating System (Čiščenje operacijskega sistema GNU/Linux)

    OpenAIRE

    OBLAK, DENIS

    2018-01-01

    The goal of this thesis is to build an application that helps clean up the Linux operating system and works on most distributions. The theoretical part discusses cleaning the Linux operating system, which frees disk space and improves system performance. Cleaning techniques and existing tools for the Linux operating system are systematically reviewed and presented. Cleaning of the Windows and MacOS operating systems is presented next. Also presented are ...

  5. Evaluation of LLNL's Nuclear Accident Dosimeters at the CALIBAN Reactor September 2010

    International Nuclear Information System (INIS)

    Hickman, D.P.; Wysong, A.R.; Heinrichs, D.P.; Wong, C.T.; Merritt, M.J.; Topper, J.D.; Gressmann, F.A.; Madden, D.J.

    2011-01-01

    The Lawrence Livermore National Laboratory uses neutron activation elements in a Panasonic TLD holder as a personnel nuclear accident dosimeter (PNAD). The LLNL PNAD has periodically been tested using a Cf-252 neutron source; however, until 2009 it had been more than 25 years since the PNAD was tested against a source of neutrons arising from a reactor-generated neutron spectrum that simulates a criticality. In October 2009, LLNL participated in an intercomparison of nuclear accident dosimeters at the CEA Valduc Silene reactor (Hickman, et al. 2010). In September 2010, LLNL participated in a second intercomparison of nuclear accident dosimeters at CEA Valduc. The reactor-generated neutron irradiations for the 2010 exercise were performed at the Caliban reactor. The Caliban results are described in this report. The procedure for measuring the nuclear accident dosimeters in the event of an accident has a solid foundation based on many experimental results and comparisons. The entire process, from receiving the activated NADs to collecting and storing them after counting, was executed successfully in a field-based operation. Under normal conditions at LLNL, detectors are ready and available 24/7 to perform the necessary measurement of nuclear accident components. Likewise, LLNL maintains processing laboratories that are separated from the areas where measurements occur, but contained within the same facility for easy movement from processing area to measurement area. In the event of a loss of LLNL's permanent facilities, the Caliban and previous Silene exercises have demonstrated that LLNL can establish field operations that will produce very good nuclear accident dosimetry results. There are still several aspects of LLNL's nuclear accident dosimetry program that have not been tested or confirmed. For instance, LLNL's method for using biological samples (blood and hair) has not been verified since the method was first developed in the 1980s. Because LLNL and the other DOE

  6. LLNL/YMP Waste Container Fabrication and Closure Project; GFY technical activity summary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1990-10-01

    The Department of Energy's Office of Civilian Radioactive Waste Management (OCRWM) Program is studying Yucca Mountain, Nevada as a suitable site for the first US high-level nuclear waste repository. Lawrence Livermore National Laboratory (LLNL) has the responsibility for designing and developing the waste package for the permanent storage of high-level nuclear waste. This report is a summary of the technical activities for the LLNL/YMP Nuclear Waste Disposal Container Fabrication and Closure Development Project. Candidate welding closure processes were identified in the Phase 1 report; this report discusses Phase 2. Phase 2 of this effort involved laboratory studies to determine the optimum fabrication and closure processes. Because of budget limitations, LLNL narrowed the materials for evaluation in Phase 2 from the original six to four: Alloy 825, CDA 715, CDA 102 (or CDA 122) and CDA 952. Phase 2 studies focused on evaluation of candidate materials in conjunction with fabrication and closure processes.

  7. Debugging Nondeterministic Failures in Linux Programs through Replay Analysis

    Directory of Open Access Journals (Sweden)

    Shakaiba Majeed

    2018-01-01

    Full Text Available Reproducing a failure is the first and most important step in debugging because it enables us to understand the failure and track down its source. However, many programs are susceptible to nondeterministic failures that are hard to reproduce, which makes debugging extremely difficult. We first address the reproducibility problem by proposing an OS-level replay system for a uniprocessor environment that can capture and replay the nondeterministic events needed to reproduce a failure in Linux interactive and event-based programs. We then present an analysis method, called replay analysis, based on the proposed record and replay system to diagnose concurrency bugs in such programs. The replay analysis method uses a combination of static analysis, dynamic tracing during replay, and delta debugging to identify failure-inducing memory access patterns that lead to concurrency failure. The experimental results show that the presented record and replay system has low recording overhead and hence can be safely used in production systems to catch rarely occurring bugs. We also present a few concurrency bug case studies from real-world applications to prove the effectiveness of the proposed bug diagnosis framework.
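    Delta debugging, which the replay-analysis step uses to isolate failure-inducing patterns, is easy to sketch. Below is a compact form of the classic ddmin loop (Zeller's algorithm in a simplified, complement-only variant; the failing `test` predicate is a stand-in for re-running a recorded execution under replay):

    ```python
    def ddmin(items, test):
        """Shrink `items` to a small subset for which test(...) still fails.

        test(subset) returns True when the failure still reproduces
        (here it would re-run the recorded execution under replay).
        """
        n = 2
        while len(items) >= 2:
            chunk = len(items) // n
            subsets = [items[i:i + chunk] for i in range(0, len(items), chunk)]
            reduced = False
            for s in subsets:
                complement = [x for x in items if x not in s]
                if test(complement):          # failure persists without chunk s
                    items, n = complement, max(n - 1, 2)
                    reduced = True
                    break
            if not reduced:
                if n == len(items):           # finest granularity reached
                    break
                n = min(n * 2, len(items))    # refine granularity
        return items

    # Toy usage: the "failure" needs events 3 and 7 together.
    failing = ddmin(list(range(10)), lambda s: 3 in s and 7 in s)
    print(failing)  # -> [3, 7]
    ```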

  8. ZIO: The Ultimate Linux I/O Framework

    CERN Document Server

    Gonzalez Cobas, J D; Rubini, A; Nellaga, S; Vaga, F

    2014-01-01

    ZIO (with Z standing for "The Ultimate I/O" Framework) was developed for CERN with the specific needs of physics labs in mind, which are poorly addressed in the mainstream Linux kernel. ZIO provides a framework for industrial, high-bandwidth, high-channel-count I/O device drivers (digitizers, function generators, timing devices like TDCs) with performance, generality and scalability as design goals. Among its features, it offers abstractions for:
    • both input and output channels, and channel sets
    • run-time selection of trigger types
    • run-time selection of buffer types
    • sysfs-based configuration
    • char devices for data and metadata
    • a socket interface (PF_ZIO) as an alternative to char devices
    In this paper, we discuss the design and implementation of ZIO, and describe representative cases of driver development for typical and exotic applications: drivers for the FMC (FPGA Mezzanine Card, see [1]) boards developed at CERN, like the FMC ADC 100Msps digitizer, FMC TDC timestamp counter, and FMC DEL ...

  9. Multi-terabyte EIDE disk arrays running Linux RAID5

    International Nuclear Information System (INIS)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; Petravick, D.L.

    2004-01-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important
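    The parity mechanism relied on here is plain XOR: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the survivors. A toy demonstration of that property (illustrative only, not the 3ware or Linux md implementation):

    ```python
    import functools, operator

    def xor_blocks(blocks):
        """Byte-wise XOR of equal-length blocks."""
        return bytes(functools.reduce(operator.xor, t) for t in zip(*blocks))

    data = [b"AAAA", b"BBBB", b"CCCC"]     # data blocks on three disks
    parity = xor_blocks(data)              # parity block on a fourth disk

    # Disk holding data[1] fails: rebuild its block from survivors + parity.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]
    print(rebuilt)  # b'BBBB'
    ```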

  10. Multi-terabyte EIDE disk arrays running Linux RAID5

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; /Mississippi U.; Petravick, D.L.; /Fermilab

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.

  11. Operating characteristics and modeling of the LLNL 100-kV electric gun

    International Nuclear Information System (INIS)

    Osher, J.E.; Barnes, G.; Chau, H.H.; Lee, R.S.; Lee, C.; Speer, R.; Weingart, R.C.

    1989-01-01

    In the electric gun, the explosion of an electrically heated metal foil and the accompanying magnetic forces drive a thin flyer plate up a short barrel. Flyer velocities of up to 18 km/s make the gun useful for hypervelocity impact studies. The authors briefly review the technological evolution of the exploding-metal circuit elements that power the gun, describe the 100-kV electric gun designed at Lawrence Livermore National Laboratory (LLNL) in some detail, and present the general principles of electric gun operation. They compare the experimental performance of the LLNL gun with a simple model and with predictions of a magnetohydrodynamics code

  12. Comprehensive Angular Response Study of LLNL Panasonic Dosimeter Configurations and Artificial Intelligence Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Stone, D. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-06-30

    In April of 2016, the Lawrence Livermore National Laboratory External Dosimetry Program underwent a Department of Energy Laboratory Accreditation Program (DOELAP) on-site assessment. The assessment raised a concern that the 2013 study, "Angular Dependence Study of Panasonic UD-802 and UD-810 Dosimeters with the LLNL Artificial Intelligence Algorithm", was incomplete: only the responses at ±60° and 0° were evaluated, and independent dosimeter data were not used to evaluate the algorithm. Additionally, other configurations of LLNL dosimeters were not considered in that study, including nuclear accident dosimeters (NADs), which are placed in the wells surrounding the TLD in the dosimeter holder.

  13. Assessment of the proposed decontamination and waste treatment facility at LLNL

    International Nuclear Information System (INIS)

    Cohen, J.J.

    1987-01-01

    To provide a centralized decontamination and waste treatment facility (DWTF) at LLNL, the construction of a new installation has been planned. The objectives for this new facility were to replace obsolete, structurally and environmentally sub-marginal liquid and solid waste processing and decontamination facilities, and to bring these facilities into compliance with existing federal, state and local regulations as well as DOE orders. In a previous study, SAIC conducted a preliminary review and evaluation of existing facilities at LLNL and of the cost effectiveness of the proposed DWTF. This document reports on a detailed review of specific aspects of the proposed DWTF.

  14. Development of a laboratory model of SSSC using RTAI on Linux ...

    Indian Academy of Sciences (India)

    ... capability to the Linux General Purpose Operating System (GPOS) over and above the capabilities of non ... Introduction. Power transfer ... of a controller prototyping environment is Matlab/Simulink/Real-time Workshop software, which can be ...

  15. Supporting the Secure Halting of User Sessions and Processes in the Linux Operating System

    National Research Council Canada - National Science Library

    Brock, Jerome

    2001-01-01

    .... Only when a session must be reactivated are its processes returned to a runnable state. This thesis presents an approach for adding this "secure halting" functionality to the Linux operating system...
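    The halt/reactivate cycle described here (processes taken off the run queue until the session is needed again) can be illustrated from user space with stop/continue signals. This is only an analogy for the thesis's kernel-level mechanism, not its implementation:

    ```python
    import os, signal, subprocess, time

    # Spawn a stand-in "session process".
    child = subprocess.Popen(["sleep", "60"])

    os.kill(child.pid, signal.SIGSTOP)   # halt: process leaves the runnable state
    time.sleep(1)                        # ... session is inactive here ...
    os.kill(child.pid, signal.SIGCONT)   # reactivate: process is runnable again

    child.terminate()
    child.wait()
    ```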

  16. [Study for lung sound acquisition module based on ARM and Linux].

    Science.gov (United States)

    Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing

    2011-07-01

    An acquisition module with an ARM processor and Linux at its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can acquire human lung sounds reliably and effectively.

  17. Parallel Processing Performance Evaluation of Mixed T10/T100 Ethernet Topologies on Linux Pentium Systems

    National Research Council Canada - National Science Library

    Decato, Steven

    1997-01-01

    ... performed on relatively inexpensive off-the-shelf components. Alternative network topologies were implemented using 10 and 100 megabit-per-second Ethernet cards under the Linux operating system on Pentium-based personal computer platforms...

  18. Linux helps to stay independent (Linux aitab olla sõltumatu) / Jon Hall; interviewed by Kristjan Otsmann

    Index Scriptorium Estoniae

    Hall, Jon

    2002-01-01

    Estonia should make more use of open-source software, because this would make Estonia less dependent on foreign software producers, Linux International head Jon Hall said in an interview with Postimees.

  19. Summary of the LLNL one-dimensional transport-kinetics model of the troposphere and stratosphere: 1981

    International Nuclear Information System (INIS)

    Wuebbles, D.J.

    1981-09-01

    Since the LLNL one-dimensional coupled transport and chemical kinetics model of the troposphere and stratosphere was originally developed in 1972 (Chang et al., 1974), there have been many changes to the model's representation of atmospheric physical and chemical processes. A brief description is given of the current LLNL one-dimensional coupled transport and chemical kinetics model of the troposphere and stratosphere

  20. LLNL radioactive waste management plan as per DOE Order 5820.2

    International Nuclear Information System (INIS)

    1984-01-01

    The following aspects of LLNL's radioactive waste management plan are discussed: program administration; description of waste generating processes; radioactive waste collection, treatment, and disposal; sanitary waste management; Site 300 operations; schedules and major milestones for waste management activities; and environmental monitoring programs (sampling and analysis).

  1. National Uranium Resource Evaluation Program: the Hydrogeochemical Stream Sediment Reconnaissance Program at LLNL

    International Nuclear Information System (INIS)

    Higgins, G.H.

    1980-08-01

    From early 1975 to mid 1979, Lawrence Livermore National Laboratory (LLNL) participated in the Hydrogeochemical Stream Sediment Reconnaissance (HSSR), part of the National Uranium Resource Evaluation (NURE) program sponsored by the Department of Energy (DOE). The Laboratory was initially responsible for collecting, analyzing, and evaluating sediment and water samples from approximately 200,000 sites in seven western states. Eventually, however, the NURE program redefined its sampling priorities, objectives, schedules, and budgets, with the increasingly obvious result that LLNL objectives and methodologies were not compatible with those of the NURE program office, and the LLNL geochemical studies were not relevant to the program goal. The LLNL portion of the HSSR program was consequently terminated, and all work was suspended by June 1979. Of the 38,000 sites sampled, 30,000 were analyzed by instrumental neutron activation analyses (INAA), delayed neutron counting (DNC), optical emission spectroscopy (OES), and automated chloride-sulfate analyses (SC). Data from about 13,000 sites have been formally reported. From each site, analyses were published of about 30 of the 60 elements observed. Uranium mineralization has been identified at several places which were previously not recognized as potential uranium source areas, and a number of other geochemical anomalies were discovered

  2. LLNL Site plan for a MOX fuel lead assembly mission in support of surplus plutonium disposition

    Energy Technology Data Exchange (ETDEWEB)

    Bronson, M.C.

    1997-10-01

    The principal facilities that LLNL would use to support a MOX Fuel Lead Assembly Mission are Building 332 and Building 334. Both of these buildings are within the security boundary known as the LLNL Superblock. Building 332 is the LLNL Plutonium Facility. As an operational plutonium facility, it has all the infrastructure and support services required for plutonium operations. The LLNL Plutonium Facility routinely handles kilogram quantities of plutonium and uranium. Currently, the building is limited to a plutonium inventory of 700 kilograms and a uranium inventory of 300 kilograms. Process rooms (excluding the vaults) are limited to an inventory of 20 kilograms per room. Ongoing operations include: receiving SSTs, material receipt, storage, metal machining and casting, welding, metal-to-oxide conversion, purification, molten salt operations, chlorination, oxide calcination, cold pressing and sintering, vitrification, encapsulation, chemical analysis, metallography and microprobe analysis, waste material processing, material accountability measurements, packaging, and material shipping. Building 334 is the Hardened Engineering Test Building. This building supports environmental and radiation measurements on encapsulated plutonium and uranium components. Other existing facilities that would be used to support a MOX Fuel Lead Assembly Mission include Building 335 for hardware receiving and storage and TRU and LLW waste storage and shipping facilities, and Building 331 or Building 241 for storage of depleted uranium.

  3. Beam-beam studies for the proposed SLAC/LBL/LLNL B Factory

    International Nuclear Information System (INIS)

    Furman, M.A.

    1991-05-01

    We present a summary of beam-beam dynamics studies that have been carried out to date for the proposed SLAC/LBL/LLNL B Factory. Most of the material presented here is contained in the proposal's Conceptual Design Report, although post-CDR studies are also presented. 15 refs., 6 figs., 2 tabs

  4. LLNL Site plan for a MOX fuel lead assembly mission in support of surplus plutonium disposition

    International Nuclear Information System (INIS)

    Bronson, M.C.

    1997-01-01

    The principal facilities that LLNL would use to support a MOX Fuel Lead Assembly Mission are Building 332 and Building 334. Both of these buildings are within the security boundary known as the LLNL Superblock. Building 332 is the LLNL Plutonium Facility. As an operational plutonium facility, it has all the infrastructure and support services required for plutonium operations. The LLNL Plutonium Facility routinely handles kilogram quantities of plutonium and uranium. Currently, the building is limited to a plutonium inventory of 700 kilograms and a uranium inventory of 300 kilograms. Process rooms (excluding the vaults) are limited to an inventory of 20 kilograms per room. Ongoing operations include: receiving SSTs, material receipt, storage, metal machining and casting, welding, metal-to-oxide conversion, purification, molten salt operations, chlorination, oxide calcination, cold pressing and sintering, vitrification, encapsulation, chemical analysis, metallography and microprobe analysis, waste material processing, material accountability measurements, packaging, and material shipping. Building 334 is the Hardened Engineering Test Building. This building supports environmental and radiation measurements on encapsulated plutonium and uranium components. Other existing facilities that would be used to support a MOX Fuel Lead Assembly Mission include Building 335 for hardware receiving and storage and TRU and LLW waste storage and shipping facilities, and Building 331 or Building 241 for storage of depleted uranium.

  5. Dispersion of Radionuclides and Exposure Assessment in Urban Environments: A Joint CEA and LLNL Report

    Energy Technology Data Exchange (ETDEWEB)

    Glascoe, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gowardhan, Akshay [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lennox, Kristin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Simpson, Matthew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Yu, Kristen [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Armand, Patrick [Alternative Energies and Atomic Energy Commission (CEA), Paris (France); Duchenne, Christophe [Alternative Energies and Atomic Energy Commission (CEA), Paris (France); Mariotte, Frederic [Alternative Energies and Atomic Energy Commission (CEA), Paris (France); Pectorin, Xavier [Alternative Energies and Atomic Energy Commission (CEA), Paris (France)

    2014-12-19

    In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint table top exercise with experts in emergency management and atmospheric transport modeling. In this table top exercise, LLNL and CEA compared each other’s flow and dispersion models. The goal of the comparison is to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined, a regional scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel-Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian Particle Dispersion Models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely-populated urban locations were chosen: Chicago with its high-rise skyline and gridded street network and Paris with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) lower fidelity mass-consistent diagnostic, intermediate fidelity Navier-Stokes RANS models, and higher fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single

  6. Dispersion of Radionuclides and Exposure Assessment in Urban Environments: A Joint CEA and LLNL Report

    International Nuclear Information System (INIS)

    Glascoe, Lee; Gowardhan, Akshay; Lennox, Kristin; Simpson, Matthew; Yu, Kristen; Armand, Patrick; Duchenne, Christophe; Mariotte, Frederic; Pectorin, Xavier

    2014-01-01

    In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint table top exercise with experts in emergency management and atmospheric transport modeling. In this table top exercise, LLNL and CEA compared each other's flow and dispersion models. The goal of the comparison is to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined, a regional scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel-Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian Particle Dispersion Models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely-populated urban locations were chosen: Chicago with its high-rise skyline and gridded street network and Paris with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) lower fidelity mass-consistent diagnostic, intermediate fidelity Navier-Stokes RANS models, and higher fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single

  7. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    Science.gov (United States)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the

  8. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Schnoor, Ulrike; The ATLAS collaboration

    2017-01-01

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. The talk describes the concept and implementation of virtualizing the ATLAS software environment to run both data analysis and production on the HPC host system which is connected to the existing Tier-3 infrastructure. Main challenges include the integration into the NEMO and Tier-3 schedulers in a dynamic, on-demand way, the scalability of the OpenStack infrastructure, as well as the automatic generation of a fully functional virtual machine image providing access to the local user environment, the dCache storage element and the parallel file sys...

  9. The rise of HPC accelerators: towards a common vision for a petascale future

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    Nowadays new exciting scientific discoveries are mainly driven by large challenging simulations. An analysis of the trends in High Performance Computing clearly shows that we have hit several barriers (CPU frequency, power consumption, technological limits, limitations of the present paradigms) that we cannot easily overcome. In this context, accelerators have become a concrete alternative for increasing the compute capabilities of the HPC clusters deployed in universities and research centers across Europe. Within the EC-funded "Partnership for Advanced Computing in Europe" (PRACE) project, several actions have been taken and will be taken to enable community codes to exploit accelerators in modern HPC architectures. In this talk, the vision and the strategy adopted by the PRACE project will be presented, focusing on new HPC programming models and paradigms. Accelerators are a fundamental piece of innovation in this direction, from both the hardware and the software point of view. This work started dur...

  10. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Gamel, Anton Josef; The ATLAS collaboration

    2017-01-01

    The shared HPC cluster NEMO at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment, analogously to a WLCG center. This concept allows both data analysis and production to run on the HPC host system, which is connected to the existing Tier2/Tier3 infrastructure. The schedulers of the two clusters were integrated in a dynamic, on-demand way. An automatically generated, fully functional virtual machine image provides access to the local user environment. The performance in the virtualized environment is evaluated for typical High-Energy Physics applications.

  11. Getting Priorities Straight: Improving Linux Support for Database I/O

    DEFF Research Database (Denmark)

    Hall, Christoffer; Bonnet, Philippe

    2005-01-01

    The Linux 2.6 kernel supports asynchronous I/O as a result of propositions from the database industry. This is a positive evolution, but is it a panacea? In the context of the Badger project, a collaboration between MySQL AB and the University of Copenhagen, we evaluate how MySQL/InnoDB can best take advantage of Linux asynchronous I/O and how Linux can help MySQL/InnoDB best take advantage of the underlying I/O bandwidth. This is a crucial problem for the increasing number of MySQL servers deployed for very large database applications. In this paper, we first show that the conservative I/O submission policy used by InnoDB (as well as Oracle 9.2) leads to an under-utilization of the available I/O bandwidth. We then show that introducing prioritized asynchronous I/O in Linux will allow MySQL/InnoDB and the other Linux databases to fully utilize the available I/O bandwidth using a more aggressive I/O submission policy.
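
    For readers unfamiliar with the interface under discussion, the sketch below shows the submit/reap cycle of Linux kernel AIO as exposed to user space by libaio (link with -laio); the file name, queue depth, and buffer size are illustrative, and the prioritized submission the authors argue for is not part of the stock interface.

        /* Minimal libaio sketch: set up a context, submit one read,
         * and reap its completion. Illustrative values throughout. */
        #define _GNU_SOURCE
        #include <libaio.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            io_context_t ctx;
            struct iocb cb;
            struct iocb *cbs[1] = { &cb };
            struct io_event ev;
            void *buf;
            int fd;

            memset(&ctx, 0, sizeof(ctx));
            if (io_setup(8, &ctx) != 0) {               /* queue depth 8 */
                fprintf(stderr, "io_setup failed\n");
                return 1;
            }
            fd = open("datafile", O_RDONLY | O_DIRECT); /* hypothetical file */
            if (fd < 0) { perror("open"); return 1; }
            if (posix_memalign(&buf, 512, 4096))        /* O_DIRECT alignment */
                return 1;

            io_prep_pread(&cb, fd, buf, 4096, 0);       /* 4 KiB at offset 0 */
            if (io_submit(ctx, 1, cbs) != 1) {          /* queue the request */
                fprintf(stderr, "io_submit failed\n");
                return 1;
            }
            /* The caller may do useful work here; the read completes
             * asynchronously and is reaped below. */
            if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)
                printf("read completed: %lld bytes\n", (long long)ev.res);

            close(fd);
            io_destroy(ctx);
            free(buf);
            return 0;
        }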

  12. CompTIA Linux+ study guide exam LX0-103 and exam LX0-104

    CERN Document Server

    Bresnahan, Christine

    2015-01-01

    CompTIA Authorized Linux+ prep. CompTIA Linux+ Study Guide is your comprehensive study guide for the Linux+ Powered by LPI certification exams. With complete coverage of 100% of the objectives on both exam LX0-103 and exam LX0-104, this study guide provides clear, concise information on all aspects of Linux administration, with a focus on the latest version of the exam. You'll gain the insight of examples drawn from real-world scenarios, with detailed guidance and authoritative coverage of key topics, including GNU and Unix commands, system operation, system administration, system services, secu

  13. Joint research and development and exchange of technology on toxic material emergency response between LLNL and ENEA. 1985 progress report

    International Nuclear Information System (INIS)

    Dickerson, M.H.; Caracciolo, R.

    1986-01-01

    For the past six years, the US Department of Energy, LLNL, and the ENEA, Rome, Italy, have participated in cooperative studies for improving a systems approach to an emergency response following nuclear accidents. Technology exchange between LLNL and the ENEA was initially confined to the development, application, and evaluation of atmospheric transport and diffusion models. With the emergence of compatible hardware configurations between LLNL and ENEA, exchanges of technology and ideas for improving the development and implementation of systems are beginning to emerge. This report describes cooperative work that has occurred during the past three years, the present state of each system, and recommendations for future exchanges of technology

  14. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute’s computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  15. Could Blobs Fuel Storage-Based Convergence between HPC and Big Data?

    Energy Technology Data Exchange (ETDEWEB)

    Matri, Pierre; Alforov, Yevhen; Brandon, Alvaro; Kuhn, Michael; Carns, Philip; Ludwig, Thomas

    2017-09-05

    The increasingly growing data sets processed on HPC platforms raise major challenges for the underlying storage layer. A promising alternative to POSIX-IO-compliant file systems is simpler blobs (binary large objects), or object storage systems. Such systems offer lower overhead and better performance at the cost of largely unused features such as file hierarchies or permissions. Similarly, blobs are increasingly considered for replacing distributed file systems for big data analytics or as a base for storage abstractions such as key-value stores or time-series databases. This growing interest in such object storage on HPC and big data platforms raises the question: Are blobs the right level of abstraction to enable storage-based convergence between HPC and Big Data? In this paper we study the impact of blob-based storage for real-world applications on HPC and cloud environments. The results show that blob-based storage convergence is possible, leading to a significant performance improvement on both platforms.
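
    As an illustration of the abstraction level in question, a minimal blob-store interface might look like the following sketch; every name and signature here is hypothetical, chosen only to contrast whole-object operations on a flat namespace with the richer POSIX file model.

        /* Hypothetical minimal blob-store interface: a flat namespace of
         * named objects with whole-object reads and writes, and none of
         * the directories, permissions, or byte-range semantics whose
         * absence gives blob stores their low overhead. */
        #include <stddef.h>

        typedef struct blob_store blob_store;   /* opaque store handle */

        /* Create or overwrite the object named id with len bytes. */
        int blob_put(blob_store *s, const char *id,
                     const void *data, size_t len);

        /* Read the whole object into buf; its size comes back in *outlen. */
        int blob_get(blob_store *s, const char *id,
                     void *buf, size_t buflen, size_t *outlen);

        /* Remove the object; there is no recursive namespace to walk. */
        int blob_del(blob_store *s, const char *id);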

  16. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.

  17. A harmonic polynomial cell (HPC) method for 3D Laplace equation with application in marine hydrodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.

    2014-10-01

    We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by a linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. Accordingly, the method is named the Harmonic Polynomial Cell (HPC) method. The accuracy and efficiency of the HPC method are demonstrated on analytical cases. Comparisons are made with some other existing boundary-element-based methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM), and a fourth-order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.
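
    In outline (the notation here is ours, not necessarily the paper's), the representation within a single cell can be written as

        % Sketch of the HPC cell representation: the potential solves the
        % Laplace equation and is expanded in harmonic polynomials that
        % each satisfy it exactly, so only the coefficients are unknown.
        \begin{align}
          \nabla^{2}\phi = 0, \qquad
          \phi(\mathbf{x}) \approx \sum_{j=1}^{N} b_{j}\,P_{j}(\mathbf{x}),
          \qquad \nabla^{2}P_{j} = 0 \quad (j = 1,\dots,N).
        \end{align}

    Because each basis function is itself harmonic, the field equation is satisfied by construction; the coefficients b_j follow from collocation at the cell nodes, and continuity between overlapping cells couples the local expansions into a global sparse system.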

  18. DOE/LLNL verification symposium on technologies for monitoring nuclear tests related to weapons proliferation

    International Nuclear Information System (INIS)

    Nakanishi, K.K.

    1993-01-01

    The rapidly changing world situation has raised concerns regarding the proliferation of nuclear weapons and the ability to monitor a possible clandestine nuclear testing program. To address these issues, Lawrence Livermore National Laboratory's (LLNL) Treaty Verification Program sponsored a symposium funded by the US Department of Energy's (DOE) Office of Arms Control, Division of Systems and Technology. The DOE/LLNL Symposium on Technologies for Monitoring Nuclear Tests Related to Weapons Proliferation was held at the DOE's Nevada Operations Office in Las Vegas, May 6-7, 1992. This volume is a collection of several papers presented at the symposium. Several experts in monitoring technology presented invited talks assessing the status of monitoring technology with emphasis on the deficient areas requiring more attention in the future. In addition, several speakers discussed proliferation monitoring technologies being developed by the DOE's weapons laboratories

  19. The LLNL Multiuser Tandem Laboratory computer-controlled radiation monitoring system

    International Nuclear Information System (INIS)

    Homann, S.G.

    1992-01-01

    The Physics Department of the Lawrence Livermore National Laboratory (LLNL) recently constructed a Multiuser Tandem Laboratory (MTL) to perform a variety of basic and applied measurement programs. The laboratory and its research equipment were constructed with support from a consortium of LLNL Divisions, Sandia National Laboratories Livermore, and the University of California. Primary design goals for the facility were inexpensive construction and operation, high beam quality at a large number of experimental stations, and versatility in adapting to new experimental needs. To accomplish these goals, our main design decisions were to place the accelerator in an unshielded structure, to make use of reconfigured cyclotrons as effective switching magnets, and to rely on computer control systems for both radiological protection and highly reproducible and well-characterized accelerator operation. This paper addresses the radiological control computer system

  20. Implementing Discretionary Access Control with Time Character in Linux and Performance Analysis

    Institute of Scientific and Technical Information of China (English)

    TAN Liang; ZHOU Ming-Tian

    2006-01-01

    DAC (Discretionary Access Control) is access control based on the ownership relation between subject and object: the owner of an object can decide, at its own discretion, who may access the object and by what methods. In this paper, system time is treated as a basic security element. DAC_T (Discretionary Access Control Policy with Time Character) is presented and formalized; under DAC_T the subject can also decide at its own discretion when its objects may be accessed. DAC_T is then implemented on Linux based on GFAC (General Framework for Access Control), and the corresponding algorithm is put forward. Finally, a performance analysis of DAC_T_Linux is carried out. It shows that DAC_T_Linux not only realizes time constraints between subject and object but also remains acceptable in practice, despite a modest performance penalty.
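
    The record does not reproduce the paper's formalization; purely as an illustration, a discretionary access check extended with a time window could take the following shape, with every structure and name hypothetical.

        /* Illustration only (not the paper's implementation): a DAC entry
         * extended with a validity window chosen by the object's owner. */
        #include <stdbool.h>
        #include <time.h>

        struct acl_entry {
            unsigned uid;         /* subject granted by the owner   */
            unsigned perms;       /* permission bit mask (r/w/x)    */
            time_t   not_before;  /* start of the permitted window  */
            time_t   not_after;   /* end of the permitted window    */
        };

        /* Grant access only if the subject matches, every requested
         * permission bit is present, and the time lies in the window. */
        static bool dac_t_check(const struct acl_entry *e, unsigned uid,
                                unsigned requested, time_t now)
        {
            return e->uid == uid &&
                   (e->perms & requested) == requested &&
                   now >= e->not_before && now <= e->not_after;
        }

        int main(void)
        {
            /* Arbitrary example: uid 1000 may read until 2030-01-01. */
            struct acl_entry e = { 1000, 0x4, 0, 1893456000 };
            return dac_t_check(&e, 1000, 0x4, time(NULL)) ? 0 : 1;
        }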

  1. Research and implementation of intelligent gateway driver layer based on Linux bus

    Directory of Open Access Journals (Sweden)

    ZHANG Jian

    2016-10-01

    Full Text Available Currently, in the field of smart home, no organization has yet proposed a unified protocol standard. The fact that different vendors' devices use different communication modes and protocol standards increases the complexity and the limitations of heterogeneous gateway software framework design. In this paper, a virtual bus is registered under Linux through a series of interfaces provided by the Linux kernel, and the physical device drivers attach to this virtual bus. The details of each communication protocol are confined to the underlying adapters, making the integration of heterogeneous networks more natural. At the same time, building the intelligent gateway driver layer on a Linux bus gives the application layer a more unified and clearer logic, and makes it more convenient and distinct for hardware to access the network.
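
    A minimal sketch of registering such a virtual bus with the Linux driver core follows; the bus name and the name-based match rule are hypothetical, and the match-callback signature shown is the classic one, which varies slightly across kernel versions.

        /* Kernel-module sketch of a virtual bus for heterogeneous
         * smart-home adapters; error handling is abbreviated. */
        #include <linux/module.h>
        #include <linux/device.h>
        #include <linux/string.h>

        static int smarthome_match(struct device *dev, struct device_driver *drv)
        {
                /* Bind a driver to a device when their names agree; a real
                 * gateway would match on a protocol or vendor identifier. */
                return strcmp(dev_name(dev), drv->name) == 0;
        }

        static struct bus_type smarthome_bus = {
                .name  = "smarthome",
                .match = smarthome_match,
        };

        static int __init smarthome_init(void)
        {
                return bus_register(&smarthome_bus);  /* /sys/bus/smarthome */
        }

        static void __exit smarthome_exit(void)
        {
                bus_unregister(&smarthome_bus);
        }

        module_init(smarthome_init);
        module_exit(smarthome_exit);
        MODULE_LICENSE("GPL");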

  2. Quality of service on Linux for the Atlas TDAQ event building network

    International Nuclear Information System (INIS)

    Yasu, Y.; Manabe, A.; Fujii, H.; Watase, Y.; Nagasaka, Y.; Hasegawa, Y.; Shimojima, M.; Nomachi, M.

    2001-01-01

    Congestion control for packets sent on a network is important for DAQ systems that contain an event builder using switching network technologies. Quality of Service (QoS) is a technique for congestion control. Recent Linux releases provide QoS in the kernel to manage network traffic. The authors have analyzed the packet loss and packet distribution for the event builder prototype of the Atlas TDAQ system, using PC/Linux with a Gigabit Ethernet network as the testbed. The results showed that QoS using CBQ and TBF eliminated packet loss on UDP/IP transfer, while best-effort UDP/IP transfer suffered heavy packet loss. The results also showed that the QoS overhead was small. The authors conclude that QoS on Linux performs efficiently for TCP/IP and UDP/IP and will play an important role in the Atlas TDAQ system
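
    The testbed configuration itself is not given in the record; as one hedged illustration of how an application can cooperate with such kernel queueing disciplines, a Linux socket can be tagged with SO_PRIORITY so that a classful qdisc such as CBQ can steer its packets into a dedicated class (the priority value below is arbitrary).

        /* Tag a UDP socket's traffic with a priority; classful qdiscs
         * configured with tc can classify packets on this field. */
        #include <stdio.h>
        #include <sys/socket.h>

        int main(void)
        {
            int sock = socket(AF_INET, SOCK_DGRAM, 0);
            int prio = 6;                 /* arbitrary example value */

            if (sock < 0) {
                perror("socket");
                return 1;
            }
            if (setsockopt(sock, SOL_SOCKET, SO_PRIORITY,
                           &prio, sizeof(prio)) < 0) {
                perror("setsockopt(SO_PRIORITY)");
                return 1;
            }
            /* ... sendto() as usual; the qdisc sees tagged packets. */
            return 0;
        }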

  3. Reuse of the compact nuclear simulator software under PC with Linux

    International Nuclear Information System (INIS)

    Cha, K. H.; Park, J. C.; Kwon, K. C.; Lee, G. Y.

    2000-01-01

    This study was undertaken to reuse the source programs of a nuclear simulator on a PC with Open Source Software (OSS) and to extend their applicability. The source programs of the Compact Nuclear Simulator (CNS), which has been operated for institutional research and training at KAERI, were reused and implemented in a Linux-PC environment in support of the study. A PC with a 500 MHz processor and the Linux 2.2.5-22 kernel was used for the reuse implementation, and it was validated for several applications through functional testing of its main functions as interfaced with the compact control panels of the current CNS. Although the reuse implementation was limited to porting only the CNS programs to a PC with Linux, the experience can effectively enable the development and upgrade of small-scale simulators, the establishment of process simulation on PCs, and the development of prototype predictive simulations

  4. Research on applications of ARM-LINUX embedded systems in manufacturing the nuclear equipment

    International Nuclear Information System (INIS)

    Nguyen Van Sy; Phan Luong Tuan; Nguyen Xuan Vinh; Dang Quang Bao

    2016-01-01

    A new microprocessor system, an ARM processor running the open-source Linux operating system, is studied with the objective of applying ARM-Linux embedded systems to the manufacture of nuclear equipment. We use a vendor development board to learn and establish the workflow for an embedded system; based on this knowledge, we design an embedded-system motherboard that interfaces with peripherals (buttons and LEDs) through GPIO and connects to a GM counting system via an RS232 interface. The results of this study are: i) the procedures for working with embedded systems: customization, installation of the embedded operating system, and installation and configuration of the development tools on the host computer; ii) an ARM-Linux embedded-system motherboard that interfaces with the peripherals and the GM counting system, displaying the counts from the GM counting system on the touch screen. (author)

  5. A probabilistic risk assessment of the LLNL Plutonium facility's evaluation basis fire operational accident

    International Nuclear Information System (INIS)

    Brumburgh, G.

    1994-01-01

    The Lawrence Livermore National Laboratory (LLNL) Plutonium Facility conducts numerous operations involving plutonium, including device fabrication, development of fabrication techniques, metallurgy research, and laser isotope separation. A Safety Analysis Report (SAR) for the Building 332 Plutonium Facility was completed to demonstrate rational safety and acceptable risk to employees, the public, government property, and the environment. This paper outlines the PRA analysis of the Evaluation Basis Fire (EBF) operational accident. The EBF postulates the worst-case programmatic impact event for the Plutonium Facility

  6. Effects of stratospheric aerosol surface processes on the LLNL two-dimensional zonally averaged model

    International Nuclear Information System (INIS)

    Connell, P.S.; Kinnison, D.E.; Wuebbles, D.J.; Burley, J.D.; Johnston, H.S.

    1992-01-01

    We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl and N families in the lower stratosphere, compared to a model including only gas-phase photochemical reactions

  7. Status of the SLAC/LBL/LLNL B-factory and the BABAR detector

    International Nuclear Information System (INIS)

    Oddone, P.

    1994-10-01

    After a brief introduction on the physics reach of the SLAC/LBL/LLNL Asymmetric B-Factory, the author describes the status of the accelerator and the detector as of the end of 1994. At this time, essentially all major decisions have been made, including the choice of particle identification for the detector. The author concludes this report with the description of the schedule for the construction of both accelerator and detector

  8. Evaluation of the neutron dose received by personnel at the LLNL

    International Nuclear Information System (INIS)

    Hankins, D.E.

    1982-01-01

    This report was prepared to document the techniques being used to evaluate the neutron exposures received by personnel at the LLNL. Two types of evaluations are discussed covering the use of the routine personnel dosimeter and of the albedo neutron dosimeter. Included in the report are field survey results which were used to determine the calibration factors being applied to the dosimeter readings. Calibration procedures are discussed and recommendations are made on calibration and evaluation procedures

  9. LLNL Contribution to LLE FY09 Annual Report: NIC and HED Results

    International Nuclear Information System (INIS)

    Heeter, R.F.; Landen, O.L.; Hsing, W.W.; Fournier, K.B.

    2009-01-01

    In FY09, LLNL led 238 target shots on the OMEGA Laser System. Approximately half of these LLNL-led shots supported the National Ignition Campaign (NIC). The remainder was dedicated to high-energy-density stewardship experiments (HEDSE). Objectives of the LLNL-led NIC campaigns at OMEGA included: (1) Laser-plasma interaction studies in physical conditions relevant for the NIF ignition targets; (2) Demonstration of Tr = 100 eV foot symmetry tuning using a reemission sphere; (3) X-ray scattering in support of conductivity measurements of solid density Be plasmas; (4) Experiments to study the physical properties (thermal conductivity) of shocked fusion fuels; (5) High-resolution measurements of velocity nonuniformities created by microscopic perturbations in NIF ablator materials; (6) Development of a novel Compton Radiography diagnostic platform for ICF experiments; and (7) Precision validation of the equation of state for quartz. The LLNL HEDSE campaigns included the following experiments: (1) Quasi-isentropic (ICE) drive used to study material properties such as strength, equation of state, phase, and phase-transition kinetics under high pressure; (2) Development of a high-energy backlighter for radiography in support of material strength experiments using Omega EP and the joint OMEGA-OMEGA-EP configuration; (3) Debris characterization from long-duration, point-apertured, point-projection x-ray backlighters for NIF radiation transport experiments; (4) Demonstration of ultrafast temperature and density measurements with x-ray Thomson scattering from short-pulse laser-heated matter; (5) The development of an experimental platform to study nonlocal thermodynamic equilibrium (NLTE) physics using direct-drive implosions; (6) Opacity studies of high-temperature plasmas under LTE conditions; and (7) Characterization of copper (Cu) foams for HEDSE experiments.

  10. Superconducting magnet development capability of the LLNL [Lawrence Livermore National Laboratory] High Field Test Facility

    International Nuclear Information System (INIS)

    Miller, J.R.; Shen, S.; Summers, L.T.

    1990-02-01

    This paper discusses the following topics: High-Field Test Facility Equipment at LLNL; FENIX Magnet Facility; High-Field Test Facility (HFTF) 2-m Solenoid; Cryogenic Mechanical Test Facility; Electro-Mechanical Conductor Test Apparatus; Electro-Mechanical Wire Test Apparatus; FENIX/HFTF Data System and Network Topology; Helium Gas Management System (HGMS); Airco Helium Liquefier/Refrigerator; CTI 2800 Helium Liquefier; and MFTF-B/ITER Magnet Test Facility

  11. LLNL Containment Program nuclear test effects and geologic data base: glossary and parameter definitions

    International Nuclear Information System (INIS)

    Howard, N.W.

    1983-01-01

    This report lists, defines, and updates Parameters in DBASE, an LLNL test effects data bank in which data are stored from experiments performed at NTS and other test sites. Parameters are listed by subject and by number. Part 2 of this report presents the same information for parameters for which some of the data may be classified; it was issued in 1979 and is not being reissued at this time as it is essentially unchanged

  12. Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes

    International Nuclear Information System (INIS)

    Bergman, W.; Elliott, J.; Wilson, K.

    1995-01-01

    The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data is available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine if the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed 8.0% and 4.1% of all filters respectively developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3%+/-1.7%. The increase in the filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system

  13. Physics of laser fusion. Volume II. Diagnostics of experiments on laser fusion targets at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Ahlstrom, H.G.

    1982-01-01

    These notes present the experimental basis and status for laser fusion as developed at LLNL. There are two other volumes in this series: Vol. I, by C.E. Max, presents the theoretical laser-plasma interaction physics; Vol. III, by J.F. Holzrichter et al., presents the theory and design of high-power pulsed lasers. A fourth volume will present the theoretical implosion physics. The notes consist of six sections. The first, an introductory section, provides some of the history of inertial fusion and a simple explanation of the concepts involved. The second section presents an extensive discussion of diagnostic instrumentation used in the LLNL Laser Fusion Program. The third section is a presentation of laser facilities and capabilities at LLNL. The purpose here is to define capability, not to derive how it was obtained. The fourth and fifth sections present the experimental data on laser-plasma interaction and implosion physics. The last chapter is a short projection of the future.

  14. Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Elliott, J.; Wilson, K. [Lawrence Livermore National Laboratory, CA (United States)

    1995-02-01

    The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data is available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine if the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed 8.0% and 4.1% of all filters respectively developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3%+/-1.7%. The increase in the filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system.

  15. Physics of laser fusion. Volume II. Diagnostics of experiments on laser fusion targets at LLNL

    International Nuclear Information System (INIS)

    Ahlstrom, H.G.

    1982-01-01

    These notes present the experimental basis and status for laser fusion as developed at LLNL. There are two other volumes in this series: Vol. I, by C.E. Max, presents the theoretical laser-plasma interaction physics; Vol. III, by J.F. Holzrichter et al., presents the theory and design of high-power pulsed lasers. A fourth volume will present the theoretical implosion physics. The notes consist of six sections. The first, an introductory section, provides some of the history of inertial fusion and a simple explanation of the concepts involved. The second section presents an extensive discussion of diagnostic instrumentation used in the LLNL Laser Fusion Program. The third section is a presentation of laser facilities and capabilities at LLNL. The purpose here is to define capability, not to derive how it was obtained. The fourth and fifth sections present the experimental data on laser-plasma interaction and implosion physics. The last chapter is a short projection of the future

  16. Malware Memory Analysis of the Jynx2 Linux Rootkit (Part 1): Investigating a Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2014-10-01

    ...analysis techniques is outside the scope of this work, as it requires a comprehensive study of operating system internals and software reverse engineering. Why examine Linux memory images or make them available? After extensively searching the available public

  17. ARTiS, an Asymmetric Real-Time Scheduler for Linux on Multi-Processor Architectures

    OpenAIRE

    Piel , Éric; Marquet , Philippe; Soula , Julien; Osuna , Christophe; Dekeyser , Jean-Luc

    2005-01-01

    The ARTiS system is a real-time extension of the GNU/Linux scheduler dedicated to SMP (Symmetric Multi-Processor) systems. It allows High Performance Computing and real-time processing to be mixed. ARTiS exploits the SMP architecture to guarantee the preemption of a processor when the system has to schedule a real-time task. The implementation is available as a modification of the Linux kernel, especially focusing on (but not restricted to) the IA-64 architecture. The basic idea of ARTiS is to assign a selected se...

  18. Linux toys II 9 Cool New Projects for Home, Office, and Entertainment

    CERN Document Server

    Negus, Christopher

    2006-01-01

    Builds on the success of the original Linux Toys (0-7645-2508-5) and adds projects using different Linux distributions. All-new toys in this edition include a car computer system with built-in entertainment and navigation features, bootable movies, a home surveillance monitor, a LEGO Mindstorms robot, and a weather mapping station. Introduces small business opportunities with an Internet radio station and Internet café projects. Companion Web site features specialized hardware drivers, software interfaces, music and game software, project descriptions, and discussion forums. Includes a CD-ROM with scr

  19. LPIC-2 Linux Professional Institute Certification Study Guide Exams 201 and 202

    CERN Document Server

    Smith, Roderick W

    2011-01-01

    The first book to cover the LPIC-2 certification. Linux allows developers to update source code freely, making it an excellent, low-cost, secure alternative to other, more expensive operating systems. It is for this reason that the demand for IT professionals to have an LPI certification is so strong. This study guide provides unparalleled coverage of the LPIC-2 objectives for exams 201 and 202. Clear and concise coverage examines all Linux administration topics while practical, real-world examples enhance your learning process. On the CD, you'll find the Sybex Test Engine, electronic flash

  20. Memanfaatkan Sistem Operasi Linux Untuk Keamanan Data Pada E-commerce

    OpenAIRE

    Isnania

    2012-01-01

    E-commerce is one of the major channels for carrying out transactions, and security is a vital concern for protecting customer data and transactions. To realize an e-commerce process, one must prepare a reliable operating system (OS) to secure the transaction path, as well as a dynamic database back end that provides the catalog of products to be sold online. For the technology, we can adopt open-source technologies, all of which are available on Linux. Linux also bundles...

  1. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    Directory of Open Access Journals (Sweden)

    Robert C. Thomson

    2009-01-01

    Full Text Available PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  2. Getting priorities straight: improving Linux support for database I/O

    DEFF Research Database (Denmark)

    Hall, Christoffer; Bonnet, Philippe

    2005-01-01

    The Linux 2.6 kernel supports asynchronous I/O as a result of propositions from the database industry. This is a positive evolution, but is it a panacea? In the context of the Badger project, a collaboration between MySQL AB and the University of Copenhagen, ...

  3. An approach to improving the structure of error-handling code in the linux kernel

    DEFF Research Database (Denmark)

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error-handling code at the end of each function, where... We propose an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, where it reorganizes the error-handling code of over 1800 functions in about 25 minutes.
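
    The style in question is the kernel's familiar goto-based unwinding idiom, in which each failure path jumps to a label that releases exactly the resources acquired so far; a schematic user-space rendition (with the function's real work elided) is:

        /* Goto-based error handling: acquire resources in order and,
         * on failure, release in reverse from labels at the end. */
        #include <errno.h>
        #include <stdlib.h>

        static int example_init(void)
        {
            int err;
            char *a, *b;

            a = malloc(64);             /* first resource  */
            if (!a)
                return -ENOMEM;

            b = malloc(64);             /* second resource */
            if (!b) {
                err = -ENOMEM;
                goto err_free_a;        /* unwind only what exists */
            }

            /* ... the function's real work would go here ... */

            free(b);
            free(a);
            return 0;

        err_free_a:
            free(a);
            return err;
        }

        int main(void)
        {
            return example_init() == 0 ? 0 : 1;
        }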

  4. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    Science.gov (United States)

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  5. Institute of Geophysics and Planetary Physics (IGPP), Lawrence Livermore National Laboratory (LLNL): Quinquennial report, November 14-15, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Tweed, J.

    1996-10-01

    This Quinquennial Review Report of the Lawrence Livermore National Laboratory (LLNL) branch of the Institute for Geophysics and Planetary Physics (IGPP) provides an overview of IGPP-LLNL, its mission, and research highlights of current scientific activities. This report also presents an overview of the University Collaborative Research Program (UCRP), a summary of the UCRP Fiscal Year 1997 proposal process and the project selection list, a funding summary for 1993-1996, seminars presented, and scientific publications. 2 figs., 3 tabs.

  6. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    Science.gov (United States)

    Sonoda, Jun; Yamaki, Kota

    We develop an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It also makes it easy to install/uninstall packages and to enable/disable init daemons. When we rebuild a Live Linux CD using our system, the number of operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in a class of information processing education in our college. A questionnaire survey of the 43 students who used the Live Linux CD showed that it was useful for about 80 percent of them. From these results, we conclude that our system can easily and automatically rebuild a useful Live Linux in a short time.

  7. Lawrence Livermore National Laboratory selects Intel Itanium 2 processors for world's most powerful Linux cluster

    CERN Multimedia

    2003-01-01

    "Intel Corporation, system manufacturer California Digital and the University of California at Lawrence Livermore National Laboratory (LLNL) today announced they are building one of the world's most powerful supercomputers. The supercomputer project, codenamed "Thunder," uses nearly 4,000 Intel® Itanium® 2 processors... is expected to be complete in January 2004" (1 page).

  8. The Free Software Movement and the GNU/Linux Operating System

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    Richard Stallman will speak about the purpose, goals, philosophy, methods, status, and future prospects of the GNU operating system, which in combination with the kernel Linux is now used by an estimated 17 to 20 million users worldwide. Biography: Richard Stallman is the founder of the GNU Project, launched in 1984 to develop the free operating system GNU (an acronym for "GNU's Not Unix"), and thereby give computer users the freedom that most of them have lost. GNU is free software: everyone is free to copy it and redistribute it, as well as to make changes either large or small. Today, Linux-based variants of the GNU system, based on the kernel Linux developed by Linus Torvalds, are in widespread use. There are estimated to be some 20 million users of GNU/Linux systems today. Richard Stallman is the principal author of the GNU Compiler Collection, a portable optimizing compiler which was designed to support diverse architectures and multiple languages. The compiler now supports over 30 different architect...

  9. Convolutional Neural Network on Embedded Linux System-on-Chip: A Methodology and Performance Benchmark

    Science.gov (United States)

    2016-05-01

    ...two NVIDIA GTX580 GPUs [3]. Therefore, for this initial work, we decided to concentrate on small networks and small datasets until the methods are

  10. WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code

    DEFF Research Database (Denmark)

    Lawall, Julia; Brunel, Julien Pierre Manuel; Palix, Nicolas Jean-Michel

    2009-01-01

    ...the tools on specific kinds of bugs and to relate the results to patterns in the source code. We propose a declarative approach to bug finding in Linux OS code using a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer expresses...

  11. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers running the Linux operating system, and application servers. The web application is intended for the administrators of these systems, as an aid to better understanding the current state, load and operation of the individual components of the server systems.

  12. NSC KIPT Linux cluster for computing within the CMS physics program

    International Nuclear Information System (INIS)

    Levchuk, L.G.; Sorokin, P.V.; Soroka, D.V.

    2002-01-01

    The architecture of the NSC KIPT specialized Linux cluster constructed for carrying out work on CMS physics simulations and data processing is described. The configuration of the portable batch system (PBS) on the cluster is outlined. Capabilities of the cluster in its current configuration to perform CMS physics simulations are pointed out

  13. Implementation of the On-the-fly Encryption for the Linux OS Based on Certified CPS

    Directory of Open Access Journals (Sweden)

    Alexander Mikhailovich Korotin

    2013-02-01

    Full Text Available The article is devoted to tools for on-the-fly encryption and a method to implement such a tool for the Linux OS based on a certified CPS. The idea is to modify the existing tool named eCryptfs. Russian cryptographic algorithms will be used in user and kernel modes.

  14. Design of software platform based on linux operating system for γ-spectrometry instrument

    International Nuclear Information System (INIS)

    Hong Tianqi; Zhou Chen; Zhang Yongjin

    2008-01-01

    This paper describes the design of a γ-spectrometry instrument software platform based on the S3C2410A processor with an ARM920T core. Emphasis is placed on analyzing the integrated application of the embedded Linux operating system, the YAFFS file system, and the Qt/Embedded GUI development library. It presents a new software platform for portable γ-measurement instruments. (authors)

  15. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Allcock, William; Beggio, Chris; Campbell, Stuart; Cherry, Andrew; Cholia, Shreyas; Dart, Eli; England, Clay; Fahey, Tim; Foertter, Fernanda; Goldstone, Robin; Hick, Jason; Karelitz, David; Kelly, Kaki; Monroe, Laura; Prabhat,; Skinner, David; White, Julia

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  16. Educational program on HPC technologies based on the heterogeneous cluster HybriLIT (LIT JINR

    Directory of Open Access Journals (Sweden)

    Vladimir V. Korenkov

    2017-12-01

    Full Text Available The article highlights the issues of training personnel to work with high-performance computing (HPC) systems, as well as of supporting the software and information environment necessary for the efficient use of heterogeneous computing resources and the development of parallel and hybrid applications. The heterogeneous computing cluster HybriLIT, one of the components of the Multifunctional Information and Computing Complex of JINR, is used as the main platform for training and re-training specialists, as well as for training students, graduate students and young scientists. The HybriLIT cluster is a dynamic, actively developing structure incorporating the most advanced HPC computing architectures (graphics accelerators, Intel Xeon Phi coprocessors), and it has a well-developed software and information environment, which in turn makes it possible to build educational programs at an up-to-date level and enables learners to master both modern computing platforms and modern IT technologies.

  17. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  18. Survey on Projects at DLR Simulation and Software Technology with Focus on Software Engineering and HPC

    OpenAIRE

    Schreiber, Andreas; Basermann, Achim

    2013-01-01

    We introduce the DLR institute “Simulation and Software Technology” (SC) and present current activities regarding software engineering and high performance computing (HPC) in German or international projects. Software engineering at SC focusses on data and knowledge management as well as tools for studies and experiments. We discuss how we apply software configuration management, validation and verification in our projects. Concrete research topics are traceability of (software devel...

  19. When to Renew Software Licences at HPC Centres? A Mathematical Analysis

    International Nuclear Information System (INIS)

    Baolai, Ge; MacIsaac, Allan B

    2010-01-01

    In this paper we study a common problem faced by many high performance computing (HPC) centres: when and how to renew commercial software licences. Software vendors often sell perpetual licences along with forward update and support contracts at an additional annual cost. Every year or so, software support personnel and the budget units of HPC centres must decide whether or not to renew such support, and usually such decisions are made intuitively. The cumulative cost of a continuing support contract can, however, be substantial. One might therefore want a rational answer to the question of whether and when the renewal option should be exercised. In an attempt to study this problem within a market framework, we present the mathematical problem derived for the day-to-day operation of a hypothetical HPC centre that charges for the use of software packages. In the mathematical model, we assume that the uncertainty comes from the demand (the number of users using the packages) as well as from the price. We further assume that the availability of up-to-date software versions may also affect the demand. We develop a renewal strategy that aims to maximize the expected profit from the use of the software under consideration. The derived problem involves a decision tree, which constitutes a numerical procedure that can be processed in parallel.
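
    As a rough illustration (not the paper's actual model, which uses a multi-period decision tree), the expected-value comparison at the heart of such a renewal decision can be sketched in a few lines; the fee, price, and demand distribution below are invented for illustration:

```python
import random

# Hypothetical one-year look-ahead: renew support (pay FEE, users stay on
# current versions) vs. skip renewal (no fee, but stale software deters users).
FEE = 20_000.0     # annual support contract cost (assumed)
PRICE = 50.0       # charge per user-month of the package (assumed)
random.seed(1)

def expected_profit(renew: bool, trials: int = 100_000) -> float:
    total = 0.0
    for _ in range(trials):
        demand = random.gauss(mu=60, sigma=15)   # users per month, stochastic
        if not renew:
            demand *= 0.8                        # outdated versions lose users (assumed)
        total += max(demand, 0.0) * PRICE * 12 - (FEE if renew else 0.0)
    return total / trials

print("renew:", expected_profit(True))
print("skip: ", expected_profit(False))
# Exercise the renewal option only if the expected profit difference is positive.
```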

  20. Optimizing new components of PanDA for ATLAS production on HPC resources

    CERN Document Server

    Maeno, Tadashi; The ATLAS collaboration

    2017-01-01

    The Production and Distributed Analysis system (PanDA) has been used for workload management in the ATLAS Experiment for over a decade. It uses pilots to retrieve jobs from the PanDA server and execute them on worker nodes. While PanDA has mostly been used on Worldwide LHC Computing Grid (WLCG) resources for production operations, R&D work on cloud and HPC resources has been ongoing for many years, and these efforts have led to significant usage of large-scale HPC resources in the past couple of years. In this talk we describe the changes to the pilot that enabled the use of HPC sites by PanDA, specifically the Titan supercomputer at Oak Ridge National Laboratory. Furthermore, it was decided in 2016 to start a fresh redesign of the pilot with a more modern approach to better serve present and future needs of ATLAS and other collaborations interested in using the PanDA System. Another new project, for the development of a resource-oriented service, PanDA Harvester, was also launched in 2016. The...

  1. When to Renew Software Licences at HPC Centres? A Mathematical Analysis

    Science.gov (United States)

    Baolai, Ge; MacIsaac, Allan B.

    2010-11-01

    In this paper we study a common problem faced by many high performance computing (HPC) centres: when and how to renew commercial software licences. Software vendors often sell perpetual licences along with forward update and support contracts at an additional annual cost. Every year or so, software support personnel and the budget units of HPC centres must decide whether or not to renew such support, and usually such decisions are made intuitively. The cumulative cost of a continuing support contract can, however, be substantial. One might therefore want a rational answer to the question of whether and when the renewal option should be exercised. In an attempt to study this problem within a market framework, we present the mathematical problem derived for the day-to-day operation of a hypothetical HPC centre that charges for the use of software packages. In the mathematical model, we assume that the uncertainty comes from the demand (the number of users using the packages) as well as from the price. We further assume that the availability of up-to-date software versions may also affect the demand. We develop a renewal strategy that aims to maximize the expected profit from the use of the software under consideration. The derived problem involves a decision tree, which constitutes a numerical procedure that can be processed in parallel.

  2. BEAM: A computational workflow system for managing and modeling material characterization data in HPC environments

    Energy Technology Data Exchange (ETDEWEB)

    Lingerfelt, Eric J [ORNL; Endeve, Eirik [ORNL; Ovchinnikov, Oleg S [ORNL; Borreguero Calvo, Jose M [ORNL; Park, Byung H [ORNL; Archibald, Richard K [ORNL; Symons, Christopher T [ORNL; Kalinin, Sergei V [ORNL; Messer, Bronson [ORNL; Shankar, Mallikarjun [ORNL; Jesse, Stephen [ORNL

    2016-01-01

    Improvements in scientific instrumentation allow imaging at mesoscopic to atomic length scales in many spectroscopic modes, and with the rise of multimodal acquisition systems and the associated processing capability, the era of multidimensional, informationally dense data sets has arrived. Technical issues in these combinatorial scientific fields are exacerbated by computational challenges, best summarized as the necessity for drastic improvement in the capability to transfer, store, and analyze large volumes of data. The Bellerophon Environment for Analysis of Materials (BEAM) platform provides materials scientists the capability to directly leverage the integrated computational and analytical power of High Performance Computing (HPC) to perform scalable data analysis and simulation via an intuitive, cross-platform client user interface. This framework delivers authenticated, push-button execution of complex user workflows that deploy data analysis algorithms and computational simulations utilizing the converged compute-and-data infrastructure at Oak Ridge National Laboratory's (ORNL) Compute and Data Environment for Science (CADES) and HPC environments like Titan at the Oak Ridge Leadership Computing Facility (OLCF). In this work we address the underlying HPC needs for characterization in the materials science community, elaborate how BEAM's design and infrastructure tackle those needs, and present a small subset of user cases where scientists utilized BEAM across a broad range of analytical techniques and analysis modes.

  3. A probabilistic risk assessment of the LLNL Plutonium Facility's evaluation basis fire operational accident. Revision 1

    International Nuclear Information System (INIS)

    Brumburgh, G.P.

    1995-01-01

    The Lawrence Livermore National Laboratory (LLNL) Plutonium Facility conducts numerous programmatic activities involving plutonium, including device fabrication, development of improved and/or unique fabrication techniques, metallurgy research, and laser isotope separation. A Safety Analysis Report (SAR) for the Building 332 Plutonium Facility was completed in July 1994 to address operational safety and acceptable risk to employees, the public, government property, and the environment. This paper outlines the probabilistic risk assessment of the Evaluation Basis Fire (EBF) operational accident. The EBF postulates the worst-case programmatic impact event for the Plutonium Facility.

  4. Status of the SLAC/LBL/LLNL B-Factory and the BaBar detector

    International Nuclear Information System (INIS)

    Oddone, P.

    1994-08-01

    The primary motivation of the Asymmetric B-Factory is the study of CP violation. The decay of B mesons and, in particular, the decay of neutral B mesons, offers the possibility of determining conclusively whether CP violation is part and parcel of the Standard Model with three generations of quarks and leptons. Alternatively, the authors may discover that CP violation lies outside the present framework. In this paper the authors briefly describe the physics reach of the SLAC/LBL/LLNL Asymmetric B-Factory, the progress on the machine design and construction, the progress on the detector design, and the schedule to complete both projects

  5. M4FT-15LL0806062-LLNL Thermodynamic and Sorption Data FY15 Progress Report

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wolery, T. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-31

    This progress report (Milestone Number M4FT-15LL0806062) summarizes research conducted at Lawrence Livermore National Laboratory (LLNL) within Work Package Number FT-15LL080606. The focus of this research is the thermodynamic modeling of Engineered Barrier System (EBS) materials and properties and development of thermodynamic databases and models to evaluate the stability of EBS materials and their interactions with fluids at various physicochemical conditions relevant to subsurface repository environments. The development and implementation of equilibrium thermodynamic models are intended to describe chemical and physical processes such as solubility, sorption, and diffusion.

  6. Analyses in Support of Z-IFE LLNL Progress Report for FY-05

    International Nuclear Information System (INIS)

    Moir, R W; Abbott, R P; Callahan, D A; Latkowski, J F; Meier, W R; Reyes, S

    2005-01-01

    The FY04 LLNL study of Z-IFE [1] proposed and evaluated a design that deviated from SNL's previous baseline design. The FY04 study included analyses of shock mitigation, stress in the first wall, neutronics, and systems studies. In FY05, the subject of this report, we build on our work and the theme of last year. Our emphasis continues to be on alternatives that hold promise of considerable improvements in design and economics compared to the baseline design. Our key results are summarized here.

  7. Over Batch Analysis for the LLNL Plutonium Packaging System (PuPS)

    International Nuclear Information System (INIS)

    Riley, D.; Dodson, K.

    2007-01-01

    This document addresses the concern raised in the Savannah River Site (SRS) Acceptance Criteria (Reference 1, Section 6.a.3) about receiving an item that is over batched by 1.0 kg of fissile materials. This document shows that the occurrence of this is incredible. Some of the Department of Energy Standard 3013 (DOE-STD-3013) requirements are described in Section 2.1. The SRS requirement is discussed in Section 2.2. Section 2.3 describes the way fissile materials are handled in the Lawrence Livermore National Laboratory (LLNL) Plutonium Facility (B332). Based on the material handling discussed in Section 2.3, there are only three errors that could result in a shipping container being over batched. These are: incorrect measurement of the item, selecting the wrong item to package, and packaging two items into a single shipping container. The analysis in Section 3 shows that the first two events are incredible because of the controls that exist at LLNL. The third event is physically impossible. Therefore, it is incredible for an item shipped to SRS to be over batched by more than 1.0 kg of fissile materials.

  8. Over Batch Analysis for the LLNL DOE-STD-3013 Packaging System

    International Nuclear Information System (INIS)

    Riley, D.C.; Dodson, K.

    2009-01-01

    This document addresses the concern raised in the Savannah River Site (SRS) Acceptance Criteria about receiving an item that is over batched by 1.0 kg of fissile materials. This document shows that the occurrence of this is incredible. Some of the Department of Energy Standard 3013 (DOE-STD-3013) requirements are described in Section 2.1. The SRS requirement is discussed in Section 2.2. Section 2.3 describes the way fissile materials are handled in the Lawrence Livermore National Laboratory (LLNL) Plutonium Facility (B332). Based on the material handling discussed in Section 2.3, there are only three errors that could result in a shipping container being over batched. These are: incorrect measurement of the item, selecting the wrong item to package, and packaging two items into a single shipping container. The analysis in Section 3 shows that the first two events are incredible because of the controls that exist at LLNL. The third event is physically impossible. Therefore, it is incredible for an item shipped to SRS to be over batched by more than 1.0 kg of fissile materials.

  9. Implementing necessary and sufficient standards for radioactive waste management at LLNL

    International Nuclear Information System (INIS)

    Sims, J.M.; Ladran, A.; Hoyt, D.

    1995-01-01

    Lawrence Livermore National Laboratory (LLNL) and the U.S. Department of Energy, Oakland Field Office (DOE/OAK), are participating in a pilot program to evaluate the process to develop necessary and sufficient sets of standards for contractor activities. This concept of contractor and DOE jointly and locally deciding on what constitutes the set of standards that are necessary and sufficient to perform work safely and in compliance with federal, state, and local regulations, grew out of DOE's Department Standards Committee (Criteria for the Department's Standards Program, August 1994, DOE/EH/-0416). We have chosen radioactive waste management activities as the pilot program at LLNL. This pilot includes low-level radioactive waste, transuranic (TRU) waste, and the radioactive component of low-level and TRU mixed wastes. Guidance for the development and implementation of the necessary and sufficient set of standards is provided in "The Department of Energy Closure Process for Necessary and Sufficient Sets of Standards," March 27, 1995 (draft)

  10. LLNL Experimental Test Site (Site 300) Potable Water System Operations Plan

    Energy Technology Data Exchange (ETDEWEB)

    Ocampo, R. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bellah, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-14

    The existing Lawrence Livermore National Laboratory (LLNL) Site 300 drinking water system operation schematic is shown in Figures 1 and 2 below. The sources of water are from two Site 300 wells (Well #18 and Well #20) and San Francisco Public Utilities Commission (SFPUC) Hetch-Hetchy water through the Thomas shaft pumping station. Currently, Well #20 with 300 gallons per minute (gpm) pump capacity is the primary source of well water used during the months of September through July, while Well #18 with 225 gpm pump capacity is the source of well water for the month of August. The well water is chlorinated using sodium hypochlorite to provide required residual chlorine throughout Site 300. Well water chlorination is covered in the Lawrence Livermore National Laboratory Experimental Test Site (Site 300) Chlorination Plan (“the Chlorination Plan”; LLNL-TR-642903; current version dated August 2013). The third source of water is the SFPUC Hetch-Hetchy Water System through the Thomas shaft facility with a 150 gpm pump capacity. At the Thomas shaft station the pumped water is treated through SFPUC-owned and operated ultraviolet (UV) reactor disinfection units on its way to Site 300. The Thomas Shaft Hetch-Hetchy water line is connected to the Site 300 water system through the line common to Well pumps #18 and #20 at valve box #1.

  11. High-speed data acquisition with the Solaris and Linux operating systems

    International Nuclear Information System (INIS)

    Zilker, M.; Heimann, P.

    2000-01-01

    In this paper, we discuss whether Solaris and Linux are suitable for data acquisition systems under soft real-time conditions. As an example we consider a plasma diagnostic (Mirnov coils) which collects data for a complete plasma discharge of about 10 s from up to 72 channels. Each ADC channel generates a data stream of 4 MB/s. To receive these data streams, an eight-channel Hotlink PCI interface board was designed. With a prototype system using Solaris and the driver we developed, we investigate important properties of the operating system, such as I/O performance and process scheduling. We compare the Solaris operating system on the UltraSPARC platform with Linux on the Intel platform. Finally, some points of user program development are mentioned to show how an application can make the most efficient use of the underlying high-speed I/O system

  12. [Making a low cost IPSec router on Linux and the assessment for practical use].

    Science.gov (United States)

    Amiki, M; Horio, M

    2001-09-01

    We installed Linux and FreeS/WAN on a PC/AT compatible machine to build an IPSec router. We measured ping and FTP times both within the university and between the university and the external network (the Internet). Between the university and the external network there were no differences, so we concluded that the CPU load is not significant on low-speed networks, either because the packets exchanged via the Internet are small or because the VPN's compression offsets the cost of encoding and decoding. Within the university, on the other hand, the IPSec router performed about 20-30% worse than normal IP communication, but this is not a serious problem for practical use. Commercial VPN appliances are becoming cheaper, but they do not yet provide enough functionality for a full-featured VPN environment. Therefore, if one wants a full-featured VPN environment at low cost, we believe a VPN router on Linux is a good choice.

  13. MySQL databases as part of the Online Business, using a platform based on Linux

    Directory of Open Access Journals (Sweden)

    Ion-Sorin STROE

    2011-09-01

    Full Text Available The Internet is a business development environment with major advantages over the traditional environment. From a financial standpoint, the initial investment is much smaller and, in terms of returns, the chances of success are considerably higher. Developing an online business also depends on the manager's ability to use the best solutions that are sustainable in the long term. The current trend is to decrease the costs of the technical platform by adopting open-source licensed products. Such a platform is based on a Linux operating system and a database system based on the MySQL product. This article aims to answer two basic questions: "Can a platform based on Linux and MySQL handle the demands of an online business?" and "Does adopting such a solution increase profitability?"

  14. A PC parallel port button box provides millisecond response time accuracy under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
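
    The article describes its own C program for its particular wiring; as a rough user-space illustration of the underlying mechanism (not the author's code), the status register of a legacy parallel port can be polled on Linux through /dev/port. The base address 0x378 and the pin wiring are assumptions, and root privileges are required:

```python
import os
import time

STATUS_PORT = 0x379   # status register = assumed LPT1 base 0x378 + 1

# Each one-byte pread at offset STATUS_PORT of /dev/port performs a port
# read, so buttons wired to the status pins show up as bit changes.
fd = os.open("/dev/port", os.O_RDONLY)

prev = os.pread(fd, 1, STATUS_PORT)[0]
while True:
    cur = os.pread(fd, 1, STATUS_PORT)[0]
    if cur != prev:                          # a status pin changed state
        print(time.monotonic_ns(), f"{cur:08b}")   # timestamp the transition
        prev = cur
```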

  15. Free and open source software at CERN: integration of drivers in the Linux kernel

    International Nuclear Information System (INIS)

    Gonzalez Cobas, J.D.; Iglesias Gonsalvez, S.; Howard Lewis, J.; Serrano, J.; Vanga, M.; Cota, E.G.; Rubini, A.; Vaga, F.

    2012-01-01

    Most device drivers written for accelerator control systems suffer from a severe lack of portability due to the ad hoc nature of the code, which often embodies intimate knowledge of the particular machine it is deployed in. In this paper we challenge this practice by arguing for the opposite approach: development in the open, which in our case translates into the integration of our code within the Linux kernel. We make our case by describing the upstream merge effort of the tsi148 driver, a critical (and complex) component of the control system. The encouraging results from this effort have led us to follow the same approach with two more ambitious projects, currently in the works: Linux support for the upcoming FMC boards and a new I/O subsystem. (authors)

  16. The Linux based distributed data acquisition system for the ISTRA+ experiment

    International Nuclear Information System (INIS)

    Filin, A.; Inyakin, A.; Novikov, V.; Obraztsov, V.; Smirnov, N.; Vlassov, E.; Yuschenko, O.

    2001-01-01

    The DAQ hardware of the ISTRA+ experiment consists of a VME system crate containing two PCI-VME bridges interfacing two PCs with VME, an external interrupt receiver, the readout controller for dedicated front-end electronics, the readout controller buffer memory module, a VME-CAMAC interface, and additional control modules. The DAQ computing consists of six PCs running the Linux operating system and linked into a LAN. The first PC serves the external interrupts and acquires the data from the front-end electronics. The second is the slow-control computer. The remaining PCs host the monitoring and data analysis software. The Linux-based DAQ software provides external interrupt processing and data acquisition, recording, and distribution between the monitoring and data analysis tasks running on the DAQ PCs. The monitoring programs are based on two packages for data visualization: a home-written one and the ROOT system. MySQL is used as the DAQ database

  17. An update on perfmon and the struggle to get into the Linux kernel

    Energy Technology Data Exchange (ETDEWEB)

    Nowak, Andrzej, E-mail: Andrzej.Nowak@cern.c [CERN openlab (Switzerland)

    2010-04-01

    At CHEP2007 we reported on the perfmon2 subsystem as a tool for interfacing to the PMUs (Performance Monitoring Units) which are found in the hardware of all modern processors (from AMD, Intel, SUN, IBM, MIPS, etc.). The intent was always to get the subsystem into the Linux kernel by default. This paper reports on how progress was made (after long discussions) and will also show the latest additions to the subsystems.

  18. An update on perfmon and the struggle to get into the Linux kernel

    International Nuclear Information System (INIS)

    Nowak, Andrzej

    2010-01-01

    At CHEP2007 we reported on the perfmon2 subsystem as a tool for interfacing to the PMUs (Performance Monitoring Units) which are found in the hardware of all modern processors (from AMD, Intel, SUN, IBM, MIPS, etc.). The intent was always to get the subsystem into the Linux kernel by default. This paper reports on how progress was made (after long discussions) and will also show the latest additions to the subsystems.

  19. A real-time data transmission method based on Linux for physical experimental readout systems

    International Nuclear Information System (INIS)

    Cao Ping; Song Kezhu; Yang Junfeng

    2012-01-01

    In a typical physical experimental instrument, such as a fusion or particle physics application, the readout system generally implements an interface between the data acquisition (DAQ) system and the front-end electronics (FEE). The key task of a readout system is to read, pack, and forward the data from the FEE to the back-end data concentration center in real time. To guarantee real-time performance, the VxWorks operating system (OS) is widely used in readout systems. However, VxWorks is not an open-source OS, which gives it many disadvantages. With the development of multi-core processors and new scheduling algorithms, the Linux OS exhibits real-time performance similar to that of VxWorks, and it has been used successfully even in some hard real-time systems. Discussions and evaluations of real-time Linux as a possible replacement for VxWorks therefore arise naturally. In this paper, a real-time transmission method based on Linux is introduced. To reduce the number of transfer cycles for large amounts of data, a large block of contiguous memory buffer for DMA transfer is allocated by slightly modifying the Linux kernel (version 2.6) source code. To increase network transmission throughput, the user software is parallelized. To achieve high performance in real-time data transfer from hardware to software, mapping techniques must be used to avoid unnecessary data copying. A simplified readout system was implemented with 4 readout modules in a PXI crate. This system can support up to 48 MB/s data throughput from the front-end hardware to the back-end concentration center through a Gigabit Ethernet connection. There are no restrictions on the use of this method, hardware or software, which means that it can easily be migrated to other interrupt-related applications.
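
    The mapping technique mentioned above is, on the user-space side, typically an mmap() of the driver's contiguous DMA buffer; a minimal sketch follows (the device node name and buffer size are hypothetical stand-ins for the paper's custom driver):

```python
import mmap
import os

BUF_SIZE = 64 * 1024 * 1024   # size of the contiguous DMA buffer (assumed)

# Map the readout driver's buffer directly into this process: data written
# by DMA becomes visible without any kernel-to-user copy.
fd = os.open("/dev/readout0", os.O_RDONLY)          # hypothetical device node
buf = mmap.mmap(fd, BUF_SIZE, prot=mmap.PROT_READ)  # driver must support mmap

header = buf[:16]   # event data can now be parsed in place
```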

  20. genepop'007: a complete re-implementation of the genepop software for Windows and Linux.

    Science.gov (United States)

    Rousset, François

    2008-01-01

    This note summarizes developments of the genepop software since its first description in 1995, and in particular those new to version 4.0: an extended input format, several estimators of neighbourhood size under isolation by distance, new estimators and confidence intervals for null allele frequency, and less important extensions to previous options. genepop now runs under Linux as well as under Windows, and can be entirely controlled by batch calls. © 2007 The Author.

  1. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    Science.gov (United States)

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
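
    A toy version of the proportional-growth mechanism the authors test, showing how growth proportional to current size plus occasional entry of new projects can yield a heavy-tailed size distribution (all parameters below are arbitrary illustrations, not the paper's fitted values):

```python
import random

random.seed(0)
sizes = [1.0]
for _ in range(200_000):
    if random.random() < 0.01:                 # occasional birth of a new package
        sizes.append(1.0)
    else:                                      # growth proportional to current size
        i = random.randrange(len(sizes))
        sizes[i] *= random.lognormvariate(0.0, 0.1)

sizes.sort(reverse=True)
# Under Zipf's law, size falls roughly as 1/rank (a straight line in log-log):
for rank in (1, 10, 100, 1000):
    print(rank, round(sizes[rank - 1], 2))
```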

  2. Disk cloning program 'Dolly+' for system management of PC Linux cluster

    International Nuclear Information System (INIS)

    Atsushi Manabe

    2001-01-01

    Dolly+ is a Linux application program for cloning files and disk partition images from one PC to many others. By using several techniques such as a logical ring connection, multi-threading, and pipelining, it achieves high performance and scalability. For example, under typical conditions, installation on a hundred PCs takes almost the same time as on two. Together with Intel PXE and the Red Hat kickstart, automatic and very fast system installation and upgrading can be performed
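
    A rough sketch of the logical-ring idea (not Dolly+'s actual protocol): each node stores every incoming block and immediately forwards it to the next node, so the transfer is pipelined and total time grows only weakly with the number of clients. Host names, ports, and the target partition are placeholders:

```python
import socket

NEXT_HOST = ("next-node.example", 9000)   # downstream peer in the ring (placeholder)
BLOCK = 1 << 20                           # 1 MiB transfer blocks

def relay_and_store(listen_port: int, image_path: str, last_node: bool) -> None:
    srv = socket.socket()
    srv.bind(("", listen_port))
    srv.listen(1)
    upstream, _ = srv.accept()                 # connection from the previous node
    downstream = None if last_node else socket.create_connection(NEXT_HOST)
    with open(image_path, "wb") as out:
        while chunk := upstream.recv(BLOCK):
            out.write(chunk)                   # store locally...
            if downstream:
                downstream.sendall(chunk)      # ...and keep the pipeline moving
    if downstream:
        downstream.close()
```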

  3. Aplicación de RT-Linux en el control de motores de pasos. Parte II; Application of RT-Linux in the Control of Step Motors. Part II

    Directory of Open Access Journals (Sweden)

    Ernesto Duany Renté

    2011-02-01

    Full Text Available This work complements the earlier paper "Application of RT-Linux in the Control of Step Motors. First Part", relating the acquisition and control tasks so that the resulting system is as accurate as possible. The techniques employed are real-time techniques that take advantage of the RT-Linux microkernel and the free software distributed with Unix/Linux operating systems. The signals are obtained by means of an AD converter and displayed on screen using Gnuplot.

  4. Low Cost Multisensor Kinematic Positioning and Navigation System with Linux/RTAI

    Directory of Open Access Journals (Sweden)

    Baoxin Hu

    2012-09-01

    Full Text Available Despite the popularity of such systems, the development of an embedded real-time multisensor kinematic positioning and navigation system discourages many researchers and developers due to its complicated hardware environment setup and time-consuming device driver development. To address these issues, this paper proposes a multisensor kinematic positioning and navigation system built on Linux with the Real Time Application Interface (RTAI), which can be constructed in a fast and economical manner upon popular hardware platforms. The authors designed, developed, evaluated, and validated the application of Linux/RTAI in the proposed system for the integration of low-cost MEMS IMU and OEM GPS sensors. The developed system, with Linux/RTAI at the core of a direct geo-referencing system, provides not only excellent hard real-time performance but also convenient sensor hardware integration and real-time software development. A software framework is proposed for a universal kinematic positioning and navigation system with a loosely-coupled integration architecture. In addition, general strategies for sensor time synchronization in a multisensor system are also discussed. The success of the loosely-coupled GPS-aided inertial navigation Kalman filter is demonstrated via post-processed solutions from road tests.

  5. Development of a portable Linux-based ECG measurement and monitoring system.

    Science.gov (United States)

    Tan, Tan-Hsu; Chang, Ching-Su; Huang, Yung-Fa; Chen, Yung-Fu; Lee, Cheng

    2011-08-01

    This work presents a portable Linux-based electrocardiogram (ECG) signal measurement and monitoring system. The proposed system consists of an ECG front end and an embedded Linux platform (ELP). The ECG front end digitizes 12-lead ECG signals acquired from electrodes and then delivers them to the ELP via a universal serial bus (USB) interface for storage, signal processing, and graphic display. The system can be installed anywhere (e.g., offices, homes, healthcare centers, and ambulances) to allow people to self-monitor their health conditions at any time, and it also enables remote diagnosis via the Internet. Additionally, the system has a 7-in. interactive TFT-LCD touch screen that enables users to execute various functions, such as scaling single-lead or multiple-lead ECG waveforms. The effectiveness of the proposed system was verified using a commercial 12-lead ECG signal simulator and in vivo experiments. In addition to its portability, the proposed system is license-free, as Linux, an open-source operating system, was utilized during software development. The cost-effectiveness of the system significantly enhances its practical application for personal healthcare.

  6. Fast scalar data buffering interface in Linux 2.6 kernel

    International Nuclear Information System (INIS)

    Homs, A.

    2012-01-01

    Key instrumentation devices like counter/timers, analog-to-digital converters, and encoders provide scalar data input. Many of them allow fast acquisitions but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and the data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non-real-time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the /sys virtual file system and hot-plug device support. (author)
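
    The split between trigger event generators and data channels can be pictured with a user-space analogue (purely illustrative; the real Hook interface lives in the kernel and is configured through /sys):

```python
import time

# Data channels: callables sampled on every trigger event; these stand in
# for counter/timer, ADC, or encoder reads.
channels = [time.monotonic_ns, lambda: 42]

buffers = []

def on_trigger() -> None:
    # On each event, read every configured channel into the buffer.
    buffers.append(tuple(read() for read in channels))

# Event generator: a 1 kHz software timer standing in for a timer interrupt
# or an external signal.
for _ in range(1000):
    on_trigger()
    time.sleep(0.001)

print(len(buffers), "events captured")
```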

  7. Towards Spherical Mesh Gravity and Magnetic Modelling in an HPC Environment

    Science.gov (United States)

    Lane, R. J.; Brodie, R. C.; de Hoog, M.; Navin, J.; Chen, C.; Du, J.; Liang, Q.; Wang, H.; Li, Y.

    2013-12-01

    Staff at Geoscience Australia (GA), Australia's Commonwealth Government geoscientific agency, have routinely performed 3D gravity and magnetic modelling as part of geoscience investigations. For this work, we have used software programs that have been based on a Cartesian mesh spatial framework. These programs have come as executable files that were compiled to operate in a Windows environment on single core personal computers (PCs). To cope with models with higher resolution and larger extents, we developed an approach whereby a large problem could be broken down into a number of overlapping smaller models ('tiles') that could be modelled separately, with the results combined back into a single output model. To speed up the processing, we established a Condor distributed network from existing desktop PCs. A number of factors have caused us to consider a new approach to this modelling work. The drivers for change include: 1) models with very large lateral extents where the effects of Earth curvature are a consideration, 2) a desire to ensure that the modelling of separate regions is carried out in a consistent and managed fashion, 3) migration of scientific computing to off-site High Performance Computing (HPC) facilities, and 4) development of virtual globe environments for integration and visualization of 3D spatial objects. Some of the more surprising realizations to emerge have been that: 1) there aren't any readily available commercial software packages for modelling gravity and magnetic data in a spherical mesh spatial framework, 2) there are many different types of HPC environments, 3) no two HPC environments are the same, and 4) the most common virtual globe environment (i.e., Google Earth) doesn't allow spatial objects to be displayed below the topographic/bathymetric surface. Our response has been to do the following: 1) form a collaborative partnership with researchers at the Colorado School of Mines (CSM) and the China University of Geosciences (CUG

  8. Design and Achievement of User Interface Automation Testing of Linux Based on Element Tree of DogTail

    Directory of Open Access Journals (Sweden)

    Yuan Wen-Chao

    2017-01-01

    Full Text Available As Linux grows more popular around the world, the openness of its software encourages automated UI testing based on unified testing frameworks. UI testing can verify the soundness of a Linux user interface and the correctness of its widgets. To break free from tedious, repetitive manual testing and improve efficiency, this paper implements automated UI testing under Linux and proposes a method for identifying and testing UI widgets based on the element tree of the DogTail automated testing framework. Using this method, an automated test plan was designed for the dialogs of the Red Hat Subscription Manager product under Red Hat Enterprise Linux. Repeated test runs indicate that the plan identifies UI widgets accurately, describes the structure of the software clearly, helps avoid software errors, and improves testing efficiency. It can also be used in internationalization testing to check translations.
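
    For context, a minimal dogtail script in its usual Python form looks like the following; the application and widget names here are assumptions for illustration, not the paper's actual test plan, and an accessibility-enabled desktop session is required:

```python
from dogtail.tree import root

# Locate the running application in the accessibility element tree.
app = root.application("subscription-manager-gui")   # assumed app name

# Walk the element tree to a widget by accessible name and role, then act
# on it; dogtail raises a search error if the widget cannot be found.
button = app.child(name="Register", roleName="push button")
button.click()
```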

  9. Novel HPC-ibuprofen conjugates: synthesis, characterization, thermal analysis and degradation kinetics

    International Nuclear Information System (INIS)

    Hussain, M.A.; Lodhi, B.A.; Abbas, K.

    2014-01-01

    Naturally occurring hydrophilic polysaccharides are advantageously used as drug carriers because they provide a mechanism to improve drug action. Hydroxypropylcellulose (HPC) is water-soluble, biocompatible, and bears hydroxyl groups for drug conjugation outside the parent polymeric chains. This unique geometry allows the attachment of drug molecules with higher covalent loading. The HPC-ibuprofen conjugates, as macromolecular prodrugs, were therefore synthesized employing a homogeneous, one-pot reaction methodology using p-toluenesulfonyl chloride in N,N-dimethylacetamide solvent at 80 degree C for 24 h under a nitrogen atmosphere. Imidazole was used as a base for neutralization of acidic impurities. The present strategy proved effective, giving high yields (77-81%) and a high degree of drug substitution (DS 0.88-1.40) onto the HPC polymer, as determined by acid-base titration and verified by 1H-NMR spectroscopy. Gel permeation chromatography showed unimodal absorption, which indicates no significant degradation of the polymer during the reaction. Macromolecular prodrugs with different DS of ibuprofen were synthesized, purified, characterized, and found soluble in organic solvents. From thermogravimetric analysis, the initial, maximum, and final degradation temperatures of the conjugates were calculated and compared for relative thermal stability. Thermal degradation kinetics was also studied, and the results indicate that degradation of the conjugates follows approximately first-order kinetics as calculated by the Kissinger model. The activation energies were moderate (92.38, 99.34, and 87.34 kJ/mol, calculated using the Friedman, Broido, and Chang models, respectively). These novel prodrugs of ibuprofen were found to be thermally stable and may therefore have potential pharmaceutical applications. (author)
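
    For reference, the Kissinger relation used for such activation-energy estimates is conventionally written as follows (standard textbook form, not quoted from the paper), where β is the heating rate, T_p the peak degradation temperature, E_a the activation energy, A the pre-exponential factor, and R the gas constant; E_a is obtained from the slope of ln(β/T_p²) against 1/T_p:

```latex
\ln\!\left(\frac{\beta}{T_p^{2}}\right)
  = \ln\!\left(\frac{A R}{E_a}\right) - \frac{E_a}{R\,T_p}
```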

  10. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    Directory of Open Access Journals (Sweden)

    Won Cheol Yim

    2017-06-01

    Full Text Available Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST) and BLAST+ suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it still has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. This freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
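
    The core idea, partitioning the query set, running independent BLAST+ jobs, and merging the results, can be sketched locally as follows (DCBLAST itself submits chunks through an HPC scheduler; the file names, database, and chunk count here are arbitrary placeholders):

```python
import subprocess
from pathlib import Path

def split_fasta(path: str, n_chunks: int) -> list[Path]:
    # Round-robin the FASTA records so the chunks stay roughly balanced.
    records = Path(path).read_text().split(">")[1:]
    files = []
    for i in range(n_chunks):
        out = Path(f"chunk_{i}.fa")
        out.write_text("".join(">" + rec for rec in records[i::n_chunks]))
        files.append(out)
    return files

# One blastn process per chunk; tabular output (-outfmt 6) can simply be
# concatenated afterwards, so no special merge step is needed.
procs = [
    subprocess.Popen(["blastn", "-query", str(f), "-db", "nt",
                      "-out", f"{f.stem}.out", "-outfmt", "6"])
    for f in split_fasta("queries.fa", 8)
]
for p in procs:
    p.wait()
```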

  11. The HARNESS Workbench: Unified and Adaptive Access to Diverse HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Sunderam, Vaidy S.

    2012-03-20

    The primary goal of the Harness WorkBench (HWB) project is to investigate innovative software environments that help enhance the overall productivity of application science on diverse HPC platforms. Two complementary frameworks were designed: one, a virtualized command toolkit for application building, deployment, and execution that provides a common view across diverse HPC systems, in particular the DOE leadership computing platforms (Cray, IBM, SGI, and clusters); and two, a unified runtime environment that consolidates access to runtime services via an adaptive framework for execution-time and post-processing activities. A prototype of the first was developed based on the concept of a 'system-call virtual machine' (SCVM), to enhance the portability of the HPC application deployment process across heterogeneous high-end machines. The SCVM approach to portable builds is based on the insertion of toolkit-interpretable directives into original application build scripts. Modifications resulting from these directives preserve the semantics of the original build instruction flow. The execution of the build script is controlled by our toolkit, which intercepts build script commands in a manner transparent to the end user. We have applied this approach to a scientific production code (Gamess-US) on the Cray-XT5 machine. The second facet, termed Unibus, aims to facilitate the provisioning and aggregation of multifaceted resources from both resource providers' and end-users' perspectives. To achieve that, Unibus proposes a Capability Model and mediators (resource drivers) to virtualize access to diverse resources, and soft and successive conditioning to enable automatic and user-transparent resource provisioning. A proof-of-concept implementation has demonstrated the viability of this approach on high-end machines, grid systems and computing clouds.

  12. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    OpenAIRE

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergio; Cela, José M.; Castejón, Francisco

    2015-01-01

    In this work, we explore the feasibility of porting a particle-in-cell code (EUTERPE) to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages. The research leading to these results has received funding from the European Community's Seventh...

  13. Development and assessment of a fiber reinforced HPC container for radioactive waste

    International Nuclear Information System (INIS)

    Roulet, A.; Pineau, F.; Chanut, S.; Thibaux, Th.

    2007-01-01

    As part of its research into solutions for concrete disposal containers for long-lived radioactive waste, Andra defined requirements for high-performance concretes with enhanced porosity, diffusion, and permeability characteristics. This is the starting point for further research into severe containment and durability conditions. To meet these objectives, Eiffage TP developed a highly fibered High Performance Concrete (HPC) design mix using CEM V cement and silica fume. Mockups were then produced to characterize the performance of various container concepts built with this new concrete mix. These mockups helped to identify possible manufacturing problems, particularly the risk of cracking due to restrained shrinkage. (authors)

  14. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    Science.gov (United States)

    Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey

    2018-02-01

    At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are related to the study of complex processes under real physical conditions. Solving such problems requires the use of high-performance computing systems of various types, for example GRID systems and HPC clusters. Given the time-consuming computational tasks, the need arises for software for automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems, which requires output data synchronization between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also quite a problem; it requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  15. Concept of a Cloud Service for Data Preparation and Computational Control on Custom HPC Systems in Application to Molecular Dynamics

    Directory of Open Access Journals (Sweden)

    Puzyrkov Dmitry

    2018-01-01

    Full Text Available At the present stage of computer technology development it is possible to study the properties and processes of complex systems at the molecular and even atomic levels, for example by means of molecular dynamics methods. The most interesting problems are related to the study of complex processes under real physical conditions. Solving such problems requires the use of high-performance computing systems of various types, for example GRID systems and HPC clusters. Given the time-consuming computational tasks, the need arises for software for automatic and unified monitoring of such computations. A complex computational task can be performed over different HPC systems, which requires output data synchronization between the storage chosen by the scientist and the HPC system used for the computations. The design of the computational domain is also quite a problem; it requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes a prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the computational management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.

  16. Overview of the LBL/LLNL negative-ion-based neutral beam program

    International Nuclear Information System (INIS)

    Pyle, R.V.

    1980-01-01

    The LBL/LLNL negative-ion-based neutral beam development program and status are described. The emphasis has shifted in some details since the first symposium in 1977, but our overall objectives remain the same, namely, the development of megawatt d.c. injection systems. Previous emphasis was on a system in which the negative ions were produced by double charge exchange in sodium vapor. At present, the emphasis is on a self-extraction source in which the negative ions are produced on a biased surface imbedded in a plasma. A one-ampere beam will be accelerated to at least 40 keV next year. Studies of negative-ion formation and interactions help provide a data base for the technology program

  17. Research at Clark in the early '60s and at LLNL in the late '80s

    International Nuclear Information System (INIS)

    Gatrousis, C.

    1993-01-01

    Tom Sugihara's scientific leadership over a period of almost four decades covered many areas. His early research at Clark dealt with fission yield measurements and radiochemical separations of fallout species in the marine environment. Tom pioneered many of the methods for detecting soft beta emitters and low levels of radioactivity. Studies of the behavior of radioactivity in the marine ecosystem were important adjuncts to Tom's nuclear science research at Clark University which emphasized investigations of nuclear reaction mechanisms. Among Tom's most important contributions while at Clark was his work with Matsuo and Dudey on the interpretation of isomeric yield ratios and fission studies with Noshkin and Baba. Tom's scientific career oscillated between research and administration. During the latter part of his career his great breadth of interests and his scientific "taste" had a profound influence at LLNL in areas that were new to him, materials science and solid state physics

  18. A historical perspective on fifteen years of laser damage thresholds at LLNL

    International Nuclear Information System (INIS)

    Rainer, F.; De Marco, F.P.; Staggs, M.C.; Kozlowski, M.R.; Atherton, L.J.; Sheehan, L.M.

    1993-01-01

    We have completed a fifteen year, referenced and documented compilation of more than 15,000 measurements of laser-induced damage thresholds (LIDT) conducted at the Lawrence Livermore National Laboratory (LLNL). These measurements cover the spectrum from 248 to 1064 nm with pulse durations ranging from < 1 ns to 65 ns and at pulse-repetition frequencies (PRF) from single shots to 6.3 kHz. We emphasize the changes in LIDTs during the past two years since we last summarized our database. We relate these results to earlier data concentrating on improvements in processing methods, materials, and conditioning techniques. In particular, we highlight the current status of anti-reflective (AR) coatings, high reflectors (HR), polarizers, and frequency-conversion crystals used primarily at 355 nm and 1064 nm

  19. Production of High Harmonic X-Ray Radiation from Non-linear Thomson at LLNL PLEIADES

    CERN Document Server

    Lim, Jae; Betts, Shawn; Crane, John; Doyuran, Adnan; Frigola, Pedro; Gibson, David J; Hartemann, Fred V; Rosenzweig, James E; Travish, Gil; Tremaine, Aaron M

    2005-01-01

    We describe an experiment for production of high harmonic x-ray radiation from Thomson backscattering of an ultra-short high power density laser by a relativistic electron beam at the PLEIADES facility at LLNL. In this scenario, electrons execute a “figure-8” motion under the influence of the high-intensity laser field, where the constant characterizing the field strength is expected to exceed unity: $a_L = eE_L/(mc\omega_L) \geq 1$. With large $a_L$ this motion produces high harmonic x-ray radiation and significant broadening of the spectral peaks. This paper is intended to give a layout of the PLEIADES experiment, along with progress towards experimental goals.

  20. HPC Applications

    Czech Academy of Sciences Publication Activity Database

    Blaheta, Radim; Georgiev, I.; Georgiev, K.; Jakl, Ondřej; Kohut, Roman; Margenov, S.; Starý, Jiří

    2017-01-01

    Roč. 17, č. 5 (2017), s. 5-16 ISSN 1311-9702 R&D Projects: GA MŠk LQ1602 Institutional support: RVO:68145535 Keywords : analysis of fiber-reinforced concrete * homogenization * identification of parameters * parallelizable solver * additive Schwarz method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics http://www.cit.iit.bas.bg/cit_online_contents.html

  1. Author Contribution to the Pu Handbook II: Chapter 37 LLNL Integrated Sample Preparation Glovebox (TEM) Section

    International Nuclear Information System (INIS)

    Wall, Mark A.

    2016-01-01

    The development of our Integrated Actinide Sample Preparation Laboratory (IASPL) commenced in 1998, driven by the need to perform transmission electron microscopy studies on naturally aged plutonium and its alloys looking for the microstructural effects of the radiological decay process (1). Remodeling and construction of a laboratory within the Chemistry and Materials Science Directorate facilities at LLNL was required to turn a standard radiological laboratory into a Radiological Materials Area (RMA) and Radiological Buffer Area (RBA) containing type I, II and III workplaces. Two inert atmosphere dry-train glove boxes with antechambers and entry/exit fumehoods (Figure 1), having a baseline atmosphere of 1 ppm oxygen and 1 ppm water vapor, a utility fumehood, and a third, portable double-walled enclosure have been installed and commissioned. These capabilities, along with highly trained technical staff, facilitate the safe operation of sample preparation processes and instrumentation, and sample handling while minimizing oxidation or corrosion of the plutonium. In addition, we are currently developing the capability to safely transfer small metallographically prepared samples to a mini-SEM for microstructural imaging and chemical analysis. The gloveboxes continue to be the most crucial element of the laboratory allowing nearly oxide-free sample preparation for a wide variety of LLNL-based characterization experiments, which includes transmission electron microscopy, electron energy loss spectroscopy, optical microscopy, electrical resistivity, ion implantation, X-ray diffraction and absorption, magnetometry, metrological surface measurements, high-pressure diamond anvil cell equation-of-state, phonon dispersion measurements, X-ray absorption and emission spectroscopy, and differential scanning calorimetry. The sample preparation and materials processing capabilities in the IASPL have also facilitated experimentation at world-class facilities such as the

  2. Author Contribution to the Pu Handbook II: Chapter 37 LLNL Integrated Sample Preparation Glovebox (TEM) Section

    Energy Technology Data Exchange (ETDEWEB)

    Wall, Mark A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-10-25

    The development of our Integrated Actinide Sample Preparation Laboratory (IASPL) commenced in 1998, driven by the need to perform transmission electron microscopy studies on naturally aged plutonium and its alloys looking for the microstructural effects of the radiological decay process (1). Remodeling and construction of a laboratory within the Chemistry and Materials Science Directorate facilities at LLNL was required to turn a standard radiological laboratory into a Radiological Materials Area (RMA) and Radiological Buffer Area (RBA) containing type I, II and III workplaces. Two inert atmosphere dry-train glove boxes with antechambers and entry/exit fumehoods (Figure 1), having a baseline atmosphere of 1 ppm oxygen and 1 ppm water vapor, a utility fumehood, and a third, portable double-walled enclosure have been installed and commissioned. These capabilities, along with highly trained technical staff, facilitate the safe operation of sample preparation processes and instrumentation, and sample handling while minimizing oxidation or corrosion of the plutonium. In addition, we are currently developing the capability to safely transfer small metallographically prepared samples to a mini-SEM for microstructural imaging and chemical analysis. The gloveboxes continue to be the most crucial element of the laboratory allowing nearly oxide-free sample preparation for a wide variety of LLNL-based characterization experiments, which includes transmission electron microscopy, electron energy loss spectroscopy, optical microscopy, electrical resistivity, ion implantation, X-ray diffraction and absorption, magnetometry, metrological surface measurements, high-pressure diamond anvil cell equation-of-state, phonon dispersion measurements, X-ray absorption and emission spectroscopy, and differential scanning calorimetry. The sample preparation and materials processing capabilities in the IASPL have also facilitated experimentation at world-class facilities such as the

  3. Final report for the 1996 DOE grant supporting research at the SLAC/LBNL/LLNL B factory

    International Nuclear Information System (INIS)

    Judd, D.; Wright, D.

    1997-01-01

    This final report discusses Department of Energy-supported research funded through Lawrence Livermore National Laboratory (LLNL) which was performed as part of a collaboration between LLNL and Prairie View A and M University to develop part of the BaBar detector at the SLAC B Factory. This work focuses on the Instrumented Flux Return (IFR) subsystem of BaBar and involves a full range of detector development activities: computer simulations of detector performance, creation of reconstruction algorithms, and detector hardware R and D. Lawrence Livermore National Laboratory has a leading role in the IFR subsystem and has established on-site computing and detector facilities to conduct this research. By establishing ties with the existing LLNL Research Collaboration Program and leveraging LLNL resources, the experienced Prairie View group was able to quickly achieve a more prominent role within the BaBar collaboration and make significant contributions to the detector design. In addition, this work provided the first entry point for Historically Black Colleges and Universities into the B Factory collaboration, and created an opportunity to train a new generation of minority students at the premier electron-positron high energy physics facility in the US

  4. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
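
    The XOR-leading-zero idea is easy to illustrate: two nearby floating-point values share their high-order bits, so XORing their bit patterns yields a word with a long run of leading zeros that is cheap to encode, and shifting one operand can lengthen that run. A minimal sketch (Python; the function name and test values are ours, not the paper's):

        import struct

        def leading_zeros_xor(a: float, b: float) -> int:
            """Leading zero bits of the XOR of two IEEE-754 double bit patterns."""
            ia = struct.unpack('<Q', struct.pack('<d', a))[0]
            ib = struct.unpack('<Q', struct.pack('<d', b))[0]
            x = ia ^ ib
            return 64 if x == 0 else 64 - x.bit_length()

        # Close values share many leading bits, so only a short tail must be stored.
        print(leading_zeros_xor(1.23456789, 1.23456790))  # long leading-zero run
        print(leading_zeros_xor(1.23456789, 9.87654321))  # short run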

  5. Toward Transparent Data Management in Multi-layer Storage Hierarchy for HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wadhwa, Bharti [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science; Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Butt, Ali R. [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States). Dept. of Computer Science

    2018-04-17

    Upcoming exascale high performance computing (HPC) systems are expected to comprise a multi-tier storage hierarchy, and thus will necessitate innovative storage and I/O mechanisms. Traditional disk and block-based interfaces and file systems face severe challenges in utilizing the capabilities of storage hierarchies due to the lack of hierarchy support and semantic interfaces. Object-based and semantically-rich data abstractions for scientific data management on large scale systems offer a sustainable solution to these challenges. Such data abstractions can also simplify users' involvement in data movement. Here, we take the first steps of realizing such an object abstraction and explore storage mechanisms for these objects to enhance I/O performance, especially for scientific applications. We explore how an object-based interface can facilitate next generation scalable computing systems by presenting the mapping of data I/O from two real world HPC scientific use cases: a plasma physics simulation code (VPIC) and a cosmology simulation code (HACC). Our storage model stores data objects in different physical organizations to support data movement across layers of the memory/storage hierarchy. Our implementation scales well to 16K parallel processes, and compared to the state of the art, such as MPI-IO and HDF5, our object-based data abstractions and data placement strategy in a multi-level storage hierarchy achieve up to 7x I/O performance improvement for scientific data.

  6. Degradation of 2,4,6-Trinitrophenol (TNP) by Arthrobacter sp. HPC1223 Isolated from Effluent Treatment Plant

    OpenAIRE

    Qureshi, Asifa; Kapley, Atya; Purohit, Hemant J.

    2012-01-01

    Arthrobacter sp. HPC1223 (GenBank Accession No. AY948280), isolated from the activated biomass of an effluent treatment plant, was capable of utilizing 2,4,6-trinitrophenol (TNP) as a nitrogen source under aerobic conditions at 30 °C and pH 7. It was observed that the isolated bacteria utilized TNP up to 70 % (1 mM) in R2A media with nitrite release. The culture growth medium changed to the orange-red colored hydride-Meisenheimer complex at 24 h, as detected by HPLC. Oxygen uptake of Arthrobacter HPC1223 towa...

  7. A package of Linux scripts for the parallelization of Monte Carlo simulations

    Science.gov (United States)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ~5×10^18, and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, allows PENELOPE to be run in parallel easily, without requiring specific libraries or significant alterations of the
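
    The seed-spacing idea behind seedsMLCG is straightforward for an MLCG: since x_{n+k} = (a^k * x_n) mod m, each clone can be jumped ahead by a fixed stride with a single modular exponentiation. A minimal sketch (Python; the multipliers and moduli are L'Ecuyer's published RANECU constants, quoted here from memory, and the stride is arbitrary):

        def jump_seed(seed: int, k: int, a: int, m: int) -> int:
            """Seed after k steps of x <- (a*x) % m, via modular exponentiation."""
            return (pow(a, k, m) * seed) % m

        # The two MLCGs combined by RANECU (constants from L'Ecuyer, 1988).
        A1, M1 = 40014, 2147483563
        A2, M2 = 40692, 2147483399

        def clone_seeds(s1, s2, n_clones, stride=10**12):
            """Disjoint (seed1, seed2) pairs, one per clone CPU, `stride` steps apart."""
            return [(jump_seed(s1, i * stride, A1, M1),
                     jump_seed(s2, i * stride, A2, M2))
                    for i in range(n_clones)]

        for pair in clone_seeds(12345, 54321, 4):
            print(pair)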

  8. Real-time head movement system and embedded Linux implementation for the control of power wheelchairs.

    Science.gov (United States)

    Nguyen, H T; King, L M; Knight, G

    2004-01-01

    Mobility has become very important for our quality of life. A loss of mobility due to an injury is usually accompanied by a loss of self-confidence. For many individuals, independent mobility is an important aspect of self-esteem. Head movement is a natural form of pointing and can be used to directly replace the joystick whilst still allowing for similar control. Through the use of embedded Linux and artificial intelligence, a hands-free head movement wheelchair controller has been designed and implemented successfully. This system provides severely disabled users with an effective power wheelchair control method offering improved posture, ease of use and attractiveness.

  9. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    Science.gov (United States)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources; this paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in and the importance of beginning with POSIX-compliant code. It will focus on the development approach, explaining the software lifecycle. Other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. The paper will also address the testing approach, covering all levels of testing including development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with Unix servers, will be included. The paper will also address the need for a smooth transition while maintaining

  10. Implementation of a SIP system for the Linux operating system

    OpenAIRE

    Davison Gonzaga da Silva

    2003-01-01

    Abstract: This work presents the implementation of a VoIP system using the SIP protocol. This SIP system was developed for Linux, using the C++ language together with the QT library. The SIP system is composed of three basic entities: the SIP terminal, the proxy, and the registration server. The SIP terminal is the entity responsible for establishing SIP sessions with other SIP terminals. For the SIP terminal, a library for accessing the sound card was developed, which allows the modi...

  11. Implementation of Bandwidth Management with the Hierarchical Token Bucket (HTB) Queueing Discipline on the Linux Operating System

    Directory of Open Access Journals (Sweden)

    Muhammad Nugraha

    2016-09-01

    Full Text Available An important problem in Internet networking is the exhaustion of resources and bandwidth by some users while other users do not get proper service. To overcome this problem, a traffic control and bandwidth management system must be implemented in the router. In this research the author implements the Hierarchical Token Bucket algorithm as the queueing discipline (qdisc) in order to manage bandwidth accurately, so that each user gets a proper share. The result of this research is a cheap and efficient bandwidth management setup, using the Hierarchical Token Bucket qdisc on the Linux operating system, that is able to allocate bandwidth to users as intended.
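
    For reference, an HTB hierarchy of the kind described is normally built with the Linux tc utility. The sketch below (Python wrapping tc; the interface name, class handles and rates are illustrative, not taken from the paper) creates a root HTB qdisc, a parent class capping the link, and two user classes allowed to borrow idle bandwidth:

        import subprocess

        DEV = "eth0"  # illustrative interface name

        def tc(*args):
            """Run one tc command, raising on error (requires root privileges)."""
            subprocess.run(["tc", *args], check=True)

        # Root HTB qdisc; unclassified traffic falls into class 1:30.
        tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "30")
        # Parent class capping the whole link.
        tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1",
           "htb", "rate", "2mbit")
        # Two user classes with a guaranteed rate and a borrowing ceiling.
        for classid in ("1:10", "1:30"):
            tc("class", "add", "dev", DEV, "parent", "1:1", "classid", classid,
               "htb", "rate", "1mbit", "ceil", "2mbit")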

  12. Programming LEGO MindStorms under GNU/Linux

    OpenAIRE

    Matellán Olivera, Vicente; Heras Quirós, Pedro de las; Centeno González, José; González Barahona, Jesús

    2002-01-01

    GNU/Linux on a personal computer is the free option preferred by many application developers, but it is also a very popular development platform for other systems, including robot programming; in particular, it is very well suited to playing with the LEGO Mindstorms. In this article we present the two most widespread options for programming these toys: NQC and LegOS. NQC is a reduced version of C that allows rapid development of programs ...

  13. The visual and remote analyzing software for a Linux-based radiation information acquisition system

    International Nuclear Information System (INIS)

    Fan Zhaoyang; Zhang Li; Chen Zhiqiang

    2003-01-01

    A visual and remote analyzing software package for radiation information, which has the merits of universality and credibility, has been developed based on the Linux operating system and the TCP/IP network protocol. The software is applied to visual debugging and real-time monitoring of a high-speed radiation information acquisition system, so that safe, direct and timely control can be assured. The paper expounds the design philosophy of the software, which provides a reference for other software with the same purpose for similar systems

  14. Hard Real-Time Linux for Off-The-Shelf Multicore Architectures

    OpenAIRE

    Radder, Dirk

    2015-01-01

    This document describes the research results that were obtained from the development of a real-time extension for the Linux operating system. The paper describes a full extension of the kernel, which enables hard real-time performance on a 64-bit x86 architecture. In the first part of this study, real-time systems are categorized and concepts of real-time operating systems are introduced to the reader. In addition, numerous well-known real-time operating systems are considered. QNX Neutrino, ...

  15. DB2 9 for Linux, Unix, and Windows database administration certification study guide

    CERN Document Server

    Sanders, Roger E

    2007-01-01

    In DB2 9 for Linux, UNIX, and Windows Database Administration Certification Study Guide, Roger E. Sanders, one of the world's leading DB2 authors and an active participant in the development of IBM's DB2 certification exams, covers everything a reader needs to know to pass the DB2 9 UDB DBA Certification Test (731). This comprehensive study guide steps you through all of the topics that are covered on the test, including server management, data placement, database access, analyzing DB2 activity, DB2 utilities, high availability, security, and much more. Each chapter contains an extensive set of p

  16. IMPLEMENTATION OF BANDWIDTH MANAGEMENT WITH THE HIERARCHICAL TOKEN BUCKET (HTB) QUEUEING DISCIPLINE ON THE LINUX OPERATING SYSTEM

    Directory of Open Access Journals (Sweden)

    Muhammad Nugraha

    2017-01-01

    Full Text Available An important problem in Internet networking is the exhaustion of resources and bandwidth by some users while other users do not get proper service. To overcome this problem, a traffic control and bandwidth management system must be implemented in the router. In this research the author implements the Hierarchical Token Bucket algorithm as the queueing discipline (qdisc) in order to manage bandwidth accurately, so that each user gets a proper share. The result of this research is a cheap and efficient bandwidth management setup, using the Hierarchical Token Bucket qdisc on the Linux operating system, that is able to allocate bandwidth to users as intended.

  17. AliEnFS - a Linux File System for the AliEn Grid Services

    OpenAIRE

    Peters, Andreas J.; Saiz, P.; Buncic, P.

    2003-01-01

    Among the services offered by the AliEn (ALICE Environment, http://alien.cern.ch) Grid framework there is a virtual file catalogue to allow transparent access to distributed data-sets using various file transfer protocols. alienfs (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user space file system framework (Open Source, http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual F...

  18. ClusterControl: a web interface for distributing and monitoring bioinformatics applications on a Linux cluster.

    Science.gov (United States)

    Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko

    2004-03-22

    ClusterControl is a web interface to simplify distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl

  19. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    Science.gov (United States)

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project of an electrical stimulator aimed at the motor dysfunction of stroke is proposed in this paper. Based on neurophysiological biofeedback, this system, using an ARM9 S3C2440 as the core processor, integrates collection and display of surface electromyography (sEMG) signals, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding a Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system worked well.

  20. LINK codes TRAC-BF1/PARCSv2.7 in LINUX without external communication interface

    International Nuclear Information System (INIS)

    Barrachina, T.; Garcia-Fenoll, M.; Abarca, A.; Miro, R.; Verdu, G.; Concejal, A.; Solar, A.

    2014-01-01

    The TRAC-BF1 code is still widely used by the nuclear industry for safety analysis. The plant models developed using this code are highly validated, so it is advisable to continue improving this code before migrating to another completely different code. The coupling with the NRC neutronic code PARCSv2.7 increases the simulation capabilities in transients in which the power distribution plays an important role. In this paper, the procedure for the coupling of TRAC-BF1 and PARCSv2.7 codes without PVM and in Linux is presented. (Author)

  1. Using a Linux Cluster for Parallel Simulations of an Active Magnetic Regenerator Refrigerator

    DEFF Research Database (Denmark)

    Petersen, T.F.; Pryds, N.; Smith, A.

    2006-01-01

    This paper describes the implementation of a Comsol Multiphysics model on a Linux computer cluster. The Magnetic Refrigerator (MR) is a special type of refrigerator with potential to reduce the energy consumption of household refrigeration by a factor of two or more. To conduct numerical analysis ... The coupled set of equations and the transient convergence towards the final steady state mean that the model has an excessive solution time. To make parametric studies practical, the developed model was implemented on a cluster to allow parallel simulations, which has decreased the solution time...
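
    The parallelization described here is embarrassingly parallel: each parametric case is an independent simulation. A generic sketch of the pattern (Python; run_case merely stands in for launching one Comsol job and is purely hypothetical):

        from concurrent.futures import ProcessPoolExecutor

        def run_case(params):
            """Placeholder for one independent simulation run (hypothetical)."""
            frequency, field = params
            # ... launch the solver for this parameter set and collect its output ...
            return {"frequency": frequency, "field": field}

        if __name__ == "__main__":
            cases = [(f, b) for f in (0.5, 1.0, 2.0) for b in (0.8, 1.0, 1.2)]
            # One worker per local core; on a cluster, a batch scheduler plays
            # this role across nodes.
            with ProcessPoolExecutor() as pool:
                results = list(pool.map(run_case, cases))
            print(len(results), "cases completed")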

  2. Audio Arduino - an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    be considered to be a system that encompasses design decisions on both hardware and software levels - ones that also demand a certain understanding of the architecture of the target PC operating system. This project outlines how an Arduino Duemilanove board (containing a USB interface chip, manufactured by Future...... Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through an implementation of: a PC audio driver for ALSA (Advanced Linux Sound Architecture); a matching program for the Arduino's ATmega microcontroller - and nothing more...
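
    As a plausibility check on the quoted format (mono, 8-bit, 44.1 kHz over an FTDI USB-serial link), the required line rate can be worked out directly (Python; the 2 Mbaud link ceiling and the 10-bits-per-byte serial framing are our assumptions, not figures from the paper):

        SAMPLE_RATE_HZ = 44_100       # mono, 8-bit: one byte per sample
        BYTES_PER_SAMPLE = 1

        audio_bytes_per_s = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
        # Serial framing: 1 start + 8 data + 1 stop bit = 10 line bits per byte.
        line_bits_per_s = audio_bytes_per_s * 10          # 441,000
        link_capacity = 2_000_000                         # ~2 Mbaud (assumed)

        print(f"audio stream needs {line_bits_per_s} line bits/s")
        print(f"link headroom: {link_capacity / line_bits_per_s:.1f}x each way")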

  3. Feasibility study of BES data off-line processing and D/Ds physics analysis on a PC/Linux platform

    International Nuclear Information System (INIS)

    Rong Gang; He Kanglin; Heng Yuekun; Zhang Chun; Liu Huaimin; Cheng Baosen; Yan Wuguang; Mai Jimao; Zhao Haiwen

    2000-01-01

    The authors report a feasibility study of BES data off-line processing (BES data off-line reconstruction and Monte Carlo simulation) and D/Ds physics analysis on a PC/Linux platform. The authors compared the results obtained from the PC/Linux with those from an HP/UNIX workstation. It shows that the PC/Linux platform can do BES data off-line analysis as well as the HP/UNIX workstation, and is much more powerful and economical

  4. ISC High Performance 2017 International Workshops, DRBSD, ExaComm, HCPM, HPC-IODC, IWOPH, IXPUG, P^3MA, VHPC, Visualization at Scale, WOPSSS

    CERN Document Server

    Yokota, Rio; Taufer, Michela; Shalf, John

    2017-01-01

    This book constitutes revised selected papers from 10 workshops that were held at the ISC High Performance 2017 conference in Frankfurt, Germany, in June 2017. The 59 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They stem from the following workshops: Workshop on Virtualization in High-Performance Cloud Computing (VHPC); Visualization at Scale: Deployment Case Studies and Experience Reports; International Workshop on Performance Portable Programming Models for Accelerators (P^3MA); OpenPOWER for HPC (IWOPH); International Workshop on Data Reduction for Big Scientific Data (DRBSD); International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale; Workshop on HPC Computing in a Post Moore's Law World (HCPM); HPC I/O in the Data Center (HPC-IODC); Workshop on Performance and Scalability of Storage Systems (WOPSSS); IXPUG: Experiences on Intel Knights Landing at the One Year Mark; International Workshop on Communicati...

  5. FLUKA-LIVE-an embedded framework, for enabling a computer to execute FLUKA under the control of a Linux OS

    International Nuclear Information System (INIS)

    Cohen, A.; Battistoni, G.; Mark, S.

    2008-01-01

    This paper describes a Linux-based OS framework for integrating the FLUKA Monte Carlo software (currently distributed only for Linux) into a CD-ROM, resulting in a complete environment for a scientist to edit, link and run FLUKA routines without the need to install a UNIX/Linux operating system. The building process includes generating from scratch a complete operating system distribution which will, when operative, build all necessary components for successful operation of the FLUKA software and libraries. Various source packages, as well as the latest kernel sources, are freely available from the Internet. These sources are used to create a functioning Linux system that integrates several core utilities in line with the main idea: enabling FLUKA to act as if it were running under a popular Linux distribution or even a proprietary UNIX workstation. On boot-up a file system is created and the contents of the CD are uncompressed and completely loaded into RAM, after which the presence of the CD is no longer necessary, and it can be removed for use on a second computer. The system can operate on any i386 PC as long as it can boot from a CD

  6. Linux: Toward a silent revolution of the information society

    Directory of Open Access Journals (Sweden)

    Pascuale Sofia

    2004-01-01

    Full Text Available This article attempts to demonstrate the overall qualities of the new LINUX operating system at a technical level, and to reveal the change it is engendering in the economic sector and the cultural world. This is done by means of a comparative analysis between commercial operating systems (Microsoft) and Open Source (LINUX). Today's world is characterized by radical and rapid changes, which occur most frequently in the information technology sector. Currently in this sector, and specifically in the area of software, LINUX is the new operating system that is modifying the world of computing. All of this was carried out along exploratory methodological lines, because the literature on the advances of Linux is scarce; the work is therefore the synthesis of a broad effort (conferences, presentations at universities, business associations, among others) that the authors have carried out since the LINUX product became known and worked on by a small elite of technicians.

  7. The VERCE Science Gateway: enabling user friendly seismic waves simulations across European HPC infrastructures

    Science.gov (United States)

    Spinuso, Alessandro; Krause, Amy; Ramos Garcia, Clàudia; Casarotti, Emanuele; Magnoni, Federica; Klampanos, Iraklis A.; Frobert, Laurent; Krischer, Lion; Trani, Luca; David, Mario; Leong, Siew Hoon; Muraleedharan, Visakh

    2014-05-01

    The EU-funded project VERCE (Virtual Earthquake and seismology Research Community in Europe) aims to deploy technologies which satisfy the HPC and data-intensive requirements of modern seismology. As a result of VERCE's official collaboration with the EU project SCI-BUS, access to computational resources, like local clusters and international infrastructures (EGI and PRACE), is made homogeneous and integrated within a dedicated science gateway based on the gUSE framework. In this presentation we give a detailed overview on the progress achieved with the developments of the VERCE Science Gateway, according to a use-case driven implementation strategy. More specifically, we show how the computational technologies and data services have been integrated within a tool for Seismic Forward Modelling, whose objective is to offer the possibility to perform simulations of seismic waves as a service to the seismological community. We will introduce the interactive components of the OGC map based web interface and how it supports the user with setting up the simulation. We will go through the selection of input data, which are either fetched from federated seismological web services, adopting community standards, or provided by the users themselves by accessing their own document data store. The HPC scientific codes can be selected from a number of waveform simulators, currently available to the seismological community as batch tools or with limited configuration capabilities in their interactive online versions. The results will be staged out from the HPC via a secure GridFTP transfer to a VERCE data layer managed by iRODS. The provenance information of the simulation will be automatically cataloged by the data layer via NoSQL technologies. We will try to demonstrate how data access, validation and visualisation can be supported by a general purpose provenance framework which, besides common provenance concepts imported from the OPM and the W3C-PROV initiatives, also offers
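
    To make the "fetch from federated seismological web services" step concrete: the community-standard FDSN web services can be queried in a few lines. A minimal sketch (Python with the ObsPy FDSN client; the data centre, station codes and time window are arbitrary examples, and ObsPy is our illustration rather than the gateway's actual implementation):

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        client = Client("IRIS")                 # any FDSN-compliant data centre
        t0 = UTCDateTime("2014-04-01T10:00:00")
        # Waveforms plus matching station metadata for one broadband channel.
        stream = client.get_waveforms("IU", "ANMO", "00", "BHZ", t0, t0 + 3600)
        inventory = client.get_stations(network="IU", station="ANMO",
                                        level="response")
        stream.remove_response(inventory=inventory)   # instrument correction
        print(stream)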

  8. Criticality Safety Support to a Project Addressing SNM Legacy Items at LLNL

    International Nuclear Information System (INIS)

    Pearson, J S; Burch, J G; Dodson, K E; Huang, S T

    2005-01-01

    The programmatic, facility and criticality safety support staffs at the LLNL Plutonium Facility worked together to successfully develop and implement a project to process legacy (DNFSB Recommendation 94-1 and non-Environmental, Safety, and Health (ES and H) labeled) materials in storage. Over many years, material had accumulated in storage that lacked information to adequately characterize the material for current criticality safety controls used in the facility. Generally, the fissionable material mass information was well known, but other information such as form, impurities, internal packaging, and presence of internal moderating or reflecting materials were not well documented. In many cases, the material was excess to programmatic need, but such a determination was difficult with the little information given on MC and A labels and in the MC and A database. The material was not packaged as efficiently as possible, so it also occupied much more valuable storage space than was necessary. Although safe as stored, the inadequately characterized material posed a risk for criticality safety noncompliances if moved within the facility under current criticality safety controls. A Legacy Item Implementation Plan was developed and implemented to deal with this problem. Reasonable bounding conditions were determined for the material involved, and criticality safety evaluations were completed. Two appropriately designated glove boxes were identified and criticality safety controls were developed to safely inspect the material. Inspecting the material involved identifying containers of legacy material, followed by opening, evaluating, processing if necessary, characterizing and repackaging the material. Material from multiple containers was consolidated more efficiently thus decreasing the total number of stored items to about one half of the highest count. Current packaging requirements were implemented. Detailed characterization of the material was captured in databases

  9. Impact of the Revised 10 CFR 835 on the Neutron Dose Rates at LLNL

    International Nuclear Information System (INIS)

    Radev, R.

    2009-01-01

    In June 2007, 10 CFR 835 (1) was revised to include new radiation weighting factors for neutrons, updated dosimetric models, and dose terms consistent with the newer ICRP recommendations. A significant aspect of the revised 10 CFR 835 is the adoption of the recommendations outlined in ICRP-60 (2). The recommended new quantities demand a review of much of the basic data used in protection against exposure to sources of ionizing radiation. The International Commission on Radiation Units and Measurements has defined a number of quantities for use in personnel and area monitoring (3,4,5) including the ambient dose equivalent H*(d) to be used for area monitoring and instrument calibrations. These quantities are used in ICRP-60 and ICRP-74. This report deals only with the changes in the ambient dose equivalent and ambient dose rate equivalent for neutrons as a result of the implementation of the revised 10 CFR 835. In the report, the terms neutron dose and neutron dose rate will be used for convenience for ambient neutron dose and ambient neutron dose rate unless otherwise stated. This report provides a qualitative and quantitative estimate of how much the neutron dose rates at LLNL will change with the implementation of the revised 10 CFR 835. Neutron spectra and dose rates from selected locations at the LLNL were measured with a high resolution spectroscopic neutron dose rate system (ROSPEC) as well as with a standard neutron rem meter (a.k.a., a remball). The spectra obtained at these locations compare well with the spectra from the Radiation Calibration Laboratory's (RCL) bare californium source that is currently used to calibrate neutron dose rate instruments. The measurements obtained from the high resolution neutron spectrometer and dose meter ROSPEC and the NRD dose meter compare within the range of ±25%. When the new radiation weighting factors are adopted with the implementation of the revised 10 CFR 835, the measured dose rates will increase by up to 22%. The

  10. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    Science.gov (United States)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on both rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analysis on commodity PCs running the Linux operating system.

  11. Linux OS integrated modular avionics application development framework with apex API of ARINC653 specification

    Directory of Open Access Journals (Sweden)

    Anna V. Korneenkova

    2017-01-01

    Full Text Available The framework is intended to provide tools to develop integrated modular avionics (IMA) applications that can be launched on the target platform LynxOS-178 without modifying their source code. Using the framework helps students to develop skills for building modern avionics modules, and to deepen their knowledge in the field of technical creativity. The article describes the architecture and implementation of the Linux OS framework for developing applications for an ARINC 653 compliant OS. The proposed approach reduces ARINC 653 application development costs and gives a unified tool to implement OS-vendor-independent code that meets the specification. To achieve import substitution, the free and open-source Linux OS is used as the environment for developing IMA applications. The proposed framework is applicable both as a tool to develop IMA applications and as a tool for developing the following competencies: the ability to master techniques of using software to solve practical problems; the ability to develop components of hardware and software systems and databases using modern tools and programming techniques; the ability to match hardware and software tools in information and automated systems; the readiness to apply the fundamentals of informatics and programming to the design, construction and testing of software products; the readiness to apply basic methods and tools of software development; and knowledge of various technologies of software development.

  12. Network Quality in a Virtual Local Area Network (VLAN) Implementing the Linux Terminal Server Project (LTSP)

    Directory of Open Access Journals (Sweden)

    Lipur Sugiyanta

    2017-12-01

    Full Text Available A Virtual Local Area Network (VLAN) is a technique in computer networking for creating several distinct networks that still form a single local network, not limited to a physical location as a LAN is, while the Linux Terminal Server Project (LTSP) is a terminal-server technique that can multiply workstations using only a single Linux server. In building a computer network it is necessary to pay attention to several things, one of which is the quality of the network being built. This research aims to determine the influence of the number of clients on network quality, based on the parameters of delay and packet loss, in a VLAN network implementing LTSP. The research uses a qualitative method, observing the standard applied in the study, namely the International Telecommunication Union - Telecommunication (ITU-T) standard. The implementation uses Ubuntu Desktop 14.04 LTS as the server operating system. Based on the results found, it can be concluded that the more clients are served by the server, the lower the network quality becomes, according to the Quality of Service (QoS) parameters used, namely delay and packet loss.
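
    Delay and packet loss of the kind measured in this study can be probed with a small UDP round-trip test. A minimal sketch (Python; the server address, port and packet count are arbitrary, and the far end is assumed to run any plain UDP echo responder):

        import socket
        import struct
        import time

        def probe(host, port=9000, count=100, timeout=0.5):
            """Measure round-trip delay (ms) and packet loss to a UDP echo server."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            delays = []
            for seq in range(count):
                sock.sendto(struct.pack("!Id", seq, time.monotonic()), (host, port))
                try:
                    data, _ = sock.recvfrom(64)
                    _, t_sent = struct.unpack("!Id", data)
                    delays.append((time.monotonic() - t_sent) * 1000.0)
                except socket.timeout:
                    pass  # no echo within the window: counted as lost
            lost = 100.0 * (count - len(delays)) / count
            if delays:
                print(f"mean delay {sum(delays) / len(delays):.1f} ms, "
                      f"loss {lost:.0f}%")
            else:
                print("all packets lost")

        probe("192.168.0.1")  # illustrative server address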

  13. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS
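
    In outline, the periodic sample-compute-actuate cycle of such a control system reduces to a fixed-period loop. A toy sketch (Python; the acquire/compute/apply functions are placeholders, and a production system relies on a tuned real-time kernel rather than plain sleeps):

        import time

        PERIOD = 0.001  # 1 kHz control cycle (illustrative)

        def acquire():   return 0.0        # placeholder: sample diagnostics
        def compute(x):  return -0.5 * x   # placeholder: control algorithm
        def apply(u):    pass              # placeholder: drive the actuators

        next_tick = time.monotonic()
        for _ in range(10_000):
            apply(compute(acquire()))
            next_tick += PERIOD                    # absolute deadlines avoid drift
            delay = next_tick - time.monotonic()
            if delay > 0:
                time.sleep(delay)                  # an overrun would be logged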

  14. CompactPCI/Linux platform for medium level control system on FTU

    International Nuclear Information System (INIS)

    Wang, L.; Centioli, C.; Iannone, F.; Panella, M.; Mazza, G.; Vitale, V.

    2004-01-01

    In large fusion experiments, such as tokamak devices, there are common trends for slow control systems. Because of the complexity of the plants, several tokamaks adopt the so-called 'standard model' (SM) based on a three-level hierarchical control: (i) high level control (HLC) - the supervisor; (ii) medium level control (MLC) - the I/O field equipment interface and concentration units; and (iii) low level control (LLC) - the programmable logic controllers (PLC). The FTU control system was designed with SM concepts and, in its 15-year life cycle, it underwent several developments. The latest evolution was mandatory, due to the obsolescence of the MLC CPUs, based on VME/Motorola 68030 with the OS9 operating system. Therefore, we had to look for cost-effective solutions, and we chose a CompactPCI-Intel x86 platform with the Linux operating system. A software port has been done taking into account the differences between the OS9 and Linux operating systems in terms of inter/network process communications and the I/O multi-port serial driver. This paper describes the hardware/software architecture of the new MLC system, emphasising the reliability and low costs of the open source solutions. Moreover, the huge amount of software packages available in the open source environment will assure less painful maintenance, and will open the way to further improvements of the system itself

  15. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis.

    Science.gov (United States)

    Nemoto, Kiyotaka; Dan, Ippeita; Rorden, Christopher; Ohnishi, Takashi; Tsuzuki, Daisuke; Okamoto, Masako; Yamashita, Fumio; Asada, Takashi

    2011-01-25

    A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on three-dimensional T1-weighted MRI scans of 10 subjects. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operating system. Although the processing speed was slower than that under other conditions, it remained comparable. With Lin4Neuro in one's hand, one can access neuroimaging software packages easily and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners in neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites.

  16. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The video data are compressed using the JPEG image compression standard, and the monitoring screen is then transferred to a remote monitoring center over the network for long-range monitoring and management. The paper begins by describing the necessity of the design of the system. It then briefly introduces the realization of the hardware and software, and describes in detail the configuration and compilation of the embedded Linux operating system and the compiling and porting of the video server program. At the end of the paper, we report equipment installation and commissioning on a combine harvester, the system tests, and the test results. In experimental testing, the remote video monitoring system for the combine harvester achieved 30 fps at a resolution of 800x600, with a response delay in the public network of about 40 ms.

  17. PsyToolkit: a software package for programming psychological experiments using Linux.

    Science.gov (United States)

    Stoet, Gijsbert

    2010-11-01

    PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the GNU Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  18. A case study in application I/O on Linux clusters

    International Nuclear Information System (INIS)

    Ross, R.; Nurmi, D.; Cheng, A.; Zingale, M.

    2001-01-01

    A critical but often ignored component of system performance is the I/O system. Today's applications expect a great deal from underlying storage systems and software, and both high performance distributed storage and high level interfaces have been developed to fill these needs. In this paper they discuss the I/O performance of a parallel scientific application on a Linux cluster, the FLASH astrophysics code. This application relies on three I/O software components to provide high performance parallel I/O on Linux clusters: the Parallel Virtual File System (PVFS), the ROMIO MPI-IO implementation, and the Hierarchical Data Format (HDF5) library. First they discuss the roles played by each of these components in providing an I/O solution. Next they discuss the FLASH I/O benchmark and point out its relevance. Following this they examine the performance of the benchmark, and through instrumentation of both the application and underlying system software code they discover the location of major software bottlenecks. They work around the most inhibiting of these bottlenecks, showing substantial performance improvement. Finally they point out similarities between the inefficiencies found here and those found in message passing systems, indicating that research in the message passing field could be leveraged to solve similar problems in high-level I/O interfaces
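
    The parallel-HDF5-over-MPI-IO pattern that the FLASH I/O path exercises can be sketched briefly (Python with mpi4py and an MPI-enabled h5py build; the dataset name and sizes are invented for illustration, while the benchmark itself uses the HDF5 library directly):

        from mpi4py import MPI
        import h5py
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, nprocs = comm.rank, comm.size
        n_local = 1024  # per-process slab size (illustrative)

        # Collective open through the MPI-IO driver; each rank writes its slab.
        with h5py.File("checkpoint.h5", "w", driver="mpio", comm=comm) as f:
            dset = f.create_dataset("density", (nprocs * n_local,), dtype="f8")
            start = rank * n_local
            dset[start:start + n_local] = np.full(n_local, float(rank))

        # Run with, e.g.: mpirun -n 4 python checkpoint.py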

  19. Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Canon, Shane

    2011-10-12

    DOE JGI's Zhong Wang, chair of the High-performance Computing session, gives a brief introduction before Berkeley Lab's Shane Canon talks about "Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  20. HPC Colony II: FAST_OS II: Operating Systems and Runtime Systems at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, Jose [IBM, Armonk, NY (United States)

    2013-11-13

    HPC Colony II has been a 36-month project focused on providing portable performance for leadership class machines—a task made difficult by the emerging variety of more complex computer architectures. The project attempts to move the burden of portable performance to adaptive system software, thereby allowing domain scientists to concentrate on their field rather than the fine details of a new leadership class machine. To accomplish our goals, we focused on adding intelligence into the system software stack. Our revised components include: new techniques to address OS jitter; new techniques to dynamically address load imbalances; new techniques to map resources according to architectural subtleties and application dynamic behavior; new techniques to dramatically improve the performance of checkpoint-restart; and new techniques to address membership service issues at scale.

  1. Final Report for File System Support for Burst Buffers on HPC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Yu, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-11-27

    Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations is of great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV: a Key-Value Store for Metadata Management of Distributed Burst Buffers; a user-level file system with multiple backends; and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.

  2. An integrated genetic data environment (GDE)-based LINUX interface for analysis of HIV-1 and other microbial sequences.

    Science.gov (United States)

    De Oliveira, T; Miller, R; Tarin, M; Cassol, S

    2003-01-01

    Sequence databases encode a wealth of information needed to develop improved vaccination and treatment strategies for the control of HIV and other important pathogens. To facilitate effective utilization of these datasets, we developed a user-friendly GDE-based LINUX interface that reduces input/output file formatting. GDE was adapted to the Linux operating system, bioinformatics tools were integrated with microbe-specific databases, and up-to-date GDE menus were developed for several clinically important viral, bacterial and parasitic genomes. Each microbial interface was designed for local access and contains Genbank, BLAST-formatted and phylogenetic databases. GDE-Linux is available for research purposes by direct application to the corresponding author. Application-specific menus and support files can be downloaded from (http://www.bioafrica.net).

  3. LLNL-G3Dv3: Global P wave tomography model for improved regional and teleseismic travel time prediction

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, N. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Myers, S. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Johannesson, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Matzel, E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-10-06

    We develop a global-scale P wave velocity model (LLNL-G3Dv3) designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The model provides a new image of Earth's interior, but the underlying practical purpose of the model is to provide enhanced seismic event location capabilities. The LLNL-G3Dv3 model is based on ~2.8 million P and Pn arrivals that are re-processed using our global multiple-event locator called Bayesloc. We construct LLNL-G3Dv3 within a spherical tessellation based framework, allowing for explicit representation of undulating and discontinuous layers including the crust and transition zone layers. Using a multiscale inversion technique, regional trends as well as fine details are captured where the data allow. LLNL-G3Dv3 exhibits large-scale structures including cratons and superplumes as well as numerous complex details in the upper mantle, including within the transition zone. In particular, the model reveals new details of a vast network of subducted slabs trapped within the transition zone beneath much of Eurasia, including beneath the Tibetan Plateau. We demonstrate the impact of Bayesloc multiple-event location on the resulting tomographic images through comparison with images produced without the benefit of multiple-event constraints (single-event locations). We find that the multiple-event locations allow for better reconciliation of the large set of direct P phases recorded at 0–97° distance and yield a smoother and more continuous image relative to the single-event locations. Travel times predicted from a 3-D model are also found to be strongly influenced by the initial locations of the input data, even when an iterative inversion/relocation technique is employed.

  4. Summary of Environmental Data Analysis and Work Performed by Lawrence Livermore National Laboratory (LLNL) in Support of the Navajo Nation Abandoned Mine Lands Project at Tse Tah, Arizona

    Energy Technology Data Exchange (ETDEWEB)

    Taffet, Michael J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Esser, Bradley K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Madrid, Victor M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-17

    This report summarizes work performed by Lawrence Livermore National Laboratory (LLNL) under Navajo Nation Services Contract CO9729 in support of the Navajo Abandoned Mine Lands Reclamation Program (NAMLRP). Due to restrictions on access to uranium mine waste sites at Tse Tah, Arizona that developed during the term of the contract, not all of the work scope could be performed. LLNL was able to interpret environmental monitoring data provided by NAMLRP. Summaries of these data evaluation activities are provided in this report. Additionally, during the contract period, LLNL provided technical guidance, instructional meetings, and review of relevant work performed by NAMLRP and its contractors that was not contained in the contract work scope.

  5. The VERCE Science Gateway: Enabling User Friendly HPC Seismic Wave Simulations.

    Science.gov (United States)

    Casarotti, E.; Spinuso, A.; Matser, J.; Leong, S. H.; Magnoni, F.; Krause, A.; Garcia, C. R.; Muraleedharan, V.; Krischer, L.; Anthes, C.

    2014-12-01

    The EU-funded project VERCE (Virtual Earthquake and seismology Research Community in Europe) aims to deploy technologies which satisfy the HPC and data-intensive requirements of modern seismology. As a result of VERCE's official collaboration with the EU project SCI-BUS, access to computational resources, like local clusters and international infrastructures (EGI and PRACE), is made homogeneous and integrated within a dedicated science gateway based on the gUSE framework. In this presentation we give a detailed overview on the progress achieved with the developments of the VERCE Science Gateway, according to a use-case driven implementation strategy. More specifically, we show how the computational technologies and data services have been integrated within a tool for Seismic Forward Modelling, whose objective is to offer the possibility to perform simulations of seismic waves as a service to the seismological community. We will introduce the interactive components of the OGC map based web interface and how it supports the user with setting up the simulation. We will go through the selection of input data, which are either fetched from federated seismological web services, adopting community standards, or provided by the users themselves by accessing their own document data store. The HPC scientific codes can be selected from a number of waveform simulators, currently available to the seismological community as batch tools or with limited configuration capabilities in their interactive online versions. The results will be staged out via a secure GridFTP transfer to a VERCE data layer managed by iRODS. The provenance information of the simulation will be automatically cataloged by the data layer via NoSQL technologies. Finally, we will show an example of how the visualisation output of the gateway can be enhanced by the connection with immersive projection technology at the Virtual Reality and Visualisation Centre of the Leibniz Supercomputing Centre (LRZ).

  6. INFN-Pisa scientific computation environment (GRID, HPC and Interactive Analysis)

    International Nuclear Information System (INIS)

    Arezzini, S; Carboni, A; Caruso, G; Ciampa, A; Coscetti, S; Mazzoni, E; Piras, S

    2014-01-01

    The INFN-Pisa Tier2 infrastructure is described, optimized not only for GRID CPU and storage access, but also for a more interactive use of the resources, in order to provide good solutions for the final data analysis step. The Data Center, equipped with about 6700 production cores, permits the use of modern analysis techniques realized via advanced statistical tools (like RooFit and RooStats) implemented on multicore systems. In particular, a POSIX file storage access integrated with standard SRM access is provided. The unified storage infrastructure is therefore described, based on GPFS and Xrootd, used both for the SRM data repository and for interactive POSIX access. Such a common infrastructure allows transparent access to the Tier2 data for the users' interactive analysis. The organization of a specialized many-core CPU facility devoted to interactive analysis is also described, along with the login mechanism integrated with the INFN-AAI (National INFN Infrastructure) to extend site access and use to a geographically distributed community. Such infrastructure is also used for a national computing facility serving the INFN theoretical community; it enables a synergic use of computing and storage resources. Our Center, initially developed for the HEP community, is now growing and also includes fully integrated HPC resources. In recent years a cluster facility (1000 cores, parallel use via InfiniBand connection) has been installed and managed, and we are now updating this facility so that it will provide resources for all the intermediate-level HPC computing needs of the INFN theoretical national community.

  7. SoAx: A generic C++ Structure of Arrays for handling particles in HPC codes

    Science.gov (United States)

    Homann, Holger; Laenen, Francois

    2018-03-01

    The numerical study of physical problems often requires integrating the dynamics of a large number of particles evolving according to a given set of equations. Particles are characterized by the information they are carrying, such as an identity, a position and other properties. There are, generally speaking, two different possibilities for handling particles in high performance computing (HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of the object-oriented programming (OOP) paradigm in that the particle information is implemented as a structure. Here, an object (realization of the structure) represents one particle and a set of many particles is stored in an array. In contrast, using the concept of a Structure of Arrays (SoA), a single structure holds several arrays, each representing one property (such as the identity) of the whole set of particles. The AoS approach is often implemented in HPC codes due to its handiness and flexibility. For a class of problems, however, it is known that the performance of SoA is much better than that of AoS. We confirm this observation for our particle problem. Using a benchmark we show that on modern Intel Xeon processors the SoA implementation is typically several times faster than the AoS one. On Intel's MIC co-processors the performance gap even attains a factor of ten. The same is true for GPU computing, using both computational and multi-purpose GPUs. Combining performance and handiness, we present the library SoAx, which has optimal performance (on CPUs, MICs, and GPUs) while providing the same handiness as AoS. For this, SoAx uses modern C++ design techniques such as template meta-programming that allow code to be generated automatically for user-defined heterogeneous data structures.
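
    The AoS-versus-SoA effect is easy to reproduce outside C++. A small sketch (Python with NumPy; sizes arbitrary) in which the SoA layout turns a per-object loop into one contiguous vectorized update:

        import time
        import numpy as np

        N = 1_000_000

        class Particle:                      # AoS: one object per particle
            __slots__ = ("ident", "x")
            def __init__(self, i):
                self.ident, self.x = i, 0.0

        aos = [Particle(i) for i in range(N)]
        soa = {"ident": np.arange(N, dtype=np.float64),  # SoA: one array per field
               "x": np.zeros(N)}

        t = time.perf_counter()
        for p in aos:                        # strided, per-object update
            p.x += 0.1 * p.ident
        t_aos = time.perf_counter() - t

        t = time.perf_counter()
        soa["x"] += 0.1 * soa["ident"]       # one contiguous vectorized update
        t_soa = time.perf_counter() - t

        print(f"AoS {t_aos:.3f} s, SoA {t_soa:.3f} s")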

  8. Estimating The Reliability of the Lawrence Livermore National Laboratory (LLNL) Flash X-ray (FXR) Machine

    International Nuclear Information System (INIS)

    Ong, M M; Kihara, R; Zentler, J M; Kreitzer, B R; DeHope, W J

    2007-01-01

    At Lawrence Livermore National Laboratory (LLNL), our flash X-ray accelerator (FXR) is used on multi-million dollar hydrodynamic experiments. Because of the importance of the radiographs, FXR must be ultra-reliable. Flash linear accelerators that can generate a 3 kA beam at 18 MeV are very complex. They have thousands, if not millions, of critical components that could prevent the machine from performing correctly. For the last five years, we have quantified and are tracking component failures. From this data, we have determined that the reliability of the high-voltage gas-switches that initiate the pulses, which drive the accelerator cells, dominates the statistics. The failure mode is a single-switch pre-fire that reduces the energy of the beam and degrades the X-ray spot-size. The unfortunate result is a lower resolution radiograph. FXR is a production machine that allows only a modest number of pulses for testing. Therefore, reliability switch testing that requires thousands of shots is performed on our test stand. Study of representative switches has produced pre-fire statistical information and probability distribution curves. This information is applied to FXR to develop test procedures and determine individual switch reliability using a minimal number of accelerator pulses

  9. LLNL Underground-Coal-Gasification Project. Quarterly progress report, July-September 1981

    Energy Technology Data Exchange (ETDEWEB)

    Stephens, D.R.; Clements, W. (eds.)

    1981-11-09

    We have continued our laboratory studies of forward gasification in small blocks of coal mounted in 55-gal drums. A steam/oxygen mixture is fed into a small hole drilled longitudinally through the center of the block, the coal is ignited near the inlet and burns toward the outlet, and the product gases come off at the outlet. Various diagnostic measurements are made during the course of the burn, and afterward the coal block is split open so that the cavity can be examined. Development work continues on our mathematical model for the small coal block experiments. Preparations for the large block experiments at a coal outcrop in the Tono Basin of Washington State have required steadily increasing effort with the approach of the scheduled starting time for the experiments (Fall 1981). Also in preparation is the deep gasification experiment, Tono 1, planned for another site in the Tono Basin after the large block experiments have been completed. Wrap-up work continues on our previous gasification experiments in Wyoming. Results of the postburn core-drilling program Hoe Creek 3 are presented here. Since 1976 the Soviets have been granted four US patents on various aspects of the underground coal gasification process. These patents are described here, and techniques of special interest are noted. Finally, we include ten abstracts of pertinent LLNL reports and papers completed during the quarter.

  10. Status of experiments at LLNL on high-power X-band microwave generators

    International Nuclear Information System (INIS)

    Houck, T.L.; Westenskow, G.A.

    1994-01-01

    The Microwave Source Facility at the Lawrence Livermore National Laboratory (LLNL) is studying the application of induction accelerator technology to high-power microwave generators suitable for linear collider power sources. The authors report on the results of two experiments, both using the Choppertron's 11.4 GHz modulator and a 5-MeV, 1-kA induction beam. The first experimental configuration has a single traveling-wave output structure designed to produce in excess of 300 MW in a single fundamental waveguide. This output structure consists of 12 individual cells, the first two incorporating de-Q-ing circuits to damp higher-order resonant modes. The second experiment studies the feasibility of enhancing beam-to-microwave power conversion by accelerating a modulated beam with induction cells. Referred to as the ''Reacceleration Experiment,'' it consists of three traveling-wave output structures designed to produce about 125 MW per output, with two induction cells located between the outputs. The status of current and planned experiments is presented

  11. Pleiades: A Sub-picosecond Tunable X-ray Source at the LLNL Electron Linac

    International Nuclear Information System (INIS)

    Slaughter, Dennis; Springer, Paul; Le Sage, Greg; Crane, John; Ditmire, Todd; Cowan, Tom; Anderson, Scott G.; Rosenzweig, James B.

    2002-01-01

    The use of ultrafast laser pulses to generate very high brightness, ultrashort (fs to ps) pulses of x-rays is a topic of great interest to the x-ray user community. In principle, femtosecond-scale pump-probe experiments can be used to temporally resolve the structural dynamics of materials on the time scale of atomic motion. The development of sub-ps x-ray pulses will make possible a wide range of materials and plasma physics studies with unprecedented time resolution. A current project at LLNL will provide such a novel x-ray source, based on Thomson scattering of high-power, short laser pulses off a high-peak-brightness, relativistic electron bunch. The system is based on a 5 mm-mrad normalized-emittance photoinjector, a 100 MeV electron RF linac, and a 300 mJ, 35 fs solid-state laser system. The Thomson x-ray source produces ultrafast pulses with x-ray energies capable of probing into high-Z metals, and a high flux per pulse enabling single-shot experiments. The system will also operate at a high repetition rate (∼ 10 Hz). (authors)
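    For orientation (our own back-of-the-envelope estimate, not numbers from the paper): in a head-on Thomson-scattering geometry the laser photon energy is upshifted by roughly a factor of 4γ², with the on-axis scattered energy

        E_x \approx \frac{4 \gamma^2 E_L}{1 + \gamma^2 \theta^2}, \qquad \gamma = \frac{E_e}{m_e c^2}.

    Assuming a Ti:sapphire-like 800 nm drive pulse (E_L ≈ 1.55 eV) and the 100 MeV beam quoted above (γ ≈ 196), the on-axis photon energy would reach E_x ≈ 4 × 196² × 1.55 eV ≈ 0.24 MeV; tuning the electron energy and interaction geometry moves the scattered x-ray energy over a broad range.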

  12. Assessment and cleanup of the Taxi Strip waste storage area at LLNL [Lawrence Livermore National Laboratory

    International Nuclear Information System (INIS)

    Buerer, A.

    1983-01-01

    In September 1982 the Hazards Control Department of the Lawrence Livermore National Laboratory (LLNL) began a final radiological survey of a former low-level radioactive waste storage area called the Taxi Strip so that the area could be released for construction of an office building. Collection of soil samples at the location of a proposed sewer line led to the discovery of an old disposal pit containing soil contaminated with low-level radioactive waste and organic solvents. The Taxi Strip area was excavated, leading to the discovery of three additional small pits. The cleanup of Pit No. 1 is considered complete for radioactive contamination. The results of the chlorinated-solvent analysis of the borehole samples, and of the limited number of samples analyzed by gas chromatography/mass spectrometry, indicate that solvent cleanup at this pit is complete. This is being verified by gas chromatography/mass spectrometry analysis of a few additional soil samples from the bottom, sides, and ends of the pit. As a precaution, samples are also being analyzed for metals to determine whether further excavation is necessary. Cleanup of Pits No. 2 and No. 3 is considered complete for radioactive and solvent contamination; results of the analysis for metals will determine whether excavation is complete. Excavation of Pit No. 4, which resulted from surface leakage of radioactive contamination from an evaporation tray, is complete

  13. Summary of LLNL's accomplishments for the FY93 Waste Processing Operations Program

    International Nuclear Information System (INIS)

    Grasz, E.; Domning, E.; Heggins, D.; Huber, L.; Hurd, R.; Martz, H.; Roberson, P.; Wilhelmsen, K.

    1994-04-01

    Under the US Department of Energy's (DOE's) Office of Technology Development (OTD) Robotic Technology Development Program (RTDP), the Waste Processing Operations (WPO) Program was initiated in FY92 to address the development of automated material handling and automated chemical and physical processing systems for mixed wastes. The Program's mission was to develop a strategy for the treatment of all DOE mixed, low-level, and transuranic wastes. As part of this mission, DOE's Mixed Waste Integrated Program (MWIP) was charged with the development of innovative waste treatment technologies to surmount shortcomings of existing baseline systems. Current technology advancements and applications result from the cooperation of private industry, educational institutions, and several national laboratories operated for DOE. This summary document presents the LLNL Environmental Restoration and Waste Management (ER and WM) Automation and Robotics Section's contributions in support of DOE's FY93 WPO Program. It further describes the technological developments that were integrated into the 1993 Mixed Waste Operations (MWO) Demonstration held at SRTC in November 1993

  14. The EBIT Calorimeter Spectrometer: a new, permanent user facility at the LLNL EBIT

    International Nuclear Information System (INIS)

    Porter, F.S.; Beiersdorfer, P.; Brown, G.V.; Doriese, W.; Gygax, J.; Kelley, R.L.; Kilbourne, C.A.; King, J.; Irwin, K.; Reintsema, C.; Ullom, J.

    2007-01-01

    The EBIT Calorimeter Spectrometer (ECS) is currently being completed and will be installed at the EBIT facility at the Lawrence Livermore National Laboratory in October 2007. The ECS will replace the smaller XRS/EBIT microcalorimeter spectrometer that has been in almost continuous operation since 2000. The XRS/EBIT was based on a spare laboratory cryostat and an engineering-model detector system from the Suzaku/XRS observatory program. The new ECS spectrometer was built to be a low-maintenance, high-performance implanted-silicon microcalorimeter spectrometer with 4 eV resolution at 6 keV, 32 detector channels, 10 μs event timing, and the capability of uninterrupted acquisition sessions of over 60 hours at 50 mK. The XRS/EBIT program has been very successful, producing many results on topics such as laboratory astrophysics, atomic physics, nuclear physics, and calibration of the spectrometers for the National Ignition Facility. The ECS spectrometer will continue this work into the future with improved spectral resolution, integration times, and ease of use. We designed the ECS instrument with TES detectors in mind by using the same highly successful magnetic shielding as our laboratory TES cryostats. This design will lead to a future TES instrument at the LLNL EBIT. Here we discuss the legacy of the XRS/EBIT program, the performance of the new ECS spectrometer, and plans for a future TES instrument.

  15. Overview and applications of the Monte Carlo radiation transport kit at LLNL

    International Nuclear Information System (INIS)

    Sale, K. E.

    1999-01-01

    Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons and from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries, with the appropriate level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that substantial savings of time and effort (i.e., money) can be realized by using these tools. In addition, it is possible to computationally separate out and investigate effects that cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become more widely accessible. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented, along with a few examples of applications and future directions
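    As a hedged, textbook-level illustration of the Monte Carlo transport idea (not LLNL's production codes), the following sketch estimates the uncollided transmission of monoenergetic photons through a slab by sampling exponential free paths; the attenuation coefficient and thickness are made-up values:

        #include <cmath>
        #include <cstdio>
        #include <random>

        int main() {
            const double mu = 0.2;        // total attenuation coefficient, 1/cm (assumed)
            const double thickness = 5.0; // slab thickness, cm (assumed)
            const int histories = 1000000;

            std::mt19937_64 rng(12345);
            std::exponential_distribution<double> free_path(mu);

            long transmitted = 0;
            for (int i = 0; i < histories; ++i)
                if (free_path(rng) > thickness)  // photon crossed without interacting
                    ++transmitted;

            std::printf("MC transmission %.4f vs analytic exp(-mu*t) = %.4f\n",
                        static_cast<double>(transmitted) / histories,
                        std::exp(-mu * thickness));
        }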

  16. LPIC-1 Linux Professional Institute certification study guide exam 101-400 and exam 102-400

    CERN Document Server

    Bresnahan, Christine

    2015-01-01

    Thorough LPIC-1 exam prep, with complete coverage and bonus study tools. The LPIC-1 Study Guide is your comprehensive source for the popular Linux Professional Institute Certification Level 1 exam, fully updated to reflect the changes in the latest version of the exam. With 100% coverage of objectives for both LPI 101 and LPI 102, this book provides clear and concise information on all Linux administration topics and practical examples drawn from real-world experience. Authoritative coverage of key exam topics includes GNU and UNIX commands, devices, file systems, file system hierarchy, and user interfaces.

  17. RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul D. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-06-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2.1i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  18. RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows

    International Nuclear Information System (INIS)

    Bayless, Paul David

    2015-01-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3.4i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  19. A real-time computer simulation of nuclear simulator software using standard PC hardware and linux environments

    International Nuclear Information System (INIS)

    Cha, K. H.; Kweon, K. C.

    2001-01-01

    A feasibility study in which standard PC hardware and Real-Time Linux are applied to the real-time simulation of nuclear simulator software is presented in this paper. The feasibility prototype was established with the existing software of the Compact Nuclear Simulator (CNS). Through the real-time implementation in the feasibility prototype, we have found that this approach enables computer-based predictive simulation, owing both to the remarkable improvement in real-time performance and to the reduced effort needed for a real-time implementation under standard PC hardware and Real-Time Linux environments
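    A minimal sketch of one ingredient of such a real-time implementation, assuming the standard POSIX real-time APIs rather than the specific RT-Linux version used in the paper: a periodic 10 ms simulation step scheduled at a fixed real-time priority (the priority, period, and step function are illustrative):

        #include <cstdio>
        #include <sched.h>
        #include <time.h>

        static void simulation_step() { /* advance the plant model one step */ }

        int main() {
            sched_param sp{};
            sp.sched_priority = 80;                     // assumed priority level
            if (sched_setscheduler(0, SCHED_FIFO, &sp)) // needs root/CAP_SYS_NICE
                std::perror("sched_setscheduler");

            timespec next{};
            clock_gettime(CLOCK_MONOTONIC, &next);
            const long period_ns = 10000000L;           // 10 ms cycle (assumed)

            for (int i = 0; i < 1000; ++i) {
                simulation_step();
                next.tv_nsec += period_ns;              // schedule the next release
                while (next.tv_nsec >= 1000000000L) {   // normalize the timespec
                    next.tv_nsec -= 1000000000L;
                    ++next.tv_sec;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
            }
        }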

  20. Report on the B-Fields at NIF Workshop Held at LLNL October 12-13, 2015

    International Nuclear Information System (INIS)

    Fournier, K. B.; Moody, J. D.

    2015-01-01

    A national ICF laboratory workshop on requirements for a magnetized target capability on NIF was held at LLNL on October 12 and 13, attended by experts from LLNL, SNL, LLE, LANL, GA, and NRL. Advocates for indirect drive (LLNL), magnetic (Z) drive (SNL), polar direct drive (LLE), and basic science needing applied B fields (many institutions) presented and discussed requirements for the magnetized target capabilities they would like to see. A 30 T capability was the most frequently requested. A phased operation, increasing the field in steps experimentally, can be envisioned. NIF management will take the inputs from the scientific community represented at the workshop and recommend pulsed-power magnet parameters for NIF that best meet the collective user requests. In parallel, LLNL will continue investigating magnets for future generations that might be powered by compact laser-driven B-field generators (Moody, Fujioka, Santos, Woolsey, Pollock). The NIF facility engineers will start to analyze the compatibility of the recommended pulsed-magnet parameters (size, field, rise time, materials) with NIF chamber constraints, diagnostic access, and final-optics protection against debris in FY16. The objective of this assessment will be to develop a schedule for achieving an initial B-field capability. Based on an initial assessment, room-temperature magnetized gas capsules will be fielded on NIF first. Magnetized cryo-ice-layered targets will take longer (more compatibility issues). Magnetized wetted-foam DT targets (Olson) may have somewhat fewer compatibility issues, making them a more likely choice for the first cryo-ice-layered target fielded with an applied Bz.

  1. Joint research and development on toxic-material emergency response between ENEA and LLNL. 1982 progress report

    International Nuclear Information System (INIS)

    Gudiksen, P.; Lange, R.; Dickerson, M.; Sullivan, T.; Rosen, L.; Walker, H.; Boeri, G.B.; Caracciolo, R.; Fiorenza, R.

    1982-11-01

    A summary is presented of current and future cooperative studies between ENEA and LLNL researchers designed to develop improved real-time emergency response capabilities for assessing the environmental consequences resulting from an accidental release of toxic materials into the atmosphere. These studies include development and evaluation of atmospheric transport and dispersion models, interfacing of data processing and communications systems, supporting meteorological field experiments, and integration of radiological measurements and model results into real-time assessments

  2. The LLNL [Lawrence Livermore National Laboratory] ICF [Inertial Confinement Fusion] Program: Progress toward ignition in the Laboratory

    International Nuclear Information System (INIS)

    Storm, E.; Batha, S.H.; Bernat, T.P.; Bibeau, C.; Cable, M.D.; Caird, J.A.; Campbell, E.M.; Campbell, J.H.; Coleman, L.W.; Cook, R.C.; Correll, D.L.; Darrow, C.B.; Davis, J.I.; Drake, R.P.; Ehrlich, R.B.; Ellis, R.J.; Glendinning, S.G.; Haan, S.W.; Haendler, B.L.; Hatcher, C.W.; Hatchett, S.P.; Hermes, G.L.; Hunt, J.P.; Kania, D.R.; Kauffman, R.L.; Kilkenny, J.D.; Kornblum, H.N.; Kruer, W.L.; Kyrazis, D.T.; Lane, S.M.; Laumann, C.W.; Lerche, R.A.; Letts, S.A.; Lindl, J.D.; Lowdermilk, W.H.; Mauger, G.J.; Montgomery, D.S.; Munro, D.H.; Murray, J.R.; Phillion, D.W.; Powell, H.T.; Remington, B.R.; Ress, D.B.; Speck, D.R.; Suter, L.J.; Tietbohl, G.L.; Thiessen, A.R.; Trebes, J.E.; Trenholme, J.B.; Turner, R.E.; Upadhye, R.S.; Wallace, R.J.; Wiedwald, J.D.; Woodworth, J.G.; Young, P.M.; Ze, F.

    1990-01-01

    The Inertial Confinement Fusion (ICF) Program at the Lawrence Livermore National Laboratory (LLNL) has made substantial progress in target physics, target diagnostics, and laser science and technology. In each area, progress required the development of experimental techniques and computational modeling. The objectives of the target physics experiments in the Nova laser facility are to address and understand critical physics issues that determine the conditions required to achieve ignition and gain in an ICF capsule. The LLNL experimental program primarily addresses indirect-drive implosions, in which the capsule is driven by x rays produced by the interaction of the laser light with a high-Z plasma. Experiments address both the physics of generating the radiation environment in a laser-driven hohlraum and the physics associated with imploding ICF capsules to ignition and high-gain conditions in the absence of alpha deposition. Recent experiments and modeling have established much of the physics necessary to validate the basic concept of ignition and ICF target gain in the laboratory. The rapid progress made in the past several years, and in particular, recent results showing higher radiation drive temperatures and implosion velocities than previously obtained and assumed for high-gain target designs, has led LLNL to propose an upgrade of the Nova laser to 1.5 to 2 MJ (at 0.35 μm) to demonstrate ignition and energy gains of 10 to 20 -- the Nova Upgrade

  3. PERFORMANCE MEASUREMENT OF THE ROUND-ROBIN SCHEDULER FOR A LINUX VIRTUAL SERVER IN A WEB SERVER CASE

    Directory of Open Access Journals (Sweden)

    Royyana Muslim Ijtihadie

    2005-07-01

    With the growing number of Internet users and the adoption of the Internet in everyday life, data traffic on the Internet has increased significantly. In step with this, the workload of servers providing services on the Internet has also risen considerably, and a server may become overloaded at some point. To overcome this, a server-cluster configuration scheme using the load-balancing concept is applied. A load-balancing server applies an algorithm to distribute the work, and the round-robin algorithm is used in the Linux Virtual Server. This study measures the performance of a Linux Virtual Server that uses the round-robin algorithm to schedule the distribution of load across servers. Performance is measured from the side of a client trying to access the web server: the number of requests completed per second, the time taken to complete a single request, and the resulting throughput. The experiments show that using LVS can improve performance, namely by increasing the number of requests served per second
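    A minimal user-space sketch of the round-robin policy whose performance is measured here (LVS implements this inside the kernel; the server addresses are placeholders): each incoming request is simply handed to the next real server in a fixed cycle.

        #include <cstddef>
        #include <cstdio>
        #include <string>
        #include <utility>
        #include <vector>

        class RoundRobin {
            std::vector<std::string> servers_;
            std::size_t next_ = 0;
        public:
            explicit RoundRobin(std::vector<std::string> s) : servers_(std::move(s)) {}
            const std::string &pick() {
                const std::string &chosen = servers_[next_];
                next_ = (next_ + 1) % servers_.size();  // advance the cycle
                return chosen;
            }
        };

        int main() {
            RoundRobin lb({"10.0.0.1", "10.0.0.2", "10.0.0.3"});  // hypothetical real servers
            for (int request = 0; request < 6; ++request)
                std::printf("request %d -> %s\n", request, lb.pick().c_str());
        }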

  4. LLNL MOX fuel lead assemblies data report for the surplus plutonium disposition environmental impact statement

    International Nuclear Information System (INIS)

    O'Connor, D.G.; Fisher, S.E.; Holdaway, R.

    1998-08-01

    The purpose of this document is to support the US Department of Energy (DOE) Fissile Materials Disposition Program's preparation of the draft surplus plutonium disposition environmental impact statement. This is one of several responses to data call requests for background information on activities associated with the operation of the lead assembly (LA) mixed-oxide (MOX) fuel fabrication facility. The DOE Office of Fissile Materials Disposition (DOE-MD) has developed a dual-path strategy for disposition of surplus weapons-grade plutonium. One of the paths is to disposition surplus plutonium through irradiation of MOX fuel in commercial nuclear reactors. MOX fuel consists of plutonium and uranium oxides (PuO2 and UO2), typically containing 95% or more UO2. DOE-MD requested that the DOE Site Operations Offices nominate DOE sites that meet established minimum requirements that could produce MOX LAs. LLNL has proposed an LA MOX fuel fabrication approach that would be done entirely inside an S and S Category 1 area. This includes receipt and storage of PuO2 powder, fabrication of MOX fuel pellets, assembly of fuel rods and bundles, and shipping of the packaged fuel to a commercial reactor site. Support activities will take place within a Category 1 area. Building 332 will be used to receive and store the bulk PuO2 powder, fabricate MOX fuel pellets, and assemble fuel rods. Building 334 will be used to assemble, store, and ship fuel bundles. Only minor modifications would be required of Building 332: uncontaminated glove boxes would need to be removed, partition walls would need to be removed, and minor modifications to the ventilation system would be required

  5. Attenuation Drift in the Micro-Computed Tomography System at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Alex A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seetho, Isaac [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kallman, Jeff [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lennox, Kristin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glascoe, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-12

    The maximum allowable level of drift in the linear attenuation coefficients (μ) for a Lawrence Livermore National Laboratory (LLNL) micro-computed tomography (MCT) system was determined to be 0.1%. After ~100 scans were acquired during the period of November 2014 to March 2015, the drift in μ for a set of six reference materials reached or exceeded 0.1%. Two strategies have been identified to account for or correct the drift. First, normalizing the 160 kV and 100 kV μ data by the μ of water at the corresponding energy, in contrast to conducting normalization at the 160 kV energy only, significantly compensates for measurement drift. Even after the modified normalization, the μ of polytetrafluoroethylene (PTFE) increases linearly with scan number at an average rate of 0.00147% per scan. This is consistent with PTFE radiation damage documented in the literature. The second strategy suggested is the replacement of the PTFE reference with fluorinated ethylene propylene (FEP), which has the same effective atomic number (Ze) and electron density (ρe) as PTFE but is 10 times more radiation resistant. This is important, as effective atomic number and electron density are key parameters in analysis. The presence of a material with properties such as PTFE's, when taken together with the remaining references, allows a broad range of the (Ze, ρe) feature space to be used in analysis. While FEP is documented as 10 times more radiation resistant, testing will be necessary to assess how often, if at all, FEP will need to be replaced. As radiation damage to references has been observed, it will be necessary to monitor all reference materials for radiation damage to ensure consistent x-ray characteristics of the references.
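    For orientation (our arithmetic from the numbers quoted above, not a figure from the report): at the observed PTFE drift rate of 0.00147% per scan, the 0.1% drift budget is consumed after roughly 0.1 / 0.00147 ≈ 68 scans, consistent with the drift reaching the threshold within the ~100 scans acquired.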

  6. LLNL MOX fuel lead assemblies data report for the surplus plutonium disposition environmental impact statement

    Energy Technology Data Exchange (ETDEWEB)

    O'Connor, D.G.; Fisher, S.E.; Holdaway, R. [and others]

    1998-08-01

    The purpose of this document is to support the US Department of Energy (DOE) Fissile Materials Disposition Program's preparation of the draft surplus plutonium disposition environmental impact statement. This is one of several responses to data call requests for background information on activities associated with the operation of the lead assembly (LA) mixed-oxide (MOX) fuel fabrication facility. The DOE Office of Fissile Materials Disposition (DOE-MD) has developed a dual-path strategy for disposition of surplus weapons-grade plutonium. One of the paths is to disposition surplus plutonium through irradiation of MOX fuel in commercial nuclear reactors. MOX fuel consists of plutonium and uranium oxides (PuO2 and UO2), typically containing 95% or more UO2. DOE-MD requested that the DOE Site Operations Offices nominate DOE sites that meet established minimum requirements that could produce MOX LAs. LLNL has proposed an LA MOX fuel fabrication approach that would be done entirely inside an S and S Category 1 area. This includes receipt and storage of PuO2 powder, fabrication of MOX fuel pellets, assembly of fuel rods and bundles, and shipping of the packaged fuel to a commercial reactor site. Support activities will take place within a Category 1 area. Building 332 will be used to receive and store the bulk PuO2 powder, fabricate MOX fuel pellets, and assemble fuel rods. Building 334 will be used to assemble, store, and ship fuel bundles. Only minor modifications would be required of Building 332: uncontaminated glove boxes would need to be removed, partition walls would need to be removed, and minor modifications to the ventilation system would be required.

  7. DB2 9 for Linux, Unix, and Windows database administration upgrade certification study guide

    CERN Document Server

    Sanders, Roger E

    2007-01-01

    Written by one of the world's leading DB2 authors, who is an active participant in the development of the DB2 certification exams, this resource covers everything a database administrator needs to know to pass the DB2 9 for Linux, UNIX, and Windows Database Administration Certification Upgrade exam (Exam 736). This comprehensive study guide discusses all exam topics: server management, data placement, XML concepts, analyzing activity, high availability, database security, and much more. Each chapter contains an extensive set of practice questions along with carefully explained answers. Both information-technology professionals with experience as database administrators who hold a current DBA certification on DB2 version 8 and individuals who would like to learn the new features of DB2 9 will benefit from the information in this reference guide.

  8. FTAP: a Linux-based program for tapping and music experiments.

    Science.gov (United States)

    Finney, S A

    2001-02-01

    This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.

  9. Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring.

    Science.gov (United States)

    Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni

    2015-08-19

    This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed around a Broadcom BCM2835 System on a Chip (SoC) running a Debian-based Linux operating system, which allows for the construction of a complete monitoring system offering multiple possibilities for storage, data processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for image-based monitoring of the island of Tenerife and of ground deformation on the island of El Hierro.

  10. A Linux cluster for between-pulse magnetic equilibrium reconstructions and other processor bound analyses

    International Nuclear Information System (INIS)

    Peng, Q.; Groebner, R. J.; Lao, L. L.; Schachter, J.; Schissel, D. P.; Wade, M. R.

    2001-01-01

    A 12-processor Linux PC cluster has been installed to perform between-pulse magnetic equilibrium reconstructions during tokamak operations using the EFIT code, written in FORTRAN. The MPICH implementation of the Message Passing Interface (MPI) is employed by EFIT for data distribution and communication. The new system calculates equilibria eight times faster than the previous one, yielding a complete equilibrium time history on a 25 ms time scale 4 min after the pulse ends. A graphical interface is provided for users to control the time resolution and the type of EFITs. The next analysis to benefit from the cluster is CERQUICK, written in IDL, for ion temperature profile analysis. The plan is to expand the cluster so that a full profile analysis (Te, Ti, ne, Vr, Zeff) can be made available between pulses, which lays the groundwork for kinetic EFIT and/or ONETWO power-balance analyses
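    A minimal sketch of the work-splitting idea (plain MPI in C++, not the actual EFIT/MPICH code; the slice count and work function are placeholders): each rank reconstructs the equilibria for its share of the time slices.

        #include <cstdio>
        #include <mpi.h>

        static void reconstruct_equilibrium(int slice) { /* EFIT-like work for one time slice */ }

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank = 0, size = 1;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int n_slices = 200;                    // e.g. a 5 s pulse on a 25 ms grid (assumed)
            for (int s = rank; s < n_slices; s += size)  // cyclic assignment of slices to ranks
                reconstruct_equilibrium(s);

            MPI_Barrier(MPI_COMM_WORLD);
            if (rank == 0) std::printf("all %d slices reconstructed\n", n_slices);
            MPI_Finalize();
            return 0;
        }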

  11. RTSPM: real-time Linux control software for scanning probe microscopy.

    Science.gov (United States)

    Chandrasekhar, V; Mehta, M M

    2013-01-01

    Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.

  12. Dugong: a Docker image, based on Ubuntu Linux, focused on reproducibility and replicability for bioinformatics analyses.

    Science.gov (United States)

    Menegidio, Fabiano B; Jabes, Daniela L; Costa de Oliveira, Regina; Nunes, Luiz R

    2018-02-01

    This manuscript introduces and describes Dugong, a Docker image based on Ubuntu 16.04, which automates installation of more than 3500 bioinformatics tools (along with their respective libraries and dependencies), in alternative computational environments. The software operates through a user-friendly XFCE4 graphic interface that allows software management and installation by users not fully familiarized with the Linux command line and provides the Jupyter Notebook to assist in the delivery and exchange of consistent and reproducible protocols and results across laboratories, assisting in the development of open science projects. Source code and instructions for local installation are available at https://github.com/DugongBioinformatics, under the MIT open source license.

  13. Porting oxbash to linux and its application in SD-shell calculations

    International Nuclear Information System (INIS)

    Suman, H.; Suleiman, S.

    1998-01-01

    Oxbash, a code for nuclear structure calculations within the shell-model approach, was ported to Linux, a UNIX clone for PCs. Due to many faults in the code version we had, deep corrective actions had to be undertaken. This was done through intensive use of UNIX utilities such as sed, nm, and make, in addition to proper shell-script programming. Our version contained calls to missing subroutines; some of these were taken from C and f90 libraries, while others had to be written separately. All these actions were organized and automated through a robust system of Makefiles. Finally, the code was tested and applied to nuclei with 18 and 20 nucleons. (author)

  14. 77 FR 5864 - BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano...

    Science.gov (United States)

    2012-02-06

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano Superlattice Technology, Inc.; Order of Suspension of... that there is a lack of current and accurate information concerning the securities of Nano Superlattice...

  15. Convolutional Neural Network on Embedded Linux(trademark) System-on-Chip: A Methodology and Performance Benchmark

    Science.gov (United States)

    2016-05-01

    ...two NVIDIA GTX580 GPUs [3]. Therefore, for this initial work, we decided to concentrate on small networks and small datasets until the methods are

  16. PCI-VME bridge device driver design of a high-performance data acquisition and control system on LINUX

    International Nuclear Information System (INIS)

    Sun Yan; Ye Mei; Zhang Nan; Zhao Jingwei

    2000-01-01

    Data acquisition and control is an important part of nuclear electronics and nuclear detection applications in HEP. The key methods are introduced for designing a Linux device driver for a PCI-VME bridge device, based on the data acquisition and control system that has been realized

  17. Setup of the development tools for a small-sized controller built in a robot using Linux

    International Nuclear Information System (INIS)

    Lee, Jae Cheol; Jun, Hyeong Seop; Choi, Yu Rak; Kim, Jae Hee

    2004-03-01

    This report explains how to set up practical development tools for robot control software programming. Well-constituted development tools make a programmer more productive and a program more reliable. We ported a proven operating system to the target board (our robot's controller) to avoid such problems. We selected open-source Linux as the operating system because it is free, reliable, flexible, and widely used. First, we set up the host computer with Linux and installed a cross compiler on it. We then ported Linux to the target board, connected it to the host computer over Ethernet, and set up NFS on both the host and the target, so that the target board can use the host computer's hard disk as its own. Next, we installed a gdb server on the target board, and a gdb client and DDD on the host computer, for debugging the target program on the host in a graphical environment. Finally, we patched the target board's Linux kernel with one that has real-time capability. In this way, we obtain a real-time embedded hardware controller for a robot together with convenient software development tools. All source programs are edited and compiled on the host computer, and the executable code resides in the NFS-mounted directory, which can be accessed from the target board. We can execute and debug the code by logging into the target through the Ethernet or serial line

  18. Pro Linux system administration learn to build systems for your business using free and open source software

    CERN Document Server

    Matotek, Dennis; Lieverdink, Peter

    2017-01-01

    This book aims to ease the entry of businesses to the world of zero-cost software running on Linux. It takes a layered, component-based approach to open source business systems, while training system administrators as the builders of business infrastructure.

  19. PCI-VME bridge device driver design of a high-performance data acquisition and control system on LINUX

    International Nuclear Information System (INIS)

    Sun Yan; Ye Mei; Zhang Nan; Zhao Jingwei

    2001-01-01

    Data acquisition and control is an important part of nuclear electronics and nuclear detection applications in HEP. The key methods are introduced for designing a Linux device driver for a PCI-VME bridge device, based on the data acquisition and control system realized by the authors

  20. Efficient Machine Learning Approach for Optimizing Scientific Computing Applications on Emerging HPC Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Arumugam, Kamesh [Old Dominion Univ., Norfolk, VA (United States)

    2017-05-01

    the parallel implementation challenges of such irregular applications on different HPC architectures. In particular, we use supervised learning to predict the computation structure and use it to address the control-flow and memory-access irregularities in the parallel implementation of such applications on GPUs, Xeon Phis, and heterogeneous architectures composed of multi-core CPUs with GPUs or Xeon Phis. We use the numerical simulation of charged-particle beam dynamics as a motivating example throughout the dissertation to present our new approach, though it should be equally applicable to a wide range of irregular applications. The machine learning approach presented here uses predictive analytics and forecasting techniques to adaptively model and track the irregular memory access pattern at each time step of the simulation in order to anticipate future memory access patterns. Access-pattern forecasts can then be used to formulate optimization decisions during application execution, improving the performance of the application at a future time step based on observations from earlier time steps. In heterogeneous architectures, forecasts can also be used to improve the memory performance and resource utilization of all the processing units so as to deliver good aggregate performance. We used these optimization techniques and this anticipation strategy to design a cache-aware, memory-efficient parallel algorithm that addresses the irregularities in the parallel implementation of charged-particle beam dynamics simulation on different HPC architectures. Experimental results using a diverse mix of HPC architectures show that our anticipation-based approach is effective in maximizing data reuse, ensuring workload balance, minimizing branch and memory divergence, and improving resource utilization.