WorldWideScience

Sample records for linux cluster systems

  1. Super computer made with Linux cluster

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Oh, Yeong Eun; Kim, Jeong Seok

    2002-01-01

    This book consists of twelve chapters introducing a supercomputer built from a Linux cluster. The contents cover: Linux clusters and the principles behind clustering; the design of a Linux cluster; Linux fundamentals; building up a terminal server and clients; a Beowulf cluster with Debian GNU/Linux; a cluster system with Red Hat; monitoring systems; application programming with MPI, including set-up and installation; application programming with PVM, including PVM programming and XPVM; application programming with OpenPBS, covering its composition, installation and set-up; and GRID, covering the GRID system, GSI, GRAM, MDS, and the installation and use of its toolkit
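
    As a minimal illustration of the MPI programming topic the book covers, the following sketch uses the mpi4py bindings (an assumption for illustration; the book itself targets MPI in general) to run a hello-world across cluster nodes:

      # Minimal MPI hello-world; run as: mpiexec -n 4 python hello_mpi.py
      # Assumes the mpi4py package and an MPI runtime (e.g. MPICH) are installed.
      from mpi4py import MPI

      comm = MPI.COMM_WORLD            # communicator spanning all started processes
      rank = comm.Get_rank()           # this process's index
      size = comm.Get_size()           # total number of processes
      node = MPI.Get_processor_name()  # cluster node the process landed on

      print(f"Hello from rank {rank} of {size} on node {node}")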

  2. Diskless Linux Cluster How-To

    National Research Council Canada - National Science Library

    Shumaker, Justin L

    2005-01-01

    Diskless Linux clustering is not yet a turn-key solution. The process of configuring a cluster of diskless Linux machines requires many modifications to the stock Linux operating system before they can boot cleanly...

  3. Minimalist's Linux cluster

    International Nuclear Information System (INIS)

    Choi, Chang-Yeong; Kim, Jeong-Hyun; Kim, Seyong

    2004-01-01

    Using barebone PC components and NICs, we construct a Linux cluster with a 2-dimensional mesh structure. This cluster has a smaller footprint, is less expensive, and uses less power than a conventional Linux cluster. Here, we report our experience in building such a machine and discuss our current lattice project on the machine
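
    A 2-dimensional mesh of the kind described maps naturally onto an MPI Cartesian communicator; a minimal mpi4py sketch (mpi4py and the process layout are illustrative assumptions, not details from the paper):

      # Map MPI ranks onto a 2-D mesh and find each rank's neighbours.
      # Run with a square process count, e.g.: mpiexec -n 4 python mesh.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      dims = MPI.Compute_dims(comm.Get_size(), [0, 0])   # e.g. [2, 2] for 4 ranks
      cart = comm.Create_cart(dims, periods=[False, False], reorder=True)

      coords = cart.Get_coords(cart.Get_rank())
      left, right = cart.Shift(direction=1, disp=1)      # neighbours along the row
      up, down = cart.Shift(direction=0, disp=1)         # neighbours along the column
      print(f"rank {cart.Get_rank()} at mesh position {coords}, "
            f"row neighbours {left}/{right}, column neighbours {up}/{down}")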

  4. Linux System Administration

    CERN Document Server

    Adelstein, Tom

    2007-01-01

    If you're an experienced system administrator looking to acquire Linux skills, or a seasoned Linux user facing a new challenge, Linux System Administration offers practical knowledge for managing a complete range of Linux systems and servers. The book summarizes the steps you need to build everything from standalone SOHO hubs, web servers, and LAN servers to load-balanced clusters and servers consolidated through virtualization. Along the way, you'll learn about all of the tools you need to set up and maintain these working environments. Linux is now a standard corporate platform with user

  5. Operational Numerical Weather Prediction systems based on Linux cluster architectures

    International Nuclear Information System (INIS)

    Pasqui, M.; Baldi, M.; Gozzini, B.; Maracchi, G.; Giuliani, G.; Montagnani, S.

    2005-01-01

    Progress in weather forecasting and atmospheric science has always been closely linked to improvements in computing technology. More accurate weather forecasts and climate predictions require more powerful computing resources, in addition to more complex and better-performing numerical models. To meet these large computing demands, powerful workstations or massively parallel systems have been used. In the last few years, parallel architectures based on the Linux operating system have been introduced and become popular, representing genuinely high-performance, low-cost systems. In this work the Linux cluster experience gained at the Laboratory for Meteorology and Environmental Analysis (LaMMA-CNR-IBIMET) is described, and practical tips and performance results are analysed

  6. Using Vega Linux Cluster at Reactor Physics Dept

    International Nuclear Information System (INIS)

    Zefran, B.; Jeraj, R.; Skvarc, J.; Glumac, B.

    1999-01-01

    Experience using a Linux-based cluster for reactor physics calculations is presented in this paper. Special attention is paid to the MCNP code in this environment, with practical guidelines on how to prepare and use the parallel version of the code. Results of a time comparison study are presented for two sets of inputs. The results are promising, and the speedup factor achieved on the Linux cluster agrees with previous tests on other parallel systems. We also tested tools for parallelization of other programs used at our department. (author)
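
    The speedup factor reported in such a time comparison is conventionally the serial wall-clock time divided by the parallel time; a small worked sketch (the timing values are invented placeholders, not the paper's data):

      # Parallel speedup and efficiency from wall-clock timings.
      # The timing values below are illustrative placeholders only.
      t_serial = 3600.0                              # seconds on one processor
      t_parallel = {2: 1850.0, 4: 960.0, 8: 510.0}   # seconds on n processors

      for n, t_n in t_parallel.items():
          speedup = t_serial / t_n
          efficiency = speedup / n
          print(f"{n} CPUs: speedup {speedup:.2f}, efficiency {efficiency:.0%}")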

  7. Distributed MDSplus database performance with Linux clusters

    International Nuclear Information System (INIS)

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff is investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and more efficient use of network resources resulting in improved support of the data analysis needs of the scientific staff

  8. Installing, Running and Maintaining Large Linux Clusters at CERN

    CERN Document Server

    Bahyl, Vladimir; Chardi, Benjamin; van Eldik, Jan; Fuchs, Ulrich; Kleinwort, Thorsten; Murth, Martin; Smith, Tim

    2003-01-01

    Having built up Linux clusters to more than 1000 nodes over the past five years, we already have practical experience confronting some of the LHC scale computing challenges: scalability, automation, hardware diversity, security, and rolling OS upgrades. This paper describes the tools and processes we have implemented, working in close collaboration with the EDG project [1], especially with the WP4 subtask, to improve the manageability of our clusters, in particular in the areas of system installation, configuration, and monitoring. In addition to the purely technical issues, providing shared interactive and batch services which can adapt to meet the diverse and changing requirements of our users is a significant challenge. We describe the developments and tuning that we have introduced on our LSF based systems to maximise both responsiveness to users and overall system utilisation. Finally, this paper will describe the problems we are facing in enlarging our heterogeneous Linux clusters, the progress we have ...

  9. Argonne National Lab gets Linux network teraflop cluster

    CERN Multimedia

    2003-01-01

    "Linux NetworX, Salt Lake City, Utah, has delivered an Evolocity II (E2) Linux cluster to Argonne National Laboratory that is capable of performing more than one trillion calculations per second (1 teraFLOP). The cluster, named "Jazz" by Argonne, is designed to provide optimum performance for multiple disciplines such as chemistry, physics and reactor engineering and will be used by the entire scientific community at the Lab" (1 page).

  10. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    International Nuclear Information System (INIS)

    Hargrove, Paul H; Duell, Jason C

    2006-01-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to "fault precursors" (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters
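
    For context, BLCR is driven through its command-line utilities; the sketch below is a minimal illustration of the preemption cycle the abstract describes, assuming BLCR's cr_run, cr_checkpoint and cr_restart tools are installed (the job name, sleep interval and checkpoint file name are illustrative):

      # Start a job under BLCR, checkpoint it on demand, and restart it later.
      import subprocess
      import time

      job = subprocess.Popen(["cr_run", "./long_simulation"])   # illustrative job
      time.sleep(600)                                           # let it run a while

      # Preempt: write a checkpoint of the whole process, then terminate it.
      subprocess.run(["cr_checkpoint", "--term", "--file", "sim.ckpt",
                      str(job.pid)], check=True)
      job.wait()

      # Later (e.g. during off-peak hours), resume from the checkpoint file.
      subprocess.run(["cr_restart", "sim.ckpt"], check=True)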

  11. NSC KIPT Linux cluster for computing within the CMS physics program

    International Nuclear Information System (INIS)

    Levchuk, L.G.; Sorokin, P.V.; Soroka, D.V.

    2002-01-01

    The architecture of the NSC KIPT specialized Linux cluster constructed for carrying out work on CMS physics simulations and data processing is described. The configuration of the portable batch system (PBS) on the cluster is outlined. Capabilities of the cluster in its current configuration to perform CMS physics simulations are pointed out

  12. ClusterControl: a web interface for distributing and monitoring bioinformatics applications on a Linux cluster.

    Science.gov (United States)

    Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko

    2004-03-22

    ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies such as Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
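
    The queuing pattern a front end like this wraps can be sketched in a few lines: a hedged Python example assuming an OpenPBS qsub on PATH, with script contents, resource strings and file names invented for illustration:

      # Submit one command-line bioinformatics job per input file to an OpenPBS queue.
      import glob
      import subprocess

      for fasta in glob.glob("inputs/*.fasta"):
          name = fasta.split("/")[-1]
          script = f"""#!/bin/sh
      #PBS -N job-{name}
      #PBS -l nodes=1
      cd $PBS_O_WORKDIR
      ./run_analysis {fasta}
      """
          # qsub reads the job script from stdin and prints the assigned job id.
          job_id = subprocess.run(["qsub"], input=script, text=True,
                                  capture_output=True, check=True).stdout.strip()
          print(f"queued {fasta} as {job_id}")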

  13. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M

    2007-03-22

    However, given the growing need for 'capability' systems as well, the budget demands are extreme, and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents ASC's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  14. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    International Nuclear Information System (INIS)

    Seager, M.

    2007-01-01

    Given the growing need for 'capability' systems as well, the budget demands are extreme, and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents ASC's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  15. Argonne Natl Lab receives TeraFLOP Cluster Linux NetworX

    CERN Multimedia

    2002-01-01

    " Linux NetworX announced today it has delivered an Evolocity II (E2) Linux cluster to Argonne National Laboratory that is capable of performing more than one trillion calculations per second (1 teraFLOP)" (1/2 page).

  16. A case study in application I/O on Linux clusters

    International Nuclear Information System (INIS)

    Ross, R.; Nurmi, D.; Cheng, A.; Zingale, M.

    2001-01-01

    A critical but often ignored component of system performance is the I/O system. Today's applications expect a great deal from underlying storage systems and software, and both high performance distributed storage and high level interfaces have been developed to fill these needs. In this paper they discuss the I/O performance of a parallel scientific application on a Linux cluster, the FLASH astrophysics code. This application relies on three I/O software components to provide high performance parallel I/O on Linux clusters: the Parallel Virtual File System (PVFS), the ROMIO MPI-IO implementation, and the Hierarchical Data Format (HDF5) library. First they discuss the roles played by each of these components in providing an I/O solution. Next they discuss the FLASH I/O benchmark and point out its relevance. Following this they examine the performance of the benchmark, and through instrumentation of both the application and underlying system software code they discover the location of major software bottlenecks. They work around the most inhibiting of these bottlenecks, showing substantial performance improvement. Finally they point out similarities between the inefficiencies found here and those found in message passing systems, indicating that research in the message passing field could be leveraged to solve similar problems in high-level I/O interfaces
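
    The MPI-IO layer (ROMIO) that sits under HDF5 in this stack can be exercised directly; a minimal sketch using mpi4py's MPI.File interface (mpi4py, the block size and the file name are assumptions; FLASH itself writes through HDF5):

      # Each rank writes its own contiguous block of a shared file collectively,
      # the access pattern ROMIO optimizes. Run: mpiexec -n 4 python io.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      block = np.full(1024, rank, dtype=np.float64)     # this rank's data block
      offset = rank * block.nbytes                      # byte offset into the file

      fh = MPI.File.Open(comm, "checkpoint.dat",
                         MPI.MODE_WRONLY | MPI.MODE_CREATE)
      fh.Write_at_all(offset, block)                    # collective write
      fh.Close()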

  17. Pro Linux System Administration

    CERN Document Server

    Turnbull, James

    2009-01-01

    We can all be Linux experts, provided we invest the time in learning the craft of Linux administration. Pro Linux System Administration makes it easy for small to medium-sized businesses to enter the world of zero-cost software running on Linux, and covers all the distros you might want to use, including Red Hat, Ubuntu, Debian, and CentOS. Authors and systems infrastructure experts James Turnbull, Peter Lieverdink, and Dennis Matotek take a layered, component-based approach to open source business systems, while training system administrators as the builders of business infrastructure. If

  18. SOFTICE: Facilitating both Adoption of Linux Undergraduate Operating Systems Laboratories and Students' Immersion in Kernel Code

    Directory of Open Access Journals (Sweden)

    Alessio Gaspar

    2007-06-01

    This paper discusses how Linux clustering and virtual machine technologies can improve undergraduate students' hands-on experience in operating systems laboratories. Like similar projects, SOFTICE relies on User Mode Linux (UML) to provide students with privileged access to a Linux system without creating security breaches on the hosting network. We extend such approaches in two respects. First, we propose to facilitate adoption of Linux-based laboratories by using a load-balancing cluster made of recycled classroom PCs to remotely serve access to virtual machines. Secondly, we propose a new approach for students to interact with the kernel code.

  19. Big Data demonstrator using Hadoop to build a Linux cluster for log data analysis using R

    DEFF Research Database (Denmark)

    Torbensen, Rune Sonnich; Top, Søren

    2017-01-01

    This article walks through the steps to create a Hadoop Linux cluster in the cloud and outlines how to analyze device log data via an example in the R programming language.
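
    In the Hadoop-streaming style such a pipeline often takes, the map and reduce steps are just programs reading standard input; a hedged Python sketch counting log entries per device (the log field layout is an invented assumption, and the article does its own analysis in R):

      # mapper/reducer for Hadoop streaming: count log entries per device id.
      # Invoke e.g. with: -mapper "logcount.py map" -reducer "logcount.py reduce"
      import sys

      def mapper():
          for line in sys.stdin:             # one raw log line per input line
              device_id = line.split()[0]    # assumed: first field is the device id
              print(f"{device_id}\t1")

      def reducer():
          counts = {}
          for line in sys.stdin:             # streaming sorts map output by key
              key, value = line.rstrip("\n").split("\t")
              counts[key] = counts.get(key, 0) + int(value)
          for key, total in counts.items():
              print(f"{key}\t{total}")

      if __name__ == "__main__":
          mapper() if sys.argv[1] == "map" else reducer()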

  20. The Linux operating system: An introduction

    Science.gov (United States)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  1. Implementing Journaling in a Linux Shared Disk File System

    Science.gov (United States)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew; hide

    2000-01-01

    In computer systems today, speed and responsiveness is often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  2. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    Science.gov (United States)

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association analysis, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and hence demand high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most commonly-used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use grid computing technology to run non-parallel genetic statistical packages on a centralized HPC system or on distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs ran about 14.4-15.9 times faster, and Unphased jobs 1.1-18.6 times faster, than the accumulated single-processor computation time. Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance.
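
    Exhaustive window analysis enumerates every consecutive locus window and queues one statistics job per window; a minimal sketch of the Grid Engine submission loop (qsub is assumed on PATH; the program name, window widths and option names are illustrative, not the paper's actual scripts):

      # Queue one haplotype-analysis job per consecutive locus window on Grid Engine.
      # 26 loci with window widths 2..5 gives 94 jobs; all names are illustrative.
      import subprocess

      N_LOCI = 26
      for width in range(2, 6):
          for start in range(N_LOCI - width + 1):
              window = ",".join(str(locus) for locus in range(start, start + width))
              subprocess.run(
                  ["qsub", "-b", "y", "-N", f"hap_{start}_{width}",
                   "./run_haplotype", "--loci", window],
                  check=True)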

  3. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Science.gov (United States)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  4. Cleaning up a GNU/Linux operating system

    OpenAIRE

    Oblak , Denis

    2018-01-01

    The aim of the thesis is to develop an application for cleaning up the Linux operating system that would be able to function on most distributions. The theoretical part discusses cleaning of the Linux operating system, which frees up disk space and allows the system to function better. Cleaning techniques and existing tools for Linux are systematically reviewed and presented. The following part examines the cleaning of the Windows and MacOS operating systems. The thesis also compares all...

  5. The renewed HT-7 plasma control system based on real-time Linux cluster

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Q.P., E-mail: qpyuan@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei (China); Xiao, B.J.; Zhang, R.R. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei (China); Walker, M.L.; Penaflor, B.G.; Piglowski, D.A.; Johnson, R.D. [General Atomics, DIII-D National Fusion Facility, San Diego, CA (United States)

    2012-12-15

    Highlights: • The hardware and software structure of the new HT-7 plasma control system (HT-7 PCS) is reported. • All original systems were integrated in the new HT-7 PCS, and the implementation details of the control algorithms are given in the paper. • Different from the EAST PCS, an AC operation mode is realized in the HT-7 PCS. • The experiment results are discussed; good control performance has been obtained. - Abstract: In order to improve the synchronization, flexibility and expansibility of plasma control on HT-7, a new plasma control system (HT-7 PCS) was constructed. The HT-7 PCS is based on a real-time Linux cluster with a well-defined, robust and flexible software infrastructure adapted from the DIII-D PCS. In this paper, the hardware structure and system customization details of the HT-7 PCS are reported. The plasma position and current control, plasma density control and off-normal event detection, which were originally realized in separate systems, have been integrated and implemented in the HT-7 PCS. All these control algorithms have been successfully validated in the last several HT-7 experiment campaigns. Good control performance has been achieved, and the experiment results are discussed in the paper.

  6. A Linux cluster for between-pulse magnetic equilibrium reconstructions and other processor bound analyses

    International Nuclear Information System (INIS)

    Peng, Q.; Groebner, R. J.; Lao, L. L.; Schachter, J.; Schissel, D. P.; Wade, M. R.

    2001-01-01

    A 12-processor Linux PC cluster has been installed to perform between-pulse magnetic equilibrium reconstructions during tokamak operations using the EFIT code, written in Fortran. The MPICH implementation of the Message Passing Interface (MPI) is employed by EFIT for data distribution and communication. The new system calculates equilibria eight times faster than the previous system, yielding a complete equilibrium time history on a 25 ms time scale 4 min after the pulse ends. A graphical interface is provided for users to control the time resolution and the type of EFITs. The next analysis to benefit from the cluster is CERQUICK, written in IDL, for ion temperature profile analysis. The plan is to expand the cluster so that a full profile analysis (T_e, T_i, n_e, V_r, Z_eff) can be made available between pulses, which lays the groundwork for kinetic EFIT and/or ONETWO power balance analyses
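
    The between-pulse workload parallelizes naturally over time slices: each processor reconstructs the equilibrium at its own subset of times. A hedged mpi4py sketch of that distribution (mpi4py, the pulse length and the placeholder result stand in for the MPICH/Fortran machinery the paper actually uses):

      # Distribute equilibrium-reconstruction time points across cluster processors.
      # Run: mpiexec -n 12 python between_pulse.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      times_ms = list(range(0, 5000, 25))       # 25 ms time base over a 5 s pulse
      my_times = times_ms[rank::size]           # round-robin slice for this rank

      results = [(t, f"EFIT@{t}ms") for t in my_times]   # placeholder reconstruction
      all_results = comm.gather(results, root=0)         # collect the time history
      if rank == 0:
          history = sorted(sum(all_results, []))
          print(f"assembled {len(history)} equilibria")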

  7. Disk cloning program 'Dolly+' for system management of PC Linux cluster

    International Nuclear Information System (INIS)

    Atsushi Manabe

    2001-01-01

    Dolly+ is a Linux application program for cloning files and disk partition images from one PC to many others. By using several techniques such as a logical ring connection, multi-threading and pipelining, it achieves high performance and scalability. For example, under typical conditions, installation to a hundred PCs takes almost the same time as installation to two. Together with Intel PXE and the Red Hat kickstart, automatic and very fast system installation and upgrading can be performed
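
    The logical-ring pipelining idea can be sketched in a few lines: every node stores each chunk locally while simultaneously forwarding it to the next node, so the total time stays nearly independent of the node count. A minimal sketch (the port number, chunk size and framing are assumptions, not the Dolly+ protocol):

      # Ring pipelining: receive image chunks from the upstream node, store them,
      # and relay each chunk to the next node in the ring.
      import socket

      CHUNK = 1 << 20          # forward data in 1 MiB chunks
      PORT = 9099              # illustrative port

      def relay(next_host, out_path):
          # Accept the incoming stream from the upstream node in the ring.
          server = socket.create_server(("", PORT))
          upstream, _ = server.accept()
          # The last node in the ring passes next_host=None and only stores.
          downstream = socket.create_connection((next_host, PORT)) if next_host else None
          with open(out_path, "wb") as disk:
              while True:
                  chunk = upstream.recv(CHUNK)
                  if not chunk:                  # upstream closed: image complete
                      break
                  disk.write(chunk)              # store the chunk locally ...
                  if downstream:
                      downstream.sendall(chunk)  # ... while relaying it onward
          if downstream:
              downstream.close()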

  8. Embedded Linux platform for data acquisition systems

    International Nuclear Information System (INIS)

    Patel, Jigneshkumar J.; Reddy, Nagaraj; Kumari, Praveena; Rajpal, Rachana; Pujara, Harshad; Jha, R.; Kalappurakkal, Praveen

    2014-01-01

    Highlights: • The design and development of a data acquisition system on an FPGA-based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA-based systems. • Hardware logic IP core and its Linux device driver development for an external peripheral, to interface it with the FPGA-based system. - Abstract: This scalable hardware-software system is designed and developed to explore emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using an open-source Embedded Linux operating system on a programmable hardware platform such as an FPGA. The idea was to identify a platform that is customizable, flexible and scalable enough to support the data acquisition system requirements. To this end, we selected an FPGA-based reconfigurable and scalable hardware platform for the system design, with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high-speed data transactions. The proposed hardware-software platform using an FPGA and Embedded Linux OS offers a single-chip solution with a processor and peripherals such as the ADC interface controller, Gigabit Ethernet controller and memory controller, amongst others. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507, which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used Linux Kernel version 2.6.34 with BSP support for the ML507 platform, downloaded from the Xilinx [1] GIT server. The cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx

  9. Embedded Linux platform for data acquisition systems

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Jigneshkumar J., E-mail: jjp@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Reddy, Nagaraj, E-mail: nagaraj.reddy@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India); Kumari, Praveena, E-mail: praveena@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Rajpal, Rachana, E-mail: rachana@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Pujara, Harshad, E-mail: pujara@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Jha, R., E-mail: rjha@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Kalappurakkal, Praveen, E-mail: praveen.k@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India)

    2014-05-15

    Highlights: • The design and development of a data acquisition system on an FPGA-based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA-based systems. • Hardware logic IP core and its Linux device driver development for an external peripheral, to interface it with the FPGA-based system. - Abstract: This scalable hardware-software system is designed and developed to explore emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using an open-source Embedded Linux operating system on a programmable hardware platform such as an FPGA. The idea was to identify a platform that is customizable, flexible and scalable enough to support the data acquisition system requirements. To this end, we selected an FPGA-based reconfigurable and scalable hardware platform for the system design, with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high-speed data transactions. The proposed hardware-software platform using an FPGA and Embedded Linux OS offers a single-chip solution with a processor and peripherals such as the ADC interface controller, Gigabit Ethernet controller and memory controller, amongst others. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507, which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used Linux Kernel version 2.6.34 with BSP support for the ML507 platform, downloaded from the Xilinx [1] GIT server. The cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx

  10. Linux malware incident response an excerpt from malware forensic field guide for Linux systems

    CERN Document Server

    Malin, Cameron H; Aquilina, James M

    2013-01-01

    Linux Malware Incident Response is a "first look" at the Malware Forensics Field Guide for Linux Systems, exhibiting the first steps in investigating Linux-based incidents. The Syngress Digital Forensics Field Guides series includes companions for any digital and computer forensic investigator and analyst. Each book is a "toolkit" with checklists for specific tasks, case studies of difficult situations, and expert analyst tips. This compendium of tools for computer forensics analysts and investigators is presented in a succinct outline format with cross-references to suppleme

  11. Using a Linux Cluster for Parallel Simulations of an Active Magnetic Regenerator Refrigerator

    DEFF Research Database (Denmark)

    Petersen, T.F.; Pryds, N.; Smith, A.

    2006-01-01

    This paper describes the implementation of a Comsol Multiphysics model on a Linux computer cluster. The magnetic refrigerator (MR) is a special type of refrigerator with the potential to reduce the energy consumption of household refrigeration by a factor of two or more. To conduct numerical analysis.... The coupled set of equations and the transient convergence towards the final steady state mean that the model has an excessive solution time. To make parametric studies practical, the developed model was implemented on a cluster to allow parallel simulations, which has decreased the solution time...

  12. Pro Oracle database 11g RAC on Linux

    CERN Document Server

    Shaw, Steve

    2010-01-01

    Pro Oracle Database 11g RAC on Linux provides full-life-cycle guidance on implementing Oracle Real Application Clusters in a Linux environment. Real Application Clusters, commonly abbreviated as RAC, is Oracle's industry-leading architecture for scalable and fault-tolerant databases. RAC allows you to scale up and down by simply adding and subtracting inexpensive Linux servers. Redundancy provided by those multiple, inexpensive servers is the basis for the failover and other fault-tolerance features that RAC provides. Written by authors well-known for their talent with RAC, Pro Oracle Database

  13. Research of Performance Linux Kernel File Systems

    Directory of Open Access Journals (Sweden)

    Andrey Vladimirovich Ostroukh

    2015-10-01

    The article describes the most common Linux kernel file systems. The research was carried out on a personal computer whose characteristics are given in the article; the study was performed on a typical workstation running GNU/Linux. The necessary software for measuring file performance was installed on this computer. Based on the results, conclusions are drawn and recommendations proposed for the use of the file systems, identifying the best ways to store data.
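
    A file-system comparison of this kind reduces to timing the same I/O pattern on each mount point; a minimal sketch of a sequential-write throughput probe (the file sizes and mount-point list are illustrative assumptions, not the article's actual benchmark):

      # Time a sequential write, including the on-disk commit, on each file system.
      import os
      import time

      def write_throughput(path, size_mb=256, block=1 << 20):
          buf = os.urandom(block)                # one 1 MiB block of random data
          start = time.perf_counter()
          with open(path, "wb") as f:
              for _ in range(size_mb):
                  f.write(buf)
              f.flush()
              os.fsync(f.fileno())               # include the flush to disk
          elapsed = time.perf_counter() - start
          os.remove(path)
          return size_mb / elapsed               # MB/s

      for test_file in ["/mnt/ext4/test.bin", "/mnt/xfs/test.bin"]:  # illustrative
          print(test_file, f"{write_throughput(test_file):.1f} MB/s")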

  14. Linux software for large topology optimization problems

    DEFF Research Database (Denmark)

    evolving product, which allows a parallel solution of the PDE, it lacks the important feature that the matrix-generation part of the computations is localized to each processor. This is well known to be critical for obtaining a useful speedup on a Linux cluster, and it motivates the search for a COMSOL-like package for large topology optimization problems. One candidate for such software, developed for Linux by Sandia Nat'l Lab in the USA, is the Sundance system. Sundance also uses a symbolic representation of the PDE, and a scalable numerical solution is achieved by employing the underlying Trilinos

  15. First experiences with large SAN storage and Linux

    International Nuclear Information System (INIS)

    Wezel, Jos van; Marten, Holger; Verstege, Bernhard; Jaeger, Axel

    2004-01-01

    The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing. The GridKa center uses a commercial parallel file system to create highly available, high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to balance data throughput against cost. This article describes the design, implementation and optimization of the GridKa storage solution, which will offer over 400 TB of online storage for 600 nodes. Presented are throughput measurements from one of the largest Linux-based parallel storage systems in the world

  16. Web application security analysis using the Kali Linux operating system

    OpenAIRE

    BABINCEV IVAN M.; VULETIC DEJAN V.

    2016-01-01

    The Kali Linux operating system is described, as well as its purpose and possibilities. The groups of tools that Kali Linux provides are listed, together with the methods of their functioning, as well as the possibility to install and use tools that are not an integral part of Kali. The final part shows practical testing of web applications using the tools from the Kali Linux operating system. The paper thus shows a part of the possibilities of this operating system in analysing web applications ...

  17. Linux Essentials

    CERN Document Server

    Smith, Roderick W

    2012-01-01

    A unique, full-color introduction to Linux fundamentals Serving as a low-cost, secure alternative to expensive operating systems, Linux is a UNIX-based, open source operating system. Full-color and concise, this beginner's guide takes a learning-by-doing approach to understanding the essentials of Linux. Each chapter begins by clearly identifying what you will learn in the chapter, followed by a straightforward discussion of concepts that leads you right into hands-on tutorials. Chapters conclude with additional exercises and review questions, allowing you to reinforce and measure your underst

  18. KNBD: A Remote Kernel Block Server for Linux

    Science.gov (United States)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  19. Ubuntu Linux toolbox

    CERN Document Server

    Negus, Christopher

    2012-01-01

    This bestseller from Linux guru Chris Negus is packed with an array of new and revised material. As a longstanding bestseller, Ubuntu Linux Toolbox has taught you how to get the most out of Ubuntu, the world's most popular Linux distribution. With this eagerly anticipated new edition, Christopher Negus returns with a host of new and expanded coverage on tools for managing file systems, ways to connect to networks, techniques for securing Ubuntu systems, and a look at the latest Long Term Support (LTS) release of Ubuntu, all aimed at getting you up and running with Ubuntu Linux quickly.

  20. Python for Unix and Linux system administration

    CERN Document Server

    Gift, Noah

    2007-01-01

    Python is an ideal language for solving problems, especially in Linux and Unix networks. With this pragmatic book, administrators can review various tasks that often occur in the management of these systems, and learn how Python can provide a more efficient and less painful way to handle them. Each chapter in Python for Unix and Linux System Administration presents a particular administrative issue, such as concurrency or data backup, and presents Python solutions through hands-on examples. Once you finish this book, you'll be able to develop your own set of command-line utilities with Pytho

  1. Linux Desktop Pocket Guide

    CERN Document Server

    Brickner, David

    2005-01-01

    While Mac OS X garners all the praise from pundits, and Windows XP attracts all the viruses, Linux is quietly being installed on millions of desktops every year. For programmers and system administrators, business users, and educators, desktop Linux is a breath of fresh air and a needed alternative to other operating systems. The Linux Desktop Pocket Guide is your introduction to using Linux on five of the most popular distributions: Fedora, Gentoo, Mandriva, SUSE, and Ubuntu. Despite what you may have heard, using Linux is not all that hard. Firefox and Konqueror can handle all your web bro

  2. A General Purpose High Performance Linux Installation Infrastructure

    International Nuclear Information System (INIS)

    Wachsmann, Alf

    2002-01-01

    With more and larger Linux clusters, the question arises of how to install them. This paper addresses the question by proposing a solution using only standard software components. The installation infrastructure scales well to large numbers of nodes. It is also usable for installing desktop machines or diskless Linux clients; thus it is not designed for cluster installations in particular, but is nevertheless highly performant. The proposed infrastructure uses PXE as the network boot component on the nodes. It uses DHCP and TFTP servers to deliver IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation
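
    The per-node half of such an infrastructure is mechanical enough to script: PXELINUX looks for a configuration file named after the client's MAC address (prefixed "01-"), which points the node at a kernel and a kickstart file. A hedged sketch of generating those files (the paths, node inventory and append line are illustrative, not SLAC's actual setup):

      # Generate one PXELINUX config per node so each boots into a kickstart install.
      # All paths and the node inventory below are illustrative.
      import os

      NODES = {"node001": "00:30:48:12:34:56", "node002": "00:30:48:12:34:57"}
      TFTP_CFG = "/tftpboot/pxelinux.cfg"

      for name, mac in NODES.items():
          # PXELINUX searches for a file named 01-<mac-with-dashes>.
          cfg = os.path.join(TFTP_CFG, "01-" + mac.replace(":", "-"))
          with open(cfg, "w") as f:
              f.write(
                  "default install\n"
                  "label install\n"
                  "  kernel vmlinuz\n"
                  f"  append initrd=initrd.img ks=nfs:install-server:/ks/{name}.cfg\n")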

  3. Beginning Ubuntu Linux

    CERN Document Server

    Raggi, Emilio; Channelle, Andy; Parsons, Trevor; Van Vugt, Sander

    2010-01-01

    Ubuntu Linux is the fastest growing Linux-based operating system, and Beginning Ubuntu Linux, Fifth Edition teaches all of us - including those who have never used Linux - how to use it productively, whether you come from Windows or the Mac or the world of open source. Beginning Ubuntu Linux, Fifth Edition shows you how to take advantage of the newest Ubuntu release, Lucid Lynx. Based on the best-selling previous edition, Emilio Raggi maintains a fine balance between teaching Ubuntu and introducing new features. Whether you aim to use it in the home or in the office, you'll be introduced to th

  4. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Directory of Open Access Journals (Sweden)

    Daniel Pierre Bovet

    2008-01-01

    Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever-increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high-responsiveness, open-source hard real-time operating system for multiprocessor systems, capable of providing high real-time performance while keeping the code simple and not impacting the performance of the rest of the system. Moreover, ASMP-LINUX does not require code changes or application recompiling/relinking. In order to assess the performance of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  5. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Directory of Open Access Journals (Sweden)

    Betti Emiliano

    2008-01-01

    Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever-increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high-responsiveness, open-source hard real-time operating system for multiprocessor systems, capable of providing high real-time performance while keeping the code simple and not impacting the performance of the rest of the system. Moreover, ASMP-LINUX does not require code changes or application recompiling/relinking. In order to assess the performance of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  6. STATUS OF THE LINUX PC CLUSTER FOR BETWEEN-PULSE DATA ANALYSES AT DIII-D

    International Nuclear Information System (INIS)

    PENG, Q; GROEBNER, R.J; LAO, L.L; SCHACHTER, J.; SCHISSEL, D.P; WADE, M.R.

    2001-08-01

    OAK-B135 Some analyses that survey experimental data are carried out at a sparse sample rate between pulses during tokamak operation and/or completed as batch jobs overnight, because the complete analysis on a single fast workstation cannot fit in the narrow time window between two pulses. Scientists therefore miss the opportunity to use these results to guide experiments quickly. With a dedicated Beowulf-type cluster costing less than a workstation, these analyses can be accomplished between pulses and the analyzed data made available to the research team during tokamak operation. A Linux PC cluster comprising 12 processors was installed at the DIII-D National Fusion Facility in CY00 and expanded to 24 processors in CY01 to automatically perform between-pulse magnetic equilibrium reconstructions using the EFIT code written in Fortran, CER analyses using the CERQUICK code written in IDL, and full profile fitting analyses (n_e, T_e, T_i, V_r, Z_eff) using the IDL code ZIPFIT. This paper reports the current status of the system and discusses some problems and concerns raised during its implementation and expansion

  7. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been ported to the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, can by itself be applied to whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, can be applied to the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that can be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules can optionally be invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, can be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident can thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics can be incorporated to simulate three-dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for a PC cluster system where multiple CPUs are available. When parallelism is eventually incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic link library (DLL) in the Windows system, a similar adaptation method

  8. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been ported to the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, can by itself be applied to whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, can be applied to the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that can be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules can optionally be invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, can be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident can thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics can be incorporated to simulate three-dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for a PC cluster system where multiple CPUs are available. When parallelism is eventually incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic link library (DLL) in the Windows system, a similar adaptation method

  9. A PC-Linux-based data acquisition system for the STAR TOFp detector

    International Nuclear Information System (INIS)

    Liu Zhixu; Liu Feng; Zhang Bingyun

    2003-01-01

    Commodity hardware running the open-source operating system Linux plays various important roles in the field of high energy physics. This paper describes the PC-Linux-based data acquisition system of the STAR TOFp detector. It is based on a conventional solution, with front-end electronics made of NIM and CAMAC modules controlled by a PC running Linux. The system has been commissioned into the STAR DAQ system and worked successfully in the second year of STAR physics runs

  10. The Free Software Movement and the GNU/Linux Operating System

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    Richard Stallman will speak about the purpose, goals, philosophy, methods, status, and future prospects of the GNU operating system, which in combination with the kernel Linux is now used by an estimated 17 to 20 million users worldwide. Biography: Richard Stallman is the founder of the GNU Project, launched in 1984 to develop the free operating system GNU (an acronym for "GNU's Not Unix"), and thereby give computer users the freedom that most of them have lost. GNU is free software: everyone is free to copy it and redistribute it, as well as to make changes either large or small. Today, Linux-based variants of the GNU system, based on the kernel Linux developed by Linus Torvalds, are in widespread use. There are estimated to be some 20 million users of GNU/Linux systems today. Richard Stallman is the principal author of the GNU Compiler Collection, a portable optimizing compiler which was designed to support diverse architectures and multiple languages. The compiler now supports over 30 different architect...

  11. Evolution of Linux operating system network

    Science.gov (United States)

    Xiao, Guanping; Zheng, Zheng; Wang, Haoqin

    2017-01-01

    Linux operating system (LOS) is a sophisticated man-made system and one of the most ubiquitous operating systems. However, there is little research on the structure and functionality evolution of LOS from the perspective of networks. In this paper, we investigate the evolution of the LOS network. 62 major releases of LOS, ranging from version 1.0 to 4.1, are modeled as directed networks in which functions are denoted by nodes and function calls by edges. It is found that the size of the LOS network grows almost linearly, while the clustering coefficient monotonically decays. The degree distributions are almost the same: the out-degree follows an exponential distribution, while both the in-degree and the undirected degree follow power-law distributions. We further explore the functionality evolution of the LOS network. It is observed that the evolution of functional modules appears as a sequence of seven kinds of events (changes) succeeding each other: continuing, growth, contraction, birth, splitting, death and merging. By means of a statistical analysis of these events in the top 4 largest components (i.e., arch, drivers, fs and net), it is shown that continuing, growth and contraction events account for more than 95% of all events. Our work exemplifies a better understanding and description of the dynamics of LOS evolution.
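
    The construction the paper describes, functions as nodes and calls as directed edges, and the degree statistics it reports can be reproduced with a few lines of networkx (the toy edge list below is an invented placeholder for a parsed kernel call graph):

      # Build a directed call graph and inspect its in/out-degree distributions.
      from collections import Counter
      import networkx as nx

      calls = [("sys_open", "do_sys_open"), ("do_sys_open", "getname"),
               ("sys_read", "vfs_read"), ("vfs_read", "rw_verify_area")]  # toy data

      g = nx.DiGraph(calls)                       # node = function, edge = call
      out_dist = Counter(dict(g.out_degree()).values())
      in_dist = Counter(dict(g.in_degree()).values())
      print("clustering:", nx.average_clustering(g.to_undirected()))
      print("out-degree histogram:", dict(out_dist))
      print("in-degree histogram:", dict(in_dist))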

  12. An embedded Linux system based on PowerPC

    International Nuclear Information System (INIS)

    Ye Mei; Zhao Jingwei; Chu Yuanping

    2006-01-01

    The authors introduce an embedded Linux system based on PowerPC, as well as the basic method for building such a system. The goal of the system is to provide a test system for VMEbus devices. It can also be used to set up small data acquisition and control systems. Two types of compiler are provided by the development system, according to the features of the system and of the PowerPC. At the beginning of the article some typical embedded operating systems are introduced, and the features of the different systems are compared. Then the method of building an embedded Linux system, as well as the key techniques, is discussed in detail. Finally, a successful read-write example is given, based on the test system. (authors)

  13. Building a Minimal Mandrake Linux System Using an Initial RAM Disk

    OpenAIRE

    Wagito, Wagito

    2006-01-01

    A minimal Linux system is commonly used for special-purpose systems such as routers, gateways, Linux installers and diskless Linux systems. A minimal Linux system is one that uses only a few of all the facilities Linux offers. Mandrake Linux, as one Linux distribution, is able to form a minimal Linux system. RAM is a computer resource used especially as main memory. Part of the RAM's function can be changed into a disk, called a RAM disk. This RAM disk can be used to run the Linux system. This ...

  14. BUILDING A MINIMAL MANDRAKE LINUX SYSTEM USING AN INITIAL RAM DISK

    OpenAIRE

    Wagito, Wagito

    2009-01-01

    A minimal Linux system is commonly used for special-purpose systems such as routers, gateways, Linux installers and diskless Linux systems. A minimal Linux system is one that uses only a few of all the facilities Linux offers. Mandrake Linux, as one Linux distribution, is able to form a minimal Linux system. RAM is a computer resource used especially as main memory. Part of RAM's function can be changed into a disk, called a RAM disk. This RAM disk can be used to run the Linux syste...

  15. Web-based Quality Control Tool used to validate CERES products on a cluster of Linux servers

    Science.gov (United States)

    Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Mlynczak, P.; Mitrescu, C.; Doelling, D.

    2014-12-01

    There have been a few popular desktop tools used in the Earth Science community to validate science data. Because of the limited capacity of desktop hardware, such as disk space and CPUs, those tools are not able to display large amounts of data from files. This poster describes an in-house developed web-based tool built on a cluster of Linux servers, which allows users to take advantage of several Linux servers working in parallel to generate hundreds of images in a short period of time. The poster demonstrates: (1) the hardware and software architecture used to provide high throughput of images; (2) the software structure that can incorporate new products and new requirements quickly; (3) the user interface, including how users can manipulate the data and control how the images are displayed.
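
    On each worker, the fan-out pattern described reduces to a process pool over plot tasks; a minimal single-node sketch (the render function, paths and task names are placeholders; a cluster scheduler would spread the same task list across several Linux servers):

      # Render many validation images in parallel across local CPUs.
      from multiprocessing import Pool

      def render(granule_id):                    # placeholder for real plotting code
          out = f"/var/www/qc/{granule_id}.png"  # illustrative output path
          # ... read the data granule and draw the image here ...
          return out

      if __name__ == "__main__":
          tasks = [f"granule_{i:04d}" for i in range(200)]
          with Pool(processes=8) as pool:
              for path in pool.imap_unordered(render, tasks):
                  print("wrote", path)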

  16. Application of instrument platform based embedded Linux system on intelligent scaler

    International Nuclear Information System (INIS)

    Wang Jikun; Yang Run'an; Xia Minjian; Yang Zhijun; Li Lianfang; Yang Binhua

    2011-01-01

    This paper presents an instrument platform based on an embedded Linux system and peripheral circuits. By designing Linux device drivers and an application program based on Qt/Embedded, the various functions of the intelligent scaler are realized. The system architecture is very reasonable, so the stability, expansibility and level of integration are increased, and the development cycle is shortened greatly. (authors)
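
    The record gives no source code; as a purely illustrative sketch of the kind of Linux device driver such an instrument platform builds on (not the authors' driver), a minimal character-device module follows, using the kernel's misc-device API. The device name "scaler" and the incrementing counter standing in for the hardware register are hypothetical.

        /* scaler_demo.c - illustrative misc-device skeleton (hypothetical,
         * not the authors' driver): exposes a counter via /dev/scaler. */
        #include <linux/module.h>
        #include <linux/kernel.h>
        #include <linux/miscdevice.h>
        #include <linux/fs.h>
        #include <linux/atomic.h>

        static atomic_t count = ATOMIC_INIT(0);

        static ssize_t scaler_read(struct file *f, char __user *buf,
                                   size_t len, loff_t *off)
        {
            char tmp[32];
            /* Each read returns the next count, as real hardware might. */
            int n = scnprintf(tmp, sizeof(tmp), "%d\n",
                              atomic_inc_return(&count));
            return simple_read_from_buffer(buf, len, off, tmp, n);
        }

        static const struct file_operations scaler_fops = {
            .owner = THIS_MODULE,
            .read  = scaler_read,
        };

        static struct miscdevice scaler_dev = {
            .minor = MISC_DYNAMIC_MINOR,
            .name  = "scaler",          /* appears as /dev/scaler */
            .fops  = &scaler_fops,
        };

        static int __init scaler_init(void) { return misc_register(&scaler_dev); }
        static void __exit scaler_exit(void) { misc_deregister(&scaler_dev); }

        module_init(scaler_init);
        module_exit(scaler_exit);
        MODULE_LICENSE("GPL");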

  17. Embedded Linux in het onderwijs; Embedded Linux in education

    NARCIS (Netherlands)

    Dr Ruud Ermers

    2008-01-01

    Embedded Linux is being adopted as an embedded operating system by more and more large companies. Within the Technical Computer Science programme of Fontys Hogeschool ICT, Embedded Linux has been introduced in cooperation with the research chair Architecture of Embedded Systems. As a subject area, Embedded Linux is

  18. Migration of alcator C-Mod computer infrastructure to Linux

    International Nuclear Information System (INIS)

    Fredian, T.W.; Greenwald, M.; Stillerman, J.A.

    2004-01-01

    The Alcator C-Mod fusion experiment at MIT in Cambridge, Massachusetts has been operating for twelve years. The data handling for the experiment during most of this period was based on MDSplus running on a cluster of VAX and Alpha computers using the OpenVMS operating system. While the OpenVMS operating system provided a stable reliable platform, the support of the operating system and the software layered on the system has deteriorated in recent years. With the advent of extremely powerful low cost personal computers and the increasing popularity and robustness of the Linux operating system a decision was made to migrate the data handling systems for C-Mod to a collection of PC's running Linux. This paper will describe the new system configuration, the effort involved in the migration from OpenVMS, the results of the first run campaign under the new configuration and the impact the switch may have on the rest of the MDSplus community

  19. UNIX and Linux system administration handbook

    CERN Document Server

    Nemeth, Evi; Hein, Trent R; Whaley, Ben; Mackin, Dan; Garnett, James; Branca, Fabrizio; Mouat, Adrian

    2018-01-01

    Now fully updated for today’s Linux distributions and cloud environments, it details best practices for every facet of system administration, including storage management, network design and administration, web hosting and scale-out, automation, configuration management, performance analysis, virtualization, DNS, security, management of IT service organizations, and much more. For modern system and network administrators, this edition contains indispensable new coverage of cloud deployments, continuous delivery, Docker and other containerization solutions, and much more.

  20. Linux all-in-one for dummies

    CERN Document Server

    Dulaney, Emmett

    2014-01-01

    Eight minibooks in one volume cover every important aspect of Linux and everything you need to know to pass level-1 certification. Linux All-in-One For Dummies explains everything you need to get up and running with the popular Linux operating system. Written in the friendly and accessible For Dummies style, the book is ideal for new and intermediate Linux users, as well as anyone studying for level-1 Linux certification. The eight minibooks inside cover the basics of Linux, interacting with it, networking issues, Internet services, administration, security, scripting, and level-1 certification. C

  1. Research on applications of ARM-LINUX embedded systems in manufacturing the nuclear equipment

    International Nuclear Information System (INIS)

    Nguyen Van Sy; Phan Luong Tuan; Nguyen Xuan Vinh; Dang Quang Bao

    2016-01-01

    A new microprocessor system, an ARM processor running the open source Linux operating system, is studied with the objective of applying ARM-Linux embedded systems to the manufacture of nuclear equipment. We used the vendor's development board to learn and establish a workflow for an embedded system; based on this knowledge we then designed a motherboard whose embedded system interfaces with peripherals (buttons and LEDs through the GPIO interface) and connects to a GM counting system via an RS232 interface. The results of this study are: i) procedures for working with embedded systems: customization, installation of the embedded operating system, and installation and configuration of the development tools on the host computer; ii) an ARM-Linux motherboard embedded system that interfaces with the peripherals and the GM counting system, displaying the counts from the GM counting system on the touch screen. (author)
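
    A hedged illustration of the RS232 link described above (not the authors' code): a user-space C program that opens a serial port with POSIX termios and prints whatever the GM counting system sends. The device path /dev/ttyS0 and the 9600-baud raw settings are assumptions.

        /* gm_read.c - illustrative sketch: read count reports from a GM
         * counting system over RS232. Device path and 9600 8N1 assumed. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <termios.h>

        int main(void)
        {
            int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);
            if (fd < 0) { perror("open"); return 1; }

            struct termios tio;
            tcgetattr(fd, &tio);
            cfmakeraw(&tio);                 /* raw bytes, no line editing */
            cfsetispeed(&tio, B9600);
            cfsetospeed(&tio, B9600);
            tio.c_cflag |= CLOCAL | CREAD;   /* ignore modem lines, enable rx */
            tcsetattr(fd, TCSANOW, &tio);

            char buf[64];
            for (;;) {
                ssize_t n = read(fd, buf, sizeof(buf) - 1);
                if (n <= 0) break;
                buf[n] = '\0';
                printf("GM: %s", buf);       /* forward counts to the display */
            }
            close(fd);
            return 0;
        }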

  2. Linux Command Line and Shell Scripting Bible

    CERN Document Server

    Blum, Richard

    2011-01-01

    The authoritative guide to the Linux command line and shell scripting, completely updated and revised (not a guide to Linux as a whole, just to scripting). The Linux command line allows you to type specific Linux commands directly to the system so that you can easily manipulate files and query system resources, thereby permitting you to automate commonly used functions and even schedule those programs to run automatically. This new edition is packed with new and revised content, reflecting the many changes in new Linux versions, including coverage of alternative shells to the default bash shel

  3. Development of a portable Linux-based ECG measurement and monitoring system.

    Science.gov (United States)

    Tan, Tan-Hsu; Chang, Ching-Su; Huang, Yung-Fa; Chen, Yung-Fu; Lee, Cheng

    2011-08-01

    This work presents a portable Linux-based electrocardiogram (ECG) signal measurement and monitoring system. The proposed system consists of an ECG front end and an embedded Linux platform (ELP). The ECG front end digitizes 12-lead ECG signals acquired from electrodes and then delivers them to the ELP via a universal serial bus (USB) interface for storage, signal processing, and graphic display. The proposed system can be installed anywhere (e.g., offices, homes, healthcare centers and ambulances) to allow people to self-monitor their health conditions at any time. The proposed system also enables remote diagnosis via the Internet. Additionally, the system has a 7-in. interactive TFT-LCD touch screen that enables users to execute various functions, such as scaling single-lead or multiple-lead ECG waveforms. The effectiveness of the proposed system was verified by using a commercial 12-lead ECG signal simulator and in vivo experiments. In addition to its portability, the proposed system is license-free, as Linux, an open source operating system, was utilized during software development. The cost-effectiveness of the system significantly enhances its practical application for personal healthcare.

  4. Linux bible

    CERN Document Server

    Negus, Christopher

    2015-01-01

    The industry favorite Linux guide, updated for Red Hat Enterprise Linux 7 and the cloud Linux Bible, 9th Edition is the ultimate hands-on Linux user guide, whether you're a true beginner or a more advanced user navigating recent changes. This updated ninth edition covers the latest versions of Red Hat Enterprise Linux 7 (RHEL 7), Fedora 21, and Ubuntu 14.04 LTS, and includes new information on cloud computing and development with guidance on Openstack and Cloudforms. With a focus on RHEL 7, this practical guide gets you up to speed quickly on the new enhancements for enterprise-quality file s

  5. Low Cost Multisensor Kinematic Positioning and Navigation System with Linux/RTAI

    Directory of Open Access Journals (Sweden)

    Baoxin Hu

    2012-09-01

    Full Text Available Despite its popularity, the development of an embedded real-time multisensor kinematic positioning and navigation system discourages many researchers and developers due to its complicated hardware environment setup and time-consuming device driver development. To address these issues, this paper proposes a multisensor kinematic positioning and navigation system built on Linux with the Real Time Application Interface (RTAI), which can be constructed in a fast and economical manner on popular hardware platforms. The authors designed, developed, evaluated and validated the application of Linux/RTAI in the proposed system for the integration of low cost MEMS IMU and OEM GPS sensors. The developed system, with Linux/RTAI as the core of a direct geo-referencing system, provides not only excellent hard real-time performance but also convenience for sensor hardware integration and real-time software development. A software framework is proposed in this paper for a universal kinematic positioning and navigation system with a loosely-coupled integration architecture. In addition, general strategies for sensor time synchronization in a multisensor system are also discussed. The success of the loosely-coupled GPS-aided inertial navigation Kalman filter is demonstrated via post-processed solutions from road tests.

  6. Channel Bonding in Linux Ethernet Environment using Regular Switching Hub

    Directory of Open Access Journals (Sweden)

    Chih-wen Hsueh

    2004-06-01

    Full Text Available Bandwidth plays an important role in quality of service in most network systems. Many technologies have been developed to increase host bandwidth in a LAN environment. Most of them need special hardware support, such as a switching hub that supports the IEEE Link Aggregation standard. In this paper, we propose a Linux solution to increase the bandwidth between hosts with multiple network adapters connected to a regular switching hub. The approach is implemented as two Linux kernel modules in a LAN environment, without modification to the hardware or the operating systems on the host machines. Packets are dispatched to the bonding network adapters for transmission. The proposed approach is backward compatible, flexible and transparent to users, and only one IP address is needed for multiple bonding network adapters. Evaluation experiments on TCP and UDP transmission show a bandwidth gain proportional to the number of network adapters. The approach is suitable for large-scale LAN systems with high bandwidth requirements, such as clustering systems.
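
    The paper's mechanism lives in two kernel modules; as a rough user-space analogy only, the following C sketch round-robins UDP packets across two adapters using SO_BINDTODEVICE. The interface names eth0/eth1 and the destination address and port are assumptions, and binding to a device requires root privileges.

        /* bond_send.c - user-space analogy of per-packet dispatch across
         * two NICs (not the paper's kernel modules). */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        static int bound_socket(const char *ifname)
        {
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            /* Tie the socket to one physical adapter (needs root). */
            if (setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE,
                           ifname, strlen(ifname) + 1) < 0)
                perror("SO_BINDTODEVICE");
            return s;
        }

        int main(void)
        {
            int socks[2] = { bound_socket("eth0"), bound_socket("eth1") };
            struct sockaddr_in dst = { .sin_family = AF_INET,
                                       .sin_port = htons(9000) };
            inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);

            char pkt[1024] = {0};
            for (int i = 0; i < 1000; i++)   /* alternate adapters per packet */
                sendto(socks[i % 2], pkt, sizeof(pkt), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
            close(socks[0]);
            close(socks[1]);
            return 0;
        }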

  7. The Research on Linux Memory Forensics

    Science.gov (United States)

    Zhang, Jun; Che, ShengBing

    2018-03-01

    Memory forensics is a branch of computer forensics. It does not depend on operating system APIs, and instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, it analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data, and is compatible with multiple versions of the Linux kernel.

  8. Tuning Linux to meet real time requirements

    Science.gov (United States)

    Herbel, Richard S.; Le, Dang N.

    2007-04-01

    There is a desire to use Linux in military systems. Customers are requesting contractors to use open source to the maximum possible extent in contracts. Linux is probably the best operating system choice to meet this need. It is widely used. It is free. It is royalty free, and, best of all, it is completely open source. However, there is a problem. Linux was not originally built to be a real-time operating system. There are many places where interrupts can and will be blocked for an indeterminate amount of time. There have been several attempts to bridge this gap. One of them is RTLinux, which builds a microkernel underneath Linux. The microkernel handles all interrupts and then passes them up to the Linux operating system. This does ensure good interrupt latency; however, it is not free [1]. Another is RTAI, which provides a similarly typed interface; however, support for the PowerPC platform, which is widely used in the real-time embedded community, was stated as "recovering" [2]. Thus it is not suited for military usage. This paper provides a method for tuning a standard Linux kernel so it can meet the real-time requirements of an embedded system.
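
    Two standard POSIX knobs commonly used in this kind of tuning are memory locking and FIFO scheduling; the short sketch below illustrates them. It is a generic example under stated assumptions, not the paper's exact procedure, and the priority value 80 is arbitrary.

        /* rt_tune.c - generic real-time setup sketch for a stock Linux
         * kernel: lock memory, then request SCHED_FIFO scheduling. */
        #include <stdio.h>
        #include <sched.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* Prevent page faults from stalling the time-critical path. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
                perror("mlockall");

            /* Run ahead of all SCHED_OTHER tasks; needs root privileges. */
            struct sched_param sp = { .sched_priority = 80 };
            if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
                perror("sched_setscheduler");

            puts("real-time setup done; time-critical loop starts here");
            return 0;
        }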

  9. Superiority of CT imaging reconstruction on Linux OS

    International Nuclear Information System (INIS)

    Lin Shaochun; Yan Xufeng; Wu Tengfang; Luo Xiaomei; Cai Huasong

    2010-01-01

    Objective: To compare the speed of CT reconstruction under the Linux and Windows OS. Methods: Shepp-Logan head phantoms of different pixel sizes were projected to obtain sinograms, and reconstruction was performed using the inverse Fourier transform, filtered back projection and the Radon transform on both Linux and Windows OS. Results: CT image reconstruction under the Linux operating system was significantly faster and more efficient than under Windows. Conclusion: CT image reconstruction under the Linux operating system is more efficient. (authors)

  10. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    Science.gov (United States)

    Sonoda, Jun; Yamaki, Kota

    We have developed an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, which is one of the Linux distributions. It is also easy to install/uninstall packages and to enable/disable init daemons. When we rebuild a Live Linux CD using our system, the number of operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in an information processing class at our college. From a questionnaire survey of the 43 students who used the Live Linux CD, we found that our Live Linux is useful for about 80 percent of the students. From these results, we conclude that our system can easily and automatically rebuild a useful Live Linux in a short time.

  11. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters

    Directory of Open Access Journals (Sweden)

    Abreu Rui MV

    2010-10-01

    Full Text Available Abstract Background Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable non-dedicated computer clusters. Implementation MOLA automates several tasks including: ligand preparation, distribution of parallel AutoDock4/Vina jobs and result analysis. When the virtual screening project finishes, an open-office spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. Conclusion MOLA is an ideal virtual screening tool for non-experienced users, with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer hard-disk drive and without interfering with the installed operating system. With a cluster of 10 processors, and a

  12. Hadoop operations and cluster management cookbook

    CERN Document Server

    Guo, Shumin

    2013-01-01

    Solve specific problems using individual self-contained code recipes, or work through the book to develop your capabilities. This book is packed with easy-to-follow code and commands used for illustration, which makes your learning curve easy and quick.If you are a Hadoop cluster system administrator with Unix/Linux system management experience and you are looking to get a good grounding in how to set up and manage a Hadoop cluster, then this book is for you. It's assumed that you will have some experience in Unix/Linux command line already, as well as being familiar with network communication

  13. Kali Linux wireless penetration testing essentials

    CERN Document Server

    Alamanni, Marco

    2015-01-01

    This book is targeted at information security professionals, penetration testers and network/system administrators who want to get started with wireless penetration testing. No prior experience with Kali Linux and wireless penetration testing is required, but familiarity with Linux and basic networking concepts is recommended.

  14. Linux Security Cookbook

    CERN Document Server

    Barrett, Daniel J; Byrnes, Robert G

    2003-01-01

    Computer security is an ongoing process, a relentless contest between system administrators and intruders. A good administrator needs to stay one step ahead of any adversaries, which often involves a continuing process of education. If you're grounded in the basics of security, however, you won't necessarily want a complete treatise on the subject each time you pick up a book. Sometimes you want to get straight to the point. That's exactly what the new Linux Security Cookbook does. Rather than provide a total security solution for Linux computers, the authors present a series of easy-to-fol

  15. High-speed data acquisition with the Solaris and Linux operating systems

    International Nuclear Information System (INIS)

    Zilker, M.; Heimann, P.

    2000-01-01

    In this paper, we discuss whether Solaris and Linux are suitable for data acquisition systems under soft real-time conditions. As an example we consider a plasma diagnostic (Mirnov coils), which collects data for a complete plasma discharge of about 10 s from up to 72 channels. Each ADC channel generates a data stream of 4 MB/s. To receive these data streams, an eight-channel Hotlink PCI interface board was designed. With a prototype system using Solaris and the driver developed by us, we investigate important properties of the operating system, such as I/O performance and process scheduling. We compare the Solaris operating system on the UltraSPARC platform with Linux on the Intel platform. Finally, some points of user program development are mentioned to show how an application can make the most efficient use of the underlying high-speed I/O system
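
    A minimal sketch of the kind of I/O-performance measurement such a comparison rests on (not the paper's driver test): time sustained read() calls from a device node and report MB/s. The device path /dev/daq0 and the 4 MB block size are assumptions.

        /* tput.c - measure sustained read() throughput from a device node. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <time.h>

        int main(int argc, char **argv)
        {
            const char *dev = argc > 1 ? argv[1] : "/dev/daq0"; /* hypothetical */
            int fd = open(dev, O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            enum { BLK = 4 << 20 };          /* 4 MB per read */
            char *buf = malloc(BLK);
            if (!buf) { perror("malloc"); return 1; }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            long long total = 0;
            for (int i = 0; i < 100; i++) {  /* ~400 MB test run */
                ssize_t n = read(fd, buf, BLK);
                if (n <= 0) break;
                total += n;
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec)
                        + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.1f MB/s\n", total / 1048576.0 / secs);
            free(buf);
            close(fd);
            return 0;
        }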

  16. Kali Linux assuring security by penetration testing

    CERN Document Server

    Ali, Shakeel; Allen, Lee

    2014-01-01

    Written as an interactive tutorial, this book covers the core of Kali Linux with real-world examples and step-by-step instructions to provide professional guidelines and recommendations for you. The book is designed in a simple and intuitive manner that allows you to explore the whole Kali Linux testing process or study parts of it individually.If you are an IT security professional who has a basic knowledge of Unix/Linux operating systems, including an awareness of information security factors, and want to use Kali Linux for penetration testing, then this book is for you.

  17. CTEx Beowulf cluster for MCNP performance

    International Nuclear Information System (INIS)

    Gonzaga, Roberto N.; Amorim, Aneuri S. de; Balthar, Mario Cesar V.

    2011-01-01

    This work is an introduction to the CTEx Nuclear Defense Department's Beowulf cluster. Building a Beowulf cluster is a complex learning process that greatly depends upon your hardware and software requirements. The feasibility and efficiency of performing MCNP5 calculations with a small, heterogeneous computing cluster built from personal computers (PCs) running Red Hat's Fedora Linux operating system are explored. The performance increases that may be expected with such clusters are estimated for cases that typify general radiation transport calculations. Our results show that the speed increase from additional slave PCs is nearly linear up to 10 processors. The precompiled parallel binary version of MCNP uses the Message-Passing Interface (MPI) protocol. The use of this precompiled parallel version of MCNP5 with the MPI protocol on a small, heterogeneous computing cluster built from Red Hat's Fedora Linux PCs is the subject of this work. (author)
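
    MCNP itself is not open for inspection here, but the parallel pattern behind the near-linear speedup (independent histories on each slave, tallies collected on the master) can be sketched in a few lines of C with MPI. The history count and the "transport" stand-in are illustrative only.

        /* mc_mpi.c - split Monte Carlo histories across MPI ranks.
         * Build: mpicc mc_mpi.c -o mc_mpi; run: mpirun -np 10 ./mc_mpi */
        #include <mpi.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            long histories = 10000000 / size;  /* each slave takes a share */
            srand(rank + 1);                   /* independent streams */
            long local_hits = 0;
            for (long i = 0; i < histories; i++)
                if (rand() / (double)RAND_MAX < 0.5) /* stand-in for transport */
                    local_hits++;

            long hits = 0;                     /* master collects the tallies */
            MPI_Reduce(&local_hits, &hits, 1, MPI_LONG, MPI_SUM,
                       0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("tally: %ld hits over %d processors\n", hits, size);
            MPI_Finalize();
            return 0;
        }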

  18. A camac data acquisition system based on PC-Linux

    International Nuclear Information System (INIS)

    Ribas, R.V.

    2002-01-01

    A multi-parametric data acquisition system for nuclear physics experiments using CAMAC instrumentation on a personal computer with the Linux operating system is described. The system is very reliable and inexpensive, and is capable of handling event rates of up to 4-6 k events/s. In the present version, the maximum number of parameters to be acquired is limited only by the number of CAMAC modules that can be fitted in one CAMAC crate

  19. Using Linux PCs in DAQ applications

    CERN Document Server

    Ünel, G; Beck, H P; Cetin, S A; Conka, T; Crone, G J; Fernandes, A; Francis, D; Joosb, M; Lehmann, G; López, J; Mailov, A A; Mapelli, Livio P; Mornacchi, Giuseppe; Niculescu, M; Petersen, J; Tremblet, L J; Veneziano, Stefano; Wildish, T; Yasu, Y

    2000-01-01

    The ATLAS Data Acquisition/Event Filter "-1" (DAQ/EF-1) project provides the opportunity to explore the use of commodity hardware (PCs) and Open Source Software (Linux) in DAQ applications. In DAQ/EF-1 there is an element called the LDAQ which is responsible for providing local run-control, error-handling and reporting for a number of read-out modules in front-end crates. This element is also responsible for providing event data for monitoring and for the interface with the global control and monitoring system (Back-End). We present the results of an evaluation of the Linux operating system made in the context of DAQ/EF-1, where there are no strong real-time requirements. We also report on our experience in implementing the LDAQ on a VMEbus-based PC (the VMIVME-7587) and a desktop PC linked to VMEbus with a Bit3 interface, both running Linux. We then present the problems encountered during the integration with VMEbus, the status of the LDAQ implementation, and draw some conclusions on the use of Linux in DAQ applica...

  20. Kali Linux CTF blueprints

    CERN Document Server

    Buchanan, Cameron

    2014-01-01

    Taking a highly practical approach and a playful tone, Kali Linux CTF Blueprints provides step-by-step guides to setting up vulnerabilities, in-depth guidance on exploiting them, and a variety of advice and ideas for building and customising your own challenges. If you are a penetration testing team leader or an individual who wishes to challenge yourself or your friends in the creation of penetration testing assault courses, this is the book for you. The book assumes a basic level of penetration skills and familiarity with the Kali Linux operating system.

  1. Preparing a scientific manuscript in Linux: Today's possibilities and limitations.

    Science.gov (United States)

    Tchantchaleishvili, Vakhtang; Schmitto, Jan D

    2011-10-22

    An increasing number of scientists are enthusiastic about using free, open source software for their research purposes. The authors' specific goal was to examine whether a Linux-based operating system with open source software packages would allow the preparation of a submission-ready scientific manuscript without the need for proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open source software. This letter to the editor describes the key steps in the preparation of a publication-ready scientific manuscript on a Linux-based operating system, and discusses the necessary software components. This manuscript was created using Linux and open source programs for Linux.

  2. LPI Linux Certification in a Nutshell

    CERN Document Server

    Haeder, Adam; Pessanha, Bruno; Stanger, James

    2010-01-01

    Linux deployment continues to increase, and so does the demand for qualified and certified Linux system administrators. If you're seeking a job-based certification from the Linux Professional Institute (LPI), this updated guide will help you prepare for the technically challenging LPIC Level 1 Exams 101 and 102. The third edition of this book is a meticulously researched reference to these exams, written by trainers who work closely with LPI. You'll find an overview of each exam, a summary of the core skills you need, review questions and exercises, as well as a study guide, a practice test,

  3. The Linux based distributed data acquisition system for the ISTRA+ experiment

    International Nuclear Information System (INIS)

    Filin, A.; Inyakin, A.; Novikov, V.; Obraztsov, V.; Smirnov, N.; Vlassov, E.; Yuschenko, O.

    2001-01-01

    The DAQ hardware of the ISTRA+ experiment consists of a VME system crate that contains two PCI-VME bridges interfacing two PCs with VME, an external interrupt receiver, the readout controller for the dedicated front-end electronics, the readout controller buffer memory module, the VME-CAMAC interface, and additional control modules. The DAQ computing consists of 6 PCs running the Linux operating system and linked into a LAN. The first PC serves the external interrupts and acquires the data from the front-end electronics. The second one is the slow control computer. The remaining PCs host the monitoring and data analysis software. The Linux-based DAQ software provides external interrupt processing, and data acquisition, recording, and distribution between the monitoring and data analysis tasks running on the DAQ PCs. The monitoring programs are based on two packages for data visualization: a home-written one and the ROOT system. MySQL is used as the DAQ database

  4. Linux Networking Cookbook

    CERN Document Server

    Schroder, Carla

    2008-01-01

    If you want a book that lays out the steps for specific Linux networking tasks, one that clearly explains the commands and configurations, this is the book for you. Linux Networking Cookbook is a soup-to-nuts collection of recipes that covers everything you need to know to perform your job as a Linux network administrator. You'll dive straight into the gnarly hands-on work of building and maintaining a computer network

  5. Running Linux

    CERN Document Server

    Dalheimer, Matthias Kalle

    2006-01-01

    The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.

  6. Real-time data collection in Linux: a case study.

    Science.gov (United States)

    Finney, S A

    2001-05-01

    Multiuser UNIX-like operating systems such as Linux are often considered unsuitable for real-time data collection because of the potential for indeterminate timing latencies resulting from preemptive scheduling. In this paper, Linux is shown to be fully adequate for precisely controlled programming with millisecond resolution or better. The Linux system calls that subserve such timing control are described and tested and then utilized in a MIDI-based program for tapping and music performance experiments. The timing of this program, including data input and output, is shown to be accurate at the millisecond level. This demonstrates that Linux, with proper programming, is suitable for real-time experiment software. In addition, the detailed description and test of both the operating system facilities and the application program itself may serve as a model for publicly documenting programming methods and software performance on other operating systems.
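
    A minimal example of the kind of millisecond timing test described (not the paper's code, which was MIDI-based): sleep for 1 ms repeatedly with nanosleep and record the worst oversleep measured with clock_gettime.

        /* msloop.c - measure worst-case oversleep of a 1 ms periodic loop. */
        #include <stdio.h>
        #include <time.h>

        static double ms_between(struct timespec a, struct timespec b)
        {
            return (b.tv_sec - a.tv_sec) * 1e3
                 + (b.tv_nsec - a.tv_nsec) / 1e6;
        }

        int main(void)
        {
            struct timespec tick = { 0, 1000000 };   /* 1 ms */
            struct timespec before, after;
            double worst = 0.0;

            for (int i = 0; i < 10000; i++) {
                clock_gettime(CLOCK_MONOTONIC, &before);
                nanosleep(&tick, NULL);
                clock_gettime(CLOCK_MONOTONIC, &after);
                double err = ms_between(before, after) - 1.0; /* oversleep */
                if (err > worst) worst = err;
            }
            printf("worst oversleep over 10000 iterations: %.3f ms\n", worst);
            return 0;
        }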

  7. Linux utilities cookbook

    CERN Document Server

    Lewis, James Kent

    2013-01-01

    A cookbook-style guide packed with examples and illustrations, it offers organized learning through recipes and step-by-step instructions. The book is designed so that you can pick exactly what you need, when you need it. Written for anyone who would like to familiarize themselves with Linux, this book is perfect for migrating from Windows to Linux and will save you time and money; learn exactly how and where to begin working with Linux and troubleshooting, in easy steps.

  8. Linux command line and shell scripting bible

    CERN Document Server

    Blum, Richard

    2014-01-01

    Talk directly to your system for a faster workflow with automation capability Linux Command Line and Shell Scripting Bible is your essential Linux guide. With detailed instruction and abundant examples, this book teaches you how to bypass the graphical interface and communicate directly with your computer, saving time and expanding capability. This third edition incorporates thirty pages of new functional examples that are fully updated to align with the latest Linux features. Beginning with command line fundamentals, the book moves into shell scripting and shows you the practical application

  9. A real-time data transmission method based on Linux for physical experimental readout systems

    International Nuclear Information System (INIS)

    Cao Ping; Song Kezhu; Yang Junfeng

    2012-01-01

    In a typical physical experimental instrument, such as a fusion or particle physics application, the readout system generally implements an interface between the data acquisition (DAQ) system and the front-end electronics (FEE). The key task of a readout system is to read, pack, and forward the data from the FEE to the back-end data concentration center in real time. To guarantee real-time performance, the VxWorks operating system (OS) is widely used in readout systems. However, VxWorks is not an open-source OS, which brings many disadvantages. With the development of multi-core processors and new scheduling algorithms, the Linux OS exhibits performance in real-time applications similar to that of VxWorks, and it has been successfully used even for some hard real-time systems. Discussions and evaluations of real-time Linux solutions as a possible replacement for VxWorks arise naturally. In this paper, a real-time transmission method based on Linux is introduced. To reduce the number of transfer cycles for large amounts of data, a large block of contiguous memory buffer for DMA transfer is allocated by slightly modifying the Linux kernel (version 2.6) source code. To increase the throughput for network transmission, the user software is designed for parallelism. To achieve high performance in real-time data transfer from hardware to software, mapping techniques must be used to avoid unnecessary data copying. A simplified readout system is implemented with 4 readout modules in a PXI crate. This system can support up to 48 MB/s data throughput from the front-end hardware to the back-end concentration center through a Gigabit Ethernet connection. There are no restrictions on the use of this method, hardware or software, which means that it can be easily migrated to other interrupt-related applications.
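
    The mapping technique mentioned above can be sketched from the user-space side as follows. The device node /dev/readout0 and the 32 MB buffer size are assumptions, and the real driver must implement the corresponding mmap handler over its contiguous DMA buffer.

        /* map_buf.c - consume a driver-allocated DMA buffer in place,
         * avoiding the extra copy a read() path would incur. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/mman.h>

        int main(void)
        {
            int fd = open("/dev/readout0", O_RDONLY);  /* hypothetical node */
            if (fd < 0) { perror("open"); return 1; }

            size_t len = 32UL << 20;                   /* 32 MB DMA buffer */
            /* The driver's mmap handler remaps its contiguous kernel
             * buffer into our address space; no copy_to_user needed. */
            unsigned char *buf = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED) { perror("mmap"); return 1; }

            unsigned long checksum = 0;                /* consume in place */
            for (size_t i = 0; i < len; i++)
                checksum += buf[i];
            printf("checksum: %lx\n", checksum);

            munmap(buf, len);
            close(fd);
            return 0;
        }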

  10. Fedora Bible 2010 Edition Featuring Fedora Linux 12

    CERN Document Server

    Negus, Christopher

    2010-01-01

    The perfect companion for mastering the latest version of Fedora. As a free, open source Linux operating system sponsored by Red Hat, Fedora can either be a stepping stone to Enterprise or used as a viable operating system for those looking for frequent updates. Written by veteran authors of perennial bestsellers, this book serves as an ideal companion for Linux users and offers a thorough look at the basics of the new Fedora 12. Step-by-step instructions make the Linux installation simple while clear explanations walk you through best practices for taking advantage of the desktop interface. Y

  11. Replacing OSE with Real Time capable Linux

    OpenAIRE

    Boman, Simon; Rutgersson, Olof

    2009-01-01

    For many years OSE has been a commonly used operating system, with real-time extension enhancements, in embedded systems. But in recent decades, Linux has grown and become a competitor to common operating systems and, in recent years, even to operating systems with real-time extensions. With this in mind, ÅF was interested in replacing the quite expensive OSE with some distribution of the open source based Linux on a PowerPC MPC8360. Therefore, our purpose with this thesis is to implement Linu...

  12. Abstract of talk for Silicon Valley Linux Users Group

    Science.gov (United States)

    Clanton, Sam

    2003-01-01

    The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; and work in the Neuroengineering Lab with code IC, including an introduction to the extension-of-the-human-senses project, the advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, the goals of the project, slides of people wearing Neuroscan caps, and the progress that has been made and how Linux has helped.

  13. Remote Boot of a Diskless Linux Client for Operating System Integrity

    National Research Council Canada - National Science Library

    Allen, Bruce

    2002-01-01

    .... The diskless Linux client is organized to provide read-write files over NFS at home, read-only files over NFS for accessing bulky immutable utilities, and some volatile RAM disk files to allow the Linux kernel to boot...

  14. CompTIA Linux+ Complete Study Guide (Exams LX0-101 and LX0-102)

    CERN Document Server

    Smith, Roderick W

    2010-01-01

    Prepare for CompTIA's Linux+ Exams. As the Linux server and desktop markets continue to grow, so does the need for qualified Linux administrators. CompTIA's Linux+ certification (Exams LX0-101 and LX0-102) includes the very latest enhancements to the popular open source operating system. This detailed guide not only covers all key exam topics—such as using Linux command-line tools, understanding the boot process and scripts, managing files and file systems, managing system security, and much more—it also builds your practical Linux skills with real-world examples. Inside, you'll find:. Full co

  15. Linux Server Security

    CERN Document Server

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  16. Web application for monitoring mainframe computer, Linux operating systems and application servers

    OpenAIRE

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and realization of a web application for monitoring the operation of a mainframe computer, servers with the Linux operating system, and application servers. The web application is intended for administrators of these systems, as an aid to better understanding the current state, load and operation of the individual components of the server systems.

  17. Faults in Linux

    DEFF Research Database (Denmark)

    Palix, Nicolas Jean-Michel; Thomas, Gaël; Saha, Suman

    2011-01-01

    In 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still a major problem? To answer these questions, we have transported the experiments of Chou et al. to Linux versions 2.6.0 to 2.6.33, released between late 2003 and early 2010. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been...

  18. Research on application of VME based embedded Linux

    International Nuclear Information System (INIS)

    Ji Xiaolu; Ye Mei; Zhu Kejun; Li Xiaonan; Wang Yifang

    2010-01-01

    This paper describes the feasibility and realization of using embedded Linux in the DAQ readout system of a high energy physics experiment. In the first part, the hardware and software framework is introduced. Emphasis is then placed on the key technologies of the system realization. The development is based on the VME bus and the vme_universe driver. Finally, the test result is presented: the readout system works well under an embedded Linux OS. (authors)

  19. Analysis of Linux kernel as a complex network

    International Nuclear Information System (INIS)

    Gao, Yichao; Zheng, Zheng; Qin, Fangyun

    2014-01-01

    Operating system (OS) acts as an intermediary between software and hardware in computer-based systems. In this paper, we analyze the core of the typical Linux OS, the Linux kernel, as a complex network to investigate its underlying design principles. It is found that the Linux Kernel Network (LKN) is a directed network whose out-degree follows an exponential distribution while the in-degree follows a power-law distribution. The correlation between topology and function is also explored, by which we find that the LKN is a highly modularized network with 12 key communities. Moreover, we investigate the robustness of the LKN under random failures and intentional attacks. The result shows that the failure of large in-degree nodes providing basic services does more damage to the whole system. Our work may shed some light on the design of complex software systems

  20. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    Science.gov (United States)

    List, Michael G.; Turner, Mark G.; Chen, Jen-Pimg; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on both rotor-vane interaction and on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analyses on commodity PCs running the Linux operating system.

  1. Lecture 11: Systemtap: Patching the Linux kernel on the fly

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    The presentation will describe the usage of Systemtap in the CERN Scientific Linux environment. Systemtap is a tool that allows developers and administrators to write and reuse simple scripts to deeply examine the activities of a live Linux system. We will go through the life cycle of a Systemtap module: creation, packaging and deployment. The talk will focus on how we used it recently at CERN as a workaround to patch a 0-day. Thomas Oulevey is a member of the IT department at CERN, where he is an active member of the Linux team, which supports 9'000 servers, 3'000 desktop systems and more than 5'000 active users. Before CERN he worked at the former astrophysics department of CERN, now called the European Southern Observatory, based in Chile, where he maintained the core telescope Linux systems and monitoring infrastructure.

  2. Pro Linux system administration learn to build systems for your business using free and open source software

    CERN Document Server

    Matotek, Dennis; Lieverdink, Peter

    2017-01-01

    This book aims to ease the entry of businesses to the world of zero-cost software running on Linux. It takes a layered, component-based approach to open source business systems, while training system administrators as the builders of business infrastructure.

  3. CompactPCI/Linux platform for medium level control system on FTU

    International Nuclear Information System (INIS)

    Wang, L.; Centioli, C.; Iannone, F.; Panella, M.; Mazza, G.; Vitale, V.

    2004-01-01

    In large fusion experiments, such as tokamak devices, there are common trends for slow control systems. Because of the complexity of the plants, several tokamaks adopt the so-called 'standard model' (SM) based on a three-level hierarchical control: (i) high level control (HLC) - the supervisor; (ii) medium level control (MLC) - the I/O field equipment interface and concentration units; and (iii) low level control (LLC) - the programmable logic controllers (PLC). The FTU control system was designed with SM concepts and, in its 15-year life cycle, it underwent several developments. The latest evolution was mandatory, due to the obsolescence of the MLC CPUs, based on VME/Motorola 68030 with the OS9 operating system. Therefore, we had to look for cost-effective solutions, and we chose a CompactPCI-Intel x86 platform with the Linux operating system. A software port was carried out, taking into account the differences between the OS9 and Linux operating systems in terms of inter-process/network communications and the I/O multi-port serial driver. This paper describes the hardware/software architecture of the new MLC system, emphasising the reliability and low cost of the open source solutions. Moreover, the huge number of software packages available in the open source environment will assure less painful maintenance, and will open the way to further improvements of the system itself

  4. Compact PCI/Linux platform in FTU slow control system

    International Nuclear Information System (INIS)

    Iannone, F.; Centioli, C.; Panella, M.; Mazza, G.; Vitale, V.; Wang, L.

    2004-01-01

    In large fusion experiments, such as tokamak devices, there is a common trend for slow control systems. Because of the complexity of the plants, the so-called 'Standard Model' (SM) of slow control has been adopted on several tokamak machines. This model is based on a three-level hierarchical control: 1) High-Level Control (HLC) with a supervisory function; 2) Medium-Level Control (MLC) to interface and concentrate I/O field equipment; 3) Low-Level Control (LLC) with hard real-time I/O functions, often managed by PLCs. The FTU (Frascati Tokamak Upgrade) control system, designed with SM concepts, has undergone several stages of development during its fifteen years of operation. The latest evolution was inevitable, due to the obsolescence of the MLC CPUs, based on VME-Motorola 68030 with the OS9 operating system. A large amount of C code was developed for that platform to route the data flow from the LLC, which consists of 24 Westinghouse Numalogic PC-700 PLCs with about 8000 field points, to the HLC, based on a commercial object-oriented real-time database on an Alpha/Compaq Tru64 platform. Therefore, the authors had to look for cost-effective solutions, and finally a CompactPCI-Intel x86 platform with the Linux operating system was chosen. A software port was carried out, taking into account the differences between the OS9 and Linux operating systems in terms of inter-process/network communications and the I/O multi-port serial driver. This paper describes the hardware/software architecture of the new MLC system, emphasizing the reliability and the low costs of the open source solutions. Moreover, the huge number of software packages available in the open source environment will assure less painful maintenance, and will open the way to further improvements of the system itself. (authors)

  5. Supporting the Secure Halting of User Sessions and Processes in the Linux Operating System

    National Research Council Canada - National Science Library

    Brock, Jerome

    2001-01-01

    .... Only when a session must be reactivated are its processes returned to a runnable state. This thesis presents an approach for adding this "secure halting" functionality to the Linux operating system...

  6. Rebootless Linux Kernel Patching with Ksplice Uptrack at BNL

    International Nuclear Information System (INIS)

    Hollowell, Christopher; Pryor, James; Smith, Jason

    2012-01-01

    Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2,000 hosts running Scientific Linux and Red Hat Enterprise Linux. The use of this software has minimized downtime, and increased our security posture. In this paper, we provide an overview of Ksplice's rebootless kernel patch creation/insertion mechanism, and our experiences with Uptrack.

  7. Construction of a Linux based chemical and biological information system.

    Science.gov (United States)

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a web-based, easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screening results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE and Tripos SYBYL for database management, and the Zope application server for the web interface. We chose Linux as the main platform; however, almost every component can be used under various operating systems.

  8. Aplicación de RT-Linux en el control de motores de pasos. Parte II; Application of RT-Linux in the Control of Step Motors. Part II

    Directory of Open Access Journals (Sweden)

    Ernesto Duany Renté

    2011-02-01

    Full Text Available The work presented in this paper complements the control and acquisition tasks explained in "Application of RT-Linux in the Control of Step Motors. First Part", so that both real-time tasks can be fully related in order to make the whole control system as accurate as possible. The techniques employed are real-time techniques, taking advantage of the possibilities of the RT-Linux microkernel and the free software distributed with Unix/Linux operating systems. The signals are obtained by means of an AD converter and shown on screen using Gnuplot.

  9. Design of software platform based on linux operating system for γ-spectrometry instrument

    International Nuclear Information System (INIS)

    Hong Tianqi; Zhou Chen; Zhang Yongjin

    2008-01-01

    This paper describes the design of a γ-spectrometry instrument software platform based on the S3C2410A processor with an ARM920T core. Emphasis is placed on analyzing the integrated application of the embedded Linux operating system, the YAFFS file system and the Qt/Embedded GUI development library. A new software platform for portable γ-measurement instruments is presented. (authors)

  10. Design and Implementation of Linux Access Control Model

    Institute of Scientific and Technical Information of China (English)

    Wei Xiaomeng; Wu Yongbin; Zhuo Jingchuan; Wang Jianyun; Haliqian Mayibula

    2017-01-01

    In this paper, the design and implementation of an access control model for the Linux system are discussed in detail. The design is based on the RBAC model, combined with the inherent characteristics of the Linux system, and support for process and role transition is added. The core idea of the model is that files are divided into different categories, and the access authority for every category is distributed to several roles. Roles are then assigned to users of the system, and the role of a user can be transited from one to another by running an executable file.
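
    A compact data-structure sketch of the described model (illustrative only, not the paper's implementation): files carry a category, roles carry per-category permission bits, and a user's role can change when an executable file is run. All names and the four categories are hypothetical.

        /* rbac_sketch.c - toy illustration of category/role access checks. */
        #include <stdio.h>
        #include <stdbool.h>

        enum { N_CATEGORIES = 4 };
        enum perm { P_READ = 1, P_WRITE = 2, P_EXEC = 4 };

        struct role {
            const char *name;
            unsigned perms[N_CATEGORIES]; /* access authority per category */
        };

        struct user {
            const char *name;
            const struct role *role;      /* may transit on exec */
        };

        static bool allowed(const struct user *u, int category, unsigned want)
        {
            return (u->role->perms[category] & want) == want;
        }

        int main(void)
        {
            struct role admin = { "admin",
                { P_READ | P_WRITE, P_READ | P_WRITE | P_EXEC,
                  P_READ, P_READ } };
            struct role guest = { "guest", { P_READ, 0, 0, 0 } };
            struct user alice = { "alice", &admin };

            printf("alice write cat1: %d\n", allowed(&alice, 1, P_WRITE));
            alice.role = &guest;          /* role transition on exec */
            printf("alice write cat1: %d\n", allowed(&alice, 1, P_WRITE));
            return 0;
        }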

  11. Shell Scripting Expert Recipes for Linux, Bash and more

    CERN Document Server

    Parker, Steve

    2011-01-01

    A compendium of shell scripting recipes that can immediately be used, adjusted, and applied The shell is the primary way of communicating with the Unix and Linux systems, providing a direct way to program by automating simple-to-intermediate tasks. With this book, Linux expert Steve Parker shares a collection of shell scripting recipes that can be used as is or easily modified for a variety of environments or situations. The book covers shell programming, with a focus on Linux and the Bash shell; it provides credible, real-world relevance, as well as providing the flexible tools to get started

  12. CompTIA Linux+ study guide exam LX0-103 and exam LX0-104

    CERN Document Server

    Bresnahan, Christine

    2015-01-01

    CompTIA Authorized Linux+ prepCompTIA Linux+ Study Guide is your comprehensive study guide for the Linux+ Powered by LPI certification exams. With complete coverage of 100% of the objectives on both exam LX0-103 and exam LX0-104, this study guide provides clear, concise information on all aspects of Linux administration, with a focus on the latest version of the exam. You'll gain the insight of examples drawn from real-world scenarios, with detailed guidance and authoritative coverage of key topics, including GNU and Unix commands, system operation, system administration, system services, secu

  13. Linux real-time framework for fusion devices

    Energy Technology Data Exchange (ETDEWEB)

    Neto, Andre [Associacao Euratom-IST, Instituto de Plasmas e Fusao Nuclear, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: andre.neto@cfn.ist.utl.pt; Sartori, Filippo; Piccolo, Fabio [Euratom-UKAEA, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Barbalace, Antonio [Euratom-ENEA Association, Consorzio RFX, 35127 Padova (Italy); Vitelli, Riccardo [Dipartimento di Informatica, Sistemi e Produzione, Universita di Roma, Tor Vergata, Via del Politecnico 1-00133, Roma (Italy); Fernandes, Horacio [Associacao Euratom-IST, Instituto de Plasmas e Fusao Nuclear, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)

    2009-06-15

    A new framework for the development and execution of real-time codes is currently being developed and commissioned at JET. The foundations of the system are Linux, the Real Time Application Interface (RTAI) and a wise exploitation of the new i386 multi-core processors technology. The driving motivation was the need to find a real-time operating system for the i386 platform able to satisfy JET Vertical Stabilisation Enhancement project requirements: 50 μs cycle time. Even if the initial choice was the VxWorks operating system, it was decided to explore an open source alternative, mostly because of the costs involved in the commercial product. The work started with the definition of a precise set of requirements and milestones to achieve: Linux distribution and kernel versions to be used for the real-time operating system; complete characterization of the Linux/RTAI real-time capabilities; exploitation of the multi-core technology; implementation of all the required and missing features; commissioning of the system. Latency and jitter measurements were compared for Linux and RTAI in both user and kernel-space. The best results were attained using the RTAI kernel solution where the time to reschedule a real-time task after an external interrupt is of 2.35 ± 0.35 μs. In order to run the real-time codes in the kernel-space, a solution to provide user-space functionalities to the kernel modules had to be designed. This novel work provided the most common functions from the standard C library and transparent interaction with files and sockets to the kernel real-time modules. Kernel C++ support was also tested, further developed and integrated in the framework. The work has produced very convincing results so far: complete isolation of the processors assigned to real-time from the Linux non real-time activities, high level of stability over several days of benchmarking operations and values well below 3 μs for task rescheduling after external interrupt. From

  14. Linux real-time framework for fusion devices

    International Nuclear Information System (INIS)

    Neto, Andre; Sartori, Filippo; Piccolo, Fabio; Barbalace, Antonio; Vitelli, Riccardo; Fernandes, Horacio

    2009-01-01

    A new framework for the development and execution of real-time codes is currently being developed and commissioned at JET. The foundations of the system are Linux, the Real Time Application Interface (RTAI) and a wise exploitation of the new i386 multi-core processors technology. The driving motivation was the need to find a real-time operating system for the i386 platform able to satisfy JET Vertical Stabilisation Enhancement project requirements: 50 μs cycle time. Even if the initial choice was the VxWorks operating system, it was decided to explore an open source alternative, mostly because of the costs involved in the commercial product. The work started with the definition of a precise set of requirements and milestones to achieve: Linux distribution and kernel versions to be used for the real-time operating system; complete characterization of the Linux/RTAI real-time capabilities; exploitation of the multi-core technology; implementation of all the required and missing features; commissioning of the system. Latency and jitter measurements were compared for Linux and RTAI in both user and kernel-space. The best results were attained using the RTAI kernel solution where the time to reschedule a real-time task after an external interrupt is of 2.35 ± 0.35 μs. In order to run the real-time codes in the kernel-space, a solution to provide user-space functionalities to the kernel modules had to be designed. This novel work provided the most common functions from the standard C library and transparent interaction with files and sockets to the kernel real-time modules. Kernel C++ support was also tested, further developed and integrated in the framework. The work has produced very convincing results so far: complete isolation of the processors assigned to real-time from the Linux non real-time activities, high level of stability over several days of benchmarking operations and values well below 3 μs for task rescheduling after external interrupt. From being the

  15. Open source clustering software.

    Science.gov (United States)

    de Hoon, M J L; Imoto, S; Nolan, J; Miyano, S

    2004-06-12

    We have implemented k-means clustering, hierarchical clustering and self-organizing maps in a single multipurpose open-source library of C routines, callable from other C and C++ programs. Using this library, we have created an improved version of Michael Eisen's well-known Cluster program for Windows, Mac OS X and Linux/Unix. In addition, we generated a Python and a Perl interface to the C Clustering Library, thereby combining the flexibility of a scripting language with the speed of C. The C Clustering Library and the corresponding Python C extension module Pycluster were released under the Python License, while the Perl module Algorithm::Cluster was released under the Artistic License. The GUI code Cluster 3.0 for Windows, Macintosh and Linux/Unix, as well as the corresponding command-line program, were released under the same license as the original Cluster code. The complete source code is available at http://bonsai.ims.u-tokyo.ac.jp/mdehoon/software/cluster. Alternatively, Algorithm::Cluster can be downloaded from CPAN, while Pycluster is also available as part of the Biopython distribution.
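
    As a minimal illustration of the library interface described above (a sketch assuming the Pycluster module is installed; the toy data and parameter choices are ours, not the authors'):

        import numpy as np
        from Pycluster import kcluster, treecluster

        data = np.random.rand(50, 4)  # toy matrix: 50 genes x 4 conditions

        # k-means: partition the rows into 3 clusters, 10 random restarts
        clusterid, error, nfound = kcluster(data, nclusters=3, npass=10)

        # hierarchical clustering, average linkage ('a'), Euclidean distance ('e')
        tree = treecluster(data, method='a', dist='e')
        print(clusterid, error, nfound)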

  16. Ubuntu Linux Toolbox 1000 + Commands for Ubuntu and Debian Power Users

    CERN Document Server

    Negus, Christopher

    2008-01-01

    In this handy, compact guide, you'll explore a ton of powerful Ubuntu Linux commands while you learn to use Ubuntu Linux as the experts do: from the command line. Try out more than 1,000 commands to find and get software, monitor system health and security, and access network resources. Then, apply the skills you learn from this book to use and administer desktops and servers running Ubuntu, Debian, and KNOPPIX or any other Linux distribution.

  17. EMBEDDED LINUX BASED ALBUM BROWSER SYSTEM AT MUSIC STORES

    Directory of Open Access Journals (Sweden)

    Suryadiputra Liawatimena

    2009-01-01

    The goal of this research is the creation of an album browser system for a music store based on embedded Linux. It is expected that this system will help the promotion of the music store and make customer activity at the store simpler and easier. The system uses NFS for networking, a database system, ripping software, and GUI development. The research method used is laboratory experiments, testing the system's hardware using a TPC-57 (Touch Panel Computer 5.7" with an SA2410 ARM-9 Medallion CPU Module) and its software using QtopiaCore. The results of the research are: 1. the database query process works properly; 2. the audio data buffering process works properly. From these experimental results, it can be concluded that the system is ready to be implemented and used in music stores.

  18. Linux thin-client conversion in a large cardiology practice: initial experience.

    Science.gov (United States)

    Echt, Martin P; Rosen, Jordan

    2004-01-01

    Capital Cardiology Associates (CCA) is a single-specialty cardiology practice with offices in New York and Massachusetts. In 2003, CCA converted its IT system from a Microsoft-based network to a Linux network employing Linux thin-client technology with overall positive outcomes.

  19. Experiences constructing and running large shared clusters at CERN

    International Nuclear Information System (INIS)

    Bahyl, V.; Barroso, M.; Charbonnier, C.; Eldik, J. van; Jones, P.; Kleinwort, T.; Smith, T.

    2001-01-01

    The latest steps in the steady evolution of the CERN Computer Centre have been to reduce the multitude of clusters and architectures and to concentrate on commodity hardware. An active RISC decommissioning program has been undertaken to encourage migration to Linux, and a program of merging dedicated experiment clusters into larger shared facilities has been launched. The authors describe these programs and the experiences running the resultant multi-hundred node shared Linux clusters

  20. A WEB-BASED SOLUTION TO VISUALIZE OPERATIONAL MONITORING LINUX CLUSTER FOR THE PROTODUNE DATA QUALITY MONITORING CLUSTER

    CERN Document Server

    Mosesane, Badisa

    2017-01-01

    The Neutrino computing cluster, made of 300 Dell PowerEdge 1950 1U nodes, serves an integral role in the CERN Neutrino Platform (CENF). It represents an effort to foster fundamental research in the field of neutrino physics by providing a data processing facility. The need for data quality monitoring, coupled with automated system configuration and remote monitoring of the cluster, can hardly be overemphasized. To achieve these goals, a software stack has been chosen to implement automatic propagation of configurations across all the nodes in the cluster. The bulk of this report discusses the automated configuration management system on this cluster, which enables fast online data processing and the Data Quality Monitoring (DQM) process for the Neutrino Platform cluster (npcmp.cern.ch).

  1. Evaluation of mosix-Linux farm performances in GRID environment

    International Nuclear Information System (INIS)

    Barone, F.; Rosa, M.de; Rosa, R.de.; Eleuteri, A.; Esposito, R.; Mastroserio, P.; Milano, L.; Taurino, F.; Tortone, G.

    2001-01-01

    The MOSIX extensions to the Linux Operating System allow the creation of high-performance Linux farms and an excellent integration of the several CPUs of the farm, whose computational power can be further increased and made more effective by networking them within the GRID environment. Following this strategy, the authors started to perform computational tests using two independent farms within the GRID environment. In particular, the authors performed a preliminary evaluation of the distributed computing efficiency with a MOSIX Linux farm in the simulation of gravitational wave data analysis from coalescing binaries. For this task, two different techniques were compared: the classical matched filter technique and one of its possible evolutions, based on a global optimisation technique
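
    For illustration only, here is a toy matched filter in the spirit of the technique named above (a sketch; the chirp template, sampling rate and noise model are our assumptions, and real coalescing-binary searches use banks of templates and frequency-domain filtering):

        import numpy as np

        fs = 4096                                   # assumed sample rate (Hz)
        t = np.arange(0, 1, 1 / fs)
        template = np.sin(2 * np.pi * 100 * t**2)   # toy chirp waveform
        data = np.random.normal(0, 1, 3 * fs)       # white noise stream
        data[fs:2 * fs] += 0.5 * template           # bury a weak signal

        # correlate the data against the template and locate the peak response
        response = np.correlate(data, template, mode="valid")
        print("peak at sample", int(np.argmax(np.abs(response))))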

  2. Feasibility study of BES data processing and physics analysis on a PC/Linux platform

    International Nuclear Information System (INIS)

    Rong Gang; He Kanglin; Zhao Jiawei; Heng Yuekun; Zhang Chun

    1999-01-01

    The authors report a feasibility study of off-line BES data processing (data reconstruction and detector simulation) on a PC/Linux platform and an application of the PC/Linux system in D/Ds physics analysis. The authors compared the results obtained from PC/Linux with those from an HP workstation. It shows that the PC/Linux platform can perform BES offline data analysis as well as a UNIX workstation does, while being more powerful and economical

  3. Analyzing Security-Enhanced Linux Policy Specifications

    National Research Council Canada - National Science Library

    Archer, Myla

    2003-01-01

    NSA's Security-Enhanced (SE) Linux enhances Linux by providing a specification language for security policies and a Flask-like architecture with a security server for enforcing policies defined in the language...

  4. Real Time Linux - The RTOS for Astronomy?

    Science.gov (United States)

    Daly, P. N.

    The BoF was attended by about 30 participants, and a free CD of real time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables between standard Linux and real time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running > 30 kHz, 486-based oneshot tasks running at ~ 10 kHz, periodic timer tasks running in excess of 90 kHz with average zero jitter, peaking to ~ 13 μs (UP) and ~ 30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and allows a fully functioning workstation to co-exist with hard real time performance. The counterweights, the negatives, of a limited set of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access and the danger of ignorance of real time programming issues were also discussed. See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads

  5. Parallel Processing Performance Evaluation of Mixed T10/T100 Ethernet Topologies on Linux Pentium Systems

    National Research Council Canada - National Science Library

    Decato, Steven

    1997-01-01

    ... performed on relatively inexpensive off-the-shelf components. Alternative network topologies were implemented using 10 and 100 megabit-per-second Ethernet cards under the Linux operating system on Pentium-based personal computer platforms...

  6. AliEnFS - a Linux File System for the AliEn Grid Services

    OpenAIRE

    Peters, Andreas J.; Saiz, P.; Buncic, P.

    2003-01-01

    Among the services offered by the AliEn (ALICE Environment http://alien.cern.ch) Grid framework there is a virtual file catalogue to allow transparent access to distributed data-sets using various file transfer protocols. alienfs (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user space file system framework (Open Source http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual F...

  7. O Linux e a perspectiva da dádiva

    Directory of Open Access Journals (Sweden)

    Renata Apgaua

    2004-06-01

    This work's goal is to analyze the appearance and consolidation of the Linux operational system in a context marked by the hegemony of commercial operational systems, taking Windows/Microsoft as the paradigmatic example. The creator of Linux chose to make it open-source and offer it free of charge on the Internet. Since then, people from various parts of the world have participated in its development. This study therefore seeks to analyze the features of this space of sociability, where the exchanges point to a logic other than that of the market. The proposal of comprehending the social ties of the Linux universe through the perspective of the gift leads to another discussion, which will also deserve attention in this study: the contemporary relevance of the gift. Re-readings of Mauss, made by Godbout and Caillé, indicate that the gift, in its "system of transformations", is present in contemporary societies, but not only in the social interstices, as Mauss himself asserted.

  8. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    Science.gov (United States)

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project of an electrical stimulator aimed at the motor dysfunction of stroke is proposed in this paper. Based on neurophysiological biofeedback, this system, using an ARM9 S3C2440 as the core processor, integrates the collection and display of surface electromyography (sEMG) signals, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding a Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system works well.

  9. Linux, OpenBSD, and Talisker: A Comparative Complexity Analysis

    National Research Council Canada - National Science Library

    Smith, Kevin

    2002-01-01

    .... Rigorous engineering principles are applicable across a broad range of systems. The purpose of this study is to analyze and compare three operating systems, including two general-purpose operating systems (Linux and OpenBSD...

  10. The performance analysis of linux networking - packet receiving

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and end systems should be able to provide the capabilities to support high bandwidth, sustained, end-to-end data transmission. Recent trends in technology are showing that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, instead of the network, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process. Key factors that affect Linux systems network performance are analyzed.
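
    As a small, tangential illustration of one end-system factor studied here, the sketch below enlarges a UDP socket's kernel receive buffer, giving the kernel more room to absorb packet bursts before drops occur (the port and buffer size are arbitrary assumptions; on Linux the granted value is capped by net.core.rmem_max):

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # ask the kernel for a 4 MB receive buffer for this socket
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
        sock.bind(("0.0.0.0", 9000))

        granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
        print(f"receive buffer granted by the kernel: {granted} bytes")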

  11. Web penetration testing with Kali Linux

    CERN Document Server

    Muniz, Joseph

    2013-01-01

    Web Penetration Testing with Kali Linux contains various penetration testing methods using BackTrack that will be used by the reader. It contains clear step-by-step instructions with a lot of screenshots. It is written in an easy-to-understand language which will further simplify the understanding for the user. "Web Penetration Testing with Kali Linux" is ideal for anyone who is interested in learning how to become a penetration tester. It will also help users who are new to Kali Linux and want to learn the features and differences of Kali versus BackTrack, and seasoned penetration testers

  12. Modeling Security-Enhanced Linux Policy Specifications for Analysis (Preprint)

    National Research Council Canada - National Science Library

    Archer, Myla; Leonard, Elizabeth; Pradella, Matteo

    2003-01-01

    Security-Enhanced (SE) Linux is a modification of Linux initially released by NSA in January 2001 that provides a language for specifying Linux security policies and, as in the Flask architecture, a security server...

  13. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table, compresses the video data using the JPEG image compression standard, and transfers the monitoring picture to a remote monitoring center over the network for long-range monitoring and management. The paper first describes the necessity of the system's design, then briefly introduces the realization of the hardware and software, and then describes in detail the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. At the end of the paper, equipment installation and commissioning on a combine harvester are described and test results are shown. In the experimental testing, the remote video monitoring system for the combine harvester achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.

  14. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters

    Directory of Open Access Journals (Sweden)

    Lefkowitz Elliot J

    2004-10-01

    Background: Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. Results: We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM, using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into

  15. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters.

    Science.gov (United States)

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-10-28

    Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Used together
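
    To make the query-segmentation idea concrete, here is a minimal sketch (not SS-Wrapper's actual code; the file naming and round-robin balancing are illustrative assumptions) that splits a multi-record FASTA query into one chunk per node:

        def read_fasta(path):
            """Yield (header, sequence) records from a FASTA file."""
            header, seq = None, []
            with open(path) as fh:
                for line in fh:
                    if line.startswith('>'):
                        if header is not None:
                            yield header, ''.join(seq)
                        header, seq = line.rstrip(), []
                    else:
                        seq.append(line.strip())
            if header is not None:
                yield header, ''.join(seq)

        def segment_queries(path, nchunks):
            """Round-robin the records into nchunks files, one per node."""
            chunks = [[] for _ in range(nchunks)]
            for i, (hdr, seq) in enumerate(read_fasta(path)):
                chunks[i % nchunks].append(f"{hdr}\n{seq}\n")
            for i, chunk in enumerate(chunks):
                with open(f"query.part{i}.fa", 'w') as out:
                    out.writelines(chunk)

    Each chunk can then be searched independently (e.g. by BLAST) on a different node, and the per-chunk reports merged afterwards.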

  16. The Linux command line a complete introduction

    CERN Document Server

    Shotts, William E

    2012-01-01

    You've experienced the shiny, point-and-click surface of your Linux computer—now dive below and explore its depths with the power of the command line. The Linux Command Line takes you from your very first terminal keystrokes to writing full programs in Bash, the most popular Linux shell. Along the way you'll learn the timeless skills handed down by generations of gray-bearded, mouse-shunning gurus: file navigation, environment configuration, command chaining, pattern matching with regular expressions, and more.

  17. The visual and remote analyzing software for a Linux-based radiation information acquisition system

    International Nuclear Information System (INIS)

    Fan Zhaoyang; Zhang Li; Chen Zhiqiang

    2003-01-01

    A visual and remote analyzing software package for radiation information, which has the merits of universality and credibility, is developed based on the Linux operating system and the TCP/IP network protocol. The software is applied to visual debugging and real-time monitoring of a high-speed radiation information acquisition system, so that safe, direct and timely control can be assured. The paper expounds the design philosophy of the software, which provides a reference for other software with the same purpose for similar systems

  18. ISAC EPICS on Linux: the march of the penguins

    International Nuclear Information System (INIS)

    Richards, J.; Nussbaumer, R.; Rapaz, S.; Waters, G.

    2012-01-01

    The DC linear accelerators of the ISAC radioactive beam facility at TRIUMF do not impose rigorous timing constraints on the control system. Therefore a real-time operating system is not essential for device control. The ISAC Control System is completing a move to the use of the Linux operating system for hosting all EPICS IOCs. The IOC platforms include GE-Fanuc VME based CPUs for control of most optics and diagnostics, rack mounted servers for supervising PLCs, small desktop PCs for GPIB and RS232 instruments, as well as embedded ARM processors controlling CAN-bus devices that provide a suitcase sized control system. This article focuses on the experience of creating a customized Linux distribution for front-end IOC deployment. Rationale, a road-map of the process, and efficiency advantages in personnel training and system management realized by using a single OS will be discussed. (authors)

  19. Quality of service on Linux for the Atlas TDAQ event building network

    International Nuclear Information System (INIS)

    Yasu, Y.; Manabe, A.; Fujii, H.; Watase, Y.; Nagasaka, Y.; Hasegawa, Y.; Shimojima, M.; Nomachi, M.

    2001-01-01

    Congestion control for packets sent on a network is important for DAQ systems that contain an event builder using switching network technologies. Quality of Service (QoS) is a technique for congestion control, and recent Linux releases provide QoS in the kernel to manage network traffic. The authors have analyzed the packet loss and packet distribution for the event builder prototype of the Atlas TDAQ system, using PC/Linux with a Gigabit Ethernet network as the testbed. The results showed that QoS using CBQ and TBF eliminated packet loss on UDP/IP transfers, while best-effort UDP/IP transfers suffered heavy packet loss. The results also showed that the QoS overhead was small. The authors concluded that QoS on Linux performs efficiently for TCP/IP and UDP/IP and will have an important role in the Atlas TDAQ system
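
    As a rough illustration of the kind of traffic shaping evaluated above, the sketch below attaches a token-bucket filter (TBF) qdisc with the iproute2 tc tool (generic Linux QoS usage, not the authors' setup; the device name and rate values are assumptions, and root privileges are required):

        import subprocess

        def shape_interface(dev="eth0", rate="500mbit", burst="10kb", latency="50ms"):
            # TBF limits the send rate so the receiver's buffers are not overrun
            subprocess.run(
                ["tc", "qdisc", "add", "dev", dev, "root",
                 "tbf", "rate", rate, "burst", burst, "latency", latency],
                check=True)

        shape_interface()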

  20. PCI-VME bridge device driver design of a high-performance data acquisition and control system on LINUX

    International Nuclear Information System (INIS)

    Sun Yan; Ye Mei; Zhang Nan; Zhao Jingwei

    2000-01-01

    Data acquisition and control is an important part of nuclear electronics and nuclear detection applications in HEP. The key methods are introduced for designing a Linux device driver for a PCI-VME bridge device, based on the data acquisition and control system realized by the authors

  1. Memanfaatkan Sistem Operasi Linux Untuk Keamanan Data Pada E-commerce

    OpenAIRE

    Isnania

    2012-01-01

    E-commerce is one of the major networks for performing transactions, where security is an issue that must be considered vital for the safety of customer data and transactions. To realize an e-commerce process, a reliable operating system (OS) must be prepared to secure the transaction path, together with a dynamic database back end that provides a catalog of the products to be sold online. For the technology, open source technologies that are all available on Linux can be adopted. On Linux, it is all bundled...

  2. PCI-VME bridge device driver design of a high-performance data acquisition and control system on LINUX

    International Nuclear Information System (INIS)

    Sun Yan; Ye Mei; Zhang Nan; Zhao Jingwei

    2001-01-01

    Data acquisition and control is an important part of nuclear electronics and nuclear detection applications in HEP. The key methods are introduced for designing a Linux device driver for a PCI-VME bridge device, based on the data acquisition and control system realized by the authors

  3. Enforcing the use of API functions in Linux code

    DEFF Research Database (Denmark)

    Lawall, Julia; Muller, Gilles; Palix, Nicolas Jean-Michel

    2009-01-01

    In the Linux kernel source tree, header files typically define many small functions that have a simple behavior but are critical to ensure readability, correctness, and maintainability. We have observed, however, that some Linux code does not use these functions systematically. In this paper, we...... in the header file include/linux/usb.h....

  4. Understanding Collateral Evolution in Linux Device Drivers

    DEFF Research Database (Denmark)

    Padioleau, Yoann; Lawall, Julia Laetitia; Muller, Gilles

    2006-01-01

    no tools to help in this process, collateral evolution is thus time consuming and error prone.In this paper, we present a qualitative and quantitative assessment of collateral evolution in Linux device driver code. We provide a taxonomy of evolutions and collateral evolutions, and use an automated patch......-analysis tool that we have developed to measure the number of evolutions and collateral evolutions that affect device drivers between Linux versions 2.2 and 2.6. In particular, we find that from one version of Linux to the next, collateral evolutions can account for up to 35% of the lines modified in such code....

  5. Communication to Linux users

    CERN Multimedia

    IT Department

    We would like to inform you that the aging “phone” Linux command will stop working: On lxplus on 30 November 2009, On lxbatch on 4 January 2010, and is replaced by the new “phonebook” command, currently available on SLC4 and SLC5 Linux. As the new “phonebook” command has different syntax and output formats from the “phone” command, please update and test all scripts currently using “phone” before the above dates. You can refer to the article published on the IT Service Status Board, under the Service Changes section. Please send any comments to it-dep-phonebook-feedback@cern.ch Best regards, IT-UDS User Support Section

  6. ARTiS, an Asymmetric Real-Time Scheduler for Linux on Multi-Processor Architectures

    OpenAIRE

    Piel , Éric; Marquet , Philippe; Soula , Julien; Osuna , Christophe; Dekeyser , Jean-Luc

    2005-01-01

    The ARTiS system is a real-time extension of the GNU/Linux scheduler dedicated to SMP (Symmetric Multi-Processor) systems. It allows mixing High Performance Computing and real-time workloads. ARTiS exploits the SMP architecture to guarantee the preemption of a processor when the system has to schedule a real-time task. The implementation is available as a modification of the Linux kernel, especially focusing on (but not restricted to) the IA-64 architecture. The basic idea of ARTiS is to assign a selected se...

  7. Real-time Linux operating system for plasma control on FTU--implementation advantages and first experimental results

    International Nuclear Information System (INIS)

    Vitale, V.; Centioli, C.; Iannone, F.; Mazza, G.; Panella, M.; Pangione, L.; Podda, S.; Zaccarian, L.

    2004-01-01

    In this paper, we report on the experiment carried out at the Frascati Tokamak Upgrade (FTU) on porting the plasma control system (PCS) from a LynxOS architecture to an open source Linux real-time architecture. The old LynxOS system was implemented on a VME/PPC604r embedded controller, guaranteeing successful plasma position, density and current control. The new RTAI-Linux operating system has been shown to adapt easily to the VME hardware via a VME/INTELx86 embedded controller. The advantages of the new solution over the old one are not limited to the reduced cost of the new architecture (due to the open-source nature of RTAI) but are also enhanced by the response time of the real-time system which, also through an optimization of the real-time code, has been reduced from 150 μs (LynxOS) to 70 μs (RTAI). The new real-time operating system is also shown to be suitable for new extended control activities, whose implementation is made possible by the reduced duty cycle duration, which leaves room for the real-time implementation of nonlinear control laws. We report here on recent experiments related to the optimization of the coupling between additional radiofrequency power and plasma

  8. Real-time Linux operating system for plasma control on FTU--implementation advantages and first experimental results

    Energy Technology Data Exchange (ETDEWEB)

    Vitale, V. E-mail: vitale@frascati.enea.it; Centioli, C.; Iannone, F.; Mazza, G.; Panella, M.; Pangione, L.; Podda, S.; Zaccarian, L

    2004-06-01

    In this paper, we report on the experiment carried out at the Frascati Tokamak Upgrade (FTU) on porting the plasma control system (PCS) from a LynxOS architecture to an open source Linux real-time architecture. The old LynxOS system was implemented on a VME/PPC604r embedded controller, guaranteeing successful plasma position, density and current control. The new RTAI-Linux operating system has been shown to adapt easily to the VME hardware via a VME/INTELx86 embedded controller. The advantages of the new solution over the old one are not limited to the reduced cost of the new architecture (due to the open-source nature of RTAI) but are also enhanced by the response time of the real-time system which, also through an optimization of the real-time code, has been reduced from 150 μs (LynxOS) to 70 μs (RTAI). The new real-time operating system is also shown to be suitable for new extended control activities, whose implementation is made possible by the reduced duty cycle duration, which leaves room for the real-time implementation of nonlinear control laws. We report here on recent experiments related to the optimization of the coupling between additional radiofrequency power and plasma.

  9. GSI operation software: migration from OpenVMS TO Linux

    International Nuclear Information System (INIS)

    Huhmann, R.; Froehlich, G.; Juelicher, S.; Schaa, V.R.W.

    2012-01-01

    The current operation software at GSI, controlling the linac, beam transfer lines, synchrotron and storage ring, has been developed over a period of more than two decades using OpenVMS on Alpha workstations. The GSI accelerator facilities will serve as an injector chain for the new FAIR accelerator complex, for which a control system is currently being developed. To enable reuse and integration of parts of the distributed GSI software system, in particular the linac operation software, within the FAIR control system, the corresponding software components must be migrated to Linux. Interoperability with FAIR controls applications is achieved by adding a generic middle-ware interface accessible from Java applications. For porting applications to Linux, a set of libraries and tools has been developed covering the necessary OpenVMS system functionality. Currently, core applications and services are already ported or rewritten and functionally tested, but not yet in operational use. This paper presents the current status of the project and concepts for putting the migrated software into operation. (authors)

  10. Design and implementation of a scalable monitor system (IF-monitor) for Linux clusters

    International Nuclear Information System (INIS)

    Zhang Weiyi; Yu Chuansong; Sun Gongxing; Gu Ming

    2003-01-01

    PC clusters have become a cost-effective solution for high performance computing, but usually come only with resource management and job scheduling abilities and, unfortunately, lack powerful monitoring for the PC farms built on them. A farm is therefore like a 'black box' for administrators, who do not know how it runs and where the bottlenecks are. At present there are several PC farms running at IHEP, CAS, such as BES-Farm, LHC-Farm and YBJ-Farm. As the scale of the PC farms grows and the IHEP campus grid computing environment is implemented, it becomes even more difficult to predict how these PC farms perform. As a result, an SNMP-based tool called IF-Monitor, which allows effective monitoring of large clusters, has been designed and developed at IHEP. (authors)
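
    In the spirit of such SNMP-based monitoring (an illustration, not IF-Monitor's code; the node names, community string and the UCD-SNMP load-average OID are assumptions, and net-snmp's snmpget must be installed), a simple poller could look like:

        import subprocess

        LOAD1_OID = ".1.3.6.1.4.1.2021.10.1.3.1"   # 1-minute load average

        def poll_load(host, community="public"):
            out = subprocess.run(
                ["snmpget", "-v", "2c", "-c", community, "-Ovq", host, LOAD1_OID],
                capture_output=True, text=True, check=True)
            return float(out.stdout.strip())

        for node in ["farm001", "farm002"]:        # placeholder node names
            print(node, poll_load(node))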

  11. Galaxy CloudMan: delivering cloud compute clusters.

    Science.gov (United States)

    Afgan, Enis; Baker, Dannon; Coraor, Nate; Chapman, Brad; Nekrutenko, Anton; Taylor, James

    2010-12-21

    Widespread adoption of high-throughput sequencing has greatly increased the scale and sophistication of computational infrastructure needed to perform genomic research. An alternative to building and maintaining local infrastructure is "cloud computing", which, in principle, offers on demand access to flexible computational infrastructure. However, cloud computing resources are not yet suitable for immediate "as is" use by experimental biologists. We present a cloud resource management system that makes it possible for individual researchers to compose and control an arbitrarily sized compute cluster on Amazon's EC2 cloud infrastructure without any informatics requirements. Within this system, an entire suite of biological tools packaged by the NERC Bio-Linux team (http://nebc.nerc.ac.uk/tools/bio-linux) is available for immediate consumption. The provided solution makes it possible, using only a web browser, to create a completely configured compute cluster ready to perform analysis in less than five minutes. Moreover, we provide an automated method for building custom deployments of cloud resources. This approach promotes reproducibility of results and, if desired, allows individuals and labs to add or customize an otherwise available cloud system to better meet their needs. The expected knowledge and associated effort with deploying a compute cluster in the Amazon EC2 cloud is not trivial. The solution presented in this paper eliminates these barriers, making it possible for researchers to deploy exactly the amount of computing power they need, combined with a wealth of existing analysis software, to handle the ongoing data deluge.
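
    For contrast, the sketch below shows what launching cluster nodes on EC2 looks like with generic boto3 calls, the sort of work CloudMan hides behind its web interface (this is not CloudMan's API; the AMI id, key pair and instance type are placeholders):

        import boto3

        ec2 = boto3.resource("ec2", region_name="us-east-1")
        instances = ec2.create_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder node image
            InstanceType="m5.large",           # placeholder instance type
            KeyName="my-keypair",              # placeholder key pair
            MinCount=1,
            MaxCount=4,                        # size of the worker pool
        )
        for inst in instances:
            print(inst.id, inst.state["Name"])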

  12. Explorations of the implementation of a parallel IDW interpolation algorithm in a Linux cluster-based parallel GIS

    Science.gov (United States)

    Huang, Fang; Liu, Dingsheng; Tan, Xicheng; Wang, Jian; Chen, Yunping; He, Binbin

    2011-04-01

    To design and implement an open-source parallel GIS (OP-GIS) based on a Linux cluster, the parallel inverse distance weighting (IDW) interpolation algorithm has been chosen as an example to explore the working model and the principle of algorithm parallel pattern (APP), one of the parallelization patterns for OP-GIS. Based on an analysis of the serial IDW interpolation algorithm of GRASS GIS, this paper has proposed and designed a specific parallel IDW interpolation algorithm, incorporating both single process, multiple data (SPMD) and master/slave (M/S) programming modes. The main steps of the parallel IDW interpolation algorithm are: (1) the master node packages the related information, and then broadcasts it to the slave nodes; (2) each node calculates its assigned data extent along one row using the serial algorithm; (3) the master node gathers the data from all nodes; and (4) iterations continue until all rows have been processed, after which the results are outputted. According to the experiments performed in the course of this work, the parallel IDW interpolation algorithm can attain an efficiency greater than 0.93 compared with similar algorithms, which indicates that the parallel algorithm can greatly reduce processing time and maximize speed and performance.
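
    The four steps above map naturally onto a message-passing sketch like the following (an illustration with mpi4py rather than the authors' C/GRASS code; the toy sample points, grid size and round-robin row assignment are our assumptions):

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # step 1: the master packages the related information and broadcasts it
        meta = None
        if rank == 0:
            pts = np.random.rand(100, 3)            # toy x, y, value samples
            meta = {"pts": pts, "nrows": 64, "ncols": 64}
        meta = comm.bcast(meta, root=0)
        pts, nrows, ncols = meta["pts"], meta["nrows"], meta["ncols"]

        def idw(x, y, pts, power=2.0):
            # weights w_i = 1 / d_i^p; interpolated z = sum(w*z) / sum(w)
            d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
            w = 1.0 / np.maximum(d, 1e-12) ** power
            return np.sum(w * pts[:, 2]) / np.sum(w)

        # step 2: each node interpolates the rows assigned to it
        my_rows = {r: np.array([idw(c / ncols, r / nrows, pts)
                                for c in range(ncols)])
                   for r in range(rank, nrows, size)}

        # steps 3-4: the master gathers all row blocks and assembles the grid
        gathered = comm.gather(my_rows, root=0)
        if rank == 0:
            grid = np.empty((nrows, ncols))
            for part in gathered:
                for r, row in part.items():
                    grid[r] = row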

  13. Fast scalar data buffering interface in Linux 2.6 kernel

    International Nuclear Information System (INIS)

    Homs, A.

    2012-01-01

    Key instrumentation devices like counter/timers, analog-to-digital converters and encoders provide scalar data input. Many of them allow fast acquisitions, but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non real time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the /sys virtual file system and hot-plug device support. (author)

  14. Perbandingan proxy pada linux dan windows untuk mempercepat browsing website

    Directory of Open Access Journals (Sweden)

    Dafwen Toresa

    2017-05-01

    At this time, very many organizations, in education, government, and private companies, try to limit their users' access to the internet because the available bandwidth begins to feel slow once many users are browsing the internet. Speeding up browsing access is a major concern, addressed by exploiting proxy server technology. The use of a proxy server must take into account the operating system on the server, and it is not yet known on which operating system the tools used give their best performance. It is therefore necessary to analyze the performance of a proxy server on different operating systems, namely the Linux operating system with the Squid tool and the Windows operating system with the WinRoute tool. This study was conducted to compare the browsing speed from the user (client) computers. The browser used on the client computers was Mozilla Firefox. The study used two client computers, each performing five tests of accessing/browsing the target web sites through the proxy server. From the tests performed, it is concluded that a proxy server on the Linux operating system with the Squid tool gives faster browsing from clients, using the same web browser on different client computers, than a proxy server on the Windows operating system with the WinRoute tool. Keywords: Proxy, Bandwidth, Browsing, Squid, Winroute

  15. Performance comparison analysis library communication cluster system using merge sort

    Science.gov (United States)

    Wulandari, D. A. R.; Ramadhan, M. E.

    2018-04-01

    Computing began with single processors; to increase computing speed, the use of multi-processors was introduced. This second paradigm is known as parallel computing, exemplified by clusters. A cluster must have a communication protocol for processing, one of which is the Message Passing Interface (MPI). MPI has many library implementations, among them OpenMPI and MPICH2. The performance of a cluster machine depends on the match between the performance characteristics of the communication library and the characteristics of the problem, so this study aims to analyze the comparative performance of these libraries in handling parallel computing processes. The case studies in this research are MPICH2 and OpenMPI, executing a sorting problem (merge sort) to probe the performance of the cluster system. The research method is to implement OpenMPI and MPICH2 on a Linux-based cluster of five virtual computers and then analyze the performance of the system under different test scenarios using three parameters: execution time, speedup and efficiency. The results of this study show that with each increase in data size, OpenMPI and MPICH2 have average speedup and efficiency that tend to increase, but decrease at large data sizes. An increased data size does not necessarily increase speedup and efficiency, only execution time, for example at a data size of 100000. The execution times of the two libraries differ; for example, at a data size of 1000 the average execution time with MPICH2 is 0.009721 while with OpenMPI it is 0.003895. OpenMPI can better adapt to communication needs.
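
    A minimal sketch of the benchmarked pattern (our illustration, not the authors' code; the data size and the scatter/gather layout are assumptions) could time a cluster merge sort like this:

        import heapq
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        data = np.random.rand(100000) if rank == 0 else None
        t0 = MPI.Wtime()
        chunk = comm.scatter(np.array_split(data, size) if rank == 0 else None,
                             root=0)
        chunk.sort()                            # local sort on every rank
        parts = comm.gather(chunk, root=0)
        if rank == 0:
            merged = list(heapq.merge(*parts))  # k-way merge of sorted runs
            t = MPI.Wtime() - t0
            # speedup = T_serial / T_parallel; efficiency = speedup / size
            print(f"sorted {len(merged)} items in {t:.4f} s on {size} ranks")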

  16. A Quantitative Analysis of Variability Warnings in Linux

    DEFF Research Database (Denmark)

    Melo, Jean; Flesborg, Elvis; Brabrand, Claus

    2015-01-01

    In order to get insight into challenges with quality in highly-configurable software, we analyze one of the largest open source projects, the Linux kernel, and quantify basic properties of configuration-related warnings. We automatically analyze more than 20 thousand valid and distinct random...... configurations, in a computation that lasted more than a month. We count and classify a total of 400,000 warnings to get insight into the distribution of warning types and the location of the warnings. We run on both a stable and an unstable version of the Linux kernel. The results show that Linux contains

  17. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    Science.gov (United States)

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  18. Hadoop cluster deployment

    CERN Document Server

    Zburivsky, Danil

    2013-01-01

    This book is a step-by-step tutorial filled with practical examples which will show you how to build and manage a Hadoop cluster along with its intricacies.This book is ideal for database administrators, data engineers, and system administrators, and it will act as an invaluable reference if you are planning to use the Hadoop platform in your organization. It is expected that you have basic Linux skills since all the examples in this book use this operating system. It is also useful if you have access to test hardware or virtual machines to be able to follow the examples in the book.

  19. FLUKA-LIVE-an embedded framework, for enabling a computer to execute FLUKA under the control of a Linux OS

    International Nuclear Information System (INIS)

    Cohen, A.; Battistoni, G.; Mark, S.

    2008-01-01

    This paper describes a Linux-based OS framework for integrating the FLUKA Monte Carlo software (currently distributed only for Linux) into a CD-ROM, resulting in a complete environment for a scientist to edit, link and run FLUKA routines, without the need to install a UNIX/Linux operating system. The building process includes generating from scratch a complete operating system distribution which will, when operative, build all necessary components for successful operation of the FLUKA software and libraries. Various source packages, as well as the latest kernel sources, are freely available from the Internet. These sources are used to create a functioning Linux system that integrates several core utilities in line with the main idea: enabling FLUKA to act as if it were running under a popular Linux distribution or even a proprietary UNIX workstation. On boot-up, a file system is created and the contents of the CD are uncompressed and completely loaded into RAM, after which the presence of the CD is no longer necessary and it can be removed for use on a second computer. The system can operate on any i386 PC as long as it can boot from a CD

  20. Research and implementation of intelligent gateway driver layer based on Linux bus

    Directory of Open Access Journals (Sweden)

    ZHANG Jian

    2016-10-01

    Currently, in the field of the smart home, no organization has yet proposed a unified protocol standard. The fact that different vendors' devices have different communication modes and protocol standards increases the complexity and limitations of heterogeneous gateway software framework design. In this paper, a series of interfaces provided by the Linux kernel is used, and a virtual bus is registered under Linux. The physical device drivers are able to connect to the virtual bus. The detailed designs of the communication protocols are placed in the underlying adapters, making the integration of heterogeneous networks more natural. At the same time, designing the intelligent gateway system driver layer on a Linux bus makes the application layer more unified and its logic clearer, and also makes hardware access to the network more convenient and distinct.

  1. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    Directory of Open Access Journals (Sweden)

    Robert C. Thomson

    2009-01-01

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  2. Getting Priorities Straight: Improving Linux Support for Database I/O

    DEFF Research Database (Denmark)

    Hall, Christoffer; Bonnet, Philippe

    2005-01-01

    The Linux 2.6 kernel supports asynchronous I/O as a result of propositions from the database industry. This is a positive evolution, but is it a panacea? In the context of the Badger project, a collaboration between MySQL AB and University of Copenhagen, we evaluate how MySQL/InnoDB can best take advantage of Linux asynchronous I/O and how Linux can help MySQL/InnoDB best take advantage of the underlying I/O bandwidth. This is a crucial problem for the increasing number of MySQL servers deployed for very large database applications. In this paper, we first show that the conservative I/O submission policy used by InnoDB (as well as Oracle 9.2) leads to an under-utilization of the available I/O bandwidth. We then show that introducing prioritized asynchronous I/O in Linux will allow MySQL/InnoDB and the other Linux databases to fully utilize the available I/O bandwidth using a more aggressive I/O submission policy.

  3. Čiščenje operacijskega sistema GNU/Linux

    OpenAIRE

    OBLAK, DENIS

    2018-01-01

    The goal of this thesis is the development of an application that helps clean the Linux operating system and works on most distributions. The theoretical part discusses the cleaning of the Linux operating system, which frees up disk space and enables better system performance. Cleaning techniques and existing tools for the Linux operating system are systematically reviewed and presented theoretically. Cleaning of the Windows and MacOS operating systems is presented next. At the same time, ...

  4. Development of fibre channel disk clusters. Final report for period September 2, 1998 - March 17, 1999

    International Nuclear Information System (INIS)

    Dunn, W.L.; Justice, J.R.; Stockert, T.D.; Barker, A.R.; Yacout, A.M.

    1999-01-01

    This report documents the accomplishments of a Phase I project whose purpose was to demonstrate feasibility of developing inexpensive and fast data storage using multi-host Fibre Channel disk clusters. In Phase I, a working file system called ZFS was developed and tested. The ZFS approach was designed to be suited for high energy physics applications, but is general and flexible enough to be useful for other high-volume applications. The ZFS approach, which borrows from the networking concept of cut-through routing, uses Linux boxes and disk clusters in a Fibre Channel--Arbitrated Loop architecture. In ZFS, file locking and other meta-data level operations are carried out over the primary data network, after which all data are sent directly over a Fibre Channel between the workstation and the disk cluster. No intermediate server is required. Substantially higher throughputs than in traditional networked disk architectures have been demonstrated. The ZFS architecture is described and tests of the first implementation of ZFS at Fermilab are discussed. The current system is implemented for Linux and is being optimized for Fermilab's needs, but extensions to other operating systems and other data-intensive applications are clearly foreseen

  5. Linux toys II 9 Cool New Projects for Home, Office, and Entertainment

    CERN Document Server

    Negus, Christopher

    2006-01-01

    Builds on the success of the original Linux Toys (0-7645-2508-5) and adds projects using different Linux distributions. All-new toys in this edition include a car computer system with built-in entertainment and navigation features, bootable movies, a home surveillance monitor, a LEGO Mindstorms robot, and a weather mapping station. Introduces small business opportunities with an Internet radio station and Internet café projects. Companion Web site features specialized hardware drivers, software interfaces, music and game software, project descriptions, and discussion forums. Includes a CD-ROM with scr

  6. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    Science.gov (United States)

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.
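
    The benefit of a tuned BLAS such as ATLAS can be seen even in a toy comparison like the one below (a machine-dependent illustration, unrelated to the GAUSSIAN 98 code itself): the same matrix product runs far faster through the linked BLAS than through naive loops.

        import time
        import numpy as np

        n = 400
        a, b = np.random.rand(n, n), np.random.rand(n, n)

        t0 = time.perf_counter()
        c_blas = a @ b                      # dispatched to the linked BLAS
        t_blas = time.perf_counter() - t0

        t0 = time.perf_counter()
        c_naive = np.zeros((n, n))
        for i in range(n):                  # naive loops: no blocking, no SIMD
            for k in range(n):
                c_naive[i] += a[i, k] * b[k]
        t_naive = time.perf_counter() - t0

        print(f"BLAS: {t_blas:.4f} s   naive: {t_naive:.4f} s")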

  7. Implementing Discretionary Access Control with Time Character in Linux and Performance Analysis

    Institute of Scientific and Technical Information of China (English)

    TAN Liang; ZHOU Ming-Tian

    2006-01-01

    DAC (Discretionary Access Control) is access control based on ownership relations between subject and object; the subject can discretionarily decide who, by what methods, can access its own objects. In this paper, the system time is regarded as a basic security element. DAC_T (Discretionary Access Control Policy with Time Character) is presented and formalized. DAC_T resolves that the subject can discretionarily decide who, and when, can access its own objects. DAC_T is then implemented on Linux based on GFAC (General Framework for Access Control), and the algorithm is put forward. Finally, a performance analysis of DAC_T_Linux is carried out. It is proved that DAC_T_Linux not only realizes time constraints between subject and object but also remains acceptable even though its performance is somewhat decreased.
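
    A toy illustration of the time-character idea (purely our sketch, not the paper's GFAC-based kernel implementation; the subjects, objects and time windows are invented):

        from datetime import datetime, time

        # owner-granted rule: (subject, object) -> permitted time window
        acl = {("alice", "report.txt"): (time(9, 0), time(17, 0))}

        def allowed(subject, obj, when=None):
            window = acl.get((subject, obj))
            if window is None:
                return False
            now = (when or datetime.now()).time()
            return window[0] <= now <= window[1]

        print(allowed("alice", "report.txt"))   # True only during 09:00-17:00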

  8. Scientific Cluster Deployment and Recovery - Using puppet to simplify cluster management

    Science.gov (United States)

    Hendrix, Val; Benjamin, Doug; Yao, Yushu

    2012-12-01

    Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time-consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for and knowledge of the administration of such clusters can be strained by such maintenance tasks. This current work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer would then define a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager would acquire the resources (machines, networking), enter the cluster input parameters (hostnames, IP addresses) and automatically generate deployment scripts used by puppet to configure it to act as a designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine. The cluster definition produced in our CDRP is an integral part of automating cluster deployment.
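    The CDRP's cluster definition boils down to a role-to-service mapping from which per-host configuration is generated. A hypothetical sketch of that step (the role names, service names, host names and output format are invented for illustration; the real process feeds Puppet):

    ```python
    # Hypothetical cluster definition: roles -> Puppet-managed services
    roles = {
        "head":   ["nfs_server", "batch_master", "monitoring"],
        "worker": ["nfs_client", "batch_worker"],
    }
    hosts = {"dac-head": ("10.0.0.1", "head"),
             "dac-w01":  ("10.0.0.11", "worker")}

    # Emit the per-node stanzas a Puppet site manifest would consume
    for name, (ip, role) in hosts.items():
        print(f"node '{name}' {{  # {ip}")
        for svc in roles[role]:
            print(f"  include {svc}")
        print("}")
    ```

    Because the generated stanzas are pure functions of the cluster definition, re-running the generator against a replacement machine reproduces the failed node's configuration, which is exactly the recovery property the abstract describes.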

  9. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    Science.gov (United States)

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  10. Development of a laboratory model of SSSC using RTAI on Linux ...

    Indian Academy of Sciences (India)

    ... capability to the Linux General Purpose Operating System (GPOS) over and above the capabilities of non ... Introduction. Power transfer ... of a controller prototyping environment is Matlab/Simulink/Real-time Workshop software, which can be ...

  11. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    Science.gov (United States)

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for a portable, platform-independent Bioinformatics workbench. However, most existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays an advanced customizable configuration of Fedora, with data persistency accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  12. Two-factor Authorization in Linux

    Directory of Open Access Journals (Sweden)

    L. S. Nosov

    2010-03-01

    Full Text Available The implementation of identification and authentication in the Linux OS, based on an external USB device and on a PAM module, is considered, using as an example the answering of a control question (a riddle).

  13. LPIC-2 Linux Professional Institute Certification Study Guide Exams 201 and 202

    CERN Document Server

    Smith, Roderick W

    2011-01-01

    The first book to cover the LPIC-2 certification. Linux allows developers to update source code freely, making it an excellent, low-cost, secure alternative to other, more expensive operating systems. It is for this reason that the demand for IT professionals to have an LPI certification is so strong. This study guide provides unparalleled coverage of the LPIC-2 objectives for exams 201 and 202. Clear and concise coverage examines all Linux administration topics, while practical, real-world examples enhance your learning process. On the CD, you'll find the Sybex Test Engine, electronic flash

  14. Scientific Cluster Deployment and Recovery – Using puppet to simplify cluster management

    International Nuclear Information System (INIS)

    Hendrix, Val; Yao Yushu; Benjamin, Doug

    2012-01-01

    Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time-consuming task requiring the assistance of Linux system administrators, network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for and knowledge of the administration of such clusters can be strained by such maintenance tasks. This current work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer would then define a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager would acquire the resources (machines, networking), enter the cluster input parameters (hostnames, IP addresses) and automatically generate deployment scripts used by puppet to configure it to act as a designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine. The cluster definition produced in our CDRP is an integral part of automating cluster deployment.

  15. Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring.

    Science.gov (United States)

    Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni

    2015-08-19

    This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed around a Broadcom BCM2835 System on a Chip (SoC) running a Debian-based Linux operating system, which allows for the construction of a complete monitoring system offering multiple possibilities for storage, data processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for image-based monitoring of the island of Tenerife and of ground deformation on the island of El Hierro.

  16. LPIC-1 Linux Professional Institute certification study guide exam 101-400 and exam 102-400

    CERN Document Server

    Bresnahan, Christine

    2015-01-01

    Thorough LPIC-1 exam prep, with complete coverage and bonus study tools. The LPIC-1 Study Guide is your comprehensive source for the popular Linux Professional Institute Certification Level 1 exam, fully updated to reflect the changes to the latest version of the exam. With 100% coverage of objectives for both LPI 101 and LPI 102, this book provides clear and concise information on all Linux administration topics and practical examples drawn from real-world experience. Authoritative coverage of key exam topics includes GNU and UNIX commands, devices, file systems, file system hierarchy, user interf

  17. Kali Linux social engineering

    CERN Document Server

    Singh, Rahul

    2013-01-01

    This book is a practical, hands-on guide to learning and performing SET attacks with multiple examples.Kali Linux Social Engineering is for penetration testers who want to use BackTrack in order to test for social engineering vulnerabilities or for those who wish to master the art of social engineering attacks.

  18. Assessment of VME-PCI Interfaces with Linux Drivers

    CERN Document Server

    Schossmater, K; CERN. Geneva

    2000-01-01

    This report summarises the performance measurements and experiences recorded while testing three commercial VME-PCI interfaces with their Linux drivers. The interfaces are manufactured by Wiener, National Instruments and SBS Bit 3. The C programs developed read/write VME memory in different transfer modes via these interfaces. A dual-processor HP Kayak XA-s workstation was used, running the CERN-certified Red Hat Linux 6.1.

  19. Communication Software Performance for Linux Clusters with Mesh Connections

    Energy Technology Data Exchange (ETDEWEB)

    Jie Chen; William Watson

    2003-09-01

    Recent progress in copper-based commodity Gigabit Ethernet interconnects enables constructing clusters that achieve extremely high I/O bandwidth at low cost with mesh connections. However, the TCP/IP protocol stack cannot match the improved performance of Gigabit Ethernet networks, especially in the case of multiple interconnects on a single host. In this paper, we evaluate and compare the performance characteristics of TCP/IP and of M-VIA software, which is an implementation of VIA. In particular, we focus on the performance of the software systems for a mesh communication architecture and demonstrate the feasibility of using multiple Gigabit Ethernet cards on one host to achieve aggregated bandwidth and latency that are not only better than what TCP provides but also compare favorably to some special-purpose high-speed networks. In addition, the implementation of a new M-VIA driver for one type of Gigabit Ethernet card is discussed.
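    The aggregation trick measured here — spreading one logical transfer across several Gigabit Ethernet links on the same host — can be imitated at user level by binding one socket per NIC and striping chunks across them. A toy sketch only (the addresses are placeholders, per-source routing is assumed to be configured, and real M-VIA operates below the TCP layer):

    ```python
    import socket

    LOCAL_IPS = ["192.168.1.10", "192.168.2.10"]   # one address per NIC (placeholders)
    PEER = ("192.168.1.20", 5000)                  # server accepting several connections

    socks = []
    for ip in LOCAL_IPS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((ip, 0))        # pin the source address; assumes routes exist per NIC
        s.connect(PEER)
        socks.append(s)

    def striped_send(data, chunk=64 * 1024):
        # Round-robin fixed-size chunks over the links to aggregate bandwidth
        for i in range(0, len(data), chunk):
            socks[(i // chunk) % len(socks)].sendall(data[i:i + chunk])
    ```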

  20. Mastering Kali Linux for advanced penetration testing

    CERN Document Server

    Beggs, Robert W

    2014-01-01

    This book provides an overview of the kill chain approach to penetration testing, and then focuses on using Kali Linux to provide examples of how this methodology is applied in the real world. After describing the underlying concepts, step-by-step examples are provided that use selected tools to demonstrate the techniques. If you are an IT professional or a security consultant who wants to maximize the success of your network testing using some of the advanced features of Kali Linux, then this book is for you. This book will teach you how to become an expert in the pre-engagement, management,

  1. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    Science.gov (United States)

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly

  2. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    Science.gov (United States)

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the

  3. The Case for A Hierarchal System Model for Linux Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Seager, M; Gorda, B

    2009-06-05

    The computer industry today is no longer driven, as it was in the 40s, 50s and 60s, by high-performance computing requirements. Rather, HPC systems, especially Leadership-class systems, sit on top of a pyramid investment model. Figure 1 shows a representative pyramid investment model for systems hardware. At the base of the pyramid is the huge investment (on the order of tens of billions of US dollars per year) in semiconductor fabrication and process technologies. These costs, which approximately double with every generation, are funded by investments from multiple markets: enterprise, desktops, games, embedded and specialized devices. Over and above these base technology investments are investments for critical technology elements such as microprocessor, chipset and memory ASIC components. Investments for these components are spread across the same markets as the base semiconductor process investments. These second-tier investments are approximately half the size of the lower level of the pyramid. The next technology investment layer up, tier 3, is more focused on scalable computing systems such as those needed for HPC and other markets. These tier-3 technology elements include networking (SAN, WAN and LAN), interconnects and large scalable SMP designs. Above these, in tier 4, are the relatively small investments necessary to build very large, scalable high-end or Leadership-class systems. Primary among these are the specialized network designs of vertically integrated systems, etc.

  4. Measuring performances of linux hyper visors

    International Nuclear Information System (INIS)

    Chierici, A.; Veraldi, R.; Salomoni, D.

    2009-01-01

    Virtualisation is a now proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way people make computations and implement services. Recently, all major software producers (e.g., Microsoft and Red Hat) developed or acquired virtualisation technologies. Our institute (http://www.CNAF.INFN.it) is a Tier-1 for experiments carried out at the Large Hadron Collider at CERN (http://lhc.web.CERN.ch/lhc/) and is experiencing several benefits from virtualisation technologies, like improving fault tolerance, providing efficient hardware resource usage and increasing security. Currently, the virtualisation solution we adopted is Xen, which is well supported by the Scientific Linux distribution, widely used by the High-Energy Physics (HEP) community. Since Scientific Linux is based on Red Hat ES, we felt the need to investigate performance and usability differences with the new KVM technology, recently acquired by Red Hat. The case study of this work is the Tier-2 site for the LHCb experiment hosted at our institute; all major grid elements for this Tier-2 run smoothly on Xen virtual machines. We investigate the impact on performance and stability that a migration to KVM would entail on the Tier-2 site, as well as the effort required by a system administrator to deploy the migration.

  5. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  6. A package of Linux scripts for the parallelization of Monte Carlo simulations

    Science.gov (United States)

    Badal, Andreu; Sempau, Josep

    2006-09-01

    also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2. Title of program: seedsMLCG. Catalogue identifier: ADYE_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland. Computer for which the program is designed and others on which it is operable: Any computer with a FORTRAN compiler. Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP). Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows). Programming language used: FORTRAN 77. No. of bits in a word: 32. Memory required to execute with typical data: 500 kilobytes. No. of lines in distributed program, including test data, etc.: 492. No. of bytes in distributed program, including test data, etc.: 5582. Distribution format: tar.gz. Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo
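    What seedsMLCG computes — initialization values giving disjoint, consecutive subsequences of one MLCG stream — amounts to jumping each generator ahead by a fixed stride: for s_{n+1} = a·s_n mod m, the jump is s_{n+k} = a^k·s_n mod m, evaluated by fast modular exponentiation. A sketch using the two multiplicative generators that make up RANECU (the starting seeds and the stride are arbitrary examples, not values from the paper):

    ```python
    def mlcg_skip_seeds(seed, a, m, stride, n_jobs):
        """Seeds for n_jobs disjoint, consecutive MLCG subsequences:
        job j starts at stream element j*stride, i.e. seed * a**(j*stride) mod m."""
        jump = pow(a, stride, m)        # a^stride mod m, fast modular exponentiation
        seeds, s = [], seed % m
        for _ in range(n_jobs):
            seeds.append(s)
            s = (s * jump) % m          # hop to the start of the next subsequence
        return seeds

    # RANECU combines two MLCGs (L'Ecuyer); give each of 4 jobs 10**12 numbers
    print(mlcg_skip_seeds(12345, 40014, 2147483563, 10**12, 4))
    print(mlcg_skip_seeds(67890, 40692, 2147483399, 10**12, 4))
    ```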

  7. Real-time data acquisition and feedback control using Linux Intel computers

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS
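    The PCS structure described — sample diagnostics at a fixed period, run control algorithms, write actuator commands — is the classic periodic control loop. A schematic sketch only (the sampling and actuator functions, the cycle time and the gain are placeholders, not DIII-D values):

    ```python
    import time

    PERIOD = 0.0001                  # illustrative 100-microsecond control cycle

    def sample_diagnostics():        # placeholder for digitizer reads
        return {"plasma_current": 1.0e6}

    def send_commands(cmds):         # placeholder for actuator outputs
        pass

    next_tick = time.monotonic()
    for _ in range(100_000):
        signals = sample_diagnostics()
        # control algorithm: compute actuator commands from the sampled signals
        cmds = {"coil_voltage": 0.01 * (1.2e6 - signals["plasma_current"])}
        send_commands(cmds)
        next_tick += PERIOD
        time.sleep(max(0.0, next_tick - time.monotonic()))   # hold the fixed period
    ```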

  8. Hard Real-Time Linux for Off-The-Shelf Multicore Architectures

    OpenAIRE

    Radder, Dirk

    2015-01-01

    This document describes the research results that were obtained from the development of a real-time extension for the Linux operating system. The paper describes a full extension of the kernel, which enables hard real-time performance on a 64-bit x86 architecture. In the first part of this study, real-time systems are categorized and concepts of real-time operating systems are introduced to the reader. In addition, numerous well-known real-time operating systems are considered. QNX Neutrino, ...

  9. Kali Linux cookbook

    CERN Document Server

    Pritchett, Willie

    2013-01-01

    A practical, cookbook-style guide with numerous chapters and recipes explaining penetration testing. The cookbook-style recipes allow you to go directly to your topic of interest if you are an expert using this book as a reference, or to follow topics throughout a chapter to gain in-depth knowledge if you are a beginner. This book is ideal for anyone who wants to get up to speed with Kali Linux. It would also be an ideal book to use as a reference for seasoned penetration testers.

  10. Developing and Benchmarking Native Linux Applications on Android

    Science.gov (United States)

    Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin

    Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open source platform Android, which was presented by the Open Handset Alliance (OHA), hosting members like Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, while third parties are currently expected to develop only Java applications.

  11. The Linux farm at the RCF

    International Nuclear Information System (INIS)

    Chan, A.W.; Hogue, R.W.; Throwe, T.G.; Yanuklis, T.A.

    2001-01-01

    A description of the Linux Farm at the RHIC Computing Facility (RCF) is presented. The RCF is a dedicated data processing facility for RHIC, which became operational in the summer of 2000 at Brookhaven National Laboratory

  12. Commodity Cluster Computing for Remote Sensing Applications using Red Hat LINUX

    Science.gov (United States)

    Dorband, John

    2003-01-01

    Since 1994, we have been doing research at Goddard Space Flight Center on implementing a wide variety of applications on commodity-based computing clusters. This talk is about these clusters and how they are used in these applications, including ones for remote sensing.

  13. Diversifying the Department of Defense Network Enterprise with Linux

    Science.gov (United States)

    2010-03-01

    protection of DoD infrastructure. In the competitive marketplace, strategy is defined as a firm's theory on how it gains high levels of performance... practice of discontinuing support to legacy systems. Microsoft also needs to convey that it is in the user's best interest to upgrade the operating... stockholders, Microsoft acknowledged recent notable competitors in the marketplace threatening their long-time monopolistic enterprise. Linux (a popular

  14. Setup of the development tools for a small-sized controller built in a robot using Linux

    International Nuclear Information System (INIS)

    Lee, Jae Cheol; Jun, Hyeong Seop; Choi, Yu Rak; Kim, Jae Hee

    2004-03-01

    This report explains how to set up practical development tools for robot control software programming. Well-constituted development tools make a programmer more productive and a program more reliable. We ported a proven operating system to the target board (our robot's controller). We selected open source Linux as the operating system, because it is free, reliable, flexible and widely used. First, we set up the host computer with Linux and installed a cross compiler on it. Then we ported Linux to the target board, connected it to the host computer over Ethernet, and set up NFS on both the host and the target, so the target board can use the host computer's hard disk as its own. Next, we installed gdb server on the target board, and gdb client and DDD on the host computer, for debugging the target program on the host in a graphical environment. Finally, we patched the target board's Linux kernel with one that has real-time capability. In this way, we obtain a real-time embedded hardware controller for a robot with convenient software development tools. All source programs are edited and compiled on the host computer, and executable code lives in the NFS-mounted directory that can be accessed from the target board's directory. We can execute and debug the code by logging into the target through Ethernet or the serial line.

  15. Real-time head movement system and embedded Linux implementation for the control of power wheelchairs.

    Science.gov (United States)

    Nguyen, H T; King, L M; Knight, G

    2004-01-01

    Mobility has become very important for our quality of life. A loss of mobility due to an injury is usually accompanied by a loss of self-confidence. For many individuals, independent mobility is an important aspect of self-esteem. Head movement is a natural form of pointing and can be used to directly replace the joystick whilst still allowing for similar control. Through the use of embedded LINUX and artificial intelligence, a hands-free head movement wheelchair controller has been designed and implemented successfully. This system provides for severely disabled users an effective power wheelchair control method with improved posture, ease of use and attractiveness.

  16. Convolutional Neural Network on Embedded Linux System-on-Chip: A Methodology and Performance Benchmark

    Science.gov (United States)

    2016-05-01

    ...two NVIDIA® GTX580 GPUs [3]. Therefore, for this initial work, we decided to concentrate on small networks and small datasets until the methods are

  17. Computation cluster for Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S. [Dep. Of Nuclear Physics and Technology, Faculty of Electrical Engineering and Information, Technology, Slovak Technical University, Ilkovicova 3, 81219 Bratislava (Slovakia)

    2010-07-01

    Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, built from Intel Core Duo and Intel Core Quad computers, were set up at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was performed at both the hardware and software level. Hardware cluster parameters, such as memory size, network speed, CPU speed, number of processors per computation, and number of processors in one computer, were tested to shorten the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used to find the weighting functions of the neutron ex-core detectors of the VVER-440. (authors)
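    The scaling test described — the same Monte Carlo job timed while varying the number of processors — is easy to reproduce in miniature on one node, with a toy pi estimate standing in for MCNP (a sketch, not the authors' benchmark):

    ```python
    import random, time
    from multiprocessing import Pool

    def hits(n):
        """Count samples landing inside the unit quarter-circle."""
        rng = random.Random()
        return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

    if __name__ == "__main__":
        samples = 4_000_000
        for procs in (1, 2, 4):
            t0 = time.perf_counter()
            with Pool(procs) as pool:
                total = sum(pool.map(hits, [samples // procs] * procs))
            print(f"{procs} procs: {time.perf_counter() - t0:.2f} s, "
                  f"pi ~ {4 * total / samples:.4f}")
        # the time-versus-procs curve is the speedup the cluster study measures
    ```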

  18. Computation cluster for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Petriska, M.; Vitazek, K.; Farkas, G.; Stacho, M.; Michalek, S.

    2010-01-01

    Two computation clusters based on the Rocks Clusters 5.1 Linux distribution, built from Intel Core Duo and Intel Core Quad computers, were set up at the Department of Nuclear Physics and Technology. The clusters were used for Monte Carlo calculations, specifically for MCNP calculations applied in nuclear reactor core simulations. Optimization for computation speed was performed at both the hardware and software level. Hardware cluster parameters, such as memory size, network speed, CPU speed, number of processors per computation, and number of processors in one computer, were tested to shorten the calculation time. For software optimization, different Fortran compilers, MPI implementations and CPU multi-core libraries were tested. Finally, the computer cluster was used to find the weighting functions of the neutron ex-core detectors of the VVER-440. (authors)

  19. Malware Memory Analysis of the Jynx2 Linux Rootkit (Part 1): Investigating a Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2014-10-01

    ...analysis techniques is outside the scope of this work, as it requires a comprehensive study of operating system internals and software reverse engineering... 2 Peripheral concerns. 2.1 Why examine Linux memory images or make them available? After extensively searching the available public

  20. Free and open source software at CERN: integration of drivers in the Linux kernel

    International Nuclear Information System (INIS)

    Gonzalez Cobas, J.D.; Iglesias Gonsalvez, S.; Howard Lewis, J.; Serrano, J.; Vanga, M.; Cota, E.G.; Rubini, A.; Vaga, F.

    2012-01-01

    Most device drivers written for accelerator control systems suffer from a severe lack of portability due to the ad hoc nature of the code, often embodied with intimate knowledge of the particular machine it is deployed in. In this paper we challenge this practice by arguing for the opposite approach: development in the open, which in our case translates into the integration of our code within the Linux kernel. We make our case by describing the upstream merge effort of the tsi148 driver, a critical (and complex) component of the control system. The encouraging results from this effort have then led us to follow the same approach with two more ambitious projects, currently in the works: Linux support for the upcoming FMC boards and a new I/O subsystem. (authors)

  1. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    Science.gov (United States)

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life, and vendor costs were increasing while ISS budgets were becoming severely constrained. It therefore became necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration involves changes to the hardware architecture, including networks, data storage, and highly available resources; this paper concentrates on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms, with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper addresses the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, the need for a smooth transition while maintaining operations, and the importance of beginning with POSIX-compliant code. It focuses on the development approach, explaining the software lifecycle, and covers other aspects of development, including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. The paper also addresses the testing approach, covering all levels of testing: development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with the Unix servers, are included.

  2. Kali Linux wireless penetration testing beginner's guide

    CERN Document Server

    Ramachandran, Vivek

    2015-01-01

    If you are a security professional, pentester, or anyone interested in getting to grips with wireless penetration testing, this is the book for you. Some familiarity with Kali Linux and wireless concepts is beneficial.

  3. An integrated genetic data environment (GDE)-based LINUX interface for analysis of HIV-1 and other microbial sequences.

    Science.gov (United States)

    De Oliveira, T; Miller, R; Tarin, M; Cassol, S

    2003-01-01

    Sequence databases encode a wealth of information needed to develop improved vaccination and treatment strategies for the control of HIV and other important pathogens. To facilitate effective utilization of these datasets, we developed a user-friendly GDE-based LINUX interface that reduces input/output file formatting. GDE was adapted to the Linux operating system, bioinformatics tools were integrated with microbe-specific databases, and up-to-date GDE menus were developed for several clinically important viral, bacterial and parasitic genomes. Each microbial interface was designed for local access and contains Genbank, BLAST-formatted and phylogenetic databases. GDE-Linux is available for research purposes by direct application to the corresponding author. Application-specific menus and support files can be downloaded from (http://www.bioafrica.net).

  4. Enabling rootless Linux containers in multi-user environments. The udocker tool

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, Jorge; David, Mario; Alves, Luis; Martins, João; Pina, João [Laboratorio de Instrumentacao e Fisica Experimental de Particulas (LIP), Lisboa (Portugal); Bagnaschi, Emanuele [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Campos, Isabel; Lopez-Garcia, Alvaro; Orviz, Pablo [IFCA, Consejo Superior de Investigaciones Cientificas-CSIC, Santander (Spain)

    2017-11-15

    Containers are increasingly used as a means to distribute and run Linux services and applications. In this paper we describe the architectural design and implementation of udocker, a tool to execute Linux containers in user mode, and we describe a few practical applications for a range of scientific codes meeting different requirements: from single-core execution to MPI parallel execution and execution on GPGPUs.

  5. Enabling rootless Linux containers in multi-user environments. The udocker tool

    International Nuclear Information System (INIS)

    Gomes, Jorge; David, Mario; Alves, Luis; Martins, João; Pina, João; Bagnaschi, Emanuele; Campos, Isabel; Lopez-Garcia, Alvaro; Orviz, Pablo

    2017-11-01

    Containers are increasingly used as a means to distribute and run Linux services and applications. In this paper we describe the architectural design and implementation of udocker, a tool to execute Linux containers in user mode, and we describe a few practical applications for a range of scientific codes meeting different requirements: from single-core execution to MPI parallel execution and execution on GPGPUs.

  6. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    Science.gov (United States)

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
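    For reference (the notation here is added, not the paper's): Zipf's law is usually stated as a power-law relation between an item's rank and its size, equivalent to a Pareto tail in the size distribution, with proportional (Gibrat-type) growth as the conjectured stochastic mechanism the authors test:

    ```latex
    S_{(r)} \propto r^{-\alpha}, \qquad \alpha \approx 1
    \qquad\Longleftrightarrow\qquad
    \Pr(S > s) \propto s^{-1/\alpha}
    ```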

  7. PENGUKURAN KINERJA ROUND-ROBIN SCHEDULER UNTUK LINUX VIRTUAL SERVER PADA KASUS WEB SERVER

    Directory of Open Access Journals (Sweden)

    Royyana Muslim Ijtihadie

    2005-07-01

    Full Text Available With the growing number of Internet users and the adoption of the Internet in everyday life, traffic on the Internet has increased significantly. Accordingly, the workload of the servers providing services on the Internet has also risen considerably, so that a server can become overloaded at some point. To overcome this, a server cluster configuration scheme based on the load balancing concept is applied. A load-balancing server applies an algorithm to distribute the work; the round-robin algorithm is used in the Linux Virtual Server. This study measures the performance of a Linux Virtual Server that uses the round-robin algorithm to schedule the distribution of load across the servers. Performance is measured from the side of a client accessing a web server: the number of requests completed per second, the time to complete a single request, and the resulting throughput. The experiments show that using LVS improves performance, namely by increasing the number of requests served per second
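    The scheduler under test is plain round robin: each incoming request is handed to the next real server in a fixed circular order, regardless of load. A minimal sketch of the dispatch rule (the back-end addresses are placeholders; in LVS this logic runs inside the kernel):

    ```python
    from itertools import cycle

    real_servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # placeholder back ends
    next_server = cycle(real_servers).__next__

    def dispatch(request):
        """Round robin: rotate through the real servers in fixed order."""
        return next_server()

    for req in range(6):
        print(req, "->", dispatch(req))   # 0,1,2,0,1,2 across the three servers
    ```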

  8. Impact on TRMM Products of Conversion to Linux

    Science.gov (United States)

    Stocker, Erich Franz; Kwiatkowski, John

    2008-01-01

    In June 2008, TRMM data processing will be assumed by the Precipitation Processing System (PPS). This change will also mean a change in the hardware production environment, from a 32-bit SGI IRIX processing environment to a 64-bit Linux (Beowulf) processing environment. This change of platform and of operating-system addressing (32 to 64 bit) has some influence on data values in the TRMM data products. This paper describes the transition architecture and scheduling. It also provides an analysis of the nature of the product differences. It demonstrates that the differences are not scientifically significant and are generally not visible; however, the products are not always identical to those the SGI would produce.

  9. MySQL databases as part of the Online Business, using a platform based on Linux

    Directory of Open Access Journals (Sweden)

    Ion-Sorin STROE

    2011-09-01

    Full Text Available The Internet is a business development environment that has major advantages over the traditional environment. From a financial standpoint, the initial investment is much smaller and, in terms of returns, the chances of success are considerably higher. Developing an online business also depends on the manager's ability to use the best solutions, sustainable over the long term. The current trend is to decrease the costs of the technical platform by adopting open-source licensed products. One such platform is based on a Linux operating system and a database system based on the MySQL product. This article aims to answer two basic questions: "Can a platform based on Linux and MySQL handle the demands of an online business?" and "Does adopting such a solution have the effect of increasing profitability?"

  10. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  11. Convolutional Neural Network on Embedded Linux(trademark) System-on-Chip: A Methodology and Performance Benchmark

    Science.gov (United States)

    2016-05-01

    ...two NVIDIA® GTX580 GPUs [3]. Therefore, for this initial work, we decided to concentrate on small networks and small datasets until the methods are

  12. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis.

    Science.gov (United States)

    Nemoto, Kiyotaka; Dan, Ippeita; Rorden, Christopher; Ohnishi, Takashi; Tsuzuki, Daisuke; Okamoto, Masako; Yamashita, Fumio; Asada, Takashi

    2011-01-25

    A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on 10 subjects of three-dimensional T1-weighted MRI scans. The processing speed of USB-booted Lin4Neuro was as fast as that of the package installed on the hard disk drive. We also installed Lin4Neuro on a virtualization software package that emulates the Linux environment on a Windows-based operation system. Although the processing speed was slower than that under other conditions, it remained comparable. With Lin4Neuro in one's hand, one can access neuroimaging software packages easily, and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners of neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites.

  13. Linux conquers the computing world / Scott Handy ; interviewed by Kristjan Otsmann

    Index Scriptorium Estoniae

    Handy, Scott

    2000-01-01

    S. Handy, marketing director for Linux solutions in IBM's software group, predicts that in three to four years the free Linux operating system will run on as many computers as the Windows operating system.

  14. Linux server for secure connections from a LAN to the Internet

    OpenAIRE

    Escartín Vigo, José Antonio

    2005-01-01

    This document describes the implementation of a GNU/Linux server, and specifies and resolves the main problems that an administrator encounters when bringing a server into operation. The reader will learn to configure a GNU/Linux server, with a description of the main services used to share files, web pages, mail and others covered later on. The Webmin configuration tool, detailed in one of the last chapters, is independ...

  15. Reuse of the compact nuclear simulator software under PC with Linux

    International Nuclear Information System (INIS)

    Cha, K. H.; Park, J. C.; Kwon, K. C.; Lee, G. Y.

    2000-01-01

    This study set out to reuse source programs for a nuclear simulator on a PC with Open Source Software (OSS) and to extend their applicability. Source programs of the Compact Nuclear Simulator (CNS), which has been operated for institutional research and training at KAERI, were reused and implemented in a Linux PC environment in support of the study. A PC with a 500 MHz processor and the Linux 2.2.5-22 kernel was used for the reuse implementation, which was investigated for some applications through functional testing of its main functions as interfaced with the compact control panels of the current CNS. The experience effectively enables the development and upgrade of small-scale simulators, the establishment of process simulation on PCs, and the development of prototype predictive simulation, although the reuse implementation was limited to porting only the CNS programs to a PC with Linux.

  16. Millisecond accuracy video display using OpenGL under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
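    The article's test boils down to timing successive buffer swaps: on a vsync-locked OpenGL display the inter-swap intervals cluster at the refresh period, so a stimulus drawn just before a swap appears at a known time. A rough sketch of that measurement using pygame's OpenGL mode (it assumes the driver blocks flip() until the retrace, which must be verified per system, as the article stresses):

    ```python
    import time
    import pygame

    pygame.init()
    pygame.display.set_mode((640, 480), pygame.OPENGL | pygame.DOUBLEBUF)

    stamps = []
    for _ in range(120):
        pygame.display.flip()            # swap buffers; blocks at vsync on many drivers
        stamps.append(time.perf_counter())

    gaps = [1000.0 * (b - a) for a, b in zip(stamps, stamps[1:])]
    print(f"mean frame interval {sum(gaps) / len(gaps):.3f} ms")
    # a tight cluster near 16.7 ms (at 60 Hz) indicates refresh-locked presentation
    pygame.quit()
    ```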

  17. BSD Portals for LINUX 2.0

    Science.gov (United States)

    McNab, A. David; woo, Alex (Technical Monitor)

    1999-01-01

    Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open () requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open (2) is performed. The resulting file descriptor is passed back to the kernel which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open (2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets. This paper describes the implementation of portals for LINUX 2.0.
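    The essential mechanism — a daemon performs the open(2) on the caller's behalf and hands the resulting file descriptor back — corresponds in user space to SCM_RIGHTS descriptor passing over a Unix socket. A minimal sketch of that handoff (Python 3.9+ for socket.send_fds/recv_fds; this illustrates the trick, not the 4.4BSD/LINUX portalfs code):

    ```python
    import os, socket

    caller, daemon = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

    if os.fork() == 0:                          # the "portal daemon"
        path = daemon.recv(1024).decode()       # remainder of the portal pathname
        fd = os.open(path, os.O_RDONLY)         # open(2) performed on the caller's behalf
        socket.send_fds(daemon, [b"ok"], [fd])  # pass the descriptor back (SCM_RIGHTS)
        os._exit(0)

    caller.send(b"/etc/hostname")               # caller "opens" through the portal
    msg, fds, flags, addr = socket.recv_fds(caller, 1024, 1)
    print(os.read(fds[0], 64))                  # a normal fd, as if open() succeeded locally
    ```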

  18. Getting priorities straight: improving Linux support for database I/O

    DEFF Research Database (Denmark)

    Hall, Christoffer; Bonnet, Philippe

    2005-01-01

    The Linux 2.6 kernel supports asynchronous I/O as a result of propositions from the database industry. This is a positive evolution, but is it a panacea? In the context of the Badger project, a collaboration between MySQL AB and University of Copenhagen, ...

  19. PC clusters at CERN's PC farm

    CERN Multimedia

    Patrice Loïez

    2001-01-01

    These Linux-based PC clusters are mainly used for batch and interactive data processing. When the LHC starts operation in 2008, it will produce enough data every year to fill a stack of CDs 20 km tall, so high-quality processing is required. To further facilitate this, the LHC Computing Grid (LCG) has been set up to share processing power between facilities around the world.

  20. State of the art of parallel scientific visualization applications on PC clusters

    International Nuclear Information System (INIS)

    Juliachs, M.

    2004-01-01

    In this state of the art of parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001, and this report is part of a study to set up a new visualization research platform. This platform, consisting of an eight-node PC cluster under Linux and a tiled display, was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  1. Linux Incident Response Volatile Data Analysis Framework

    Science.gov (United States)

    McFadden, Matthew

    2013-01-01

    Cyber incident response is an emphasized subject area in cybersecurity in information technology with increased need for the protection of data. Due to ongoing threats, cybersecurity imposes many challenges and requires new investigative response techniques. In this study a Linux Incident Response Framework is designed for collecting volatile data…

  2. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  3. Design and Achievement of User Interface Automation Testing of Linux Based on Element Tree of DogTail

    Directory of Open Access Journals (Sweden)

    Yuan Wen-Chao

    2017-01-01

    Full Text Available As Linux becomes more popular around the world, the openness of its software encourages automated UI testing through a unified testing framework. UI testing can verify the soundness of a Linux application's user interface and the correctness of its widgets. To replace tedious, error-prone manual testing and to improve efficiency, this paper implements automated UI testing under Linux and proposes a method for identifying and testing UI widgets based on the element tree of the DogTail automated-testing framework. Using this method, an automated test plan was designed for the dialogs of the Red Hat Subscription Manager product under Red Hat Enterprise Linux. Repeated test runs indicate that the plan identifies UI widgets accurately, describes the structure of the software clearly, helps avoid software errors, and improves testing efficiency. It can also be used in internationalization testing, to check translations during software internationalization.
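    For flavor, an element-tree-driven test with dogtail looks roughly like the sketch below. The widget names are hypothetical stand-ins (the real scripts target the Subscription Manager dialogs), and an AT-SPI-enabled desktop session is assumed:

    ```python
    # Assumes dogtail is installed and accessibility (AT-SPI) is enabled
    from dogtail.tree import root
    from dogtail.utils import run

    run("subscription-manager-gui")                  # launch the application under test
    app = root.application("subscription-manager-gui")

    # Walk the accessibility element tree to locate and drive widgets
    reg_button = app.child("Register", roleName="push button")   # hypothetical widget
    reg_button.click()
    dialog = app.child(roleName="dialog")
    assert dialog.showing, "registration dialog should appear"
    ```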

  4. Automating the Port of Linux to the VirtualLogix Hypervisor using Semantic Patches

    DEFF Research Database (Denmark)

    Armand, Francois; Muller, Gilles; Lawall, Julia Laetitia

    2008-01-01

    of Linux to the VLX hypervisor.  Coccinelle provides a notion of semantic patches, which are more abstract than standard patches, and thus are potentially applicable to a wider range of OS versions.  We have applied this approach in the context of Linux versions 2.6.13, 2.6.14, and 2.6.15, for the ARM...

  5. SmPL: A Domain-Specific Language for Specifying Collateral Evolutions in Linux Device Drivers

    DEFF Research Database (Denmark)

    Padioleau, Yoann; Lawall, Julia Laetitia; Muller, Gilles

    2007-01-01

    identifying the affected files and modifying all of the code fragments in these files that in some way depend on the changed interface. We have studied the collateral evolution problem in the context of Linux device drivers. Currently, collateral evolutions in Linux are mostly done manually using a text...

  6. Linux: Hacia una revolución silenciosa de la sociedad de la información

    Directory of Open Access Journals (Sweden)

    Pascuale Sofia

    2004-01-01

    Full Text Available This article demonstrates the overall technical qualities of the new LINUX operating system and reveals the change it is bringing about in the economic sector and in the cultural world. It does so through a comparative analysis of commercial operating systems (Microsoft) and Open Source ones (LINUX). Today's world is characterized by radical and rapid change, occurring most frequently in the computing sector. In this sector, and specifically in software, LINUX is the new operating system that is reshaping the world of computing. The study follows an exploratory methodology, since the literature on the progress of Linux is scarce; the paper therefore synthesizes the authors' extensive work (conferences, university lectures, business association presentations, among others) carried out since the LINUX product became known and was worked on by a small elite of technicians.

  7. IP Security für Linux

    OpenAIRE

    Parthey, Mirko

    2001-01-01

    Using the Internet for security-critical applications requires cryptographic protection mechanisms, for which IP Security (IPsec) defines suitable protocols. This paper gives an overview of IPsec. An IPsec implementation for Linux (FreeS/WAN) is examined with respect to extensibility and practical usability.

  8. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    Science.gov (United States)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. Mcp and msum provide significant performance improvements over standard cp and md5sum: mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so are easily used and are available for download as open source software at http://mutil.sourceforge.net.
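
    The split-file idea — different threads copying disjoint byte ranges of one file — can be sketched in a few lines. This is a toy single-node illustration of that one technique, not the mutil code; the chunk size and worker count are arbitrary choices:

```python
#!/usr/bin/env python3
# Toy sketch of split-file parallel copy: threads copy disjoint byte ranges.
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024 * 1024  # 64 MiB per task, arbitrary

def copy_range(src_fd, dst_fd, offset, length):
    # pread/pwrite carry their own offsets, so threads share no seek pointer;
    # the GIL is released during the I/O, so requests genuinely overlap.
    data = os.pread(src_fd, length, offset)
    os.pwrite(dst_fd, data, offset)

def parallel_copy(src, dst, workers=8):
    size = os.path.getsize(src)
    src_fd = os.open(src, os.O_RDONLY)
    dst_fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    os.ftruncate(dst_fd, size)  # preallocate so ranges can land anywhere
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for off in range(0, size, CHUNK):
            pool.submit(copy_range, src_fd, dst_fd, off, min(CHUNK, size - off))
    os.close(src_fd)
    os.close(dst_fd)

if __name__ == "__main__":
    parallel_copy("big.dat", "big.copy")
```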

  9. Infecting Windows, Linux & Mac in one go

    CERN Multimedia

    Computer Security Team

    2012-01-01

    Still love bashing on Windows as you believe it is an insecure operating system? Hold on a second! Just recently, a vulnerability has been published for Java 7. It affects Windows/Linux PCs and Macs, Internet Explorer, Safari and Firefox. In fact, it affects all computers that have enabled the Java 7 plug-in in their browser (Java 6 and earlier is not affected). Once you visit a malicious website (and there are plenty already out in the wild), your computer is infected… That's "Game Over" for you. And this is not the first time. For a while now, attackers have not been targeting the operating system itself, but rather aiming at vulnerabilities inherent in e.g. your Acrobat Reader, Adobe Flash or Java programmes. All these are standard plug-ins added into your favourite web browser which make your web-surfing comfortable (or impossible when you un-install them). A single compromised web-site, however, is sufficient to prob...

  10. Teaching Hands-On Linux Host Computer Security

    Science.gov (United States)

    Shumba, Rose

    2006-01-01

    In the summer of 2003, a project to augment and improve the teaching of information assurance courses was started at IUP. Thus far, ten hands-on exercises have been developed. The exercises described in this article, and presented in the appendix, are based on actions required to secure a Linux host. Publicly available resources were used to…

  11. Embedded Linux projects using Yocto project cookbook

    CERN Document Server

    González, Alex

    2015-01-01

    If you are an embedded developer learning about embedded Linux with some experience with the Yocto project, this book is the ideal way to become proficient and broaden your knowledge with examples that are immediately applicable to your embedded developments. Experienced embedded Yocto developers will find new insight into working methodologies and ARM specific development competence.

  12. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  13. Vabavarana levitatav Linux alles viib end massidesse / Erik Aru

    Index Scriptorium Estoniae

    Aru, Erik

    2004-01-01

    The free Linux operating system is finding ever wider use around the world. Its reliability and resistance to viruses are considered its strengths. Supplements: Toshiba's battle over the future of DVD. For a response see Maaleht, 16 Dec., p. 12.

  14. An Introduction to Parallel Cluster Computing Using PVM for Computer Modeling and Simulation of Engineering Problems

    International Nuclear Information System (INIS)

    Spencer, VN

    2001-01-01

    An investigation has been conducted regarding the ability of clustered personal computers to improve the performance of executing software simulations for solving engineering problems. The power and utility of personal computers continues to grow exponentially through advances in computing capabilities such as newer microprocessors, advances in microchip technologies, electronic packaging, and cost effective gigabyte-size hard drive capacity. Many engineering problems require significant computing power. Therefore, the computation has to be done by high-performance computer systems that cost millions of dollars and need gigabytes of memory to complete the task. Alternately, it is feasible to provide adequate computing in the form of clustered personal computers. This method cuts the cost and size by linking (clustering) personal computers together across a network. Clusters also have the advantage that they can be used as stand-alone computers when they are not operating as a parallel computer. Parallel computing software to exploit clusters is available for computer operating systems like Unix, Windows NT, or Linux. This project concentrates on the use of Windows NT, and the Parallel Virtual Machine (PVM) system to solve an engineering dynamics problem in Fortran
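
    The same master-worker, message-passing pattern the record implements with PVM and Fortran can be sketched with MPI, which fills the equivalent role on modern Linux clusters. A minimal mpi4py example (not the project's code; mpi4py is an assumption standing in for PVM):

```python
#!/usr/bin/env python3
# Data-parallel work split across cluster nodes, combined on a master rank.
# Run with e.g.: mpiexec -n 4 python sum_squares.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank handles an interleaved slice of the (dummy) workload.
n = 10_000_000
local = sum(i * i for i in range(rank, n, size))

# Combine partial results on rank 0, as the PVM master would.
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of squares:", total)
```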

  15. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    Science.gov (United States)

    2015-06-01

    examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully … memory images and malware. … this new series of reports will be directed at those who must analyse Linux malware-infected memory images.

  16. A self contained Linux based data acquisition system for 2D detectors with delay line readout

    International Nuclear Information System (INIS)

    Beltran, D.; Toledo, J.; Klora, A.C.; Ramos-Lerate, I.; Martinez, J.C.

    2007-01-01

    This article describes a fast and self-contained data acquisition system for 2D gas-filled detectors with delay line readout. It allows the realization of time-resolved experiments on the millisecond scale. The acquisition system comprises an industrial PC running Linux, a commercial time-to-digital converter and an in-house developed histogramming PCI card. The PC provides mass storage for images and a graphical user interface for system monitoring and control. The histogramming card builds images with a maximum count rate of 5 MHz, limited by the time-to-digital converter. Histograms are transferred to the PC at 85 MB/s. This card also includes a time frame generator, a calibration channel unit and eight digital outputs for experiment control. The control software was developed for easy integration into a beamline, including scans. The system is fully operational at the Spanish beamline BM16 at the ESRF in France, the neutron beamlines Adam and Eva at the ILL in France, the Max Planck Institute in Stuttgart in Germany, the University of Copenhagen in Denmark and at the future ALBA synchrotron in Spain. Some representative collected images from synchrotron and neutron beamlines are presented

  17. SARUS: A Synthetic Aperture Real-Time Ultrasound System

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Holten-Lund, Hans; Nilsson, Ronnie Thorup

    2013-01-01

    -resolution images/s. Both RF element data and beamformed data can be stored in the system for later storage and processing. The stored data can be transferred in parallel using the system’s sixty-four 1-Gbit Ethernet interfaces at a theoretical rate of 3.2 GB/s to a 144-core Linux cluster....

  18. Linux containers networking: performance and scalability of kernel modules

    NARCIS (Netherlands)

    Claassen, J.; Koning, R.; Grosso, P.; Oktug Badonnel, S.; Ulema, M.; Cavdar, C.; Zambenedetti Granville, L.; dos Santos, C.R.P.

    2016-01-01

    Linux container virtualisation is gaining momentum as lightweight technology to support cloud and distributed computing. Applications relying on container architectures might at times rely on inter-container communication, and container networking solutions are emerging to address this need.

  19. Linux aitab olla sõltumatu / Jon Hall ; interv. Kristjan Otsmann

    Index Scriptorium Estoniae

    Hall, Jon

    2002-01-01

    Estonia should make more use of open-source software, because it would make Estonia less dependent on foreign software vendors, the head of Linux International said in an interview with Postimees

  20. Implementación del servicio de voz sobre IP en redes Linux y redes telefónicas análogas, utilizando software de comunicación sobre Linux

    OpenAIRE

    Campos, Jorge Alberto; Guzmán, Mauricio Orlando; González Jiménez, Francisco Alirio

    2007-01-01

    Implementation of a voice-over-IP service on Linux networks and analogue telephone networks, using communication software on Linux. Voice over TCP/IP (VoIP, Voice over IP) technology enables the transmission of voice over digital networks (LAN, WAN, the Internet, etc.) in the form of data packets, using the installed data-exchange infrastructure. IP telephony is an immediate application of this technology in a form ...

  1. Feasibility study of BES data off-line processing and D/Ds physics analysis on a PC/Linux platform

    International Nuclear Information System (INIS)

    Rong Gang; He Kanglin; Heng Yuekun; Zhang Chun; Liu Huaimin; Cheng Baosen; Yan Wuguang; Mai Jimao; Zhao Haiwen

    2000-01-01

    The authors report a feasibility study of BES data off-line processing (BES data off-line reconstruction and Monte Carlo simulation) and D/Ds physics analysis on a PC/Linux platform. The authors compared the results obtained on PC/Linux with those from an HP/UNIX workstation. It shows that the PC/Linux platform can do BES data off-line analysis as well as the HP/UNIX workstation, while being more powerful and more economical

  2. LHC@home online tutorial for Linux users - recording

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    A step-by-step online tutorial for LHC@home by Karolina Bozek. It contains detailed instructions for Linux users on how to join this volunteer computing project. This 5-minute video is linked from http://lhcathome.web.cern.ch/join-us, together with the commands to copy and paste for installing BOINC and VirtualBox.

  3. Empirical Testing of the CySeMoL Tool for Cyber Security Assessment – Case Study of Linux Server and MySQL

    OpenAIRE

    Rabbani, Talvia

    2016-01-01

    In this Master's thesis, several common applications used with MySQL and a Linux server are modelled using the Enterprise Architecture Analysis Tool (EAAT) and the Cyber Security Modelling Language (CySeMoL), both developed by the Department of Industrial Information and Control Systems (ICS) at KTH. The objective of this study is to evaluate the feasibility and correctness of the CySeMoL tool by simulating particular types of attacks on a real-life Linux server. A few common a...

  4. Dynamically allocated virtual clustering management system

    Science.gov (United States)

    Marcus, Kelvin; Cannata, Jess

    2013-05-01

    The U.S Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, thus only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shutdown their clusters.

  5. DEBROS: design and use of a Linux-like RTOS on an inexpensive 8-bit single board computer

    International Nuclear Information System (INIS)

    Davis, M.A.

    2012-01-01

    As the power, complexity, and capabilities of embedded processors continue to grow, it is easy to forget just how much can be done with inexpensive Single Board Computers (SBCs) based on 8-bit processors. When the proprietary, non-standard tools from the vendor for one such embedded computer became a major roadblock, I embarked on a project to expand my own knowledge and provide a more flexible, standards based alternative. Inspired by the early work done on operating systems such as UNIX™, Linux, and Minix, I wrote DEBROS (the Davis Embedded Baby Real-time Operating System), which is a fully preemptive, priority-based OS with soft real-time capabilities that provides a subset of standard Linux/UNIX compatible system calls such as stdio, BSD sockets, pipes, semaphores, etc. The end result was a much more flexible, standards-based development environment which allowed me to simplify my programming model, expand diagnostic capabilities, and reduce the time spent monitoring and applying updates to the hundreds of devices in the lab currently using such inexpensive hardware. (author)

  6. FTAP: a Linux-based program for tapping and music experiments.

    Science.gov (United States)

    Finney, S A

    2001-02-01

    This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.
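
    FTAP itself is C code driving raw MIDI devices. As a rough illustration of the input side — timestamping tap onsets arriving from a MIDI device — here is a sketch using the mido library; this is not part of FTAP, and a polling loop like this cannot guarantee FTAP's hard millisecond bounds:

```python
#!/usr/bin/env python3
# Timestamp incoming MIDI note-on events (taps) — illustration only.
# Requires the mido package and a connected MIDI input device.
import time
import mido

def record_taps(seconds=10.0):
    taps = []
    t0 = time.monotonic()
    with mido.open_input() as port:        # first available MIDI input
        while time.monotonic() - t0 < seconds:
            for msg in port.iter_pending():
                if msg.type == "note_on" and msg.velocity > 0:
                    # Store onset time in ms relative to start.
                    taps.append((time.monotonic() - t0) * 1000.0)
            time.sleep(0.0005)             # poll at sub-ms granularity
    return taps

if __name__ == "__main__":
    onsets = record_taps(5.0)
    intervals = [b - a for a, b in zip(onsets, onsets[1:])]
    print("inter-tap intervals (ms):", [round(i, 1) for i in intervals])
```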

  7. On methods to increase the security of the Linux kernel

    International Nuclear Information System (INIS)

    Matvejchikov, I.V.

    2014-01-01

    Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described

  8. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    Science.gov (United States)

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  9. How to detect trap cluster systems?

    International Nuclear Information System (INIS)

    Mandowski, Arkadiusz

    2008-01-01

    Spatially correlated traps and recombination centres (trap-recombination centre pairs and larger clusters) are responsible for many anomalous phenomena that are difficult to explain in the framework of both classical models, i.e. model of localized transitions (LT) and the simple trap model (STM), even with a number of discrete energy levels. However, these 'anomalous' effects may provide a good platform for identifying trap cluster systems. This paper considers selected cluster-type effects, mainly relating to an anomalous dependence of TL on absorbed dose in the system of isolated clusters (ICs). Some consequences for interacting cluster (IAC) systems, involving both localized and delocalized transitions occurring simultaneously, are also discussed

  10. After the first five years: central Linux support at DESY

    International Nuclear Information System (INIS)

    Knut Woller; Thorsten Kleinwort; Peter Jung

    2001-01-01

    The authors will describe how Linux is embedded into DESY's unix computing, their support concept and policies, tools used and developed, and the challenges which they are facing now that the number of supported PCs is rapidly approaching one thousand

  11. Audio Arduino - an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    be considered a system that encompasses design decisions on both hardware and software levels - which also demand a certain understanding of the architecture of the target PC operating system. This project outlines how an Arduino Duemilanove board (containing a USB interface chip manufactured by the Future Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through an implementation of: a PC audio driver for ALSA (Advanced Linux Sound Architecture); a matching program for the Arduino's ATmega microcontroller - and nothing more...

  12. LightNVM: The Linux Open-Channel SSD Subsystem

    DEFF Research Database (Denmark)

    Bjørling, Matias; Gonzalez, Javier; Bonnet, Philippe

    2017-01-01

    resource utilization. We propose that SSD management trade-offs should be handled through Open-Channel SSDs, a new class of SSDs that give hosts control over their internals. We present our experience building LightNVM, the Linux Open-Channel SSD subsystem. We introduce a new Physical Page Address I... ... to limit read latency variability and that it can be customized to achieve predictable I/O latencies....

  13. Aplicación de RT-Linux en el control de motores de pasos. Parte I; Application of RT-Linux in the Control of Steps Mators. Part I

    Directory of Open Access Journals (Sweden)

    Ernesto Duany Renté

    2011-02-01

    Full Text Available The fundamental idea of this article is to show how to control a stepper motor using the parallel port of a computer, and to demonstrate the timing efficiency of applications running on systems prepared to execute precision tasks — taking full advantage of the strict real-time capability that RT-Linux offers for the control of electric drives. A small program written in the C language sends the signals to the parallel port at the required times. This software is not designed for commercial purposes; it only allows tests to be carried out on the control circuit designed for this purpose.

  14. Linux Adventures on a Laptop. Computers in Small Libraries

    Science.gov (United States)

    Roberts, Gary

    2005-01-01

    This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…

  15. FIREWALL E SEGURANÇA DE SISTEMAS APLICADO AO LINUX

    Directory of Open Access Journals (Sweden)

    Rodrigo Ribeiro

    2017-04-01

    Full Text Available Given the worldwide evolution of the Internet, it has become necessary to invest in information security; several important concepts concerning computer networks and their evolution point to the emergence of new vulnerabilities. The main objective of this work is to show that, by using free software such as Linux and its tools, it is possible to create a scenario that is secure against certain attacks, through tests in controlled environments using architectures exercised in real time, and to assess their potential for use on the basis of the authors' research on the subject. From this, it was possible to recognize the wide adoption of these security mechanisms, validating the efficiency of the studied tools in mitigating attacks on computer networks. The defence systems of the Linux platform are extremely efficient and meet the goal of protecting a network from unauthorized access.

  16. [Study for lung sound acquisition module based on ARM and Linux].

    Science.gov (United States)

    Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing

    2011-07-01

    An acquisition module based on ARM and Linux was developed. This paper presents the hardware configuration and the software design. It is shown that the module can acquire human lung sounds reliably and effectively.

  17. In the land of the dinosaurs, how to survive experience with building of midrange computing cluster

    Energy Technology Data Exchange (ETDEWEB)

    Chevel, A E [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); Lauret, J [SUNY at Stony Brook (United States)

    2001-07-01

    The authors discuss how to put into operation a midrange computing cluster for the Nuclear Chemistry Group (NCG) of the State University of New York at Stony Brook (SUNY-SB). The NCG is one of the collaborators within the RHIC/PHENIX experiment located at the Brookhaven National Laboratory (BNL). The PHENIX detector system produces about half a PB (or 500 TB) of data a year, and our goal was to provide this remote collaborating facility with the means to be part of the analysis process. The computing installation was put into operation at the beginning of the year 2000. The cluster consists of 32 peripheral machines running under Linux and a central Alpha 4100 server under Digital Unix 4.0f (formerly Tru64 UNIX). The realization process is discussed.

  18. In the land of the dinosaurs, how to survive experience with building of midrange computing cluster

    International Nuclear Information System (INIS)

    Chevel, A.E.; Lauret, J.

    2001-01-01

    The authors discuss how to put into operation a midrange computing cluster for the Nuclear Chemistry Group (NCG) of the State University of New York at Stony Brook (SUNY-SB). The NCG is one of the collaborators within the RHIC/PHENIX experiment located at the Brookhaven National Laboratory (BNL). The PHENIX detector system produces about half a PB (or 500 TB) of data a year, and our goal was to provide this remote collaborating facility with the means to be part of the analysis process. The computing installation was put into operation at the beginning of the year 2000. The cluster consists of 32 peripheral machines running under Linux and a central Alpha 4100 server under Digital Unix 4.0f (formerly Tru64 UNIX). The realization process is discussed

  19. An approach to improving the structure of error-handling code in the linux kernel

    DEFF Research Database (Denmark)

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where...... an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes....

  20. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: administrators carry a heavy workload, and much time is spent on the management and maintenance of the system. The nodes of a large-scale cluster easily fall into disarray; with thousands of nodes packed into big machine rooms, it is easy for managers to confuse machines. How can accurate management be carried out effectively on a large-scale cluster system? This article introduces ELFms for large-scale cluster systems and proposes how to realize their automatic management. (authors)

  1. Oak Ridge Institutional Cluster Autotune Test Drive Report

    Energy Technology Data Exchange (ETDEWEB)

    Jibonananda, Sanyal [ORNL; New, Joshua Ryan [ORNL

    2014-02-01

    The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for the ORNL staff to run computation heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program and the overall software environment was evaluated against anticipated challenges experienced with resources such as the shared memory-Nautilus (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional desktop focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable to run ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system which literature indicates to be an efficient file system.

  2. Launching large computing applications on a disk-less cluster

    International Nuclear Information System (INIS)

    Schwemmer, Rainer; Caicedo Carvajal, Juan Manuel; Neufeld, Niko

    2011-01-01

    The LHCb Event Filter Farm system is based on a cluster of on the order of 1,500 disk-less Linux nodes. Each node runs one instance of the filtering application per core. The number of cores in our current production environment is 8 per machine for the old cluster and 12 per machine on the extension of the cluster. Each instance has to load about 1,000 shared libraries, weighing 200 MB, from several directory locations in a central repository. The repository is currently hosted on a SAN and exported via NFS. The libraries are all available in the local file system cache on every node. Loading a library still causes a huge number of requests to the server, though, because the loader will try to probe every available path. Measurements show there are between 100,000 and 200,000 calls per application instance start-up. Multiplied by the number of cores in the farm, this translates into a veritable DDoS attack on the servers, which lasts several minutes. Since the application is restarted frequently, a better solution had to be found. Rolling out the software to the nodes is out of the question, because they have no disks and the software in its entirety is too large to put into a RAM disk. To solve this problem we developed a FUSE-based file system which acts as a permanent, controllable cache that keeps the essential files in stock.
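
    A FUSE file system of this kind can be prototyped in user space. The sketch below (using the fusepy bindings, not the LHCb code) shows the core idea of a read-only passthrough that caches file contents locally on first access; the paths and cache policy are illustrative assumptions:

```python
#!/usr/bin/env python3
# Read-only caching passthrough FUSE filesystem — a sketch of the idea,
# not the LHCb implementation. Requires the fusepy package.
import os
import errno
import shutil
from fuse import FUSE, Operations, FuseOSError

class CachingFS(Operations):
    def __init__(self, source, cache):
        self.source, self.cache = source, cache
        os.makedirs(cache, exist_ok=True)

    def _cached(self, path):
        src, dst = self.source + path, self.cache + path
        if not os.path.exists(dst):            # first access: copy once
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copyfile(src, dst)
        return dst

    def getattr(self, path, fh=None):
        st = os.lstat(self.source + path)
        return {k: getattr(st, k) for k in
                ("st_mode", "st_size", "st_uid", "st_gid",
                 "st_atime", "st_mtime", "st_ctime", "st_nlink")}

    def readdir(self, path, fh):
        return [".", ".."] + os.listdir(self.source + path)

    def open(self, path, flags):
        if flags & (os.O_WRONLY | os.O_RDWR):  # cache is read-only
            raise FuseOSError(errno.EROFS)
        return os.open(self._cached(path), os.O_RDONLY)

    def read(self, path, size, offset, fh):
        return os.pread(fh, size, offset)

    def release(self, path, fh):
        os.close(fh)

if __name__ == "__main__":
    # Paths are illustrative: cache an NFS software repository locally.
    FUSE(CachingFS("/nfs/repository", "/var/cache/libs"),
         "/mnt/libs", foreground=True, ro=True)
```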

  3. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    Science.gov (United States)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g. Horowitz and T. Pavlidis, [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic
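
    The core of HSWO-style region growing is repeatedly merging the pair of adjacent regions whose merge least increases a dissimilarity criterion, recording a segmentation level at each merge. A toy, single-process Python sketch of that loop (not NASA's RHSEG — no spectral clustering or recursion, and a 1-D "image" for brevity):

```python
#!/usr/bin/env python3
# Toy hierarchical stepwise-optimization merging on a 1-D "image".
# Illustrates the HSWO idea only; RHSEG adds constrained spectral
# clustering and recursive spatial subdivision on top of this.

def merge_cost(a, b):
    # Increase in squared error if regions a, b merge; a region is (sum, n).
    (sa, na), (sb, nb) = a, b
    return (sa / na - sb / nb) ** 2 * (na * nb) / (na + nb)

def hswo(pixels):
    regions = [(p, 1) for p in pixels]       # adjacency is left-right in 1-D
    hierarchy = [list(regions)]
    while len(regions) > 1:
        # Find the adjacent pair with the cheapest merge.
        i = min(range(len(regions) - 1),
                key=lambda k: merge_cost(regions[k], regions[k + 1]))
        (sa, na), (sb, nb) = regions[i], regions[i + 1]
        regions[i:i + 2] = [(sa + sb, na + nb)]
        hierarchy.append(list(regions))      # one segmentation per level
    return hierarchy

if __name__ == "__main__":
    for level in hswo([1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1]):
        print([round(s / n, 2) for s, n in level])  # region means per level
```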

  4. IMPLEMENTASI MANAJEMEN BANDWIDTH DENGAN DISIPLIN ANTRIAN HIERARCHICAL TOKEN BUCKET (HTB PADA SISTEM OPERASI LINUX

    Directory of Open Access Journals (Sweden)

    Muhammad Nugraha

    2017-01-01

    Full Text Available An important problem in Internet networking is the exhaustion of resources and bandwidth by some users while other users do not get proper service. To overcome this problem, traffic control and bandwidth management must be implemented in the router. In this research the author implements the Hierarchical Token Bucket algorithm as a queueing discipline (qdisc) so that bandwidth is managed accurately and each user receives a proper share. The result of this research is a cheap and efficient bandwidth management setup: using the Hierarchical Token Bucket qdisc on the Linux operating system, user bandwidth can be managed as desired.

  5. Implementasi Manajemen Bandwidth Dengan Disiplin Antrian Hierarchical Token Bucket (HTB Pada Sistem Operasi Linux

    Directory of Open Access Journals (Sweden)

    Muhammad Nugraha

    2016-09-01

    Full Text Available An important problem in Internet networking is the exhaustion of resources and bandwidth by some users while other users do not get proper service. To overcome this problem, traffic control and bandwidth management must be implemented in the router. In this research the author implements the Hierarchical Token Bucket algorithm as a queueing discipline (qdisc) so that bandwidth is managed accurately and each user receives a proper share. The result of this research is a cheap and efficient bandwidth management setup: using the Hierarchical Token Bucket qdisc on the Linux operating system, user bandwidth can be managed as desired.
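
    On Linux, HTB classes are configured through the tc utility from iproute2. A minimal sketch of a two-class hierarchy, wrapped in Python for illustration (the device name, rates, and class ids are assumptions; root privileges are required):

```python
#!/usr/bin/env python3
# Sketch: set up an HTB root qdisc with two rate-limited classes via tc.
import subprocess

DEV = "eth0"  # illustrative interface name

def tc(*args):
    subprocess.run(["tc", *args], check=True)

# Root HTB qdisc; unclassified traffic falls into class 1:30.
tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "30")

# Parent class capping the link, plus two children that may borrow up to ceil.
tc("class", "add", "dev", DEV, "parent", "1:",  "classid", "1:1",
   "htb", "rate", "10mbit")
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10",
   "htb", "rate", "6mbit", "ceil", "10mbit")
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:30",
   "htb", "rate", "4mbit", "ceil", "10mbit")

# Steer web traffic (destination port 80) into the faster class.
tc("filter", "add", "dev", DEV, "protocol", "ip", "parent", "1:", "prio", "1",
   "u32", "match", "ip", "dport", "80", "0xffff", "flowid", "1:10")
```

    The "borrowing" behaviour — children exceeding their guaranteed rate up to the ceil when the parent has spare tokens — is what makes HTB hierarchical rather than a set of independent rate limiters.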

  6. Debugging Nondeterministic Failures in Linux Programs through Replay Analysis

    Directory of Open Access Journals (Sweden)

    Shakaiba Majeed

    2018-01-01

    Full Text Available Reproducing a failure is the first and most important step in debugging because it enables us to understand the failure and track down its source. However, many programs are susceptible to nondeterministic failures that are hard to reproduce, which makes debugging extremely difficult. We first address the reproducibility problem by proposing an OS-level replay system for a uniprocessor environment that can capture and replay the nondeterministic events needed to reproduce a failure in Linux interactive and event-based programs. We then present an analysis method, called replay analysis, based on the proposed record and replay system to diagnose concurrency bugs in such programs. The replay analysis method uses a combination of static analysis, dynamic tracing during replay, and delta debugging to identify failure-inducing memory access patterns that lead to concurrency failure. The experimental results show that the presented record and replay system has low recording overhead and hence can be safely used in production systems to catch rarely occurring bugs. We also present a few concurrency bug case studies from real-world applications to prove the effectiveness of the proposed bug diagnosis framework.
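
    Delta debugging systematically shrinks a failure-inducing input — here, a sequence of recorded events — to a minimal subset that still triggers the failure. Below is a compact, complement-reduction sketch of the classic ddmin algorithm, independent of the paper's replay system; `fails` is a stand-in for "replay this event subset and check whether the bug reproduces":

```python
#!/usr/bin/env python3
# Classic ddmin (complement-reduction form): minimize a failing sequence.
# Sketch only; in the paper's setting, `fails` would drive event replay.

def ddmin(events, fails):
    n = 2  # current granularity
    while len(events) >= 2:
        chunk = max(len(events) // n, 1)
        subsets = [events[i:i + chunk] for i in range(0, len(events), chunk)]
        reduced = False
        for i in range(len(subsets)):
            complement = [e for j, s in enumerate(subsets) if j != i for e in s]
            if fails(complement):        # failure persists without subset i
                events = complement
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(events):
                break                    # cannot split further: 1-minimal
            n = min(n * 2, len(events))  # refine granularity and retry
    return events

if __name__ == "__main__":
    # Toy failure: the bug triggers whenever events 3 and 7 both occur.
    failing = lambda evs: 3 in evs and 7 in evs
    print(ddmin(list(range(10)), failing))   # -> [3, 7]
```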

  7. Drowning in PC Management: Could a Linux Solution Save Us?

    Science.gov (United States)

    Peters, Kathleen A.

    2004-01-01

    Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…

  8. Analisis Perbandingan Load Balancing Web Server Tunggal Dengan Web Server Cluster Menggunakan Linux Virtual Server

    OpenAIRE

    Lukitasari, Desy; Oklilas, Ahmad Fali

    2010-01-01

    A virtual server is a highly scalable and highly available server built on a cluster of several real servers. The real servers and the load balancer are interconnected either over a high-speed local network or over geographically separated links. The load balancer can dispatch requests to the different servers, presenting the parallel services of the cluster as a single virtual service on a single IP address, and request dispatching can use IP load...

  9. A PC parallel port button box provides millisecond response time accuracy under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.
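
    On Linux, the legacy parallel port's registers sit at fixed I/O addresses (conventionally data at 0x378 and status at 0x379), and with root privileges they can be read through /dev/port. A polling sketch of that idea follows — an illustration only, not the article's program, and the base address is an assumption for a first parallel port:

```python
#!/usr/bin/env python3
# Poll the parallel-port status register via /dev/port (root required).
# Illustrative sketch; 0x379 assumes the legacy first-port base 0x378.
import os
import time

STATUS = 0x379  # status register: bits reflect ACK/BUSY/PE/SLCT/ERROR pins

def poll_button(timeout_s=10.0):
    fd = os.open("/dev/port", os.O_RDONLY)
    try:
        os.lseek(fd, STATUS, os.SEEK_SET)
        idle = os.read(fd, 1)[0]               # baseline pin state
        t0 = time.monotonic()
        while time.monotonic() - t0 < timeout_s:
            os.lseek(fd, STATUS, os.SEEK_SET)
            now = os.read(fd, 1)[0]
            if now != idle:                    # a status pin changed: press
                return (time.monotonic() - t0) * 1000.0
        return None
    finally:
        os.close(fd)

if __name__ == "__main__":
    t = poll_button()
    print("no press detected" if t is None else f"press at {t:.3f} ms")
```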

  10. Porting oxbash to linux and its application in SD-shell calculations

    International Nuclear Information System (INIS)

    Suman, H.; Suleiman, S.

    1998-01-01

    Oxbash, a code for nuclear structure calculations within the shell-model approach, was ported to Linux, a UNIX clone for PCs. Because the code version we had contained many faults, deep corrective actions had to be undertaken. This was done through intensive use of UNIX utilities like sed, nm and make, in addition to proper shell-script programming. Our version contained calls to missing subroutines; some of these were included from C and f90 libraries, others had to be written separately. All these actions were organized and automated through a robust system of Makefiles. Finally the code was tested and applied to nuclei with 18 and 20 nucleons. (author)

  11. ATLAS grid compute cluster with virtualized service nodes

    International Nuclear Information System (INIS)

    Mejia, J; Stonjek, S; Kluth, S

    2010-01-01

    The ATLAS Computing Grid consists of several hundred compute clusters distributed around the world as part of the Worldwide LHC Computing Grid (WLCG). The Grid middleware and the ATLAS software, which have to be installed on each site, often require a certain Linux distribution and sometimes even a specific version thereof. On the other hand, mostly for maintenance reasons, computer centres install the same operating system and version on all computers. This can lead to problems with the Grid middleware if the local version differs from the one for which it was developed. At RZG we partly solved this conflict by using virtualization technology for the service nodes. We will present the setup used at RZG and show how it helped to solve the problems described above. In addition we will illustrate the further advantages gained by this setup.

  12. MEMBANGUN SERVER BERBASIS LINUX PADA JARINGAN LAN DI LABOR SISTEM INFORMASI JURUSAN TEKNOLOGI INFORMASI POLITEKNIK NEGERI PADANG

    Directory of Open Access Journals (Sweden)

    Fifi Rasyidah

    2014-03-01

    Full Text Available The Information Systems Laboratory of the Information Technology Department, Politeknik Negeri Padang, has 30 computers as educational facilities supporting the learning process. All of the computers are used at the same time during a lesson, which makes it difficult to monitor each student's activities. To provide a solution for the lecturers, the author set up a server using the Linux operating system and clients using Windows, with the Samba file server in between. Using Samba, the lecturer can share data and use the server as a data storage medium. In addition, the author used VNC (Virtual Network Computing) to simplify monitoring and supervising the clients' work. Based on the results of the experiments, it can be concluded that the Samba file server can be used once certain configuration files have been adjusted, and that VNC can control all of the clients. The author suggests using the latest version of Samba, which has more features than previous ones, and configuring VNC on Ubuntu Linux, where the service is available. Keywords: Samba File Server, VNC, Ubuntu installation

  13. Properties of the open cluster system

    International Nuclear Information System (INIS)

    Janes, K.A.; Tilley, C.; Lynga, G.

    1988-01-01

    A system of weights corresponding to the precision of open cluster data is described. Using these weights, some properties of open clusters can be studied more accurately than was possible earlier. It is clear that there are three types of objects: unbound clusters, bound clusters in the thin disk, and older bound clusters. Galactic gradients of metallicity, longevity, and linear diameter are studied. Distributions at right angles to the galactic plane are discussed in the light of the different cluster types. The clumping of clusters in complexes is studied. An estimate of the selection effects influencing the present material of open cluster data is made in order to evaluate the role played by open clusters in the history of the galactic disk. 58 references

  14. Fedora Linux A Complete Guide to Red Hat's Community Distribution

    CERN Document Server

    Tyler, Chris

    2009-01-01

    Whether you are running the stable version of Fedora Core or bleeding-edge Rawhide releases, this book has something for every level of user. The modular, lab-based approach not only shows you how things work--but also explains why--and provides you with the answers you need to get up and running with Fedora Linux.

  15. Climate tools in mainstream Linux distributions

    Science.gov (United States)

    McKinstry, Alastair

    2015-04-01

    Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that are normally ignorable. How tools are hardened, and what changes are required to handle security upgrades, are described. Secondly, integrating libraries and components (e.g. Python modules) requires planning by their writers: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.

  16. Quark cluster model in the three-nucleon system

    International Nuclear Information System (INIS)

    Osman, A.

    1986-11-01

    The quark cluster model is used to investigate the structure of the three-nucleon systems. The nucleon-nucleon interaction is proposed considering the colour-nucleon clusters and incorporating the quark degrees of freedom. The quark-quark potential in the quark compound bag model agrees with the central-force potentials. The confinement potential reduces the short-range repulsion. The colour van der Waals force is determined. Then, the probability of quark clusters in the three-nucleon bound-state systems is numerically calculated using realistic nuclear wave functions. The results of the present calculations show that quarks cluster themselves in three-quark systems, building the quark cluster model for the trinucleon system. (author)

  17. Operating system MINIX

    OpenAIRE

    JIRKŮ, Radek

    2012-01-01

    This thesis introduces readers to the MINIX operating system, which was used in the creation of the Linux OS. It discusses the history and development of the system and explains its kernel and file system. It also walks through the installation and configuration of MINIX in a virtual machine step by step, and recounts the dispute between the creator of MINIX and the creator of Linux. In conclusion, it compares MINIX with Linux and summarizes the advantages, disadvantages and present-day use of the operating system.

  18. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray to be very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
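
    Provisioning KVM guests programmatically on compute nodes is typically done through libvirt. A minimal sketch with the libvirt-python bindings (the domain XML is trimmed to essentials; the image path, bridge name, memory, and vCPU count are illustrative assumptions, not the paper's configuration):

```python
#!/usr/bin/env python3
# Minimal libvirt sketch: define and start a KVM guest on a compute node.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>vcluster-node0</name>
  <memory unit='GiB'>4</memory>
  <vcpu>4</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/node0.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>   <!-- e.g. an Ethernet-over-fabric bridge -->
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the guest definition
dom.create()                            # boot it
print(dom.name(), "state:", dom.state())
conn.close()
```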

  19. RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul D. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-06-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  20. RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows

    International Nuclear Information System (INIS)

    Bayless, Paul David

    2015-01-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  1. Supersymmetry for nuclear cluster systems

    International Nuclear Information System (INIS)

    Levai, G.; Cseh, J.; Van Isacker, P.

    2001-01-01

    A supersymmetry scheme is proposed for nuclear cluster systems. The bosonic sector of the superalgebra describes the relative motion of the clusters, while its fermionic sector is associated with their internal structure. An example of core+α configurations is discussed in which the core is a p-shell nucleus and the underlying superalgebra is U(4/12). The α-cluster states of the nuclei 20 Ne and 19 F are analysed and correlations between their spectra, electric quadrupole transitions, and one-nucleon transfer reactions are interpreted in terms of U(4/12) supersymmetry. (author)

  2. State of the art of parallel scientific visualization applications on PC clusters; Etat de l'art des applications de visualisation scientifique paralleles sur grappes de PC

    Energy Technology Data Exchange (ETDEWEB)

    Juliachs, M

    2004-07-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform, consisting of an eight-node PC cluster running Linux and a tiled display, was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  3. Lattice QCD calculations on commodity clusters at DESY

    International Nuclear Information System (INIS)

    Gellrich, A.; Pop, D.; Wegner, P.; Wittig, H.; Hasenbusch, M.; Jansen, K.

    2003-06-01

    Lattice Gauge Theory is an integral part of particle physics that requires high performance computing in the multi-Tflops regime. These requirements are motivated by the rich research program and the physics milestones to be reached by the lattice community. Over the last years the enormous gains in processor performance, memory bandwidth, and external I/O bandwidth for parallel applications have made commodity clusters exploiting PCs or workstations suitable for large Lattice Gauge Theory applications as well. For more than one year two clusters have been operated at the two DESY sites in Hamburg and Zeuthen, consisting of 32 and 16 dual-CPU PCs, respectively, equipped with Intel Pentium 4 Xeon processors. Interconnection of the nodes is done by way of Myrinet. Linux was chosen as the operating system. In the course of the projects, benchmark programs for architectural studies were developed. The performance of the Wilson-Dirac operator (also in an even-odd preconditioned version), as the inner loop of the Lattice QCD (LQCD) algorithms, plays the most important role in classifying the hardware basis to be used. Using the SIMD streaming extensions (SSE/SSE2) on Intel's Pentium 4 Xeon CPUs gives promising results for both the single-CPU and the parallel version. The parallel performance, in addition to the CPU power and the memory throughput, is nevertheless strongly influenced by the behavior of hardware components like the PC chip-set and the communication interfaces. The paper starts by giving a short explanation of the physics background and the motivation for using PC clusters for Lattice QCD. Subsequently, the concept, implementation, and operating experiences of the two clusters are discussed. Finally, the paper presents benchmark results and discusses comparisons to systems with different hardware components, including Myrinet-, Gigabit-Ethernet-, and Infiniband-based interconnects. (orig.)

  4. Linux OS integrated modular avionics application development framework with apex API of ARINC653 specification

    Directory of Open Access Journals (Sweden)

    Anna V. Korneenkova

    2017-01-01

    Full Text Available The framework is made to provide tools to develop the integrated modular avionics (IMA applications, which could be launched on the target platform LynxOs-178 without modifying their source code. The framework usage helps students to form skills for developing modern modules of the avionics. In addition, students obtain deeper knowledge for the development of competencies in the field of technical creativity by using of the framework.The article describes the architecture and implementation of the Linux OS framework for ARINC653 compliant OS application development.The proposed approach reduces ARINC-653 application development costs and gives a unified tool to implement OS vendor independent code that meets specification. To achieve import substitution free and open-source Linux OS is used as an environment for developing IMA applications.The proposed framework is applicable for using as the tool to develop IMA applications and as the tool for development of the following competencies: the ability to master techniques of using software to solve practical problems, the ability to develop components of hardware and software systems and databases, using modern tools and programming techniques, the ability to match hardware and software tools in the information and automated systems, the readiness to apply the fundamentals of informatics and programming to designing, constructing and testing of software products, the readiness to apply basic methods and tools of software development, knowledge of various technologies of software development.

  5. Recommending the heterogeneous cluster type multi-processor system computing

    International Nuclear Information System (INIS)

    Iijima, Nobukazu

    2010-01-01

    Real-time reactor simulator had been developed by reusing the equipment of the Musashi reactor and its performance improvement became indispensable for research tools to increase sampling rate with introduction of arithmetic units using multi-Digital Signal Processor(DSP) system (cluster). In order to realize the heterogeneous cluster type multi-processor system computing, combination of two kinds of Control Processor (CP) s, Cluster Control Processor (CCP) and System Control Processor (SCP), were proposed with Large System Control Processor (LSCP) for hierarchical cluster if needed. Faster computing performance of this system was well evaluated by simulation results for simultaneous execution of plural jobs and also pipeline processing between clusters, which showed the system led to effective use of existing system and enhancement of the cost performance. (T. Tanaka)

  6. Q-systems as cluster algebras

    International Nuclear Information System (INIS)

    Kedem, Rinat

    2008-01-01

    Q-systems first appeared in the analysis of the Bethe equations for the XXX model and generalized Heisenberg spin chains (Kirillov and Reshetikhin 1987 Zap. Nauchn. Sem. Leningr. Otd. Mat. Inst. Steklov. 160 211-21, 301). Such systems are known to exist for any simple Lie algebra and many other Kac-Moody algebras. We formulate the Q-system associated with any simple, simply-laced Lie algebras g in the language of cluster algebras (Fomin and Zelevinsky 2002 J. Am. Math. Soc. 15 497-529), and discuss the relation of the polynomiality property of the solutions of the Q-system in the initial variables, which follows from the representation-theoretical interpretation, to the Laurent phenomenon in cluster algebras (Fomin and Zelevinsky 2002 Adv. Appl. Math. 28 119-44)

  7. A real-time computer simulation of nuclear simulator software using standard PC hardware and linux environments

    International Nuclear Information System (INIS)

    Cha, K. H.; Kweon, K. C.

    2001-01-01

    A feasibility study, which standard PC hardware and Real-Time Linux are applied to real-time computer simulation of software for a nuclear simulator, is presented in this paper. The feasibility prototype was established with the existing software in the Compact Nuclear Simulator (CNS). Throughout the real-time implementation in the feasibility prototype, we has identified that the approach can enable the computer-based predictive simulation to be approached, due to both the remarkable improvement in real-time performance and the less efforts for real-time implementation under standard PC hardware and Real-Time Linux envrionments

  8. State of the art of parallel scientific visualization applications on PC clusters; Etat de l'art des applications de visualisation scientifique paralleles sur grappes de PC

    Energy Technology Data Exchange (ETDEWEB)

    Juliachs, M

    2004-07-01

    In this state of the art on parallel scientific visualization applications on PC clusters, we deal with both surface and volume rendering approaches. We first analyze available PC cluster configurations and existing parallel rendering software components for parallel graphics rendering. CEA/DIF has been studying cluster visualization since 2001. This report is part of a study to set up a new visualization research platform. This platform consisting of an eight-node PC cluster under Linux and a tiled display was installed in collaboration with Versailles-Saint-Quentin University in August 2003. (author)

  9. Hybrid cloud and cluster computing paradigms for life science applications.

    Science.gov (United States)

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing especially for parallel data intensive applications. However they have limited applicability to some areas such as data mining because MapReduce has poor performance on problems with an iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability comparisons in several important non iterative cases. These are linked to MPI applications for final stages of the data analysis. Further we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment while Twister promises a uniform programming environment for many Life Sciences applications. We used commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.

  10. Switching the JLab Accelerator Operations Environment from an HP-UX Unix-based to a PC/Linux-based environment

    International Nuclear Information System (INIS)

    Mcguckin, Theodore

    2008-01-01

    The Jefferson Lab Accelerator Controls Environment (ACE) was predominantly based on the HP-UX Unix platform from 1987 through the summer of 2004. During this period the Accelerator Machine Control Center (MCC) underwent a major renovation which included introducing Redhat Enterprise Linux machines, first as specialized process servers and then gradually as general login servers. As computer programs and scripts required to run the accelerator were modified, and inherent problems with the HP-UX platform compounded, more development tools became available for use with Linux and the MCC began to be converted over. In May 2008 the last HP-UX Unix login machine was removed from the MCC, leaving only a few Unix-based remote-login servers still available. This presentation will explore the process of converting an operational Control Room environment from the HP-UX to Linux platform as well as the many hurdles that had to be overcome throughout the transition period

  11. Clustering execution in a processing system to increase power savings

    Energy Technology Data Exchange (ETDEWEB)

    Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.; Vega, Augusto J.

    2018-04-03

    Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.

  12. Clustering execution in a processing system to increase power savings

    Science.gov (United States)

    Bose, Pradip; Buyuktosunoglu, Alper; Jacobson, Hans M.; Vega, Augusto J.

    2018-03-20

    Embodiments relate to clustering execution in a processing system. An aspect includes accessing a control flow graph that defines a data dependency and an execution sequence of a plurality of tasks of an application that executes on a plurality of system components. The execution sequence of the tasks in the control flow graph is modified as a clustered control flow graph that clusters active and idle phases of a system component while maintaining the data dependency. The clustered control flow graph is sent to an operating system, where the operating system utilizes the clustered control flow graph for scheduling the tasks.

  13. Dynamically Allocated Virtual Clustering Management System Users Guide

    Science.gov (United States)

    2016-11-01

    ARL-SR-0366 ● NOV 2016 US Army Research Laboratory Dynamically Allocated Virtual Clustering Management System User’s Guide by... Clustering Management System User’s Guide by Kelvin M Marcus Computational and Information Sciences Directorate, ARL...

  14. Laboratorio de Seguridad Informática con Kali Linux

    OpenAIRE

    Gutiérrez Benito, Fernando

    2014-01-01

    Laboratorio de Seguridad Informática usando la distribución Linux Kali, un sistema operativo dedicado a la auditoría de seguridad informática. Se emplearán herramientas especializadas en los distintos campos de la seguridad, como nmap, Metaspoit, w3af, John the Ripper o Aircrack-ng. Se intentará que los alumnos comprendan la necesidad de crear aplicaciones seguras así como pueda servir de base para aquellos que deseen continuar en el mundo de la seguridad informática. Grado en Ingeniería T...

  15. Documenting and automating collateral evolutions in Linux device drivers

    DEFF Research Database (Denmark)

    Padioleau, Yoann; Hansen, René Rydhof; Lawall, Julia

    2008-01-01

    . Manually performing such collateral evolutions is time-consuming and unreliable, and has lead to errors when modifications have not been done consistently. In this paper, we present an automatic program transformation tool, Coccinelle, for documenting and automating device driver collateral evolutions...... programmer. We have evaluated our approach on 62 representative collateral evolutions that were previously performed manually in Linux 2.5 and 2.6. On a test suite of over 5800 relevant driver files, the semantic patches for these collateral evolutions update over 93% of the files completely...

  16. THE HST/ACS COMA CLUSTER SURVEY. IV. INTERGALACTIC GLOBULAR CLUSTERS AND THE MASSIVE GLOBULAR CLUSTER SYSTEM AT THE CORE OF THE COMA GALAXY CLUSTER

    International Nuclear Information System (INIS)

    Peng, Eric W.; Ferguson, Henry C.; Goudfrooij, Paul; Hammer, Derek; Lucey, John R.; Marzke, Ronald O.; Puzia, Thomas H.; Carter, David; Balcells, Marc; Bridges, Terry; Chiboucas, Kristin; Del Burgo, Carlos; Graham, Alister W.; Guzman, Rafael; Hudson, Michael J.; Matkovic, Ana

    2011-01-01

    Intracluster stellar populations are a natural result of tidal interactions in galaxy clusters. Measuring these populations is difficult, but important for understanding the assembly of the most massive galaxies. The Coma cluster of galaxies is one of the nearest truly massive galaxy clusters and is host to a correspondingly large system of globular clusters (GCs). We use imaging from the HST/ACS Coma Cluster Survey to present the first definitive detection of a large population of intracluster GCs (IGCs) that fills the Coma cluster core and is not associated with individual galaxies. The GC surface density profile around the central massive elliptical galaxy, NGC 4874, is dominated at large radii by a population of IGCs that extend to the limit of our data (R +4000 -5000 (systematic) IGCs out to this radius, and that they make up ∼70% of the central GC system, making this the largest GC system in the nearby universe. Even including the GC systems of other cluster galaxies, the IGCs still make up ∼30%-45% of the GCs in the cluster core. Observational limits from previous studies of the intracluster light (ICL) suggest that the IGC population has a high specific frequency. If the IGC population has a specific frequency similar to high-S N dwarf galaxies, then the ICL has a mean surface brightness of μ V ∼ 27 mag arcsec -2 and a total stellar mass of roughly 10 12 M sun within the cluster core. The ICL makes up approximately half of the stellar luminosity and one-third of the stellar mass of the central (NGC 4874+ICL) system. The color distribution of the IGC population is bimodal, with blue, metal-poor GCs outnumbering red, metal-rich GCs by a ratio of 4:1. The inner GCs associated with NGC 4874 also have a bimodal distribution in color, but with a redder metal-poor population. The fraction of red IGCs (20%), and the red color of those GCs, implies that IGCs can originate from the halos of relatively massive, L* galaxies, and not solely from the disruption of

  17. REMASTERING SISTEM OPERASI BERBASIS OPEN SOURCE LINUX UNTUK PEMBELAJARAN KIMIA (STUDI KASUS PADA MATA KULIAH KOMPUTASI DATA JURUSAN ANALIS KIMIA UNDIKSHA

    Directory of Open Access Journals (Sweden)

    Ni Wayan Martiningsih

    2015-01-01

    Full Text Available Penelitian ini bertujuan untuk merancang pengembangan remastering sistem operasi berbasis open source linux untuk pembelajaran kimia pada mata kuliah Komputasi Data dan mengetahui respon mahasiswa terhadap remastering sistem operasi tersebut. Rancangan yang digunakan dalam penelitian ini adalah rancangan penelitian pengembangan (Research and Development. Pengumpulan data respon mahasiswa dilakukan dengan cara pemberian angket kepada mahasiswa. Data yang terkumpul dianalisis secara statistik deskriptif. Rancangan remastering dilakukan berdasarkan analisis kebutuhan pada mata kuliah Komputasi Data. Remastering dirancang menggunakan Linux Ubuntu 10.4 dan menggabungkan program aplikasi kimia yaitu Avogadro, Bkchen, Chemical calculator, dwawXTL, GabEdit, GchemPaint, Gperiodic, Kalzium, PeriodicTable, XdrawChem. Respon mahasiswa terhadap  pengembangan remastering sistem operasi linux untuk pembelajaran kimia pada mata kuliah Komputasi Data adalah sangat positif

  18. Subspace identification of distributed clusters of homogeneous systems

    NARCIS (Netherlands)

    Yu, C.; Verhaegen, M.H.G.

    2017-01-01

    This note studies the identification of a network comprised of interconnected clusters of LTI systems. Each cluster consists of homogeneous dynamical systems, and its interconnections with the rest of the network are unmeasurable. A subspace identification method is proposed for identifying a single

  19. Monte Carlo simulations on a 9-node PC cluster

    International Nuclear Information System (INIS)

    Gouriou, J.

    2001-01-01

    Monte Carlo simulation methods are frequently used in the fields of medical physics, dosimetry and metrology of ionising radiation. Nevertheless, the main drawback of this technique is to be computationally slow, because the statistical uncertainty of the result improves only as the square root of the computational time. We present a method, which allows to reduce by a factor 10 to 20 the used effective running time. In practice, the aim was to reduce the calculation time in the LNHB metrological applications from several weeks to a few days. This approach includes the use of a PC-cluster, under Linux operating system and PVM parallel library (version 3.4). The Monte Carlo codes EGS4, MCNP and PENELOPE have been implemented on this platform and for the two last ones adapted for running under the PVM environment. The maximum observed speedup is ranging from a factor 13 to 18 according to the codes and the problems to be simulated. (orig.)

  20. Research on Linux Trusted Boot Method Based on Reverse Integrity Verification

    Directory of Open Access Journals (Sweden)

    Chenlin Huang

    2016-01-01

    Full Text Available Trusted computing aims to build a trusted computing environment for information systems with the help of secure hardware TPM, which has been proved to be an effective way against network security threats. However, the TPM chips are not yet widely deployed in most computing devices so far, thus limiting the applied scope of trusted computing technology. To solve the problem of lacking trusted hardware in existing computing platform, an alternative security hardware USBKey is introduced in this paper to simulate the basic functions of TPM and a new reverse USBKey-based integrity verification model is proposed to implement the reverse integrity verification of the operating system boot process, which can achieve the effect of trusted boot of the operating system in end systems without TPMs. A Linux operating system booting method based on reverse integrity verification is designed and implemented in this paper, with which the integrity of data and executable files in the operating system are verified and protected during the trusted boot process phase by phase. It implements the trusted boot of operation system without TPM and supports remote attestation of the platform. Enhanced by our method, the flexibility of the trusted computing technology is greatly improved and it is possible for trusted computing to be applied in large-scale computing environment.

  1. MiniDLNA_tweaker : Aplicació per configurar i previsualitzar el servidor MiniDLNA per a GNU/Linux

    OpenAIRE

    Martínez Lloveras, Jordi

    2013-01-01

    L'aplicació MiniDLNA per a GNU/Linux és un servidor lleuger que compleix els estàndards DLNA/UPnP configurable a través d'un simple arxiu de text, això la fa ideal per al propòsit d'implantació d'un servidor que ofereixi els continguts a tots els dispositius que compleixin els estàndards esmentats. La aplicación MiniDLNA para GNU/Linux es un servidor ligero que cumple los estándares DLNA/UPnP configurable a través de un simple archivo de texto, esto la hace ideal para el propósito de impla...

  2. [Making a low cost IPSec router on Linux and the assessment for practical use].

    Science.gov (United States)

    Amiki, M; Horio, M

    2001-09-01

    We installed Linux and FreeS/WAN on a PC/AT compatible machine to make an IPSec router. We measured the time of ping/ftp, only in the university, between the university and the external network. Between the university and the external network (the Internet), there were no differences. Therefore, we concluded that CPU load was not remarkable at low speed networks, because packets exchanged via the Internet are small, or compressions of VPN are more effective than encoding and decoding. On the other hand, in the university, the IPSec router performed down about 20-30% compared with normal IP communication, but this is not a serious problem for practical use. Recently, VPN machines are becoming cheaper, but they do not function sufficiently to create a fundamental VPN environment. Therefore, if one wants a fundamental VPN environment at a low cost, we believe you should select a VPN router on Linux.

  3. PsyToolkit: a software package for programming psychological experiments using Linux.

    Science.gov (United States)

    Stoet, Gijsbert

    2010-11-01

    PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the Gnu Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  4. WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code

    DEFF Research Database (Denmark)

    Lawall, Julia; Brunel, Julien Pierre Manuel; Palix, Nicolas Jean-Michel

    2009-01-01

    the tools on specific kinds of bugs and to relate the results to patterns in the source code. We propose a declarative approach to bug finding in Linux OS code using a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer expresses...

  5. NSTX-U Advances in Real-Time C++11 on Linux

    Science.gov (United States)

    Erickson, Keith G.

    2015-08-01

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11 standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic is a failure) of 200 microseconds.

  6. NSTX-U Advances in Real-Time C++11 on Linux

    International Nuclear Information System (INIS)

    Erickson, Keith G.

    2015-01-01

    Programming languages like C and Ada combined with proprietary embedded operating systems have dominated the real-time application space for decades. The new C++11standard includes native, language-level support for concurrency, a required feature for any nontrivial event-oriented real-time software. Threads, Locks, and Atomics now exist to provide the necessary tools to build the structures that make up the foundation of a complex real-time system. The National Spherical Torus Experiment Upgrade (NSTX-U) at the Princeton Plasma Physics Laboratory (PPPL) is breaking new ground with the language as applied to the needs of fusion devices. A new Digital Coil Protection System (DCPS) will serve as the main protection mechanism for the magnetic coils, and it is written entirely in C++11 running on Concurrent Computer Corporation's real-time operating system, RedHawk Linux. It runs over 600 algorithms in a 5 kHz control loop that determine whether or not to shut down operations before physical damage occurs. To accomplish this, NSTX-U engineers developed software tools that do not currently exist elsewhere, including real-time atomic synchronization, real-time containers, and a real-time logging framework. Together with a recent (and carefully configured) version of the GCC compiler, these tools enable data acquisition, processing, and output using a conventional operating system to meet a hard real-time deadline (that is, missing one periodic is a failure) of 200 microseconds

  7. A Framework for Adaptable Operating and Runtime Systems

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, Thomas [Indiana Univ., Bloomington, IN (United States)

    2014-03-04

    The emergence of new classes of HPC systems where performance improvement is enabled by Moore’s Law for technology is manifest through multi-core-based architectures including specialized GPU structures. Operating systems were originally designed for control of uniprocessor systems. By the 1980s multiprogramming, virtual memory, and network interconnection were integral services incorporated as part of most modern computers. HPC operating systems were primarily derivatives of the Unix model with Linux dominating the Top-500 list. The use of Linux for commodity clusters was first pioneered by the NASA Beowulf Project. However, the rapid increase in number of cores to achieve performance gain through technology advances has exposed the limitations of POSIX general-purpose operating systems in scaling and efficiency. This project was undertaken through the leadership of Sandia National Laboratories and in partnership of the University of New Mexico to investigate the alternative of composable lightweight kernels on scalable HPC architectures to achieve superior performance for a wide range of applications. The use of composable operating systems is intended to provide a minimalist set of services specifically required by a given application to preclude overheads and operational uncertainties (“OS noise”) that have been demonstrated to degrade efficiency and operational consistency. This project was undertaken as an exploration to investigate possible strategies and methods for composable lightweight kernel operating systems towards support for extreme scale systems.

  8. Vector Nonlinear Time-Series Analysis of Gamma-Ray Burst Datasets on Heterogeneous Clusters

    Directory of Open Access Journals (Sweden)

    Ioana Banicescu

    2005-01-01

    Full Text Available The simultaneous analysis of a number of related datasets using a single statistical model is an important problem in statistical computing. A parameterized statistical model is to be fitted on multiple datasets and tested for goodness of fit within a fixed analytical framework. Definitive conclusions are hopefully achieved by analyzing the datasets together. This paper proposes a strategy for the efficient execution of this type of analysis on heterogeneous clusters. Based on partitioning processors into groups for efficient communications and a dynamic loop scheduling approach for load balancing, the strategy addresses the variability of the computational loads of the datasets, as well as the unpredictable irregularities of the cluster environment. Results from preliminary tests of using this strategy to fit gamma-ray burst time profiles with vector functional coefficient autoregressive models on 64 processors of a general purpose Linux cluster demonstrate the effectiveness of the strategy.

  9. Properties of the disk system of globular clusters

    International Nuclear Information System (INIS)

    Armandroff, T.E.

    1989-01-01

    A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 + or - 29 km/s and a line-of-sight velocity dispersion of 59 + or - 14 km/s have been found for the metal-rich clusters. 70 references

  10. The next Generation of Exascale-class Systems: the ExaNeSt Project

    OpenAIRE

    R. Ammendolay; A. Biagioni; P. Cretaro; O. Frezza; F. Lo Cicero; A. Lonardo; M. Martinelli; P. S. Paolucci; E. Pastorelli; F. Simula; P. Vicini; G. Taffoni; J. Goodacree; M. Lujn; J. Navaridas

    2017-01-01

    The ExaNeSt project started on December 2015 and is funded by EU H2020 research framework (call H2020-FETHPC-2014, n. 671553) to study the adoption of low-cost, Linux-based power-efficient 64-bit ARM processors clusters for Exascale-class systems. The ExaNeSt consortium pools partners with industrial and academic research expertise in storage, interconnects and applications that share a vision of an European Exascale-class supercomputer. Their goal is designing and implementing a physical rac...

  11. The system of indicators for regional cluster formation assessment

    Directory of Open Access Journals (Sweden)

    A. A. Mantsaeva

    2016-01-01

    Full Text Available The article shows the result of working-out the cluster formation assessment system, and each indicator of this system reflect the specific clusters property - cooperation and efficiency Completeness and depth of the system of indicators provided by systematic approach and a representing of quantitative and qualitative aspects of cluster formation process. A feature of the technique is the use of indicators that require a special accounting and enable tracking of a certain stage of cluster development. Testing the system of indicators produced by the example on the tourism industry, which is due, firstly, the high development rate of the tourist services sphere in comparison with the branches of material production, and, secondly, the increased interest in the establishment of regional tourism and recreation clusters with the country's leadership. Quantitative indicators of the formation and development of tourism and recreation clusters – geographic proximity of companies cluster members, the effectiveness of the sector for the regional economy, innovation activity, exports of goods and services, intended for the regions of the South and the North Caucasian Federal District. Universality technique ensures its empirical base - official data from Rosstat, the Federal Agency for Tourism, as well as the results of mass opinion polls carried out in all regions of the country as part of the annual “"Monitoring the quality of public and municipal services” (on the Republic of Kalmykia material. In general, we believe that the application of the developed system of indicators will contribute to intensify and improve the quality of cluster policy, implemented by the regional executive bodies and local authorities.

  12. Programación de LEGO MindStorms bajo GNU/Linux

    OpenAIRE

    Matellán Olivera, Vicente; Heras Quirós, Pedro de las; Centeno González, José; González Barahona, Jesús

    2002-01-01

    GNU/Linux sobre un ordenador personal es la opción libre preferida por muchos desarrolladores de aplicaciones, pero también es una plataforma de desarrollo muy popular para otros sistemas, incluida la programación de robots, en particular es muy adecuada para jugar con los LEGO Mindstorms. En este artículo presentaremos las dos opciones más extendidas a la hora de programar estos juguetes: NQC y LegOS. NQC es una versión reducida de C que permite el desarrollo rápido de programas ...

  13. BioSMACK: a linux live CD for genome-wide association analyses.

    Science.gov (United States)

    Hong, Chang Bum; Kim, Young Jin; Moon, Sanghoon; Shin, Young-Ah; Go, Min Jin; Kim, Dong-Joon; Lee, Jong-Young; Cho, Yoon Shin

    2012-01-01

    Recent advances in high-throughput genotyping technologies have enabled us to conduct a genome-wide association study (GWAS) on a large cohort. However, analyzing millions of single nucleotide polymorphisms (SNPs) is still a difficult task for researchers conducting a GWAS. Several difficulties such as compatibilities and dependencies are often encountered by researchers using analytical tools, during the installation of software. This is a huge obstacle to any research institute without computing facilities and specialists. Therefore, a proper research environment is an urgent need for researchers working on GWAS. We developed BioSMACK to provide a research environment for GWAS that requires no configuration and is easy to use. BioSMACK is based on the Ubuntu Live CD that offers a complete Linux-based operating system environment without installation. Moreover, we provide users with a GWAS manual consisting of a series of guidelines for GWAS and useful examples. BioSMACK is freely available at http://ksnp.cdc. go.kr/biosmack.

  14. FOSSIL SYSTEMS IN THE 400d CLUSTER CATALOG

    International Nuclear Information System (INIS)

    Voevodkin, Alexey; Borozdin, Konstantin; Heitmann, Katrin; Habib, Salman; Vikhlinin, Alexey; Mescheryakov, Alexander; Burenin, Rodion; Hornstrup, Allan

    2010-01-01

    We report the discovery of seven new fossil systems in the 400d cluster survey. Our search targets nearby, z ≤ 0.2, and X-ray bright, L X ≥ 10 43 erg s -1 , clusters of galaxies. Where available, we measure the optical luminosities from Sloan Digital Sky Survey images, thereby obtaining uniform sets of both X-ray and optical data. Our selection criteria identify 12 fossil systems, out of which five are known from previous studies. While in general agreement with earlier results, our larger sample size allows us to put tighter constraints on the number density of fossil clusters. It has been previously reported that fossil groups are more X-ray bright than other X-ray groups of galaxies for the same optical luminosity. We find, however, that the X-ray brightness of massive fossil systems is consistent with that of the general population of galaxy clusters and follows the same L X -L opt scaling relation.

  15. The signatures of the parental cluster on field planetary systems

    Science.gov (United States)

    Cai, Maxwell Xu; Portegies Zwart, Simon; van Elteren, Arjen

    2018-03-01

    Due to the high stellar densities in young clusters, planetary systems formed in these environments are likely to have experienced perturbations from encounters with other stars. We carry out direct N-body simulations of multiplanet systems in star clusters to study the combined effects of stellar encounters and internal planetary dynamics. These planetary systems eventually become part of the Galactic field population as the parental cluster dissolves, which is where most presently known exoplanets are observed. We show that perturbations induced by stellar encounters lead to distinct signatures in the field planetary systems, most prominently, the excited orbital inclinations and eccentricities. Planetary systems that form within the cluster's half-mass radius are more prone to such perturbations. The orbital elements are most strongly excited in the outermost orbit, but the effect propagates to the entire planetary system through secular evolution. Planet ejections may occur long after a stellar encounter. The surviving planets in these reduced systems tend to have, on average, higher inclinations and larger eccentricities compared to systems that were perturbed less strongly. As soon as the parental star cluster dissolves, external perturbations stop affecting the escaped planetary systems, and further evolution proceeds on a relaxation time-scale. The outer regions of these ejected planetary systems tend to relax so slowly that their state carries the memory of their last strong encounter in the star cluster. Regardless of the stellar density, we observe a robust anticorrelation between multiplicity and mean inclination/eccentricity. We speculate that the `Kepler dichotomy' observed in field planetary systems is a natural consequence of their early evolution in the parental cluster.

  16. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two units of personal computer were successfully networked together to form a small scale cluster. Each of the processor involved are multicore processor which has four cores in it, thus made this cluster to have eight processors. Here, the cluster incorporate Ubuntu 14.04 LINUX environment with MPI implementation (MPICH2). Two main tests were conducted in order to test the cluster, which is communication test and performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem and were done by using simple MPI Hello Program where the program written in C language. Additional, performance test was also done to prove that this cluster calculation performance is much better than single CPU computer. In this performance test, four tests were done by running the same code by using single node, 2 processors, 4 processors, and 8 processors. The result shows that with additional processors, the time required to solve the problem decrease. Time required for the calculation shorten to half when we double the processors. To conclude, we successfully develop a small scale cluster computer using common hardware which capable of higher computing power when compare to single CPU processor, and this can be beneficial for research that require high computing power especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.

  17. QCS: a system for querying, clustering and summarizing documents.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.; Schlesinger, Judith D. (Center for Computing Sciences, Bowie, MD); O' Leary, Dianne P. (University of Maryland, College Park, MD); Conroy, John M. (Center for Computing Sciences, Bowie, MD)

    2006-10-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming', and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design

  18. QCS : a system for querying, clustering, and summarizing documents.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.

    2006-08-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend it to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence ''trimming'', and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of

  19. Efficient heterogeneous execution of Monte Carlo shielding calculations on a Beowulf cluster

    International Nuclear Information System (INIS)

    Dewar, D.; Hulse, P.; Cooper, A.; Smith, N.

    2005-01-01

    Recent work has been done in using a high-performance 'Beowulf' cluster computer system for the efficient distribution of Monte Carlo shielding calculations. This has enabled the rapid solution of complex shielding problems at low cost and with greater modularity and scalability than traditional platforms. The work has shown that a simple approach to distributing the workload is as efficient as using more traditional techniques such as PVM (Parallel Virtual Machine). In addition, when used in an operational setting this technique is fairer with the use of resources than traditional methods, in that it does not tie up a single computing resource but instead shares the capacity with other tasks. These developments in computing technology have enabled shielding problems to be solved that would have taken an unacceptably long time to run on traditional platforms. This paper discusses the BNFL Beowulf cluster and a number of tests that have recently been run to demonstrate the efficiency of the asynchronous technique in running the MCBEND program. The BNFL Beowulf currently consists of 84 standard PCs running RedHat Linux. Current performance of the machine has been estimated to be between 40 and 100 Gflop s -1 . When the whole system is employed on one problem up to four million particles can be tracked per second. There are plans to review its size in line with future business needs. (authors)

  20. SPECT detector system design based on embedded system

    International Nuclear Information System (INIS)

    Zhang Weizheng; Zhao Shujun; Zhang Lei; Sun Yuanling

    2007-01-01

    A single-photon emission computed tomography detector system based on embedded Linux designed. This system is composed of detector module, data acquisition module, ARM MPU module, network interface communication module and human machine interface module. Its software uses multithreading technology based on embedded Linux. It can achieve high speed data acquisition, real-time data correction and network data communication. It can accelerate the data acquisition and decrease the dead time. The accuracy and the stability of the system can be improved. (authors)

  1. Debian 7 system administration best practices

    CERN Document Server

    Pollei, Rich

    2013-01-01

    A step-by-step, example-based guide to learning how to install and administer the Debian Linux distribution.Debian 7: System Administration Best Practices is for users and administrators who are new to Debian, or for seasoned administrators who are switching to Debian from another Linux distribution. A basic knowledge of Linux or UNIX systems is useful, but not strictly required. Since the book is a high level guide, the reader should be willing to go to the referenced material for further details and practical examples.

  2. The properties of the disk system of globular clusters

    Science.gov (United States)

    Armandroff, Taft E.

    1989-01-01

    A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 + or - 29 km/s and a line-of-sight velocity dispersion of 59 + or - 14 km/s have been found for the metal-rich clusters.

  3. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  4. Supra-galactic colour patterns in globular cluster systems

    Science.gov (United States)

    Forte, Juan C.

    2017-07-01

    An analysis of globular cluster systems associated with galaxies included in the Virgo and Fornax Hubble Space Telescope-Advanced Camera Surveys reveals distinct (g - z) colour modulation patterns. These features appear on composite samples of globular clusters and, most evidently, in galaxies with absolute magnitudes Mg in the range from -20.2 to -19.2. These colour modulations are also detectable on some samples of globular clusters in the central galaxies NGC 1399 and NGC 4486 (and confirmed on data sets obtained with different instruments and photometric systems), as well as in other bright galaxies in these clusters. After discarding field contamination, photometric errors and statistical effects, we conclude that these supra-galactic colour patterns are real and reflect some previously unknown characteristic. These features suggest that the globular cluster formation process was not entirely stochastic but included a fraction of clusters that formed in a rather synchronized fashion over large spatial scales, and in a tentative time lapse of about 1.5 Gy at redshifts z between 2 and 4. We speculate that the putative mechanism leading to that synchronism may be associated with large scale feedback effects connected with violent star-forming events and/or with supermassive black holes.

  5. NSTX-U Control System Upgrades

    International Nuclear Information System (INIS)

    Erickson, K.G.; Gates, D.A.; Gerhardt, S.P.; Lawson, J.E.; Mozulay, R.; Sichta, P.; Tchilinguirian, G.J.

    2014-01-01

    The National Spherical Tokamak Experiment (NSTX) is undergoing a wealth of upgrades (NSTX-U). These upgrades, especially including an elongated pulse length, require broad changes to the control system that has served NSTX well. A new fiber serial Front Panel Data Port input and output (I/O) stream will supersede the aging copper parallel version. Driver support for the new I/O and cyber security concerns require updating the operating system from Redhat Enterprise Linux (RHEL) v4 to RedHawk (based on RHEL) v6. While the basic control system continues to use the General Atomics Plasma Control System (GA PCS), the effort to forward port the entire software package to run under 64-bit Linux instead of 32-bit Linux included PCS modifications subsequently shared with GA and other PCS users. Software updates focused on three key areas: (1) code modernization through coding standards (C99/C11), (2) code portability and maintainability through use of the GA PCS code generator, and (3) support of 64-bit platforms. Central to the control system upgrade is the use of a complete real time (RT) Linux platform provided by Concurrent Computer Corporation, consisting of a computer (iHawk), an operating system and drivers (RedHawk), and RT tools (NightStar). Strong vendor support coupled with an extensive RT toolset influenced this decision. The new real-time Linux platform, I/O, and software engineering will foster enhanced capability and performance for NSTX-U plasma control

  6. NSTX-U Control System Upgrades

    Energy Technology Data Exchange (ETDEWEB)

    Erickson, K.G., E-mail: kerickso@pppl.gov; Gates, D.A.; Gerhardt, S.P.; Lawson, J.E.; Mozulay, R.; Sichta, P.; Tchilinguirian, G.J.

    2014-06-15

    The National Spherical Tokamak Experiment (NSTX) is undergoing a wealth of upgrades (NSTX-U). These upgrades, especially including an elongated pulse length, require broad changes to the control system that has served NSTX well. A new fiber serial Front Panel Data Port input and output (I/O) stream will supersede the aging copper parallel version. Driver support for the new I/O and cyber security concerns require updating the operating system from Redhat Enterprise Linux (RHEL) v4 to RedHawk (based on RHEL) v6. While the basic control system continues to use the General Atomics Plasma Control System (GA PCS), the effort to forward port the entire software package to run under 64-bit Linux instead of 32-bit Linux included PCS modifications subsequently shared with GA and other PCS users. Software updates focused on three key areas: (1) code modernization through coding standards (C99/C11), (2) code portability and maintainability through use of the GA PCS code generator, and (3) support of 64-bit platforms. Central to the control system upgrade is the use of a complete real time (RT) Linux platform provided by Concurrent Computer Corporation, consisting of a computer (iHawk), an operating system and drivers (RedHawk), and RT tools (NightStar). Strong vendor support coupled with an extensive RT toolset influenced this decision. The new real-time Linux platform, I/O, and software engineering will foster enhanced capability and performance for NSTX-U plasma control.

  7. ZIO: The Ultimate Linux I/O Framework

    CERN Document Server

    Gonzalez Cobas, J D; Rubini, A; Nellaga, S; Vaga, F

    2014-01-01

    ZIO (with Z standing for “The Ultimate I/O” Framework) was developed for CERN with the specific needs of physics labs in mind, which are poorly addressed in the mainstream Linux kernel. ZIO provides a framework for industrial, high-bandwith, high-channel count I/O device drivers (digitizers, function generators, timing devices like TDCs) with performance, generality and scalability as design goals. Among its features, it offers abstractions for • both input and output channels, and channel sets • run-time selection of trigger types • run-time selection of buffer types • sysfs-based configuration • char devices for data and metadata • a socket interface (PF ZIO) as alternative to char devices In this paper, we discuss the design and implementation of ZIO, and describe representative cases of driver development for typical and exotic applications: drivers for the FMC (FPGAMezzanine Card, see [1]) boards developed at CERN like the FMC ADC 100Msps digitizer, FMC TDC timestamp counter, and FMC DEL ...

  8. Traveling cluster approximation for uncorrelated amorphous systems

    International Nuclear Information System (INIS)

    Kaplan, T.; Sen, A.K.; Gray, L.J.; Mills, R.

    1985-01-01

    In this paper, the authors apply the TCA concepts to spatially disordered, uncorrelated systems (e.g., fluids or amorphous metals without short-range order). This is the first approximation scheme for amorphous systems that takes cluster effects into account while preserving the Herglotz property for any amount of disorder. They have performed some computer calculations for the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results are compared with exact calculations (which, in principle, taken into account all cluster effects) and with the CPA, which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA, and yet, apparently, the pair approximation distorts some of the features of the exact results. They conclude that the effects of large clusters are much more important in an uncorrelated liquid metal than in a substitutional alloy. As a result, the pair TCA, which does quite a nice job for alloys, is not adequate for the liquid. Larger clusters must be treated exactly, and therefore an n-TCA with n > 2 must be used

  9. Smartphone qualification & linux-based tools for CubeSat computing payloads

    Science.gov (United States)

    Bridges, C. P.; Yeomans, B.; Iacopino, C.; Frame, T. E.; Schofield, A.; Kenyon, S.; Sweeting, M. N.

    Modern computers are now far in advance of satellite systems and leveraging of these technologies for space applications could lead to cheaper and more capable spacecraft. Together with NASA AMES's PhoneSat, the STRaND-1 nanosatellite team has been developing and designing new ways to include smart-phone technologies to the popular CubeSat platform whilst mitigating numerous risks. Surrey Space Centre (SSC) and Surrey Satellite Technology Ltd. (SSTL) have led in qualifying state-of-the-art COTS technologies and capabilities - contributing to numerous low-cost satellite missions. The focus of this paper is to answer if 1) modern smart-phone software is compatible for fast and low-cost development as required by CubeSats, and 2) if the components utilised are robust to the space environment. The STRaND-1 smart-phone payload software explored in this paper is united using various open-source Linux tools and generic interfaces found in terrestrial systems. A major result from our developments is that many existing software and hardware processes are more than sufficient to provide autonomous and operational payload object-to-object and file-based management solutions. The paper will provide methodologies on the software chains and tools used for the STRaND-1 smartphone computing platform, the hardware built with space qualification results (thermal, thermal vacuum, and TID radiation), and how they can be implemented in future missions.

  10. Disability in the UN cluster system

    Directory of Open Access Journals (Sweden)

    Adele Perry

    2010-07-01

    Full Text Available The cluster system offers space for raising awareness among humanitarian actors and for putting disability on the agenda, but it impairs local and cross-cutting dynamics at field level.

  11. Integrated photometry of globular star clusters in the Vilnius system

    International Nuclear Information System (INIS)

    Zdanavichyus, K.V.

    1983-01-01

    Integrated colour indices in the Vilnius photometric system and newly determined colour excesses E(B-V) for 39 globular clusters are presented. It is shown that the coincidence of integrated spectral types is not a sufficient criterion for the identity of the intrinsic colour indices of globular clusters. The relation of integrated colour indices to the slope of the giant branch S and to the horizontal-branch morphological type D is investigated. Integrated colour indices of clusters with a blue horizontal branch show no correlation with either D or S. The increase of the colour indices of clusters of types D >= 4 correlates with the distribution of stars along the horizontal branch. Integrated photometry of globular star clusters in the Vilnius multicolour photometric system makes it possible to determine their colour excesses from some Q diagrams and normal colour indices. Integrated normal colour indices and Q parameters for globular star clusters of Mironov group 1 display small changes as compared to clusters of group 2. Colour indices change most considerably among star clusters having only red horizontal branches (D=7).

  12. ARC Code TI: NodeMon

    Data.gov (United States)

    National Aeronautics and Space Administration — NodeMon is a resource utilization monitor tailored to the Altix architecture, but is applicable to any Linux system or cluster. It allows distributed resource...

  13. B = 5 Skyrmion as a two-cluster system

    Science.gov (United States)

    Gudnason, Sven Bjarke; Halcrow, Chris

    2018-06-01

    The classical B = 5 Skyrmion can be approximated by a two-cluster system in which a B = 1 Skyrmion is attached to a core B = 4 Skyrmion. We quantize this system, allowing the B = 1 Skyrmion to freely orbit the core. The configuration space is 11-dimensional but simplifies significantly after factoring out the overall spin and isospin degrees of freedom. We exactly solve the free quantum problem and then include an interaction potential between the Skyrmions numerically. The resulting energy spectrum is compared to the corresponding nuclei, the helium-5/lithium-5 isodoublet. We find approximate parity doubling not seen in the experimental data. In addition, we fail to obtain the correct ground-state spin. The framework laid out for this two-cluster system can readily be modified for other clusters, and in particular for other B = 4n + 1 nuclei, of which B = 5 is the simplest example.

  14. Design of multi-channel analyzer's monitoring system based on embedded system

    International Nuclear Information System (INIS)

    Yang Tao; Wei Yixiang

    2007-01-01

    A new multi-channel analyzer monitoring system based on an ARM9 embedded system is introduced in this paper. Solutions to problems encountered during design, installation, and debugging on the Linux system are also discussed. The monitoring system is developed using MiniGUI and the Linux system API, with functions for collecting, displaying, and controlling the I/O of 1024-channel data. All of these run in real time, and the system has the merits of low cost, small size, and portability. This lays the foundation for developing home-grown digital, portable nuclear spectrometers. (authors)

  15. Implementation of the On-the-fly Encryption for the Linux OS Based on Certified CPS

    Directory of Open Access Journals (Sweden)

    Alexander Mikhailovich Korotin

    2013-02-01

    Full Text Available The article is devoted to tools for on-the-fly encryption and a method of implementing such a tool for the Linux OS based on a certified CPS. The idea is to modify the existing tool named eCryptfs. Russian cryptographic algorithms will be used in the user and kernel modes.

  16. Modeling the formation of globular cluster systems in the Virgo cluster

    International Nuclear Information System (INIS)

    Li, Hui; Gnedin, Oleg Y.

    2014-01-01

    The mass distribution and chemical composition of globular cluster (GC) systems preserve the fossil record of the early stages of galaxy formation. The observed distribution of GC colors within massive early-type galaxies in the ACS Virgo Cluster Survey (ACSVCS) reveals a multi-modal shape, which likely corresponds to a multi-modal metallicity distribution. We present a simple model for the formation and disruption of GCs that aims to match the ACSVCS data. This model tests the hypothesis that GCs are formed during major mergers of gas-rich galaxies and inherit the metallicity of their hosts. To trace merger events, we use halo merger trees extracted from a large cosmological N-body simulation. We select 20 halos in the mass range of 2 × 10^12 to 7 × 10^13 M☉ and match them to 19 Virgo galaxies with K-band luminosity between 3 × 10^10 and 3 × 10^11 L☉. To set the [Fe/H] abundances, we use an empirical galaxy mass-metallicity relation. We find that a minimal merger ratio of 1:3 best matches the observed cluster metallicity distribution. A characteristic bimodal shape appears because metal-rich GCs are produced by late mergers between massive halos, while metal-poor GCs are produced by the collective merger activity of less massive hosts at early times. The model outcome is robust to alternative prescriptions for the cluster formation rate throughout cosmic time, but a gradual evolution of the mass-metallicity relation with redshift appears to be necessary to match the observed cluster metallicities. We also affirm the age-metallicity relation, predicted by an earlier model, in which metal-rich clusters are systematically several billion years younger than their metal-poor counterparts.

  17. An update on perfmon and the struggle to get into the Linux kernel

    Energy Technology Data Exchange (ETDEWEB)

    Nowak, Andrzej, E-mail: Andrzej.Nowak@cern.c [CERN openlab (Switzerland)

    2010-04-01

    At CHEP2007 we reported on the perfmon2 subsystem as a tool for interfacing to the PMUs (Performance Monitoring Units) which are found in the hardware of all modern processors (from AMD, Intel, SUN, IBM, MIPS, etc.). The intent was always to get the subsystem into the Linux kernel by default. This paper reports on how progress was made (after long discussions) and will also show the latest additions to the subsystems.

  18. An update on perfmon and the struggle to get into the Linux kernel

    International Nuclear Information System (INIS)

    Nowak, Andrzej

    2010-01-01

    At CHEP2007 we reported on the perfmon2 subsystem as a tool for interfacing to the PMUs (Performance Monitoring Units) which are found in the hardware of all modern processors (from AMD, Intel, SUN, IBM, MIPS, etc.). The intent was always to get the subsystem into the Linux kernel by default. This paper reports on how progress was made (after long discussions) and will also show the latest additions to the subsystems.

  19. Fossil systems in the 400d cluster catalog

    DEFF Research Database (Denmark)

    Voevodkin, Alexey; Borozdin, Konstantin; Heitmann, Katrin

    2010-01-01

    We report the discovery of seven new fossil systems in the 400d cluster survey. Our search targets nearby, z ≤ 0.2, and X-ray bright, L_X ≥ 10^43 erg s^-1, clusters of galaxies. Where available, we measure the optical luminosities from Sloan Digital Sky Survey images, thereby obtaining uniform sets...

  20. Attitude Estimation in Fractionated Spacecraft Cluster Systems

    Science.gov (United States)

    Hadaegh, Fred Y.; Blackmore, James C.

    2011-01-01

    Attitude estimation was examined for fractionated free-flying spacecraft. Instead of a single, monolithic spacecraft, a fractionated free-flying spacecraft uses multiple spacecraft modules. These modules are connected only through wireless communication links and, potentially, wireless power links. The key advantage of this concept is the ability to respond to uncertainty. For example, if a single spacecraft module in the cluster fails, a new one can be launched at a lower cost and risk than would be incurred with on-orbit servicing or replacement of the monolithic spacecraft. In order to create such a system, however, it is essential to know what the navigation capabilities of the fractionated system are as a function of the capabilities of the individual modules, and to have an algorithm that can perform estimation of the attitudes and relative positions of the modules with fractionated sensing capabilities. Looking specifically at fractionated attitude estimation with star trackers and optical relative attitude sensors, a set of mathematical tools has been developed that specifies the set of sensors necessary to ensure that the attitude of the entire cluster (the "cluster attitude") can be observed. Also developed was a navigation filter that can estimate the cluster attitude if these conditions are satisfied. Each module in the cluster may have either a star tracker, a relative attitude sensor, or both. An extended Kalman filter can be used to estimate the attitude of all modules. A range of estimation performances can be achieved depending on the sensors used and the topology of the sensing network.
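
    The filter described in the record is an extended Kalman filter over the full cluster state; purely as a much-reduced illustration of the predict/update cycle it relies on, the sketch below fuses a gyro-style rate with an absolute star-tracker-style angle for a single axis (the time step, noise variances, and slew rate are invented for the example, not taken from the paper):

      import numpy as np

      # Minimal single-axis Kalman filter: propagate attitude with a gyro
      # rate, then correct with an absolute star-tracker angle.
      dt, q, r = 0.1, 1e-5, 1e-4              # step, process var, meas var

      def kf_step(theta, p, gyro_rate, star_angle):
          theta_pred = theta + gyro_rate * dt  # predict with body rate
          p_pred = p + q
          k = p_pred / (p_pred + r)            # Kalman gain
          theta_new = theta_pred + k * (star_angle - theta_pred)
          return theta_new, (1.0 - k) * p_pred # corrected state, variance

      rng = np.random.default_rng(0)
      theta, p, truth = 0.0, 1.0, 0.0
      for _ in range(100):
          truth += 0.05 * dt                   # constant slew, rad
          meas = truth + rng.normal(0.0, r ** 0.5)
          theta, p = kf_step(theta, p, 0.05, meas)
      print(f"estimate {theta:.4f} vs truth {truth:.4f}")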

  1. Extension of the DIRAC workload management system to allow use of distributed windows resources

    International Nuclear Information System (INIS)

    Li, Y Y; Harrison, K; Parker, M A; Lyutsarev, V; Tsaregorodtsev, A

    2008-01-01

    The DIRAC Workload Management System of the LHCb experiment allows coordinated use of globally distributed computing power and data storage. The system was initially deployed on Linux platforms, where it has been used very successfully both for collaboration-wide production activities and for single-user physics studies. To increase the resources available to LHCb, DIRAC has been extended so that it also allows the use of Microsoft Windows machines. As DIRAC is mostly written in Python, a large part of the code base was already platform-independent, but Windows-specific solutions have had to be found in areas such as certificate-based authentication and secure file transfers, where .NetGridFTP has been used. In addition, new code has been written to deal with the way that jobs are run and monitored under Windows, enabling interaction with Microsoft Windows Compute Cluster Server 2003 on sets of machines where this is available. The result is a system that allows users transparent access to Linux and Windows distributed resources. This paper gives details of the Windows-specific developments for DIRAC, outlines the experience gained in deploying the system at a number of sites, and reports on the performance achieved running the LHCb data-processing applications.

  2. Cluster Computing For Real Time Seismic Array Analysis.

    Science.gov (United States)

    Martini, M.; Giudicepietro, F.

    A seismic array is an instrument composed of a dense distribution of seismic sensors that allows measurement of the directional properties of the wavefield (slowness or wavenumber vector) radiated by a seismic source. Over the last years, arrays have been widely used in different fields of seismological research. In particular, they are applied in the investigation of seismic sources on volcanoes, where they can be successfully used for studying the volcanic microtremor and long-period events, which are critical for getting information on the evolution of volcanic systems. For this reason arrays could be usefully employed for volcano monitoring; however, the huge amount of data produced by this type of instrument and the quite time-consuming processing techniques have limited their potential for this application. In order to favor a direct application of array techniques to continuous volcano monitoring, we designed and built a small PC cluster able to compute in near real time the kinematic properties of the wavefield (slowness or wavenumber vector) produced by a local seismic source. The cluster is composed of 8 Intel Pentium-III dual-processor PCs working at 550 MHz, and has 4 Gigabytes of RAM memory. It runs under the Linux operating system. The developed analysis software package is based on the Multiple SIgnal Classification (MUSIC) algorithm and is written in Fortran. The message-passing part is based upon the LAM programming environment package, an open-source implementation of the Message Passing Interface (MPI). The developed software system includes modules devoted to receiving data via the Internet and graphical applications for continuously displaying the processing results. The system has been tested with a data set collected during a seismic experiment conducted on Etna in 1999, when two dense seismic arrays were deployed on the northeast and the southeast flanks of this volcano. A real time continuous acquisition system has been simulated by
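
    The record does not include the original Fortran/LAM code; as a rough modern sketch of the same master/worker pattern (scatter per-node chunks of array data, gather partial results), here is a minimal mpi4py version, with a trivial power estimate standing in for the MUSIC computation (array sizes and the mpi4py dependency are assumptions):

      # Run with e.g.: mpirun -n 4 python music_scatter.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Root splits the multichannel traces into one chunk per node.
      chunks = None
      if rank == 0:
          data = np.random.rand(size * 4, 1024)  # stand-in for array traces
          chunks = np.array_split(data, size)
      local = comm.scatter(chunks, root=0)

      # Stand-in for the per-window MUSIC computation: a trivial power
      # estimate per trace (the real analysis is far heavier).
      partial = (local ** 2).mean(axis=1)

      results = comm.gather(partial, root=0)
      if rank == 0:
          print("collected", len(results), "partial results")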

  3. Implementation of a Server Cluster on Raspberry Pi Using the Load Balancing Method

    Directory of Open Access Journals (Sweden)

    Ridho Habi Putra

    2016-06-01

    Full Text Available A server is an important part of a service in a computer network, and its role can determine the quality of that service. Server failure can be caused by several factors, including hardware damage, the network system, and the power supply. One solution to server failure in a computer network is server clustering. The aim of this research is to measure the capability of the Raspberry Pi (Raspi) when used as a web server. The Raspberry Pi used is a Raspberry Pi 2 Model B with an ARM Cortex-A7 processor running at 900 MHz and 1 GB of RAM. The operating system used on the Raspberry Pi is Linux Debian Wheezy. The research setup uses four Raspberry Pi devices, where two Raspis are used as web servers and the other two are used as the load balancer and the database server. The method used in building this server cluster is load balancing, in which the server load is spread evenly across the nodes. Testing is carried out by comparing the performance of a Raspberry Pi handling traffic alone, without a load balancer, against Raspberry Pis using a load balancer to spread the load among the cluster members.
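
    The record does not describe the load balancer's implementation; as a toy illustration of the round-robin idea on the front node, the following sketch forwards HTTP GETs alternately to two backend web servers (the backend addresses, the port, and the use of Python's standard library are assumptions, not details from the paper):

      import itertools
      import urllib.request
      from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

      # Placeholder backend addresses for the two web-server nodes.
      BACKENDS = itertools.cycle(["http://10.0.0.11:80", "http://10.0.0.12:80"])

      class Balancer(BaseHTTPRequestHandler):
          def do_GET(self):
              backend = next(BACKENDS)         # round-robin selection
              try:
                  with urllib.request.urlopen(backend + self.path,
                                              timeout=5) as r:
                      body = r.read()
                      self.send_response(r.status)
                      self.end_headers()
                      self.wfile.write(body)
              except OSError:                  # URLError subclasses OSError
                  self.send_error(502, "backend unavailable")

      if __name__ == "__main__":
          ThreadingHTTPServer(("", 8080), Balancer).serve_forever()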

  4. 77 FR 5864 - BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano...

    Science.gov (United States)

    2012-02-06

    ... SECURITIES AND EXCHANGE COMMISSION [File No. 500-1] BluePoint Linux Software Corp., China Bottles Inc., Long-e International, Inc., and Nano Superlattice Technology, Inc.; Order of Suspension of... that there is a lack of current and accurate information concerning the securities of Nano Superlattice...

  5. ANALYSIS AND DESIGN OF AN INTEGRATED AUTHENTICATION SYSTEM ARCHITECTURE ACROSS THE LINUX, WINDOWS 2000, AND NOVELL NETWARE PLATFORMS: A CASE STUDY OF THE INFORMATICS ENGINEERING DEPARTMENT, FTIF ITS

    Directory of Open Access Journals (Sweden)

    Rully Soelaiman

    2003-07-01

    Full Text Available The Informatics Engineering Department is an organization whose computer network is accessed from several domains and runs separate operating systems. Each system uses its own authentication management, even though the network should be accessible to every member of the organization. The need of users and network administrators for efficient use of authentication information is the problem addressed in this paper. This paper analyzes the feasibility of integrated authentication on the Informatics Engineering computer network, which uses Windows 2000, Linux, and Novell Netware. The analysis considers directory integration capabilities, authentication methods, and interoperability with other systems. Based on a mapping of the department's needs against the available technology resources, an authentication solution using Samba and OpenLDAP was chosen to serve authentication requests from Windows 2000 and Linux. Tests were performed for the authentication of Windows 2000 and Linux clients, covering login from each operating system and from different domains using a single username and password. Tests were also carried out on the process

  6. The RAppArmor Package: Enforcing Security Policies in R Using Dynamic Sandboxing on Linux

    Directory of Open Access Journals (Sweden)

    Jeroen Ooms

    2013-11-01

    Full Text Available The increasing availability of cloud computing and scientific supercomputers brings great potential for making R accessible through public or shared resources. This allows us to efficiently run code requiring lots of cycles and memory, or to embed R functionality into, e.g., systems and web services. However, some important security concerns need to be addressed before this can be put into production. The prime use case in the design of R has always been a single statistician running R on the local machine through the interactive console. Therefore the execution environment of R is entirely unrestricted, which could result in malicious behavior or excessive use of hardware resources in a shared environment. Properly securing an R process turns out to be a complex problem. We describe various approaches and illustrate potential issues using some of our personal experiences in hosting public web services. Finally we introduce the RAppArmor package: a Linux-based reference implementation for dynamic sandboxing in R at the level of the operating system.
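
    RAppArmor itself combines R bindings with Linux AppArmor profiles, which are not reproduced here; as a language-neutral sketch of the same sandboxing idea (capping the resources of an untrusted evaluation before it runs), here is a hypothetical Python example using POSIX rlimits (the limit values and the harmless stand-in payload are invented):

      import resource, subprocess, sys

      def limit():
          # Runs in the child just before exec: cap CPU time at 5 s and
          # address space at 512 MiB; the kernel enforces both afterwards.
          resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
          resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

      # Any untrusted payload could go here; a harmless stand-in is used.
      proc = subprocess.run(
          [sys.executable, "-c", "print(sum(range(10**6)))"],
          preexec_fn=limit, capture_output=True, text=True)
      print(proc.returncode, proc.stdout.strip())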

  7. The role of the Pauli principle in three-cluster systems composed of identical clusters

    International Nuclear Information System (INIS)

    Lashko, Yu.A.; Filippov, G.F.

    2009-01-01

    Within the microscopic model based on the algebraic version of the resonating group method, the role of the Pauli principle in the formation of the continuum wave function of nuclear systems composed of three identical s-clusters has been investigated. Emphasis is placed upon the study of the exchange effects contained in the genuine three-cluster norm kernel. Three-fermion, three-boson, three-dineutron (3d′) and 3α systems are considered in detail. A simple analytical method of constructing the norm kernel for the 3α system is suggested. The Pauli-allowed basis functions for the 3α and 3d′ systems are given in explicit form, and the asymptotic behavior of these functions is established. A complete classification of the eigenfunctions and the eigenvalues of the ¹²C norm kernel by the ⁸Be = α + α eigenvalues has been given for the first time. The spectrum of the ¹²C norm kernel is compared to that of the ⁵H system.

  8. Cluster-based localization and tracking in ubiquitous computing systems

    CERN Document Server

    Martínez-de Dios, José Ramiro; Torres-González, Arturo; Ollero, Anibal

    2017-01-01

    Localization and tracking are key functionalities in ubiquitous computing systems and techniques. In recent years a very high variety of approaches, sensors and techniques for indoor and GPS-denied environments have been developed. This book briefly summarizes the current state of the art in localization and tracking in ubiquitous computing systems focusing on cluster-based schemes. Additionally, existing techniques for measurement integration, node inclusion/exclusion and cluster head selection are also described in this book.

  9. Implementation of a GNU/Linux platform for digital home management, focused on home automation, multimedia/leisure, and ICT.

    OpenAIRE

    Zálvez Rico, Juan Pedro

    2011-01-01

    In the near future, access to data and home automation systems in the home, supported by the implementation of GNU/Linux operating systems in our homes, could become a first-order reality for the deployment of real automated digital homes, with a cost reduction granted by the intensive use of resources provided directly by the open software community. The future possibilities seem almost endless: telecommuting, centralized household accounts, distributed software, online training for young peo...

  10. bcl::Cluster: A method for clustering biological molecules coupled with visualization in the Pymol Molecular Graphics System.

    Science.gov (United States)

    Alexander, Nathan; Woetzel, Nils; Meiler, Jens

    2011-02-01

    Clustering algorithms are used as data analysis tools in a wide variety of applications in Biology. Clustering has become especially important in protein structure prediction and virtual high throughput screening methods. In protein structure prediction, clustering is used to structure the conformational space of thousands of protein models. In virtual high throughput screening, databases with millions of drug-like molecules are organized by structural similarity, e.g. common scaffolds. The tree-like dendrogram structure obtained from hierarchical clustering can provide a qualitative overview of the results, which is important for focusing detailed analysis. However, in practice it is difficult to relate specific components of the dendrogram directly back to the objects of which it is comprised and to display all desired information within the two dimensions of the dendrogram. The current work presents a hierarchical agglomerative clustering method termed bcl::Cluster. bcl::Cluster utilizes the Pymol Molecular Graphics System to graphically depict dendrograms in three dimensions. This allows simultaneous display of relevant biological molecules as well as additional information about the clusters and the members comprising them.
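
    bcl::Cluster's three-dimensional dendrogram rendering in PyMOL is not reproduced here, but its underlying step, hierarchical agglomerative clustering that yields a dendrogram, can be sketched with SciPy (the data and the choice of average linkage are illustrative assumptions):

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      # Twenty toy "models" in a 3-D feature space, two natural groups.
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(c, 0.3, (10, 3)) for c in (0.0, 3.0)])

      # Agglomerative clustering with average linkage; Z encodes the full
      # dendrogram, from which a flat 2-cluster labelling is extracted.
      Z = linkage(X, method="average")
      print(fcluster(Z, t=2, criterion="maxclust"))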

  11. Schedulability-Driven Partitioning and Mapping for Multi-Cluster Real-Time Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2004-01-01

    We present an approach to partitioning and mapping for multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. We have proposed a schedulability analysis for such systems, including a worst-case queuing delay analysis for the gateways...

  12. Network Quality in Virtual Local Area Networks (VLANs) Implementing the Linux Terminal Server Project (LTSP)

    Directory of Open Access Journals (Sweden)

    Lipur Sugiyanta

    2017-12-01

    Full Text Available A Virtual Local Area Network (VLAN) is a technique in computer networking for creating several distinct networks that still form a single local network, not limited to a physical location as a LAN is, while the Linux Terminal Server Project (LTSP) is a terminal-server technique that can multiply workstations using only a single Linux server. When building a computer network, several things must be considered, one of which is the quality of the network being built. This research aims to determine the influence of the number of clients on network quality, based on the delay and packet loss parameters, in a VLAN network that implements LTSP. The research therefore uses a qualitative method, following the standard used in the study, namely the International Telecommunication Union – Telecommunication (ITU-T) standard. The implementation uses Ubuntu Desktop 14.04 LTS as the server operating system. Based on the results found, it can be concluded that the more clients are served by the server, the lower the network quality becomes, as measured by the Quality of Service (QoS) parameters used, namely delay and packet loss.

  13. Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2004-01-01

    We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic to multi-clusters: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  14. Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic to multi-clusters: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  15. genepop'007: a complete re-implementation of the genepop software for Windows and Linux.

    Science.gov (United States)

    Rousset, François

    2008-01-01

    This note summarizes developments of the genepop software since its first description in 1995, and in particular those new to version 4.0: an extended input format, several estimators of neighbourhood size under isolation by distance, new estimators and confidence intervals for null allele frequency, and less important extensions to previous options. genepop now runs under Linux as well as under Windows, and can be entirely controlled by batch calls. © 2007 The Author.

  16. Autonomic Cluster Management System (ACMS): A Demonstration of Autonomic Principles at Work

    Science.gov (United States)

    Baldassari, James D.; Kopec, Christopher L.; Leshay, Eric S.; Truszkowski, Walt; Finkel, David

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of achieving significant computational capabilities for high-performance computing applications, while simultaneously affording the ability to increase that capability simply by adding more (inexpensive) processors. However, the task of manually managing and configuring a cluster quickly becomes impossible as the cluster grows in size. Autonomic computing is a relatively new approach to managing complex systems that can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management.

  17. TRANSPORT AND LOGISTICS CLUSTER IN AN ECONOMIC SYSTEM OF A REGION

    Directory of Open Access Journals (Sweden)

    I.G. Menshenina

    2008-09-01

    Full Text Available The main types of clusters are described in the article. The functioning of a transport and logistics model is also described using the theory of graphs. The relationships among clusters in the economic system of a region are shown, and the key role of the transport and logistics cluster is emphasized as a precondition for the effective functioning of the other clusters in the region.

  18. Schedulability-Driven Frame Packing for Multi-Cluster Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2003-01-01

    We present an approach to frame packing for multi-cluster distributed embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In our approach, the application messages are packed into frames such that the application is schedulable. Thus, we have also proposed a schedulability analysis for applications consisting of mixed event-triggered and time-triggered processes and messages, and a worst-case queuing delay analysis for the gateways, responsible for routing inter-cluster traffic. Optimization heuristics for frame packing aiming at producing a schedulable system have been proposed. Extensive experiments and a real-life example show the efficiency of our frame-packing approach.

  19. Heterologous expression of pikromycin biosynthetic gene cluster using Streptomyces artificial chromosome system.

    Science.gov (United States)

    Pyeon, Hye-Rim; Nah, Hee-Ju; Kang, Seung-Hoon; Choi, Si-Sun; Kim, Eung-Soo

    2017-05-31

    Heterologous expression of biosynthetic gene clusters of natural microbial products has become an essential strategy for titer improvement and pathway engineering of various potentially-valuable natural products. A Streptomyces artificial chromosomal conjugation vector, pSBAC, was previously successfully applied for precise cloning and tandem integration of a large polyketide tautomycetin (TMC) biosynthetic gene cluster (Nah et al. in Microb Cell Fact 14(1):1, 2015), implying that this strategy could be employed to develop a custom overexpression scheme of natural product pathway clusters present in actinomycetes. To validate the pSBAC system as a generally-applicable heterologous overexpression system for a large-sized polyketide biosynthetic gene cluster in Streptomyces, another model polyketide compound, the pikromycin biosynthetic gene cluster, was precisely cloned and heterologously expressed using the pSBAC system. A unique HindIII restriction site was precisely inserted at one of the border regions of the pikromycin biosynthetic gene cluster within the chromosome of Streptomyces venezuelae, followed by site-specific recombination of pSBAC into the flanking region of the pikromycin gene cluster. Unlike the previous cloning process, one HindIII site integration step was skipped through pSBAC modification. pPik001, a pSBAC containing the pikromycin biosynthetic gene cluster, was directly introduced into two heterologous hosts, Streptomyces lividans and Streptomyces coelicolor, resulting in the production of 10-deoxymethynolide, a major pikromycin derivative. When two entire pikromycin biosynthetic gene clusters were tandemly introduced into the S. lividans chromosome, overproduction of 10-deoxymethynolide and the presence of pikromycin, which was previously not detected, were both confirmed. Moreover, comparative qRT-PCR results confirmed that the transcription of pikromycin biosynthetic genes was significantly upregulated in S. lividans containing tandem

  20. Exploring the Dynamics of Exoplanetary Systems in a Young Stellar Cluster

    Science.gov (United States)

    Thornton, Jonathan Daniel; Glaser, Joseph Paul; Wall, Joshua Edward

    2018-01-01

    I describe a dynamical simulation of planetary systems in a young star cluster. One rather arbitrary aspect of cluster simulations is the choice of initial conditions. These are typically chosen from some standard model, such as Plummer or King, or from a “fractal” distribution to try to model young clumpy systems. Here I adopt the approach of realizing an initial cluster model directly from a detailed magnetohydrodynamical model of cluster formation from a 1000-solar-mass interstellar gas cloud, with magnetic fields and radiative and wind feedback from massive stars included self-consistently. The N-body simulation of the stars and planets starts once star formation is largely over and feedback has cleared much of the gas from the region where the newborn stars reside. It continues until the cluster dissolves in the galactic field. Of particular interest is what would happen to the free-floating planets created in the gas cloud simulation. Are they captured by a star or are they ejected from the cluster? This method of building a dynamical cluster simulation directly from the results of a cluster formation model allows us to better understand the evolution of young star clusters and enriches our understanding of extrasolar planet development in them. These simulations were performed within the AMUSE simulation framework, and combine N-body, multiples and background potential code.

  1. BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.

    Science.gov (United States)

    Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron

    2009-06-01

    BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a Creative Commons license along with additional documentation and a tutorial from (http://bioinf.nuigalway.ie).

  2. Traveling-cluster approximation for uncorrelated amorphous systems

    International Nuclear Information System (INIS)

    Sen, A.K.; Mills, R.; Kaplan, T.; Gray, L.J.

    1984-01-01

    We have developed a formalism for including cluster effects in the one-electron Green's function for a positionally disordered (liquid or amorphous) system without any correlation among the scattering sites. This method is an extension of the technique known as the traveling-cluster approximation (TCA), originally obtained and applied to a substitutional alloy by Mills and Ratanavararaksa. We have also proved the appropriate fixed-point theorem, which guarantees, for a bounded local potential, that the self-consistent equations always converge upon iteration to a unique, Herglotz solution. To our knowledge, this is the only analytic theory for considering cluster effects. Furthermore, we have performed some computer calculations in the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results have been compared with "exact calculations" (which, in principle, take into account all cluster effects) and with the coherent-potential approximation (CPA), which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA and yet, apparently, the pair approximation distorts some of the features of the exact results.

  3. Advanced cluster methods for correlated-electron systems

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Andre

    2015-04-27

    In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies on the determination of the ground-state properties of a 3/4-filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak-coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not yet established. The driving force of this transition may be not only phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4-filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult.

  4. RTSPM: real-time Linux control software for scanning probe microscopy.

    Science.gov (United States)

    Chandrasekhar, V; Mehta, M M

    2013-01-01

    Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
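
    The RTSPM software keeps its time-critical work in real-time kernel tasks, which are not reproduced here; purely as a user-space illustration of requesting deterministic scheduling on Linux, the hypothetical sketch below asks for a SCHED_FIFO priority and runs a fixed-period loop (the priority, the 1 kHz period, and the placeholder loop body are assumptions):

      import os, time

      # Ask for a real-time FIFO priority (requires root or CAP_SYS_NICE;
      # priority 80 is an arbitrary choice). On a PREEMPT_RT or RT-Linux
      # style kernel this keeps the loop's scheduling latency low.
      os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

      period = 1e-3                            # 1 kHz control loop
      next_t = time.monotonic()
      for _ in range(1000):
          # reading the ADC, running feedback, and writing the DAC
          # would go here in a real scan controller
          next_t += period
          delay = next_t - time.monotonic()
          if delay > 0:
              time.sleep(delay)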

  5. Techniques for Representation of Regional Clusters in Geographical Information Systems

    Directory of Open Access Journals (Sweden)

    Adriana REVEIU

    2011-01-01

    Full Text Available This paper provides an overview of visualization techniques adapted for the presentation of regional clusters in Geographic Information Systems. Clusters are groups of companies and institutions co-located in a specific geographic region and linked by interdependencies in providing a related group of products and services. Regional clusters can be visualized by projecting the data into two-dimensional space or by using parallel coordinates. Cluster membership is usually represented by different colours or by dividing clusters into several panels of a grid display. Taking into consideration the requirements of regional clusters and the multilevel administrative division of Romania's territory, I used two cartograms, NUTS2 (regions) and NUTS3 (counties), to illustrate the tools for regional cluster representation.

  6. Management system of ELHEP cluster machine for FEL photonics design

    Science.gov (United States)

    Zysik, Jacek; Poźniak, Krzysztof; Romaniuk, Ryszard

    2006-10-01

    A multipurpose cluster machine oriented towards distributed MatLab calculations was assembled in the PERG/ELHEP laboratory at ISE/WUT. It is intended mainly for advanced photonics and FPGA/DSP-based systems design for the Free Electron Laser, and it will also be used for student projects for the superconducting accelerator and FEL. Here we present one specific side of the cluster design. For intense, distributed daily work with the cluster, it is important to have a good interface and practical access to all machine resources. A complex management system was implemented in the PERG laboratory. It helps all registered users to work with all necessary applications, communicate with other logged-in people, check the news, and gather all necessary information about what is going on in the system, how it is utilized, etc. The system is also very practical for administration purposes: it helps keep track of who is using the resources and for how long, it provides different privileges for different applications, and more. The system is introduced as freeware, using open source code, and can be modified by system operators or super-users who are interested in nonstandard system configurations.

  7. LINK codes TRAC-BF1/PARCSv2.7 in LINUX without external communication interface; Acoplamiento de los codigos TRAC-BF1/PARCSv2.7 en Linux sin interfaz externa de comunicacion

    Energy Technology Data Exchange (ETDEWEB)

    Barrachina, T.; Garcia-Fenoll, M.; Abarca, A.; Miro, R.; Verdu, G.; Concejal, A.; Solar, A.

    2014-07-01

    The TRAC-BF1 code is still widely used by the nuclear industry for safety analysis. The plant models developed using this code are highly validated, so it is advisable to continue improving this code before migrating to another completely different code. The coupling with the NRC neutronic code PARCSv2.7 increases the simulation capabilities in transients in which the power distribution plays an important role. In this paper, the procedure for the coupling of TRAC-BF1 and PARCSv2.7 codes without PVM and in Linux is presented. (Author)

  8. A Preliminary Study Application Clustering System in Acoustic Emission Monitoring

    Directory of Open Access Journals (Sweden)

    Saiful Bahari Nur Amira Afiza

    2017-01-01

    Full Text Available Acoustic Emission (AE) is a non-destructive testing technique used for damage assessment and detection in structural engineering. It can also be used to discriminate between the different types of damage occurring in composite materials. The main problem associated with the data analysis is discriminating between the different AE sources and analysing the AE signal in order to identify the most critical damage mechanism. Clustering analysis is a technique in which a set of objects is assigned to groups called clusters. The objective of cluster analysis is to separate a set of data into several classes that reflect the internal structure of the data. In this paper, the k-means algorithm was used as a partitional clustering method; numerous efforts have been made to improve the performance of the k-means clustering algorithm in this application. This paper presents a current review of the application of clustering systems in Acoustic Emission monitoring.
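
    As a compact reference for the partitional method the review discusses, here is a plain k-means implementation in Python/NumPy applied to invented AE-like feature vectors (the features, cluster count, and data are assumptions for illustration only):

      import numpy as np

      def kmeans(X, k, iters=100, seed=0):
          # Plain k-means: alternate nearest-centroid assignment and
          # centroid recomputation until the centroids stop moving.
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)]
          for _ in range(iters):
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              new = np.array([X[labels == j].mean(axis=0)
                              if (labels == j).any() else centers[j]
                              for j in range(k)])
              if np.allclose(new, centers):
                  break
              centers = new
          return labels, centers

      # Fake AE features (e.g., amplitude, duration, counts), two sources.
      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])
      labels, centers = kmeans(X, k=2)
      print(np.bincount(labels))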

  9. Fault detection of flywheel system based on clustering and principal component analysis

    Directory of Open Access Journals (Sweden)

    Wang Rixin

    2015-12-01

    Full Text Available Considering the nonlinear, multifunctional properties of a double flywheel with closed-loop control, a two-step method combining clustering and principal component analysis is proposed to detect two faults in multifunctional flywheels. In the first step of the proposed algorithm, clustering is used as feature recognition to identify the commands of the "integrated power and attitude control" system, such as attitude control, energy storage or energy discharge. These commands ask the flywheel system to work in different operation modes, so the relationships of the parameters in the different operations define the cluster structure of the training data. Ordering points to identify the clustering structure (OPTICS) can automatically identify these clusters from the reachability plot, and the k-means algorithm can then divide the training data into the corresponding operations according to the reachability plot. Finally, the last step of the proposed model defines the relationship of the parameters within each operation through the principal component analysis (PCA) method. Compared with a plain PCA model, the proposed approach is capable of identifying new clusters and learning the new behavior of incoming data. The simulation results show that it can effectively detect faults in the multifunctional flywheel system.
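
    The PCA step can be illustrated with a standard reconstruction-error (Q/SPE) detector: train on normal operation, then flag samples that leave the principal subspace. The sketch below is a generic version of that idea, not the authors' exact model (the data, component count, and threshold are invented):

      import numpy as np

      # Train a PCA model on "normal" telemetry (a rank-2 signal plus
      # small noise, invented for the example), then flag samples whose
      # residual off the principal plane (Q/SPE statistic) is too large.
      rng = np.random.default_rng(3)
      normal = (rng.normal(0, 1, (500, 2)) @ rng.normal(0, 1, (2, 6))
                + 0.1 * rng.normal(0, 1, (500, 6)))

      mu = normal.mean(axis=0)
      _, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
      P = Vt[:2].T                             # retain 2 principal components

      def spe(x):
          r = (x - mu) - (x - mu) @ P @ P.T    # residual off the PCA plane
          return float(r @ r)

      limit = np.percentile([spe(x) for x in normal], 99)
      fault = mu + np.array([0, 0, 4, 0, 0, 0])    # injected sensor bias
      print(spe(fault) > limit)                # True -> fault detected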

  10. Cluster-based bulk metallic glass formation in Fe-Si-B-Nb alloy systems

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, C L; Wang, Q; Li, F W; Li, Y H; Wang, Y M; Dong, C [State Key Laboratory of Materials Modification, Dalian University of Technology (DUT), Dalian 116024 (China); Zhang, W; Inoue, A, E-mail: dong@dlut.edu.c [Institute for Materials Research (IMR), Tohoku University, Katahira 2-1-1, Aoba-Ku, Sendai 980-8577 (Japan)

    2009-01-01

    Bulk metallic glass formation has been explored in the Fe-B-Si-Nb alloy system using the so-called atomic cluster line approach in combination with a minor-alloying guideline. An atomic cluster line refers to a straight line linking a binary cluster to the third element in a ternary system. The basic ternary compositions in the Fe-B-Si system are determined by the intersection points of two cluster lines, namely Fe-B cluster to Si and Fe-Si cluster to B, and are then further alloyed with 3-5 at.% Nb to enhance the glass-forming ability. BMG rods with a diameter of 3 mm are formed when the basic intersecting compositions of Fe₈B₃-Si with Fe₁₂Si-B and Fe₈B₂-Si with Fe₉Si-B are minor-alloyed with Nb. The BMGs also exhibit a high Vickers hardness (Hv) of 1130-1164 and a high Young's modulus (E) of 170-180 GPa

  11. Optimizing 10-Gigabit Ethernet for Networks of Workstations, Clusters, and Grids: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Wu-chun

    2003-10-13

    This paper presents a case study of the 10-Gigabit Ethernet (10GbE) adapter from Intel®. Specifically, with appropriate optimizations to the configurations of the 10GbE adapter and TCP, we demonstrate that the 10GbE adapter can perform well in local-area, storage-area, system-area, and wide-area networks. For local-area, storage-area, and system-area networks in support of networks of workstations, network-attached storage, and clusters, respectively, we can achieve over 7-Gb/s end-to-end throughput and 12-µs end-to-end latency between applications running on Linux-based PCs. For the wide-area network in support of grids, we broke the recently-set Internet2 Land Speed Record by 2.5 times by sustaining an end-to-end TCP/IP throughput of 2.38 Gb/s between Sunnyvale, California and Geneva, Switzerland (i.e., 10,037 kilometers) to move over a terabyte of data in less than an hour. Thus, the above results indicate that 10GbE may be a cost-effective solution across a multitude of computing environments.
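
    On the host side, sustaining throughput over such a large bandwidth-delay product hinges on TCP window (socket buffer) sizing; a minimal sketch of the per-socket part of such tuning, with an invented buffer size, is:

      import socket

      # High bandwidth-delay-product paths need TCP windows far beyond the
      # defaults; request large per-socket buffers explicitly. The kernel
      # caps the request at net.core.rmem_max / wmem_max, which must be
      # raised via sysctl for very large values to take full effect.
      BUF = 32 * 2**20                         # 32 MiB, illustrative
      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
      print("granted:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))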

  12. Low latency protocol for transmission of measurement data from FPGA to Linux computer via 10 Gbps Ethernet link

    International Nuclear Information System (INIS)

    Zabolotny, W.M.

    2015-01-01

    This paper presents FADE-10G, an integrated solution for modern multichannel measurement systems. Its main aim is low-latency, reliable transmission of measurement data from FPGA-based front-end electronic boards (FEBs) to a computer-based node in the Data Acquisition System (DAQ), using a standard Ethernet 1 Gbps or 10 Gbps link. In addition to the transmission of data, the system allows the user to reliably send simple control commands from the DAQ to the FEB and to receive responses. The aim of the work is to provide a simple base solution, which can be adapted by the end user to his or her particular needs. Therefore, the emphasis is put on minimal consumption of FPGA resources in the FEB and minimal CPU load in the DAQ computer. The open-source implementation of the FPGA IP core and the Linux kernel driver, published under a permissive license, facilitates modification and reuse of the solution. The system has been successfully tested in real hardware, both with 1 Gbps and 10 Gbps links.
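
    The FEB-side IP core and the kernel driver themselves are not reproduced in the record; as a rough user-space analogue of the receive path, the sketch below captures raw Ethernet frames with an AF_PACKET socket (the interface name and buffer size are placeholders, and the real system uses a kernel driver rather than this approach):

      import socket, struct

      ETH_P_ALL = 0x0003                       # receive every EtherType

      # Raw capture of Ethernet frames on a NIC (Linux only, needs root);
      # 'eth0' is a placeholder interface name. A production DAQ would
      # bind a dedicated EtherType and use zero-copy machinery instead.
      s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                        socket.htons(ETH_P_ALL))
      s.bind(("eth0", 0))
      frame = s.recv(9000)                     # jumbo-frame sized buffer
      dst, src, proto = struct.unpack("!6s6sH", frame[:14])
      print(f"{len(frame)} bytes, ethertype 0x{proto:04x}")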

  13. An Overview of Android Operating System and Its Security Features

    OpenAIRE

    Rajinder Singh

    2014-01-01

    The Android operating system is one of the most widely used operating systems these days. The Android operating system is mainly divided into four main layers: the kernel, libraries, the application framework, and applications. Its kernel is based on Linux. The Linux kernel is used to manage core system services such as virtual memory, networking, drivers, and power management. In this paper, different features of the architecture of the Android OS as well as its security features are discussed.

  14. Mode 3 knowledge production: Systems and systems theory, clusters and networks

    OpenAIRE

    Carayannis, Elias G.; Campbell, David F. J.; Rehman, Scheherazade S.

    2016-01-01

    With the comprehensive term of "Mode 3," we want to draw a conceptual link between systems and systems theory and want to demonstrate further how this can be applied to knowledge in the next steps. Systems can be understood as being composed of "elements", which are tied together by a "self-rationale". For innovation, often innovation clusters and innovation networks are being regarded as important. By leveraging systems theory for innovation concepts, one can implement references between the...

  15. Real Time Advanced Clustering System

    Directory of Open Access Journals (Sweden)

    Giuseppe Spampinato

    2017-05-01

    Full Text Available This paper describes a system that gathers information from a stationary camera to identify moving objects. The proposed solution makes use only of motion vectors between adjacent frames, obtained from any algorithm. Starting from them, the system is able to retrieve clusters of moving objects in a scene acquired by an image sensor device. Since the whole system is based only on optical flow, it is simple and fast enough to be integrated directly in low-cost cameras. The experimental results show fast and robust performance of our method. The ANSI-C code has been tested on an ARM Cortex A15 CPU @2.32GHz, obtaining an impressive frame rate of about 3000 fps, excluding optical flow computation and I/O. Moreover, the system has been tested for different applications, cross-traffic alert and video surveillance, in different conditions, indoor and outdoor, and with different lenses.
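
    The paper's exact clustering algorithm is not spelled out in the abstract; as a toy version of the idea, grouping motion vectors that are spatially close and similarly oriented into one moving object, here is a small flood-fill sketch (both thresholds and the sample vectors are invented):

      import math

      def cluster_vectors(vecs, pos_tol=20.0, ang_tol=0.5):
          # vecs: list of (x, y, dx, dy); returns one cluster id per vector.
          labels = [-1] * len(vecs)
          nxt = 0
          for i in range(len(vecs)):
              if labels[i] != -1:
                  continue
              labels[i] = nxt
              stack = [i]
              while stack:                     # flood-fill over neighbours
                  xa, ya, dxa, dya = vecs[stack.pop()]
                  for b, (xb, yb, dxb, dyb) in enumerate(vecs):
                      if labels[b] != -1:
                          continue
                      near = math.hypot(xa - xb, ya - yb) < pos_tol
                      same = abs(math.atan2(dya, dxa)
                                 - math.atan2(dyb, dxb)) < ang_tol
                      if near and same:
                          labels[b] = nxt
                          stack.append(b)
              nxt += 1
          return labels

      vecs = [(10, 10, 1, 0), (18, 12, 1, 0), (200, 40, 0, -1)]
      print(cluster_vectors(vecs))             # -> [0, 0, 1]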

  16. DB2 9 for Linux, Unix, and Windows database administration certification study guide

    CERN Document Server

    Sanders, Roger E

    2007-01-01

    In DB2 9 for Linux, UNIX, and Windows Database Administration Certification Study Guide, Roger E. Sanders, one of the world's leading DB2 authors and an active participant in the development of IBM's DB2 certification exams, covers everything a reader needs to know to pass the DB2 9 UDB DBA Certification Test (731). This comprehensive study guide steps you through all of the topics that are covered on the test, including server management, data placement, database access, analyzing DB2 activity, DB2 utilities, high availability, security, and much more. Each chapter contains an extensive set of p

  17. Implementation of a SIP System for the Linux Operating System

    OpenAIRE

    Davison Gonzaga da Silva

    2003-01-01

    Abstract: This work presents the implementation of a VoIP system using the SIP protocol. The SIP system was developed for Linux, using the C++ language together with the QT library. The SIP system is composed of three basic entities: the SIP Terminal, the Proxy, and the Registrar Server. The SIP Terminal is the entity responsible for establishing SIP sessions with other SIP Terminals. For the SIP Terminal, a sound-card access library was developed, which allows the modi...

  18. Clustering analysis of water distribution systems: identifying critical components and community impacts.

    Science.gov (United States)

    Diao, K; Farmani, R; Fu, G; Astaraie-Imani, M; Ward, S; Butler, D

    2014-01-01

    Large water distribution systems (WDSs) are networks with both topological and behavioural complexity. It is therefore usually difficult to identify the key features of the properties of the system, and subsequently all the critical components within the system, for a given purpose of design or control. One way, however, is to more explicitly visualize the network structure and the interactions between components by dividing a WDS into a number of clusters (subsystems). Accordingly, this paper introduces a clustering strategy that decomposes WDSs into clusters with stronger internal connections than external connections. The detected cluster layout is very similar to the community structure of the served urban area. As WDSs may expand along with urban development in a community-by-community manner, the correspondingly formed distribution clusters may reveal some crucial configurations of WDSs. For verification, the method is applied to identify all the critical links during firefighting for the vulnerability analysis of a real-world WDS. Moreover, both the most critical pipes and clusters are addressed, given the consequences of pipe failure. Compared with the enumeration method, the method used in this study identifies the same group of most critical components, and provides similar criticality prioritizations of them in a more computationally efficient time.
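
    A common way to realize "stronger internal connections than external connections" is modularity-based community detection on the network graph; the sketch below applies NetworkX's greedy modularity algorithm to a toy pipe network (the graph and the choice of algorithm are illustrative, not the paper's exact method):

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Toy pipe network: two well-connected neighbourhoods joined by a
      # single trunk main (edge 3-4); modularity clustering should recover
      # the two communities that such a decomposition strategy targets.
      G = nx.Graph()
      G.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3),    # district A
                        (4, 5), (5, 6), (6, 4), (6, 7),    # district B
                        (3, 4)])                           # trunk main
      clusters = greedy_modularity_communities(G)
      print([sorted(c) for c in clusters])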

  19. A Science Portal and Archive for Extragalactic Globular Cluster Systems Data

    Science.gov (United States)

    Young, Michael; Rhode, Katherine L.; Gopu, Arvind

    2015-01-01

    For several years we have been carrying out a wide-field imaging survey of the globular cluster populations of a sample of giant spiral, S0, and elliptical galaxies with distances of ~10-30 Mpc. We use mosaic CCD cameras on the WIYN 3.5-m and Kitt Peak 4-m telescopes to acquire deep BVR imaging of each galaxy and then analyze the data to derive global properties of the globular cluster system. In addition to measuring the total numbers, specific frequencies, spatial distributions, and color distributions for the globular cluster populations, we have produced deep, high-quality images and lists of tens to thousands of globular cluster candidates for the ~40 galaxies included in the survey.With the survey nearing completion, we have been exploring how to efficiently disseminate not only the overall results, but also all of the relevant data products, to the astronomical community. Here we present our solution: a scientific portal and archive for extragalactic globular cluster systems data. With a modern and intuitive web interface built on the same framework as the WIYN One Degree Imager Portal, Pipeline, and Archive (ODI-PPA), our system will provide public access to the survey results and the final stacked mosaic images of the target galaxies. In addition, the astrometric and photometric data for thousands of identified globular cluster candidates, as well as for all point sources detected in each field, will be indexed and searchable. Where available, spectroscopic follow-up data will be paired with the candidates. Advanced imaging tools will enable users to overlay the cluster candidates and other sources on the mosaic images within the web interface, while metadata charting tools will allow users to rapidly and seamlessly plot the survey results for each galaxy and the data for hundreds of thousands of individual sources. Finally, we will appeal to other researchers with similar data products and work toward making our portal a central repository for data

  20. LINK codes TRAC-BF1/PARCSv2.7 in LINUX without external communication interface

    International Nuclear Information System (INIS)

    Barrachina, T.; Garcia-Fenoll, M.; Abarca, A.; Miro, R.; Verdu, G.; Concejal, A.; Solar, A.

    2014-01-01

    The TRAC-BF1 code is still widely used by the nuclear industry for safety analysis. The plant models developed with this code are highly validated, so it is advisable to continue improving this code before migrating to a completely different one. The coupling with the NRC neutronic code PARCSv2.7 increases the simulation capabilities for transients in which the power distribution plays an important role. In this paper, the procedure for coupling the TRAC-BF1 and PARCSv2.7 codes under Linux, without using PVM, is presented. (Author)

  1. High speed real-time wavefront processing system for a solid-state laser system

    Science.gov (United States)

    Liu, Yuan; Yang, Ping; Chen, Shanqiu; Ma, Lifang; Xu, Bing

    2008-03-01

    A high speed real-time wavefront processing system for a solid-state laser beam cleanup system has been built. This system consists of a Core 2 Industrial PC (IPC) running the Linux and Real-Time Linux (RT-Linux) operating systems (OS), a PCI image grabber, and a D/A card. More often than not, the phase aberrations of the output beam from solid-state lasers vary rapidly with intracavity thermal effects and environmental influence. To compensate the phase aberrations of solid-state lasers successfully, a high speed real-time wavefront processing system is presented. Compared to former systems, this system improves the processing speed considerably. In the new system, the acquisition of image data, the output of control voltage data, and the execution of the reconstructor control algorithm are treated as real-time tasks in kernel space, while the display of wavefront information and the man-machine interface are treated as non-real-time tasks in user space. Parallel processing of the real-time tasks in Symmetric Multi-Processor (SMP) mode is the main strategy for improving the speed. In this paper, the performance and efficiency of this wavefront processing system are analyzed. The open-loop experimental results show that the sampling frequency of this system is up to 3300 Hz, and that the system can deal well with phase aberrations from solid-state lasers.

  2. Parallel implementation of D-Phylo algorithm for maximum likelihood clusters.

    Science.gov (United States)

    Malik, Shamita; Sharma, Dolly; Khatri, Sunil Kumar

    2017-03-01

    This study explains a newly developed parallel algorithm for phylogenetic analysis of DNA sequences. The newly designed D-Phylo is a more advanced algorithm for phylogenetic analysis using the maximum likelihood approach. D-Phylo exploits the search capacity of k-means while avoiding its main limitation of getting stuck at locally conserved motifs. The authors have tested the behaviour of D-Phylo on an Amazon Linux Amazon Machine Image (Hardware Virtual Machine) i2.4xlarge instance, with six central processing units, 122 GiB of memory, 8 × 800 solid-state-drive Elastic Block Store volumes, and high network performance, using up to 15 processors for several real-life datasets. Distributing the clusters evenly over all the processors provides the capacity to achieve a near linear speed-up when a large number of processors is available.
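
    The even distribution of cluster workloads over processors that the abstract credits for near-linear speed-up can be conveyed with a short sketch using Python's standard multiprocessing pool. Everything below (the score_cluster function, its toy scoring rule, the data) is an illustrative assumption, not the authors' code.

        # Sketch: spread per-cluster computations evenly over worker processes.
        # The scoring function is a placeholder; D-Phylo's real maximum
        # likelihood evaluation is far more involved.
        from multiprocessing import Pool

        def score_cluster(cluster):
            seqs = cluster["sequences"]
            # Toy score: sum of pairwise sequence-length differences.
            return cluster["id"], sum(abs(len(a) - len(b)) for a in seqs for b in seqs)

        if __name__ == "__main__":
            clusters = [{"id": i, "sequences": ["ACGT" * (i + j + 1) for j in range(3)]}
                        for i in range(16)]
            with Pool(processes=4) as pool:  # one worker per available processor
                results = pool.map(score_cluster, clusters, chunksize=4)
            print(dict(results))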

  3. The use of a VAX cluster for the DIII-D data acquisition system

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1991-11-01

    The DIII-D tokamak is a large fusion energy research experiment funded by the Department of Energy. The experiment currently collects nearly 40 Mbytes of data from each shot of the experiment. In the past, most of this data was acquired through the MODCOMP Classic data acquisition computers and then transferred to a DEC VAX computer system for permanent archiving and storage. A much smaller amount of data was acquired from a few MicroVAX based data acquisition systems. In the last two years, MicroVAX based systems have become the standard means for adding new diagnostic data and now account for half the total data. There are now 17 VAX systems of various types at the DIII-D facility. As more diagnostics and data are added, it takes an increasing amount of time to merge the data into the central shot file, and the system management of so many systems has become increasingly time consuming as well. To improve the efficiency of the overall data acquisition system, a mixed-interconnect VAX cluster has been formed consisting of 16 VAX computers. In the cluster, the software protocol for passing data around is much more efficient than using DECnet. The cluster has also greatly simplified the procedure of backing up disks. Another big improvement is the use of a VAX console system, which ties all the console ports of the computers into one central computer system that then manages the entire cluster.

  4. The upper level of control system of electron accelerators

    International Nuclear Information System (INIS)

    Gribov, I.V.; Nedeoglo, F.N.; Shvedunov, I.V.

    2005-01-01

    The upper level software of a three-level control system that supports several electron accelerators is described. This software operates in the Linux and RTLinux (Real Time Linux) environments. The object information model functions on the basis of a parametric description supported by the SQLite database management system. The Javascript sublanguage is used for writing scripts, and the Qt Designer application is used to construct the user interface [ru]

  5. Real-Time linux dynamic clamp: a fast and flexible way to construct virtual ion channels in living cells.

    Science.gov (United States)

    Dorval, A D; Christini, D J; White, J A

    2001-10-01

    We describe a system for real-time control of biological and other experiments. This device, based around the Real-Time Linux operating system, was tested specifically in the context of dynamic clamping, a demanding real-time task in which a computational system mimics the effects of nonlinear membrane conductances in living cells. The system is fast enough to represent dozens of nonlinear conductances in real time at clock rates well above 10 kHz. Conductances can be represented in deterministic form, or more accurately as discrete collections of stochastically gating ion channels. Tests were performed using a variety of complex models of nonlinear membrane mechanisms in excitable cells, including simulations of spatially extended excitable structures and multiple interacting cells. Only in extreme cases does the computational load interfere with high-speed "hard" real-time processing (i.e., real-time processing that never falters). Freely available on the worldwide web, this experimental control system combines good performance, immense flexibility, low cost, and reasonable ease of use. It is easily adapted to any task involving real-time control, and excels in particular for applications requiring complex control algorithms that must operate at speeds over 1 kHz.
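
    The core dynamic-clamp cycle the abstract describes, reading the membrane voltage, updating a model conductance, and injecting the resulting virtual current, can be illustrated in a few lines. The gating kinetics, parameter values, and names below are illustrative assumptions, not the authors' implementation; in the real system this loop runs as a hard real-time kernel task at 10 kHz or more.

        # Sketch of one dynamic-clamp cycle: read the membrane voltage, update a
        # gating variable, compute the virtual conductance current to inject.
        # Parameter values and kinetics are illustrative, not the authors' model.
        import math

        g_max, e_rev = 10.0, -90.0   # conductance (nS) and reversal potential (mV)
        dt, tau_m = 1e-4, 1e-3       # 0.1 ms step (10 kHz rate); gating time constant (s)

        def m_inf(v):
            return 1.0 / (1.0 + math.exp(-(v + 40.0) / 5.0))  # steady-state activation

        def dynamic_clamp_step(v_measured, m):
            m += (dt / tau_m) * (m_inf(v_measured) - m)       # first-order gating kinetics
            i_inject = g_max * m * (v_measured - e_rev)       # virtual channel current (pA)
            return i_inject, m

        m = 0.0
        for v in (-65.0, -50.0, -30.0):                       # stand-in voltage samples
            i, m = dynamic_clamp_step(v, m)
            print(f"V = {v} mV -> inject {i:.1f} pA")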

  6. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    Science.gov (United States)

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial-vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
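
    The grouping idea can be conveyed with a minimal radius-threshold sketch: each online user joins the first cluster whose seed lies within the 10 km radius threshold reported as safe in the paper, and otherwise starts a new cluster. The haversine distance and all names are illustrative assumptions; the actual SOSC algorithm is self-organizing and more elaborate.

        # Sketch: radius-threshold grouping of user positions (lat, lon in
        # degrees); 10 km is the safe cluster radius reported in the paper.
        import math

        R_EARTH = 6371.0  # mean Earth radius, km

        def haversine_km(p, q):
            (la1, lo1), (la2, lo2) = [(math.radians(a), math.radians(b)) for a, b in (p, q)]
            h = (math.sin((la2 - la1) / 2) ** 2
                 + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
            return 2 * R_EARTH * math.asin(math.sqrt(h))

        def cluster_users(users, radius_km=10.0):
            clusters = []  # each: {"seed": (lat, lon), "members": [...]}
            for u in users:
                for c in clusters:
                    if haversine_km(u, c["seed"]) <= radius_km:
                        c["members"].append(u)
                        break
                else:
                    clusters.append({"seed": u, "members": [u]})
            return clusters

        users = [(30.50, 114.30), (30.55, 114.32), (31.20, 121.40)]
        print(len(cluster_users(users)))  # -> 2: the first two users share a cluster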

  7. A Binary System in the Hyades Cluster Hosting a Neptune-Sized Planet

    Science.gov (United States)

    Feinstein, Adina; Ciardi, David; Crossfield, Ian; Schlieder, Joshua; Petigura, Erik; David, Trevor J.; Bristow, Makennah; Patel, Rahul; Arnold, Lauren; Benneke, Björn; Christiansen, Jessie; Dressing, Courtney; Fulton, Benjamin; Howard, Andrew; Isaacson, Howard; Sinukoff, Evan; Thackeray, Beverly

    2018-01-01

    We report the discovery of a Neptune-size planet (Rp = 3.0 R_Earth) in the Hyades Cluster. The host star is in a binary system, comprising a K5V star and M7/8V star with a projected separation of 40 AU. The planet orbits the primary star with an orbital period of 17.3 days and a transit duration of 3 hours. The host star is bright (V = 11.2, J = 9.1) and so may be a good target for precise radial velocity measurements. The planet is the first Neptune-sized planet to be found orbiting in a binary system within an open cluster. The Hyades is the nearest star cluster to the Sun, has an age of 625-750 Myr, and forms one of the fundamental rungs in the distance ladder; understanding the planet population in such a well-studied cluster can help us understand and set constraints on the formation and evolution of planetary systems.

  8. Phase correlation and clustering of a nearest neighbour coupled oscillators system

    CERN Document Server

    Ei-Nashar, H F

    2002-01-01

    We investigated the phases in a system of nearest neighbour coupled oscillators before complete synchronization in frequency occurs. We found that when oscillators under the influence of coupling form a cluster of the same time-average frequency, their phases start to correlate. An order parameter, which measures this correlation, starts to grow at this stage until it reaches its maximum. This means that a time-average phase-locked state is reached between the oscillators inside the cluster of the same time-average frequency. At this coupling strength the cluster attracts individual oscillators or another cluster to join in. We also observe that clustering in averaged frequencies orders the phases of the oscillators. This behavior is found at all the transition points studied.
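
    The order parameter is not defined in this record; a conventional choice for measuring phase correlation is the Kuramoto-type quantity r = |(1/N) Σ_j exp(iθ_j)|, which approaches 1 for locked phases and 0 for scattered ones. The sketch below assumes that standard form, which may differ from the authors' exact definition.

        # Sketch: Kuramoto-type order parameter r = |<exp(i*theta)>| of a set of
        # oscillator phases; r -> 1 for locked phases, r -> 0 for scattered ones.
        import cmath, math, random

        def order_parameter(phases):
            return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

        locked = [0.1 * math.sin(k) for k in range(50)]       # nearly aligned phases
        scattered = [random.uniform(0, 2 * math.pi) for _ in range(50)]
        print(order_parameter(locked), order_parameter(scattered))  # ~1.0 vs ~0.1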

  9. Phase correlation and clustering of a nearest neighbour coupled oscillators system

    International Nuclear Information System (INIS)

    EI-Nashar, Hassan F.

    2002-09-01

    We investigated the phases in a system of nearest neighbour coupled oscillators before complete synchronization in frequency occurs. We found that when oscillators under the influence of coupling form a cluster of the same time-average frequency, their phases start to correlate. An order parameter, which measures this correlation, starts to grow at this stage until it reaches its maximum. This means that a time-average phase-locked state is reached between the oscillators inside the cluster of the same time-average frequency. At this coupling strength the cluster attracts individual oscillators or another cluster to join in. We also observe that clustering in averaged frequencies orders the phases of the oscillators. This behavior is found at all the transition points studied. (author)

  10. Cluster Synchronization of Diffusively Coupled Nonlinear Systems: A Contraction-Based Approach

    Science.gov (United States)

    Aminzare, Zahra; Dey, Biswadip; Davison, Elizabeth N.; Leonard, Naomi Ehrich

    2018-04-01

    Finding the conditions that foster synchronization in networked nonlinear systems is critical to understanding a wide range of biological and mechanical systems. However, the conditions proved in the literature for synchronization in nonlinear systems with linear coupling, such as has been used to model neuronal networks, are in general not strict enough to accurately determine the system behavior. We leverage contraction theory to derive new sufficient conditions for cluster synchronization in terms of the network structure, for a network where the intrinsic nonlinear dynamics of each node may differ. Our result requires that network connections satisfy a cluster-input-equivalence condition, and we explore the influence of this requirement on network dynamics. For application to networks of nodes with FitzHugh-Nagumo dynamics, we show that our new sufficient condition is tighter than those found in previous analyses that used smooth or nonsmooth Lyapunov functions. Improving the analytical conditions for when cluster synchronization will occur based on network configuration is a significant step toward facilitating understanding and control of complex networked systems.

  11. Nuclear clustering - a cluster core model study

    International Nuclear Information System (INIS)

    Paul Selvi, G.; Nandhini, N.; Balasubramaniam, M.

    2015-01-01

    Nuclear clustering, like other clustering phenomena in nature, is a well warranted study, since it helps us understand the nature of the binding of nucleons inside the nucleus, closed-shell behaviour when the system is highly deformed, and dynamics and structure at the extremes. Several models account for the clustering phenomenon of nuclei. In this work, we present a cluster core model study of nuclear clustering in light mass nuclei.

  12. DSN Beowulf Cluster-Based VLBI Correlator

    Science.gov (United States)

    Rogstad, Stephen P.; Jongeling, Andre P.; Finley, Susan G.; White, Leslie A.; Lanyi, Gabor E.; Clark, John E.; Goodhart, Charles E.

    2009-01-01

    The NASA Deep Space Network (DSN) requires a broadband VLBI (very long baseline interferometry) correlator to process data routinely taken as part of the VLBI source Catalogue Maintenance and Enhancement task (CAT M&E) and the Time and Earth Motion Precision Observations task (TEMPO). The data provided by these measurements are a crucial ingredient in the formation of precision deep-space navigation models. In addition, a VLBI correlator is needed to provide support for other VLBI related activities for both internal and external customers. The JPL VLBI Correlator (JVC) was designed, developed, and delivered to the DSN as a successor to the legacy Block II Correlator. The JVC is a full-capability VLBI correlator that uses software processes running on multiple computers to cross-correlate two-antenna broadband noise data. Components of this new system (see Figure 1) consist of Linux PCs integrated into a Beowulf cluster, an existing Mark5 data storage system, a RAID array, an existing software correlator package (SoftC) originally developed for Delta DOR Navigation processing, and various custom-developed software processes and scripts. Parallel processing on the JVC is achieved by assigning slave nodes of the Beowulf cluster to process separate scans in parallel until all scans have been processed. Due to the single-stream sequential playback of the Mark5 data, some ramp-up time is required before all nodes can have access to the required scan data. Core functions of each processing step are accomplished using optimized C programs. The coordination and execution of these programs across the cluster is accomplished using Perl scripts, PostgreSQL commands, and a handful of miscellaneous system utilities. Mark5 data modules are loaded on the Mark5 data system playback units, one per station. Data processing is started when the operator scans the Mark5 systems and runs a script that reads various configuration files and then creates an experiment-dependent status database

  13. Pseudo-potential method for taking into account the Pauli principle in cluster systems

    International Nuclear Information System (INIS)

    Krasnopol'skii, V.M.; Kukulin, V.I.

    1975-01-01

    In order to take account of the Pauli principle in cluster systems (such as 3α, α + α + n), a convenient method of renormalizing the deep attractive cluster-cluster potentials with forbidden states is suggested. The renormalization consists of adding projectors upon the occupied states, with an infinite coupling constant, to the initial deep potential, which means that we pass to pseudo-potentials. The pseudo-potential approach of projecting upon the non-eigenstates is shown to be equivalent to the orthogonality condition model of Saito et al. The orthogonality of the many-particle wave function to the forbidden states of each two-cluster subsystem is clearly demonstrated
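
    In symbols, the renormalization described above is the standard orthogonalizing pseudo-potential construction; the following is a sketch in our own notation, which may differ from the authors' exact formulation:

        % Deep attractive cluster-cluster potential V, supplemented by projectors
        % onto the Pauli-forbidden states |varphi_f>, with the coupling sent to infinity:
        V_{\mathrm{pseudo}} = V + \lambda \sum_{f} |\varphi_f\rangle\langle\varphi_f| ,
        \qquad \lambda \to \infty

    In this limit, every finite-energy solution must satisfy \langle\varphi_f|\Psi\rangle = 0 for all forbidden states, which reproduces the orthogonality condition model mentioned in the abstract.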

  14. Cluster analysis in systems of magnetic spheres and cubes

    Energy Technology Data Exchange (ETDEWEB)

    Pyanzina, E.S., E-mail: elena.pyanzina@urfu.ru [Ural Federal University, Lenin Av. 51, Ekaterinburg (Russian Federation); Gudkova, A.V. [Ural Federal University, Lenin Av. 51, Ekaterinburg (Russian Federation); Donaldson, J.G. [University of Vienna, Sensengasse 8, Vienna (Austria); Kantorovich, S.S. [Ural Federal University, Lenin Av. 51, Ekaterinburg (Russian Federation); University of Vienna, Sensengasse 8, Vienna (Austria)

    2017-06-01

    In the present work we use molecular dynamics simulations and graph-theory based cluster analysis to compare self-assembly in systems of magnetic spheres, and cubes where the dipole moment is oriented along the side of the cube in the [001] crystallographic direction. We show that under the same conditions cubes aggregate far less than their spherical counterparts. This difference can be explained in terms of the volume of phase space in which the formation of the bond is thermodynamically advantageous. It follows that this volume is much larger for a dipolar sphere than for a dipolar cube. - Highlights: • A comparison of the degree of self-assembly in systems of magnetic spheres and cubes. • Spheres are more likely to form larger clusters than cubes. • Differences in microstructure will manifest in the magnetic response of each system.

  15. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems

    Directory of Open Access Journals (Sweden)

    Lili Shen

    2018-06-01

    Full Text Available The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial-vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.

  16. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    Science.gov (United States)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

    Embedded systems have been applied to many fields, including households and industrial sites, and user interface technology with simple on-screen display has been implemented more and more. User demands are increasing, and the applicable fields of such systems keep widening due to the high penetration rate of the Internet; therefore, the demand for embedded systems tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and each frame from the web camera is compared to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board utilizes the S3C2410 MPU, which uses the ARM920T core from Samsung. The operating system was ported to an embedded Linux kernel, and a root file system was mounted. The stored images are sent to the client PC through the web browser, using the network functions of Linux and a program developed with the TCP/IP protocol.
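
    The displacement-vector step described above can be illustrated with a minimal exhaustive block-matching search; the window sizes, search range, and names below are illustrative assumptions, not the paper's implementation.

        # Sketch: exhaustive block matching. Find the (dy, dx) shift of a
        # reference block inside the next frame by minimizing the sum of
        # absolute differences (SAD).
        import numpy as np

        def block_match(prev, curr, y, x, block=16, search=8):
            ref = prev[y:y + block, x:x + block].astype(int)
            best, best_dyx = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0:
                        continue
                    if yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                        continue
                    sad = np.abs(curr[yy:yy + block, xx:xx + block].astype(int) - ref).sum()
                    if best is None or sad < best:
                        best, best_dyx = sad, (dy, dx)
            return best_dyx  # displacement vector used to drive the pan/tilt motors

        rng = np.random.default_rng(0)
        frame0 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))  # simulate camera motion
        print(block_match(frame0, frame1, 24, 24))            # expect (2, -3)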

  17. Discovery of four gravitational lensing systems by clusters in the SDSS DR6

    International Nuclear Information System (INIS)

    Wen Zhonglue; Han Jinlin; Xu Xiangyang; Jiang Yunying; Guo Zhiqing; Wang Pengfei; Liu Fengshan

    2009-01-01

    We report the discovery of 4 strong gravitational lensing systems by visual inspections of the Sloan Digital Sky Survey images of galaxy clusters in Data Release 6 (SDSS DR6). Two of the four systems show Einstein rings while the others show tangential giant arcs. These arcs or rings have large angular separations (> 8) from the bright central galaxies and show bluer color compared with the red cluster galaxies. In addition, we found 5 probable and 4 possible lenses by galaxy clusters. (letters)

  18. Embedded Real-Time Linux for Instrument Control and Data Logging

    Science.gov (United States)

    Clanton, Sam; Gore, Warren J. (Technical Monitor)

    2002-01-01

    When I moved to the west coast to take a job at NASA's Ames Research Center in Mountain View, CA, I was impressed with the variety of equipment and software which scientists at the center use to conduct their research. I was happy to find that I was just as likely to see a machine running Linux as one running Windows in the offices and laboratories of NASA Ames (although many people seem to use Macs around here). I was especially happy to find that the particular group with whom I was going to work, the Atmospheric Physics Branch at Ames, relied almost entirely on Linux machines for their day-to-day work. So it was no surprise that when it was time to construct a new control system for one of their most important pieces of hardware, a switch from an unpredictable DOS-based platform to an Embedded Linux-based one was a decision easily made. The system I am working on is called the Solar Spectral Flux Radiometer (SSFR), a PC-104 based system custom-built by Dr. Warren Gore at Ames. Dr. Gore, Dr. Peter Pilewskie, Dr. Maura Robberies and Larry Pezzolo use the SSFR in their research. The team working on the controller project consists of Dr. Gore, John Pommier, and myself. The SSFR is used by the Ames Atmospheric Radiation Group to measure solar spectral irradiance at moderate resolution to determine the radiative effect of clouds, aerosols, and gases on climate, and also to infer the physical properties of aerosols and clouds. Two identical SSFRs have been built and successfully deployed in three field missions: 1) the Department of Energy Atmospheric Radiation Measurement (ARM) Enhanced Shortwave Experiment (ARESE) II in February/March, 2000; 2) the Puerto Rico Dust Experiment (PRIDE) in July, 2000; and 3) the South African Regional Science Initiative (SAFARI) in August/September, 2000. Additionally, the SSFR was used to acquire water vapor spectra using the Ames Diameter base-path multiple-reflection absorption cell in a laboratory experiment.

  19. Multi-terabyte EIDE disk arrays running Linux RAID5

    International Nuclear Information System (INIS)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; Petravick, D.L.

    2004-01-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important
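
    The single-failure protection described above rests on a simple identity: the parity block is the bytewise XOR of the data blocks in a stripe, so any one lost block equals the XOR of the survivors. A toy illustration follows; this shows the arithmetic only, not the Linux md or 3ware implementation.

        # Toy RAID-5 parity: parity = XOR of all data blocks in a stripe.
        # A single failed block is recovered by XORing the survivors with parity.
        from functools import reduce

        def xor_blocks(blocks):
            return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

        data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data disks
        parity = xor_blocks(data)            # block on the parity disk

        # The disk holding data[1] fails: rebuild its block from the rest.
        recovered = xor_blocks([data[0], data[2], parity])
        assert recovered == data[1]
        print(recovered)                     # b'BBBB'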

  20. Multi-terabyte EIDE disk arrays running Linux RAID5

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; /Mississippi U.; Petravick, D.L.; /Fermilab

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.

  1. Schedulability Analysis and Optimization for the Synthesis of Multi-Cluster Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2003-01-01

    We present an approach to schedulability analysis for the synthesis of multi-cluster distributed embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. We have also proposed a buffer size and worst case queuing delay analysis for the gateways, responsible for routing inter-cluster traffic. Optimization heuristics for the priority assignment and synthesis of bus access parameters aimed at producing a schedulable system with minimal buffer needs have been proposed. Extensive experiments and a real-life example show the efficiency of our approaches.

  2. Schedulability Analysis and Optimization for the Synthesis of Multi-Cluster Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2003-01-01

    An approach to schedulability analysis for the synthesis of multi-cluster distributed embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways, is presented. A buffer size and worst case queuing delay analysis for the gateways, responsible for routing inter-cluster traffic, is also proposed. Optimisation heuristics for the priority assignment and synthesis of bus access parameters aimed at producing a schedulable system with minimal buffer needs have been proposed. Extensive experiments and a real-life example show the efficiency of the approaches.

  3. DEVELOPMENT OF TRANSPORT SUBSYSTEM STREAMING DATA REPLICATION CLUSTER IN CORBA-SYSTEM WITH ZEROMQ TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    F. A. Kozlov

    2013-03-01

    Full Text Available The article deals with the peculiarities of creating a distributed cluster system with streaming data replication. Ways of implementing a replication cluster in CORBA systems with ZeroMQ technology are presented. The major advantages of ZeroMQ over similar technologies for building this type of distributed system are considered.
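
    A minimal sketch of streaming replication over ZeroMQ, assuming the common pyzmq binding and a PUB/SUB topology; the endpoint, topic, and message framing are illustrative assumptions, and the article's CORBA integration is not reproduced here.

        # Sketch: a master publishes a replication stream; replicas subscribe.
        # Requires pyzmq (pip install pyzmq). Run master and replica in separate
        # processes; note the PUB/SUB "slow joiner" caveat: subscribers should
        # connect before the master starts publishing.
        import zmq

        def run_master(endpoint="tcp://*:5556"):
            pub = zmq.Context().socket(zmq.PUB)
            pub.bind(endpoint)
            for seq in range(3):
                # Frames: topic, sequence number, opaque payload.
                pub.send_multipart([b"replica-stream", str(seq).encode(), b"payload"])

        def run_replica(endpoint="tcp://localhost:5556"):
            sub = zmq.Context().socket(zmq.SUB)
            sub.connect(endpoint)
            sub.setsockopt(zmq.SUBSCRIBE, b"replica-stream")
            topic, seq, payload = sub.recv_multipart()
            print(topic, seq, payload)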

  4. 75 FR 17700 - Energy Efficient Building Systems Regional Innovation Cluster Initiative-Joint Federal Funding...

    Science.gov (United States)

    2010-04-07

    ... economically dynamic regional innovation cluster focused on energy efficient buildings technologies and systems...-risk, high-reward research that overcomes technology challenges through approaches that span basic... DEPARTMENT OF ENERGY Energy Efficient Building Systems Regional Innovation Cluster Initiative...

  5. Development of KOMAC Beam Monitoring System Using EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Song, Young-Gi; Yun, Sang-Pil; Kim, Han-Sung; Kwon, Hyeok-Jung; Cho, Yong-Sub [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    The beam loss signals must be digitized, and the sampling has to be synchronized to a reference signal, which is an external trigger for beam operation. The digitized data must be accessible by the Experimental Physics and Industrial Control System (EPICS)-based control system, which manages the whole accelerator control. In order to satisfy this requirement, an Input/Output Controller (IOC), which runs Linux on a CPU module with PCI Express based Analog to Digital Converter (ADC) modules, has been adopted. An associated Linux driver and EPICS device support module have also been developed. The IOC meets the requirements, and the development and maintenance of its software is considerably efficient. The data acquisition system running EPICS will be used as the beam power of the KOrea Multi-purpose Accelerator Complex (KOMAC) is increased. The beam monitoring system integrates BLM and BPM signals into the control system and offers real-time data to operators. The IOC, implemented with Linux and a PCI driver, supports data acquisition as a very flexible solution.
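
    On the client side, process variables served by such an IOC can be read through EPICS Channel Access; a sketch using the pyepics Python binding follows. The PV names are hypothetical, invented here for illustration only.

        # Sketch: read and monitor EPICS PVs published by a beam-monitoring IOC.
        # Requires pyepics and a reachable IOC; the PV names are made up.
        from epics import caget, camonitor

        loss = caget("KOMAC:BLM01:LOSS")     # one-shot read of a beam-loss PV
        print("beam loss:", loss)

        def on_update(pvname=None, value=None, **kw):
            print(pvname, "->", value)       # called on every value change

        camonitor("KOMAC:BPM01:X", callback=on_update)  # subscribe to a BPM PV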

  6. Development of KOMAC Beam Monitoring System Using EPICS

    International Nuclear Information System (INIS)

    Song, Young-Gi; Yun, Sang-Pil; Kim, Han-Sung; Kwon, Hyeok-Jung; Cho, Yong-Sub

    2014-01-01

    The beam loss signals must be digitized, and the sampling has to be synchronized to a reference signal, which is an external trigger for beam operation. The digitized data must be accessible by the Experimental Physics and Industrial Control System (EPICS)-based control system, which manages the whole accelerator control. In order to satisfy this requirement, an Input/Output Controller (IOC), which runs Linux on a CPU module with PCI Express based Analog to Digital Converter (ADC) modules, has been adopted. An associated Linux driver and EPICS device support module have also been developed. The IOC meets the requirements, and the development and maintenance of its software is considerably efficient. The data acquisition system running EPICS will be used as the beam power of the KOrea Multi-purpose Accelerator Complex (KOMAC) is increased. The beam monitoring system integrates BLM and BPM signals into the control system and offers real-time data to operators. The IOC, implemented with Linux and a PCI driver, supports data acquisition as a very flexible solution

  7. FPGA cluster for high-performance AO real-time control system

    Science.gov (United States)

    Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.

    2006-06-01

    Whilst the high throughput and low latency requirements for the next generation of AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long term solution with high performance on throughput and excellent predictability on latency. Moreover, FPGA devices have highly capable programmable interfacing, which leads to a more highly integrated system. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, the memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, needs to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.

  8. EVTux: A Linux Distribution for the EVT (Visual and Technological Education) Subject and for the Arts

    Directory of Open Access Journals (Sweden)

    José Alberto Rodrigues

    2012-09-01

    Full Text Available EVTux is a Linux distribution based on the research we carried out on the integration of digital tools into the subject of Visual and Technological Education (EVT). After eighteen months of study and development of the project with a group of about ninety collaborators, four hundred and thirty digital tools suitable for integration in the context of Visual and Technological Education were surveyed. From that list followed the cataloguing and categorization of the tools, taking into account the contents and exploration areas of the subject. EVTux comes with all the Linux applications pre-installed and, integrated into the browser, the digital tools that require no installation and open directly from a web page, in addition to more than three hundred and fifty manuals supporting the use of these tools. Already available, EVTux is a powerful resource that aggregates all the work of EVTdigital, making it a tool of choice for teachers of this subject and of the Arts to use in the classroom.

  9. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    Science.gov (United States)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length rate error without a priori knowledge of their membership functions or familiarity with the behavior of the Tethered Satellite System.

  10. Real-time mobile customer short message system design and implementation

    Science.gov (United States)

    Han, Qirui; Sun, Fang

    To expand the current mobile phone short message service, and to make contact between schools, teachers, and parents, as well as feedback in the modern school office system, more timely and convenient, we designed and developed a Short Message System based on the Linux platform. The state-of-the-art principles and design proposals of the Short Message System based on the Linux platform are introduced. Finally, we propose an optimized secure access authentication method. At present, many schools, businesses, and research institutions are gradually endorsing the promotion and application of the messaging system, which has shown promising market prospects.

  11. Influence of system temperature on the micro-structures and dynamics of dust clusters in dusty plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Song, Y. L.; Huang, F., E-mail: huangfeng@cau.edu.cn [College of Science, China Agricultural University, Beijing 100083 (China); He, Y. F.; Wu, L. [College of Information and Electrical Engineering, China Agricultural University, Beijing 100083 (China); Liu, Y. H. [School of Physics and Optoelectronic Engineering, Ludong University, Yantai 264025 (China); Chen, Z. Y. [Department of Physics, Beijing University of Chemical Technology, Beijing 100029 (China); Yu, M. Y. [Institute for Fusion Theory and Simulation, Zhejiang University, Hangzhou 310027 (China); Institute for Theoretical Physics I, Ruhr University, D-44801 Bochum (Germany)

    2015-06-15

    The influence of the system temperature on the micro-structures and dynamics of dust clusters in dusty plasmas is investigated through laboratory experiment and molecular dynamics simulation. The micro-structures, defect numbers, and pair correlation function of the dust clusters are studied for different system temperatures. The dust grains' trajectories, the mean square displacement, and the corresponding self-diffusion coefficient of the clusters are calculated at different temperatures to illustrate the phase properties of the dust clusters. The simulation results confirm that with increasing system temperature the micro-structures and dynamics of dust clusters gradually change, in qualitative agreement with the experimental results.

  12. FURTHER DEFINITION OF THE MASS-METALLICITY RELATION IN GLOBULAR CLUSTER SYSTEMS AROUND BRIGHTEST CLUSTER GALAXIES

    International Nuclear Information System (INIS)

    Cockcroft, Robert; Harris, William E.; Wehner, Elizabeth M. H.; Whitmore, Bradley C.; Rothberg, Barry

    2009-01-01

    We combine the globular cluster (GC) data for 15 brightest cluster galaxies and use this material to trace the mass-metallicity relations (MMRs) in their globular cluster systems (GCSs). This work extends previous studies which correlate the properties of the MMR with those of the host galaxy. Our combined data sets show a mean trend for the metal-poor subpopulation that corresponds to a scaling of heavy-element abundance with cluster mass Z ∼ M^(0.30±0.05). No trend is seen for the metal-rich subpopulation, which has a scaling relation that is consistent with zero. We also find that the scaling exponent is independent of the GCS specific frequency and host galaxy luminosity, except perhaps for dwarf galaxies. We present new photometry in (g',i') obtained with Gemini/GMOS for the GC populations around the southern giant ellipticals NGC 5193 and IC 4329. Both galaxies have rich cluster populations which show up as normal, bimodal sequences in the color-magnitude diagram. We test the observed MMRs and argue that they are statistically real, and not an artifact caused by the method we used. We also argue against asymmetric contamination causing the observed MMR as our mean results are no different from other contamination-free studies. Finally, we compare our method to the standard bimodal fitting method (KMM or RMIX) and find our results are consistent. Interpretation of these results is consistent with recent models for GC formation in which the MMR is determined by GC self-enrichment during their brief formation period.

  13. Migration of the Three-dimensional Wind Field (3DWF) Model from Linux to Windows and Mobile Platforms

    Science.gov (United States)

    2017-11-01

    Migration of the Three-dimensional Wind Field (3DWF) Model from Linux to Windows and Mobile Platforms, by Giap Huynh and Yansen Wang. Approved for public release; distribution is unlimited. [Only report front matter survives in this record; the table of contents covers results in netCDF, morphological data generation, and running 3DWF on Windows mobile devices.]

  14. MONO FOR CROSS-PLATFORM CONTROL SYSTEM ENVIRONMENT

    International Nuclear Information System (INIS)

    Nishimura, Hiroshi; Timossi, Chris

    2006-01-01

    Mono is an independent implementation of the .NET Framework by Novell that runs on multiple operating systems (including Windows, Linux and Macintosh) and allows any .NET-compatible application to run unmodified. For instance, Mono can run programs with graphical user interfaces (GUI) developed with the C# language on Windows with Visual Studio (a full port of WinForm for Mono is in progress). We present the results of tests we performed to evaluate the portability of our control system .NET applications from MS Windows to Linux

  15. Clustering and Recurring Anomaly Identification: Recurring Anomaly Detection System (ReADS)

    Science.gov (United States)

    McIntosh, Dawn

    2006-01-01

    This viewgraph presentation reviews the Recurring Anomaly Detection System (ReADS), a tool to analyze text reports, such as aviation reports and maintenance records: (1) text clustering algorithms group large quantities of reports and documents, reducing human error and fatigue; (2) the system identifies interconnected reports, automating the discovery of possible recurring anomalies; and (3) it provides a visualization of the clusters and recurring anomalies. We have illustrated our techniques on data from Shuttle and ISS discrepancy reports, as well as ASRS data. ReADS has been integrated with a secure online search
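
    The report-grouping step can be approximated with standard text clustering; below is a sketch using scikit-learn's TF-IDF vectorizer and k-means. The tooling, reports, and cluster count are assumptions for illustration; the actual ReADS algorithms are not specified in this record.

        # Sketch: group short anomaly reports by TF-IDF similarity with k-means.
        # Requires scikit-learn; the reports and cluster count are illustrative.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        reports = [
            "hydraulic pump pressure dropped during ascent",
            "pressure anomaly in hydraulic system on ascent",
            "cabin display flickered after power cycle",
            "intermittent display flicker following power-up",
        ]
        X = TfidfVectorizer(stop_words="english").fit_transform(reports)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(labels)  # reports describing the same recurring anomaly share a label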

  16. Multi-agent grid system Agent-GRID with dynamic load balancing of cluster nodes

    Science.gov (United States)

    Satymbekov, M. N.; Pak, I. T.; Naizabayeva, L.; Nurzhanov, Ch. A.

    2017-12-01

    This study presents a system designed for the automated load balancing of a cluster by analysing the load of compute nodes and subsequently migrating virtual machines from loaded nodes to less loaded ones. The system increases the performance of cluster nodes and helps in the timely processing of data. The grid system balances the work of the cluster nodes; the relevance of the system lies in applying multi-agent balancing to the solution of such problems.

  17. Simplifying the Access to HPC Resources by Integrating them in the Application GUI

    KAUST Repository

    van Waveren, Matthijs

    2016-06-22

    The computing landscape of KAUST is increasing in complexity. Researchers have access to the 9th fastest supercomputer in the world (Shaheen II) and several other HPC clusters. They work on local Windows, Mac, or Linux workstations. In order to facilitate access to the HPC systems, we have developed interfaces to several research applications that automate input data transfer, job submission, and retrieval of results. The user now submits jobs to the cluster from within the application GUI on his workstation, and no longer has to log in to the cluster directly.

  18. Diametrical clustering for identifying anti-correlated gene clusters.

    Science.gov (United States)

    Dhillon, Inderjit S; Marcotte, Edward M; Roshan, Usman

    2003-09-01

    Clustering genes based upon their expression patterns allows us to predict gene function. Most existing clustering algorithms cluster genes together when their expression patterns show high positive correlation. However, it has been observed that genes whose expression patterns are strongly anti-correlated can also be functionally similar. Biologically, this is not unintuitive: genes responding to the same stimuli, regardless of the nature of the response, are more likely to operate in the same pathways. We present a new diametrical clustering algorithm that explicitly identifies anti-correlated clusters of genes. Our algorithm proceeds by iteratively (i) re-partitioning the genes and (ii) computing the dominant singular vector of each gene cluster, each singular vector serving as the prototype of a 'diametric' cluster. We empirically show the effectiveness of the algorithm in identifying diametrical or anti-correlated clusters. Testing the algorithm on yeast cell cycle data, fibroblast gene expression data, and DNA microarray data from yeast mutants reveals that opposed cellular pathways can be discovered with this method. We present systems whose mRNA expression patterns, and likely their functions, oppose the yeast ribosome and proteosome, along with evidence for the inverse transcriptional regulation of a number of cellular systems.
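
    The iteration described, re-partitioning and then taking each cluster's dominant singular vector as its diametric prototype, can be sketched directly in numpy. This is a minimal version under the assumption that genes are rows of an expression matrix; the squared projection deliberately ignores the sign of the correlation.

        # Sketch of diametrical clustering: assign each unit-norm expression
        # vector to the prototype maximizing the SQUARED projection (so strongly
        # anti-correlated genes land in the same cluster), then refresh each
        # prototype as the dominant right singular vector of its cluster.
        import numpy as np

        def diametrical(X, k, iters=20, seed=0):
            X = X / np.linalg.norm(X, axis=1, keepdims=True)
            rng = np.random.default_rng(seed)
            W = X[rng.choice(len(X), size=k, replace=False)]  # initial prototypes
            labels = np.zeros(len(X), dtype=int)
            for _ in range(iters):
                labels = np.argmax((X @ W.T) ** 2, axis=1)    # sign-blind cosine
                for j in range(k):
                    members = X[labels == j]
                    if len(members):
                        _, _, vt = np.linalg.svd(members, full_matrices=False)
                        W[j] = vt[0]                          # dominant singular vector
            return labels

        rng = np.random.default_rng(1)
        base1, base2 = rng.standard_normal(8), rng.standard_normal(8)
        X = np.vstack([s * b + 0.1 * rng.standard_normal(8)
                       for b in (base1, base2) for s in (1, -1, 1, -1, 1)])
        print(diametrical(X, 2))  # rows built from +/-base1 typically share one label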

  19. THE ACS FORNAX CLUSTER SURVEY. X. COLOR GRADIENTS OF GLOBULAR CLUSTER SYSTEMS IN EARLY-TYPE GALAXIES

    International Nuclear Information System (INIS)

    Liu Chengze; Peng, Eric W.; Jordan, Andres; Ferrarese, Laura; Blakeslee, John P.; Cote, Patrick; Mei, Simona

    2011-01-01

    We use the largest homogeneous sample of globular clusters (GCs), drawn from the ACS Virgo Cluster Survey (ACSVCS) and ACS Fornax Cluster Survey (ACSFCS), to investigate the color gradients of GC systems in 76 early-type galaxies. We find that most GC systems possess an obvious negative gradient in (g-z) color with radius (bluer outward), which is consistent with previous work. For GC systems displaying color bimodality, both metal-rich and metal-poor GC subpopulations present shallower but significant color gradients on average, and the mean color gradients of these two subpopulations are of roughly equal strength. The field of view of ACS mainly restricts us to measuring the inner gradients of the studied GC systems. These gradients, however, can introduce an aperture bias when measuring the mean colors of GC subpopulations from relatively narrow central pointings. Inferred corrections to previous work imply a reduced significance for the relation between the mean color of metal-poor GCs and their host galaxy luminosity. The GC color gradients also show a dependence with host galaxy mass where the gradients are weakest at the ends of the mass spectrum (in massive galaxies and in dwarf galaxies) and strongest in galaxies of intermediate mass, around a stellar mass of M* ∼ 10^10 M_sun. We also measure color gradients for field stars in the host galaxies. We find that GC color gradients are systematically steeper than field star color gradients, but the shape of the gradient-mass relation is the same for both. If gradients are caused by rapid dissipational collapse and weakened by merging, these color gradients support a picture where the inner GC systems of most intermediate-mass and massive galaxies formed early and rapidly with the most massive galaxies having experienced greater merging. The lack of strong gradients in the GC systems of dwarfs, which probably have not experienced many recent major mergers, suggests that low-mass halos were inefficient at retaining

  20. Packaging of control system software

    International Nuclear Information System (INIS)

    Zagar, K.; Kobal, M.; Saje, N.; Zagar, A.; Sabjan, R.; Di Maio, F.; Stepanov, D.

    2012-01-01

    Control system software consists of several parts - the core of the control system, drivers for integration of devices, configuration for user interfaces, alarm system, etc. Once the software is developed and configured, it must be installed on the computers where it runs. Usually, it is installed on an operating system whose services it needs, and in some cases it dynamically links with the libraries the operating system provides. An operating system can be quite complex itself - for example, a typical Linux distribution consists of several thousand packages. To manage this complexity, we have decided to rely on the Red Hat Package Management system (RPM) to package control system software, and also to ensure it is properly installed (i.e., that dependencies are also installed, and that scripts are run after installation if any additional actions need to be performed). As dozens of RPM packages need to be prepared, we are reducing the amount of effort and improving consistency between packages through a Maven-based infrastructure that assists in packaging (e.g., automated generation of RPM SPEC files, including automated identification of dependencies). So far, we have used it to package EPICS, Control System Studio (CSS) and several device drivers. We perform extensive testing on Red Hat Enterprise Linux 5.5, but we have also verified that packaging works on CentOS and Scientific Linux. In this article, we describe in greater detail the systematic system of packaging we are using, and its particular application for the ITER CODAC Core System. (authors)

  1. Securing recommender systems against shilling attacks using social-based clustering

    KAUST Repository

    Zhang, Xiangliang

    2013-07-01

    Recommender systems (RS) have been found supportive and practical in e-commerce and been established as useful aiding services. Despite their great adoption in the user communities, RS are still vulnerable to unscrupulous producers who try to promote their products by shilling the systems. With the advent of social networks new sources of information have been made available which can potentially render RS more resistant to attacks. In this paper we explore the information provided in the form of social links with clustering for diminishing the impact of attacks. We propose two algorithms, CluTr and WCluTr, to combine clustering with "trust" among users. We demonstrate that CluTr and WCluTr enhance the robustness of RS by experimentally evaluating them on data from a public consumer recommender system Epinions.com. © 2013 Springer Science+Business Media New York & Science Press, China.

  2. User and Document Group Approach of Clustering in Tagging Systems

    DEFF Research Database (Denmark)

    Pan, Rong; Xu, Guandong; Dolog, Peter

    2010-01-01

    In this paper, we propose a spectral clustering approach for modelling groups of users and documents, in order to capture the common preferences and relatedness of users and documents, and to reduce the time complexity of similarity calculations. In experiments, we investigate the selection of the optimal number of clusters. We also show a reduction in the time consumed when calculating similarities for the recommender system, achieved by first selecting a centroid and then comparing the items inside each group on its behalf.
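
    A compact sketch of the spectral step on a small similarity graph, under standard textbook assumptions (normalized graph Laplacian, k-means on the leading eigenvectors); this is the generic recipe, not necessarily the authors' exact variant.

        # Sketch: spectral clustering on a small similarity graph using the
        # normalized Laplacian and a tiny k-means on the leading eigenvectors.
        import numpy as np

        def spectral_clusters(S, k, iters=10):
            d = S.sum(axis=1)
            L = np.eye(len(S)) - S / np.sqrt(np.outer(d, d))  # normalized Laplacian
            _, vecs = np.linalg.eigh(L)                       # ascending eigenvalues
            U = vecs[:, :k]                                   # spectral embedding
            U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize
            C = U[np.linspace(0, len(U) - 1, k).astype(int)].copy()  # spread-out init
            labels = np.zeros(len(U), dtype=int)
            for _ in range(iters):                            # a few Lloyd iterations
                labels = np.argmin(((U[:, None] - C[None]) ** 2).sum(-1), axis=1)
                for j in range(k):
                    if (labels == j).any():
                        C[j] = U[labels == j].mean(axis=0)
            return labels

        # Two obvious groups: a triangle {0,1,2} and an edge {3,4}.
        S = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 0, 0],
                      [1, 1, 0, 0, 0],
                      [0, 0, 0, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)
        print(spectral_clusters(S, 2))  # e.g. [0 0 0 1 1]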

  3. Cluster analysis of autoantibodies in 852 patients with systemic lupus erythematosus from a single center.

    Science.gov (United States)

    Artim-Esen, Bahar; Çene, Erhan; Şahinkaya, Yasemin; Ertan, Semra; Pehlivan, Özlem; Kamali, Sevil; Gül, Ahmet; Öcal, Lale; Aral, Orhan; Inanç, Murat

    2014-07-01

    Associations between autoantibodies and clinical features have been described in systemic lupus erythematosus (SLE). Herein, we aimed to define autoantibody clusters and their clinical correlations in a large cohort of patients with SLE. We analyzed 852 patients with SLE who attended our clinic. Seven autoantibodies were selected for cluster analysis: anti-DNA, anti-Sm, anti-RNP, anticardiolipin (aCL) immunoglobulin (Ig)G or IgM, lupus anticoagulant (LAC), anti-Ro, and anti-La. Two-step clustering and Kaplan-Meier survival analyses were used. Five clusters were identified. A cluster consisted of patients with only anti-dsDNA antibodies, a cluster of anti-Sm and anti-RNP, a cluster of aCL IgG/M and LAC, and a cluster of anti-Ro and anti-La antibodies. Analysis revealed 1 more cluster that consisted of patients who did not belong to any of the clusters formed by antibodies chosen for cluster analysis. Sm/RNP cluster had significantly higher incidence of pulmonary hypertension and Raynaud phenomenon. DsDNA cluster had the highest incidence of renal involvement. In the aCL/LAC cluster, there were significantly more patients with neuropsychiatric involvement, antiphospholipid syndrome, autoimmune hemolytic anemia, and thrombocytopenia. According to the Systemic Lupus International Collaborating Clinics damage index, the highest frequency of damage was in the aCL/LAC cluster. Comparison of 10 and 20 years survival showed reduced survival in the aCL/LAC cluster. This study supports the existence of autoantibody clusters with distinct clinical features in SLE and shows that forming clinical subsets according to autoantibody clusters may be useful in predicting the outcome of the disease. Autoantibody clusters in SLE may exhibit differences according to the clinical setting or population.

  4. PCIe40 temperature protection system

    CERN Document Server

    Romero Aguilar, Angel

    2017-01-01

    PCIe40 is a high-throughput data-acquisition card based on PCI Express that is currently under development for the next upgrade of the LHCb experiment readout system. As part of this development, SMBus is intended to be used as a lightweight, out-of-band protocol to monitor the health of each data acquisition board. Starting from a simple prototype, the student will work on enabling SMBus communication between a COTS Linux host and various on-board sensors, on top of existing Linux facilities.

  5. Hydrogen spillover on DV (555-777) graphene – vanadium cluster system: First principles study

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, E. Mathan, E-mail: ranjit.t@res.srmuniv.ac.in, E-mail: mathanranjitha@gmail.com; Thapa, Ranjit, E-mail: ranjit.t@res.srmuniv.ac.in, E-mail: mathanranjitha@gmail.com [SRM Research Institute, SRM University, Kattankulathur, Tamil Nadu - 603203 (India); P, Sabarikirishwaran [Department of Physics and Nanotechnology, SRM University, Kattankulathur, Tamil Nadu - 603203 (India)

    2015-06-24

    Using dispersion corrected density functional theory (DFT+D), the interaction of a vanadium adatom and cluster with a divacancy (555-777) defective graphene sheet has been studied elaborately. We explore the prospect of hydrogen storage on a V₄ cluster adsorbed on divacancy graphene. It has been observed that the V₄ cluster (acting as a catalyst) can dissociate the H₂ molecule into H atoms with very low barrier energy. We introduce the spillover of the atomic hydrogen throughout the surface via the external mediator gallane (GaH₃) to form a hydrogenated system.

  6. Cluster fusion algorithm: application to Lennard-Jones clusters

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Solov'yov, Andrey V.; Greiner, Walter

    2006-01-01

    We present a new general theoretical framework for modelling the cluster structure and apply it to the description of the Lennard-Jones clusters. Starting from the initial tetrahedral cluster configuration, adding new atoms to the system and absorbing its energy at each step, we find cluster growing paths up to the cluster size of 150 atoms. We demonstrate that in this way all known global minima structures of the Lennard-Jones clusters can be found. Our method provides an efficient tool for the calculation and analysis of atomic cluster structure. With its use we justify the magic number sequence for the clusters of noble gas atoms and compare it with experimental observations. We report the striking correspondence of the peaks in the dependence of the second derivative of the binding energy per atom on cluster size calculated for the chain of the Lennard-Jones clusters based on the icosahedral symmetry...

  7. Cluster fusion algorithm: application to Lennard-Jones clusters

    DEFF Research Database (Denmark)

    Solov'yov, Ilia; Solov'yov, Andrey V.; Greiner, Walter

    2008-01-01

    We present a new general theoretical framework for modelling the cluster structure and apply it to description of the Lennard-Jones clusters. Starting from the initial tetrahedral cluster configuration, adding new atoms to the system and absorbing its energy at each step, we find cluster growing paths up to the cluster size of 150 atoms. We demonstrate that in this way all known global minima structures of the Lennard-Jones clusters can be found. Our method provides an efficient tool for the calculation and analysis of atomic cluster structure. With its use we justify the magic number sequence for the clusters of noble gas atoms and compare it with experimental observations. We report the striking correspondence of the peaks in the dependence of the second derivative of the binding energy per atom on cluster size calculated for the chain of the Lennard-Jones clusters based on the icosahedral symmetry...

  8. Sparse Reconstruction of the Merging A520 Cluster System

    Energy Technology Data Exchange (ETDEWEB)

    Peel, Austin [Département d’Astrophysique, IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-Yvette (France); Lanusse, François [McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Starck, Jean-Luc, E-mail: austin.peel@cea.fr [Université Paris Diderot, AIM, Sorbonne Paris Cité, CEA, CNRS, F-91191 Gif-sur-Yvette (France)

    2017-09-20

    Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events in which the bulk of the baryonic matter becomes displaced from the dark matter halos of the colliding subclusters. Since all mass bends light, weak gravitational lensing is a primary tool to study the total mass distribution in such systems. Combined with X-ray and optical analyses, mass maps of cluster mergers reconstructed from weak-lensing observations have been used to constrain the self-interaction cross-section of dark matter. The dynamically complex Abell 520 (A520) cluster is an exceptional case, even among merging systems: multi-wavelength observations have revealed a surprisingly high mass-to-light concentration of dark mass, the interpretation of which is difficult under the standard assumption of effectively collisionless dark matter. We revisit A520 using a new sparsity-based mass-mapping algorithm to independently assess the presence of the puzzling dark core. We obtain high-resolution mass reconstructions from two separate galaxy shape catalogs derived from Hubble Space Telescope observations of the system. Our mass maps agree well overall with the results of previous studies, but we find important differences. In particular, although we are able to identify the dark core at a certain level in both data sets, it is at much lower significance than has been reported before using the same data. As we cannot confirm the detection in our analysis, we do not consider A520 as posing a significant challenge to the collisionless dark matter scenario.

  9. Design of the SLAC RCE Platform: A General Purpose ATCA Based Data Acquisition System

    International Nuclear Information System (INIS)

    Herbst, R.; Claus, R.; Freytag, M.; Haller, G.; Huffer, M.; Maldonado, S.; Nishimura, K.; O'Grady, C.; Panetta, J.; Perazzo, A.; Reese, B.; Ruckman, L.; Thayer, J.G.; Weaver, M.

    2015-01-01

    The SLAC RCE platform is a general purpose clustered data acquisition system implemented on a custom ATCA compliant blade, called the Cluster On Board (COB). The core of the system is the Reconfigurable Cluster Element (RCE), which is a system-on-chip design based upon the Xilinx Zynq family of FPGAs, mounted on custom COB daughter-boards. The Zynq architecture couples a dual core ARM Cortex A9 based processor with a high performance 28 nm FPGA. The RCE has 12 external general purpose bi-directional high speed links, each supporting serial rates of up to 12 Gbps. Eight RCE nodes are included on a COB, each with a 10 Gbps connection to an on-board 24-port Ethernet switch integrated circuit. The COB is designed to be used with a standard full-mesh ATCA backplane allowing multiple RCE nodes to be tightly interconnected with minimal interconnect latency. Multiple shelves can be clustered using the front-panel 10 Gbps connections. The COB also supports local and inter-blade timing and trigger distribution. An experiment specific Rear Transition Module adapts the 96 high speed serial links to specific experiments and allows an experiment-specific timing and busy feedback connection. This coupling of processors with a high performance FPGA fabric in a low latency, multiple node cluster allows high speed data processing that can be easily adapted to any physics experiment. RTEMS and Linux are both ported to the module. The RCE has been used or is the baseline for several current and proposed experiments (LCLS, HPS, LSST, ATLAS-CSC, LBNE, DarkSide, ILC-SiD, etc.).

  10. Connecting to HPC VPN | High-Performance Computing | NREL

    Science.gov (United States)

    ...visualization, and file transfers. NREL Users: Logging in to Peregrine. Use SSH to log in to the system; your login and password will match your NREL network account login/password. From OS X or Linux, open a terminal. The login for the Windows HPC Cluster will match your NREL Active Directory login/password that you use to...

  11. Intrinsic Variability in Multiple Systems and Clusters: Open Questions

    Science.gov (United States)

    Lampens, P.

    2006-04-01

    It is most interesting and rewarding to probe the stellar structure of stars which belong to a system originating from the same parent cloud, as this provides additional and more accurate constraints for the models. New results on pulsating components in multiple systems and clusters are beginning to emerge regularly. Based on concrete studies, I will present still unsolved problems and discuss some of the issues which may affect our understanding of the pulsation physics in such systems but also in general.

  12. Cluster computing for lattice QCD simulations

    International Nuclear Information System (INIS)

    Coddington, P.D.; Williams, A.G.

    2000-01-01

    Full text: Simulations of lattice quantum chromodynamics (QCD) require enormous amounts of compute power. In the past, this has usually involved sharing time on large, expensive machines at supercomputing centres. Over the past few years, clusters of networked computers have become very popular as a low-cost alternative to traditional supercomputers. The dramatic improvements in performance (and more importantly, the ratio of price/performance) of commodity PCs, workstations, and networks have made clusters of off-the-shelf computers an attractive option for low-cost, high-performance computing. A major advantage of clusters is that since they can have any number of processors, they can be purchased using any sized budget, allowing research groups to install a cluster for their own dedicated use, and to scale up to more processors if additional funds become available. Clusters are now being built for high-energy physics simulations. Wuppertal has recently installed ALiCE, a cluster of 128 Alpha workstations running Linux, with a peak performance of 158 Gflops. The Jefferson Laboratory in the US has a 16 node Alpha cluster and plans to upgrade to a 256 processor machine. In Australia, several large clusters have recently been installed. Swinburne University of Technology has a cluster of 64 Compaq Alpha workstations used for astrophysics simulations. Early this year our DHPC group constructed a cluster of 116 dual Pentium PCs (i.e. 232 processors) connected by a Fast Ethernet network, which is used by chemists at Adelaide University and Flinders University to run computational chemistry codes. The Australian National University has recently installed a similar PC cluster with 192 processors. The Centre for the Subatomic Structure of Matter (CSSM) undertakes large-scale high-energy physics calculations, mainly lattice QCD simulations. The choice of the computer and network hardware for a cluster depends on the particular applications to be run on the machine. Our...
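
    The flavor of workload such clusters run can be sketched with a toy MPI domain decomposition: each node holds a slab of the lattice and exchanges boundary ("halo") values with its neighbours. A minimal mpi4py sketch, illustrative only, since production lattice QCD codes are far more elaborate:

      # Toy halo exchange on a 1-D periodic decomposition (run with mpirun).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local = np.full(1000, float(rank))          # this rank's lattice slab
      left, right = (rank - 1) % size, (rank + 1) % size

      # Exchange one boundary value with each neighbour (periodic lattice).
      recv_l = comm.sendrecv(local[-1], dest=right, source=left)
      recv_r = comm.sendrecv(local[0], dest=left, source=right)

      # A toy "update" using the halo values, standing in for the real stencil.
      local[0] = 0.5 * (local[0] + recv_l)
      local[-1] = 0.5 * (local[-1] + recv_r)
      print(f"rank {rank}/{size} done")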

  13. Аdaptive clustering algorithm for recommender systems

    OpenAIRE

    Stekh, Yu.; Artsibasov, V.

    2012-01-01

    In this article an adaptive clustering algorithm for recommender systems is developed.

  14. Clustering of galaxies near damped Lyman-alpha systems with (z) = 2.6

    Science.gov (United States)

    Wolfe, A. M.

    1993-01-01

    The galaxy two-point correlation function, xi, at (z) = 2.6 is determined by comparing the number of Ly-alpha-emitting galaxies in narrowband CCD fields selected for the presence of damped Ly-alpha absorption to their number in randomly selected control fields. Comparisons between the presented determination of (xi), a density-weighted volume average of xi, and model predictions for (xi) at large redshifts show that models in which the clustering pattern is fixed in proper coordinates are highly unlikely, while better agreement is obtained if the clustering pattern is fixed in comoving coordinates. Therefore, clustering of Ly-alpha-emitting galaxies around damped Ly-alpha systems at large redshifts is strong. It is concluded that the faint blue galaxies are drawn from a parent population different from normal galaxies, the presumed offspring of damped Ly-alpha systems.

  15. Customized recommendations for production management clusters of North American automatic milking systems.

    Science.gov (United States)

    Tremblay, Marlène; Hess, Justin P; Christenson, Brock M; McIntyre, Kolby K; Smink, Ben; van der Kamp, Arjen J; de Jong, Lisanne G; Döpfer, Dörte

    2016-07-01

    Automatic milking systems (AMS) are implemented in a variety of situations and environments. Consequently, there is a need to characterize individual farming practices and regional challenges to streamline management advice and objectives for producers. Benchmarking is often used in the dairy industry to compare farms by computing percentile ranks of the production values of groups of farms. Grouping for conventional benchmarking is commonly limited to the use of a few factors such as farms' geographic region or breed of cattle. We hypothesized that herds' production data and management information could be clustered in a meaningful way using cluster analysis and that this clustering approach would yield better peer groups of farms than benchmarking methods based on criteria such as country, region, breed, or breed and region. By applying mixed latent-class model-based cluster analysis to 529 North American AMS dairy farms with respect to 18 significant risk factors, 6 clusters were identified. Each cluster (i.e., peer group) represented unique management styles, challenges, and production patterns. When compared with peer groups based on criteria similar to the conventional benchmarking standards, the 6 clusters better predicted milk produced (kilograms) per robot per day. Each cluster represented a unique management and production pattern that requires specialized advice. For example, cluster 1 farms were those that recently installed AMS robots, whereas cluster 3 farms (the most northern farms) fed high amounts of concentrates through the robot to compensate for low-energy feed in the bunk. In addition to general recommendations for farms within a cluster, individual farms can generate their own specific goals by comparing themselves to farms within their cluster. This is very comparable to benchmarking but adds the specific characteristics of the peer group, resulting in better farm management advice. The improvement that cluster analysis allows for is
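
    As a rough illustration of the approach (not the study's actual mixed latent-class model), model-based clustering of standardized farm-level features into six peer groups can be sketched with a Gaussian mixture; the synthetic matrix below merely stands in for the 529 farms and 18 risk factors:

      # Model-based clustering of farms into 6 peer groups (illustrative).
      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      X = rng.normal(size=(529, 18))     # placeholder: 529 farms x 18 factors

      Xs = StandardScaler().fit_transform(X)
      gmm = GaussianMixture(n_components=6, covariance_type="full",
                            random_state=0)
      labels = gmm.fit_predict(Xs)
      print(np.bincount(labels))         # farms per peer group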

  16. 75 FR 7464 - Energy Efficient Building Systems Regional Innovation Cluster Initiative-Joint Federal Funding...

    Science.gov (United States)

    2010-02-19

    ... a regional innovation cluster focused on innovation in energy efficient building technologies and... technology challenges through approaches that span basic research to engineering development to... DEPARTMENT OF ENERGY Energy Efficient Building Systems Regional Innovation Cluster Initiative...

  17. Novel Functions of MicroRNA-17-92 Cluster in the Endocrine System.

    Science.gov (United States)

    Wan, Shan; Chen, Xiang; He, Yuedong; Yu, Xijie

    2018-01-01

    The miR-17-92 cluster is encoded by MIR17HG on chromosome 13 and is highly conserved in vertebrates. Published literature has shown that the miR-17-92 cluster critically regulates tumorigenesis and metastasis. Recent research has shown that the miR-17-92 cluster also plays novel roles in the endocrine system. Here we summarize recent findings on the physiological and pathological roles of the miR-17-92 cluster in bone, lipid and glucose metabolism. The miR-17-92 cluster plays significant regulatory roles in bone development and metabolism by regulating the differentiation and function of osteoblasts and osteoclasts. In addition, the miR-17-92 cluster is involved in nearly every aspect of lipid metabolism. Last but not least, the miR-17-92 cluster is closely bound up with pancreatic beta cell function, the development of type 1 diabetes, and insulin resistance. However, whether the miR-17-92 cluster is involved in the communication among bone, fat and glucose metabolism remains unknown. Growing evidence indicates that the miR-17-92 cluster plays significant roles in bone, lipid and glucose metabolism through a variety of signaling pathways. Fully understanding its modulating mechanisms may facilitate comprehension of the clinical and molecular features of metabolic disorders such as osteoporosis, atherosclerosis and diabetes mellitus, and may provide new drug targets to prevent and cure these disorders.

  18. Cluster electric spectroscopy of colloid chemical oxyhydrate systems

    CERN Document Server

    Sucharev, Yu I

    2015-01-01

    This monograph deals with the shape of the Liesegang operator and its respective phase diagrams of spontaneous surges, and analyzes the properties of cluster attractors. It describes the influence of pulsation noise or self-organization current of gel systems in a magnetic field on singularities of optic parameters of yttrium oxyhydrate, as well as on kinetic curves of changes in optic density of oxyhydrate systems, sorptive properties of d- and f-elements, and the structural organization of their colloids. This monograph is meant for postgraduate students, master's students, researchers, and those interested...

  19. The HectoMAP Cluster Survey. I. redMaPPer Clusters

    Science.gov (United States)

    Sohn, Jubee; Geller, Margaret J.; Rines, Kenneth J.; Hwang, Ho Seong; Utsumi, Yousuke; Diaferio, Antonaldo

    2018-04-01

    We use the dense HectoMAP redshift survey to explore the properties of 104 redMaPPer cluster candidates. The redMaPPer systems in HectoMAP cover the full range of richness and redshift (0.08 < z < 0.55). The redMaPPer systems included in the Subaru/Hyper Suprime-Cam public data release are bona fide clusters. The median number of spectroscopic members per cluster is ∼20. We include redshifts of 3547 member candidates listed in the redMaPPer catalog whether they are cluster members or not. We evaluate the redMaPPer membership probability spectroscopically. The purity (number of real systems) in redMaPPer exceeds 90% even at the lowest richness. Three massive galaxy clusters (M ∼ 2 × 10^13 M⊙) associated with X-ray emission in the HectoMAP region are not included in the public redMaPPer catalog with λ_rich > 20, because they lie outside the cuts for this catalog.

  20. Folksonomies and clustering in the collaborative system CiteULike

    Science.gov (United States)

    Capocci, Andrea; Caldarelli, Guido

    2008-06-01

    We analyze CiteULike, an online collaborative tagging system where users bookmark and annotate scientific papers. Such a system can be naturally represented as a tri-partite graph whose nodes represent papers, users and tags connected by individual tag assignments. The semantics of tags is studied here, in order to uncover the hidden relationships between tags. We find that the clustering coefficient can be used to analyze the semantical patterns among tags.
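
    The kind of analysis described can be illustrated by projecting tag assignments onto a tag co-occurrence graph and computing clustering coefficients with networkx (the toy assignments below are hypothetical, not CiteULike data):

      # Build a tag co-occurrence graph from (user, paper, tag) assignments
      # and compute clustering coefficients.
      import itertools
      import networkx as nx

      assignments = [                       # hypothetical tag assignments
          ("u1", "p1", "qcd"), ("u1", "p1", "lattice"),
          ("u2", "p2", "lattice"), ("u2", "p2", "cluster"),
          ("u1", "p2", "qcd"),
      ]

      G = nx.Graph()
      key = lambda a: (a[0], a[1])          # group tags per bookmark
      for _, group in itertools.groupby(sorted(assignments, key=key), key=key):
          tags = sorted({t for _, _, t in group})
          G.add_edges_from(itertools.combinations(tags, 2))

      print(nx.clustering(G))               # per-tag clustering coefficient
      print(nx.average_clustering(G))       # graph-wide average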

  1. Folksonomies and clustering in the collaborative system CiteULike

    International Nuclear Information System (INIS)

    Capocci, Andrea; Caldarelli, Guido

    2008-01-01

    We analyze CiteULike, an online collaborative tagging system where users bookmark and annotate scientific papers. Such a system can be naturally represented as a tri-partite graph whose nodes represent papers, users and tags connected by individual tag assignments. The semantics of tags is studied here, in order to uncover the hidden relationships between tags. We find that the clustering coefficient can be used to analyze the semantical patterns among tags

  2. Folksonomies and clustering in the collaborative system CiteULike

    Energy Technology Data Exchange (ETDEWEB)

    Capocci, Andrea [Dip. di Informatica e Sistemistica, Università 'Sapienza', via Ariosto 25, 00185 Rome (Italy); Caldarelli, Guido [SMC Centre, CNR-INFM, Dip. di Fisica, Università 'Sapienza', P.le A. Moro 5, 00185 Rome (Italy)

    2008-06-06

    We analyze CiteULike, an online collaborative tagging system where users bookmark and annotate scientific papers. Such a system can be naturally represented as a tri-partite graph whose nodes represent papers, users and tags connected by individual tag assignments. The semantics of tags is studied here, in order to uncover the hidden relationships between tags. We find that the clustering coefficient can be used to analyze the semantical patterns among tags.

  3. Globular Clusters - Guides to Galaxies

    CERN Document Server

    Richtler, Tom; Joint ESO-FONDAP Workshop on Globular Clusters

    2009-01-01

    The principal question of whether and how globular clusters can contribute to a better understanding of galaxy formation and evolution is perhaps the main driving force behind the overall endeavour of studying globular cluster systems. Naturally, this splits up into many individual problems. The objective of the Joint ESO-FONDAP Workshop on Globular Clusters - Guides to Galaxies was to bring together researchers, both observational and theoretical, to present and discuss the most recent results. Topics covered in these proceedings are: internal dynamics of globular clusters and interaction with host galaxies (tidal tails, evolution of cluster masses), accretion of globular clusters, detailed descriptions of nearby cluster systems, ultracompact dwarfs, formation of massive clusters in mergers and elsewhere, the ACS Virgo survey, galaxy formation and globular clusters, dynamics and kinematics of globular cluster systems and dark matter-related problems. With its wide coverage of the topic, this book constitutes...

  4. Cluster consensus in discrete-time networks of multiagents with inter-cluster nonidentical inputs.

    Science.gov (United States)

    Han, Yujuan; Lu, Wenlian; Chen, Tianping

    2013-04-01

    In this paper, cluster consensus of multiagent systems is studied via inter-cluster nonidentical inputs. Here, we consider general graph topologies, which might be time-varying. The cluster consensus is defined by two aspects: intra-cluster synchronization, the state at which differences between each pair of agents in the same cluster converge to zero, and inter-cluster separation, the state at which agents in different clusters are separated. For intra-cluster synchronization, the concepts and theories of consensus, including the spanning trees, scramblingness, infinite stochastic matrix product, and Hajnal inequality, are extended. As a result, it is proved that if the graph has cluster spanning trees and all vertices are self-linked, then the static linear system can realize intra-cluster synchronization. For the time-varying coupling cases, it is proved that if there exists T > 0 such that the union graph across any T-length time interval has cluster spanning trees and all graphs have all vertices self-linked, then the time-varying linear system can also realize intra-cluster synchronization. Under the assumption of common inter-cluster influence, a sort of inter-cluster nonidentical inputs are utilized to realize inter-cluster separation, such that each agent in the same cluster receives the same inputs and agents in different clusters have different inputs. In addition, the boundedness of the infinite sum of the inputs can guarantee the boundedness of the trajectory. As an application, we employ a modified non-Bayesian social learning model to illustrate the effectiveness of our results.
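
    A toy numerical sketch of these dynamics, x(t+1) = W x(t) + u(t), uses a row-stochastic W with self-loops whose two diagonal blocks form the clusters, and summable inputs that are identical within each cluster; all values below are illustrative:

      # Cluster consensus toy model: two 2-agent clusters with nonidentical,
      # summable inter-cluster inputs.
      import numpy as np

      W = np.array([[0.6, 0.4, 0.0, 0.0],    # agents 0,1: cluster A
                    [0.4, 0.6, 0.0, 0.0],
                    [0.1, 0.0, 0.5, 0.4],    # agents 2,3: cluster B
                    [0.0, 0.1, 0.4, 0.5]])   # row-stochastic, self-linked
      u0 = np.array([0.0, 0.0, 1.0, 1.0])    # identical within each cluster

      x = np.array([1.0, 5.0, -2.0, 7.0])
      for t in range(200):
          x = W @ x + u0 / (t + 1) ** 2      # summable inputs keep x bounded
      print(x)  # agents 0,1 agree; agents 2,3 agree at a different value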

  5. Towards the Availability of the Distributed Cluster Rendering System: Automatic Modeling and Verification

    DEFF Research Database (Denmark)

    Wang, Kemin; Jiang, Zhengtao; Wang, Yongbin

    2012-01-01

    In this study, we proposed a Continuous Time Markov Chain model of the availability of n-node clusters of the Distributed Rendering System. It is an infinite model; we formalized it and, based on the model, implemented a software tool which can automatically build models in the PRISM language. With the tool, whenever the number of nodes n and related parameters vary, we can create the PRISM model file rapidly and then use the PRISM model checker to verify related system properties. At the end of this study, we analyzed and verified the availability distributions of the Distributed Cluster Rendering System...
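
    The tooling described can be sketched as a short generator that writes a parametric PRISM model file; the failure/repair CTMC below is a generic cluster-availability model in standard PRISM syntax, not the paper's actual rendering-system model:

      # Emit a parametric PRISM CTMC for an n-node cluster (illustrative).
      def prism_model(n_nodes: int, fail_rate: float, repair_rate: float) -> str:
          return f"""ctmc

      const int N = {n_nodes};
      const double lambda = {fail_rate};   // per-node failure rate
      const double mu = {repair_rate};     // repair rate

      module cluster
        up : [0..N] init N;
        [fail]   up > 0 -> up * lambda : (up' = up - 1);
        [repair] up < N -> mu : (up' = up + 1);
      endmodule
      """

      with open("cluster.sm", "w") as f:
          f.write(prism_model(8, 0.001, 0.1))
      # The file can then be checked with the PRISM model checker.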

  6. European BWR R and D cluster for innovative passive safety systems

    International Nuclear Information System (INIS)

    Hicken, E.F.; Lensa, W. von

    1996-01-01

    The main technological innovation trends for future nuclear power plants tend towards a broader use of passive safety systems for the prevention, mitigation and management of severe accident scenarios. Several approaches have been undertaken in a number of European countries to study and demonstrate the feasibility and characteristics of innovative passive safety systems. The European BWR R and D Cluster combines those experimental and analytical efforts that are mainly directed to the introduction of passive safety systems into boiling water reactor technology. The Cluster is grouped around thermohydraulic test facilities in Europe for the qualification of innovative BWR safety systems, also taking into account especially the operating experience of the nuclear power plant Dodewaard and other BWRs, which already incorporated some passive safety features. The background, the objectives, the structure of the project and the work programme are presented in this paper as well as an outline of the significance of the expected results. (orig.) [de]

  7. A COMPUTER CLUSTER SYSTEM FOR PSEUDO-PARALLEL EXECUTION OF GEANT4 SERIAL APPLICATION

    Directory of Open Access Journals (Sweden)

    Memmo Federici

    2013-12-01

    Simulation of the interactions between particles and matter in studies for developing X-ray detectors generally requires very long calculation times (up to several days or weeks). These times are often a serious limitation for the success of the simulations and for the accuracy of the simulated models. One of the tools used by the scientific community to perform these simulations is Geant4 (Geometry And Tracking) [2, 3]. Building on the experience gained in the design of the AVES cluster computing system (Federici et al. [1]), the IAPS (Istituto di Astrofisica e Planetologia Spaziali, INAF) laboratories were able to develop a cluster computer system dedicated to Geant4. The Cluster is easy to use and easily expandable, and thanks to the design criteria adopted it achieves an excellent compromise between performance and cost. The management software developed for the Cluster splits a single simulation across the available cores, allowing software written for serial computation to reach a computing speed similar to that obtainable from native parallel software. The simulations carried out on the Cluster showed an increase in execution speed by a factor of 20 to 60 compared to the times obtained with a single PC of medium quality.
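
    The splitting approach can be sketched as follows (the binary name, macro file and command-line flags are hypothetical, as the record does not give them): each serial Geant4 instance receives a share of the events and a distinct random seed, and the partial outputs are merged offline:

      # Pseudo-parallel driver: one serial simulation instance per core.
      import subprocess
      from concurrent.futures import ThreadPoolExecutor

      TOTAL_EVENTS, CORES = 1_000_000, 16

      def run_instance(i: int) -> int:
          events = TOTAL_EVENTS // CORES
          cmd = ["./mySimulation", "run.mac",       # hypothetical binary/macro
                 f"-events={events}", f"-seed={1000 + i}",
                 f"-out=part_{i}.root"]
          return subprocess.call(cmd)

      with ThreadPoolExecutor(max_workers=CORES) as pool:
          codes = list(pool.map(run_instance, range(CORES)))
      print("all ok" if not any(codes) else "some jobs failed")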

  8. Correlations in clusters and related systems. New perspectives on the many-body problem

    International Nuclear Information System (INIS)

    Connerade, J.P.

    1996-01-01

    The contents of the present volume are the proceedings of an Adriatico Research Conference, held at the International Centre for Theoretical Physics in Trieste from 26 to 29 July 1994. The theme of the conference covered many aspects of cooperative effects, beginning with giant resonances in many-electron systems, and particularly in new objects such as metallic clusters, in which collective electron dynamics are a novel feature. The relationship of these resonances with comparable features in nuclear and solid state physics was extensively discussed. Related effects, such as instabilities of valence both in clusters and in solids were explored. Clusters allow one to track the evolution of certain properties from the free atom to the solid state limits as a function of size. The giant resonances concerned not only intra-atomic excitations, but also correlated motions of all delocalized electrons within the cluster. Other systems with unusual properties, such as negative ions, in which correlations play an important role, were also considered. Finally, dynamical effects and the possible interactions between electron-electron correlations and high laser fields were envisaged

  9. Proposed Fuzzy-NN Algorithm with LoRa Communication Protocol for Clustered Irrigation Systems

    Directory of Open Access Journals (Sweden)

    Sotirios Kontogiannis

    2017-11-01

    Modern irrigation systems utilize sensors and actuators, interconnected together as a single entity. In such entities, A.I. algorithms are implemented, which are responsible for the irrigation process. In this paper, the authors present an irrigation Open Watering System (OWS) architecture that spatially clusters the irrigation process into autonomous irrigation sections. The authors' OWS implementation includes a Neuro-Fuzzy decision algorithm called FITRA, which originates from the Greek word for seed. In this paper, the FITRA algorithm is described in detail, as are experimentation results that indicate significant water conservation from the use of the FITRA algorithm. Furthermore, the authors propose a new communication protocol over LoRa radio as an alternative low-energy and long-range communication mechanism for OWS clusters. The experimental scenarios confirm that the FITRA algorithm provides more efficient irrigation in clustered areas than existing non-clustered, time-scheduled or threshold-adaptive algorithms. This is due to the FITRA algorithm's frequent monitoring of environmental conditions, fuzzy and neural-network adaptation, as well as adherence to past irrigation preferences.

  10. Directed clustering coefficient as a measure of systemic risk in complex banking networks

    Science.gov (United States)

    Tabak, Benjamin M.; Takami, Marcelo; Rocha, Jadson M. C.; Cajueiro, Daniel O.; Souza, Sergio R. S.

    2014-01-01

    Recent literature has focused on the study of systemic risk in complex networks. It is clear now, after the crisis of 2008, that the aggregate behavior of the interaction among agents is not straightforward and it is very difficult to predict. Contributing to this debate, this paper shows that the directed clustering coefficient may be used as a measure of systemic risk in complex networks. Furthermore, using data from the Brazilian interbank network, we show that the directed clustering coefficient is negatively correlated with domestic interest rates.

  11. Reduced-dimension power allocation over clustered channels in cognitive radios system under co-channel interference

    KAUST Repository

    Ben Ghorbel, Mahdi

    2014-05-12

    The objective of this paper is to propose a reduced-dimension resource allocation scheme in the context of cognitive radio systems in the presence of co-channel interference between users. We assume a multicarrier transmission for both the primary and secondary systems. Instead of optimizing the powers over all sub-carriers, the sub-carriers are grouped into clusters, where the power of each sub-carrier is directly related to the power of the corresponding cluster. The power optimization is done only over the set of clusters instead of all sub-carriers, which can significantly reduce the complexity of the resource allocation problem. The performance loss of the reduced-dimension solution with respect to the optimal solution, where the optimization is carried out over all active sub-carriers, allows trading off complexity versus performance. Numerical evaluation revealed that only a limited performance loss occurs when optimizing over a reduced set of clusters instead of performing the full optimization in the context of cognitive radio systems.
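
    The dimensionality reduction at the heart of the scheme can be illustrated with a toy water-filling sketch over K clusters instead of N sub-carriers; the cognitive-radio interference constraints of the actual problem are omitted here for brevity:

      # Water-fill total power over K clusters rather than N sub-carriers.
      import numpy as np

      rng = np.random.default_rng(2)
      N, K, P_total = 64, 8, 10.0
      gains = rng.exponential(size=N)        # per-sub-carrier channel gains
      clusters = np.arange(N) % K            # fixed grouping: N/K carriers each

      # Effective gain per cluster (here simply the mean over its carriers).
      g = np.array([gains[clusters == k].mean() for k in range(K)])

      lo, hi = 0.0, 1e3                      # bisection on the water level
      for _ in range(100):
          mu = 0.5 * (lo + hi)
          p = np.maximum(mu - 1.0 / g, 0.0)  # per-cluster power at level mu
          if p.sum() * (N // K) < P_total:
              lo = mu                        # budget not exhausted: raise level
          else:
              hi = mu

      per_carrier = p[clusters]              # carriers inherit cluster power
      print("cluster powers:", np.round(p, 3))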

  12. THE M33 GLOBULAR CLUSTER SYSTEM WITH PAndAS DATA: THE LAST OUTER HALO CLUSTER?

    International Nuclear Information System (INIS)

    Cockcroft, Robert; Harris, William E.; Ferguson, Annette M. N.

    2011-01-01

    We use CFHT/MegaCam data to search for outer halo star clusters in M33 as part of the Pan-Andromeda Archaeological Survey. This work extends previous studies out to a projected radius of 50 kpc and covers over 40 deg^2. We find only one new unambiguous star cluster in addition to the five previously known in the M33 outer halo (10 kpc ≤ r ≤ 50 kpc). Although we identify 2440 cluster candidates of various degrees of confidence from our objective image search procedure, almost all of these are likely background contaminants, mostly faint unresolved galaxies. We measure the luminosity, color, and structural parameters of the new cluster in addition to the five previously known outer halo clusters. At a projected radius of 22 kpc, the new cluster is slightly smaller, fainter, and redder than all but one of the other outer halo clusters, and has g' ∼ 19.9, (g' - i') ∼ 0.6, concentration parameter c ∼ 1.0, a core radius r_c ∼ 3.5 pc, and a half-light radius r_h ∼ 5.5 pc. For M33 to have so few outer halo clusters compared to M31 suggests either tidal stripping of M33's outer halo clusters by M31, or a very different, much calmer accretion history of M33.

  13. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  14. Application of the dynamically allocated virtual clustering management system to emulated tactical network experimentation

    Science.gov (United States)

    Marcus, Kelvin

    2014-06-01

    The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve its ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.

  15. Data storage as a service

    OpenAIRE

    Tomšič, Jan

    2016-01-01

    The purpose of this thesis was a comparison of interfaces to network-attached file systems and object storage. The thesis describes the network file system and the mounting procedure in the Linux operating system. Object storage and distributed storage systems are explained with examples of usage. Amazon S3 is an example of an object store with access through a REST interface. Ceph, a system for distributed object storage, is explained in detail, and a Ceph cluster was deployed for the purpose of this thesis. Cep...

  16. A scalable and practical one-pass clustering algorithm for recommender system

    Science.gov (United States)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    KMeans clustering-based recommendation algorithms have been proposed claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
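
    A generic single-pass ("leader") clustering scheme in the spirit of this description, though not necessarily the authors' exact One-Pass algorithm, assigns each point to the nearest existing centroid if it lies within a radius and otherwise opens a new cluster, updating centroids incrementally:

      # Incremental one-pass (leader) clustering sketch.
      import numpy as np

      def one_pass(points, radius):
          centroids, counts, labels = [], [], []
          for x in points:
              if centroids:
                  d = np.linalg.norm(np.asarray(centroids) - x, axis=1)
                  j = int(d.argmin())
                  if d[j] <= radius:
                      counts[j] += 1                      # running-mean update
                      centroids[j] += (x - centroids[j]) / counts[j]
                      labels.append(j)
                      continue
              centroids.append(x.astype(float).copy())    # open a new cluster
              counts.append(1)
              labels.append(len(centroids) - 1)
          return np.asarray(centroids), labels

      pts = np.random.default_rng(3).normal(size=(200, 2))
      C, lab = one_pass(pts, radius=1.0)
      print(len(C), "clusters")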

  17. Properties of an ionised-cluster beam from a vaporised-cluster ion source

    International Nuclear Information System (INIS)

    Takagi, T.; Yamada, I.; Sasaki, A.

    1978-01-01

    A new type of ion source, a vaporised-metal cluster ion source, has been developed for deposition and epitaxy. A cluster consisting of 10^2 to 10^3 atoms coupled loosely together is formed by adiabatic expansion, ejecting the vapour of materials into a high-vacuum region through the nozzle of a heated crucible. The clusters are ionised by electron bombardment and accelerated with neutral clusters toward a substrate. In this paper, mechanisms of cluster formation, experimental results on the cluster size (atoms/cluster) and its distribution, and characteristics of the cluster ion beams are reported. The size is calculated from the kinetic equation E = (1/2)mNVsub(ej)^2, where E is the cluster beam energy, Vsub(ej) is the ejection velocity, m is the mass of an atom and N is the cluster size. The energy and the velocity of the cluster are measured by an electrostatic 127° energy analyser and a rotating disc system, respectively. The cluster size obtained for Ag is about 5 x 10^2 to 2 x 10^3 atoms. The retarding potential method is used to confirm the results for Ag. The same dependence on cluster size for metals such as Ag, Cu and Pb has been obtained in previous experiments. In the cluster state the cluster ion beam is easily produced by electron bombardment. About 50% of ionised clusters are obtained under typical operating conditions, because of the large ionisation cross sections of the clusters. To obtain a uniform spatial distribution, the ionising electrode system is also discussed. The new techniques are termed ionised-cluster beam deposition (ICBD) and epitaxy (ICBE). (author)

  18. The SLUGGS survey: HST/ACS mosaic imaging of the NGC 3115 globular cluster system

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, Zachary G.; Romanowsky, Aaron J.; Brodie, Jean P.; Arnold, Jacob A. [University of California Observatories, Santa Cruz, CA 95064 (United States); Strader, Jay [Department of Physics and Astronomy, Michigan State University, East Lansing, Michigan, MI 48824 (United States); Lin, Dacheng; Irwin, Jimmy A.; Wong, Ka-Wah [Department of Physics and Astronomy, University of Alabama, Box 870324, Tuscaloosa, AL 35487 (United States); Sivakoff, Gregory R., E-mail: zgjennin@ucsc.edu [Department of Physics, University of Alberta, Edmonton, Alberta T6G 2E1 (Canada)

    2014-08-01

    We present Hubble Space Telescope/Advanced Camera for Surveys (HST/ACS) g and z photometry and half-light radii R {sub h} measurements of 360 globular cluster (GC) candidates around the nearby S0 galaxy NGC 3115. We also include Subaru/Suprime-Cam g, r, and i photometry of 421 additional candidates. The well-established color bimodality of the GC system is obvious in the HST/ACS photometry. We find evidence for a 'blue tilt' in the blue GC subpopulation, wherein the GCs in the blue subpopulation get redder as luminosity increases, indicative of a mass-metallicity relationship. We find a color gradient in both the red and blue subpopulations, with each group of clusters becoming bluer at larger distances from NGC 3115. The gradient is of similar strength in both subpopulations, but is monotonic and more significant for the blue clusters. On average, the blue clusters have ∼10% larger R {sub h} than the red clusters. This average difference is less than is typically observed for early-type galaxies but does match that measured in the literature for the Sombrero Galaxy (M104), suggesting that morphology and inclination may affect the measured size difference between the red and blue clusters. However, the scatter on the R {sub h} measurements is large. We also identify 31 clusters more extended than typical GCs, which we term ultra-compact dwarf (UCD) candidates. Many of these objects are actually considerably fainter than typical UCDs. While it is likely that a significant number will be background contaminants, six of these UCD candidates are spectroscopically confirmed as NGC 3115 members. To explore the prevalence of low-mass X-ray binaries in the GC system, we match our ACS and Suprime-Cam detections to corresponding Chandra X-ray sources. We identify 45 X-ray-GC matches: 16 among the blue subpopulation and 29 among the red subpopulation. These X-ray/GC coincidence fractions are larger than is typical for most GC systems, probably due to the increased

  19. Molecular depth profiling of multi-layer systems with cluster ion sources

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Juan [Department of Chemistry, Penn State University, University Park, PA 16802 (United States); Winograd, Nicholas [Department of Chemistry, Penn State University, University Park, PA 16802 (United States)]. E-mail: nxw@psu.edu

    2006-07-30

    Cluster bombardment of molecular films has created new opportunities for SIMS research. To more quantitatively examine the interaction of cluster beams with organic materials, we have developed a reproducible platform consisting of a well-defined sugar film (trehalose) doped with peptides. Molecular depth profiles have been acquired with these systems using C{sub 60} {sup +} bombardment. In this study, we utilize this platform to determine the feasibility of examining buried interfaces for multi-layer systems. Using C{sub 60} {sup +} at 20 keV, several systems have been tested including Al/trehalose/Si, Al/trehalose/Al/Si, Ag/trehalose/Si and ice/trehalose/Si. The results show that there can be interactions between the layers during the bombardment process that prevent a simple interpretation of the depth profile. We find so far that the best results are obtained when the mass of the overlayer atoms is less than or nearly equal to the mass of the atoms in buried molecules. In general, these observations suggest that C{sub 60} {sup +} bombardment can be successfully applied to interface characterization of multi-layer systems if the systems are carefully chosen.

  20. Parallel hyperbolic PDE simulation on clusters: Cell versus GPU

    Science.gov (United States)

    Rostrup, Scott; De Sterck, Hans

    2010-12-01

    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGY_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GPL v3. No. of lines in distributed program, including test data, etc.: 59 168. No. of bytes in distributed program, including test data, etc.: 453 409. Distribution format: tar.gz. Programming language: C, CUDA. Computer: parallel computing clusters; individual compute nodes may consist of x86 CPU, Cell processor, or x86 CPU with attached NVIDIA GPU accelerator. Operating system: Linux. Has the code been vectorised or parallelized?: Yes; tested on 1-128 x86 CPU cores, 1-32 Cell processors, and 1-32 NVIDIA GPUs. RAM: tested on problems requiring up to 4 GB per compute node. Classification: 12. External routines: MPI, CUDA, IBM Cell SDK. Nature of problem: MPI-parallel simulation of the shallow water equations using a high-resolution 2D hyperbolic equation solver on regular Cartesian grids for x86 CPU, Cell processor, and NVIDIA GPU using CUDA. Solution method: SWsolver provides 3 implementations of a high-resolution 2D shallow water equation solver on regular Cartesian grids, for CPU, Cell processor, and NVIDIA GPU. Each implementation uses MPI to divide work across a parallel computing cluster. Additional comments: sub-program numdiff is used for the test run.

  1. Cluster management.

    Science.gov (United States)

    Katz, R

    1992-11-01

    Cluster management is a management model that fosters decentralization of management, develops leadership potential of staff, and creates ownership of unit-based goals. Unlike shared governance models, there is no formal structure created by committees and it is less threatening for managers. There are two parts to the cluster management model. One is the formation of cluster groups, consisting of all staff and facilitated by a cluster leader. The cluster groups function for communication and problem-solving. The second part of the cluster management model is the creation of task forces. These task forces are designed to work on short-term goals, usually in response to solving one of the unit's goals. Sometimes the task forces are used for quality improvement or system problems. Clusters are groups of not more than five or six staff members, facilitated by a cluster leader. A cluster is made up of individuals who work the same shift. For example, staff of all job titles who work days would be in a cluster: there would be registered nurses, licensed practical nurses, nursing assistants, and unit clerks in the cluster. The cluster leader is chosen by the manager based on certain criteria and is trained for this specialized role. The concept of cluster management, criteria for choosing leaders, training for leaders, using cluster groups to solve quality improvement issues, and the learning process necessary for manager support are described.

  2. Effect of Policy Analysis on Indonesia’s Maritime Cluster Development Using System Dynamics Modeling

    Science.gov (United States)

    Nursyamsi, A.; Moeis, A. O.; Komarudin

    2018-03-01

    As an archipelago with two thirds of its territory consisting of water, Indonesia should pay more attention to the development of its maritime industry. One of the catalysts to accelerate maritime industry growth is the development of a maritime cluster. The purpose of this research is to gain understanding of the effect, if Indonesia implements a maritime cluster policy, on the growth of the maritime economy and its role in enhancing maritime cluster performance, hence enhancing Indonesia's maritime industry as well. The result of the constructed system dynamics model simulation shows that with the effect of a maritime cluster, the growth of the employment rate and the maritime economy is exponentially larger than in the business-as-usual case. The result implies that the government should act fast to form a legitimate maritime cluster organizing institution so that there will be a synergistic, sustainable, and positive maritime cluster environment that will benefit the performance of Indonesia's maritime industry.

  3. Design and implementation of BESIII online farm

    International Nuclear Information System (INIS)

    Li Fei; Zhu Kejun; Wang Liang; Liu Yingjie; Chinese Academy of Sciences, Beijing

    2007-01-01

    The new Beijing spectrometer (BESIII) data acquisition (DAQ) system needs to handle high data rates, high-speed network transmission and large storage capability requirements. The design and implementation of the BESIII online computing farm with IBM blade servers, Linux and other free software are presented. The cluster system is currently running well, is able to meet the needs of the BESIII experiment, and has achieved some important results as an online software debugging and testing platform. (authors)

  4. Clustering of near clusters versus cluster compactness

    International Nuclear Information System (INIS)

    Yu Gao; Yipeng Jing

    1989-01-01

    The clustering properties of near Zwicky clusters are studied by using the two-point angular correlation function. The angular correlation functions for compact and medium-compact clusters, for open clusters, and for all near Zwicky clusters are estimated. The results show much stronger clustering for compact and medium-compact clusters than for open clusters, and that open clusters have nearly the same clustering strength as galaxies. A detailed study of the dependence of correlation-function strength on compactness is worth undertaking. (author)
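
    For illustration, a simplified natural estimator w(theta) = DD/RR - 1 of the angular two-point correlation function can be sketched as below; the toy uniform fields are placeholders, and the paper's actual estimator and weighting are not reproduced:

      # Natural estimator w(theta) = DD/RR - 1 on toy (ra, dec) catalogs.
      import numpy as np

      def pair_hist(a, b, bins):
          # Angular separations via the spherical law of cosines (radians).
          ra1, de1 = a[:, 0, None], a[:, 1, None]
          ra2, de2 = b[None, :, 0], b[None, :, 1]
          cos_t = (np.sin(de1) * np.sin(de2)
                   + np.cos(de1) * np.cos(de2) * np.cos(ra1 - ra2))
          theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
          return np.histogram(theta, bins=bins)[0]

      rng = np.random.default_rng(4)
      data = rng.uniform([0, -0.1], [0.2, 0.1], size=(300, 2))   # toy field
      rand = rng.uniform([0, -0.1], [0.2, 0.1], size=(1500, 2))  # randoms

      bins = np.linspace(1e-4, 0.05, 20)
      dd = pair_hist(data, data, bins) / (len(data) * (len(data) - 1))
      rr = pair_hist(rand, rand, bins) / (len(rand) * (len(rand) - 1))
      print(np.round(dd / np.maximum(rr, 1e-12) - 1.0, 2))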

  5. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    Science.gov (United States)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed — meaning cooling by appropriate technology — with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features — dense packing, distributed storage and high performance interconnect — represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  6. Reduced-dimension power allocation over clustered channels in cognitive radios system under co-channel interference

    KAUST Repository

    Ben Ghorbel, Mahdi; Guenach, Mamoun; Alouini, Mohamed-Slim

    2014-01-01

    and secondary systems. Instead of optimizing the powers over all sub-carriers, the sub-carriers are grouped into clusters of sub-carriers, where the power of each sub-carrier is directly related to the power of the correspondent cluster. The power optimization

  7. A Coupled User Clustering Algorithm Based on Mixed Data for Web-Based Learning Systems

    Directory of Open Access Journals (Sweden)

    Ke Niu

    2015-01-01

    In traditional Web-based learning systems, user clustering algorithms have been introduced because analysis of learning behaviors and personalized study guidance are insufficient. While analyzing behaviors with these algorithms, researchers generally focus on continuous data but easily neglect discrete data, both of which are generated from online learning actions. Moreover, there are implicit coupled interactions among the data, which are frequently ignored by the introduced algorithms. Therefore, a mass of significant information which could positively affect clustering accuracy is neglected. To solve the above issues, we propose a coupled user clustering algorithm for Web-based learning systems that takes into account both discrete and continuous data, as well as intra-coupled and inter-coupled interactions of the data. The experimental results in this paper demonstrate that the proposed algorithm outperforms existing approaches.

  8. The Linux Operating System

    Indian Academy of Sciences (India)

    of America to exercise control over the production, management and use of information resources by promoting the .... development of stable 'production quality' versions. This ... obscurity', to hide potential security problems from malicious.

  9. Development of a synchrotron timing system on a programmable chip

    International Nuclear Information System (INIS)

    Lin Feiyu; Qiao Weimin; Wang Yanyu; Guo Yuhui

    2009-01-01

    A synchrotron imposes extremely tight time constraints on timing signals, so the timing system is very important for a synchrotron control system. A FPGA+ARM+Linux+DSP architecture has been mainly used in timing control of the HIRFL-CSR control system. In this paper, we report the development of a SOPC (System On a Programmable Chip) based on FPGA and uClinux. It can integrate all the functions of ARM+Linux in one single FPGA chip, eliminating the need for a dedicated ARM chip and reducing cost. The maximum operating frequency of this system is 185 MHz. The hardware consumes less than 4% of the total resources of the FPGA chip, and both the hardware system and the operating system of the SOPC are reconfigurable. The SOPC system has wide prospects for application in accelerator engineering and many fields of scientific research. (authors)

  10. Cluster-cluster clustering

    International Nuclear Information System (INIS)

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C.S.; Yale Univ., New Haven, CT; California Univ., Santa Barbara; Cambridge Univ., England; Sussex Univ., Brighton, England)

    1985-01-01

    The cluster correlation function xi sub c(r) is compared with the particle correlation function, xi(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, xi sub c and xi are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of xi sub c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), xi sub c is steeper than xi, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of xi sub c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales. 30 references

  11. Resistance–temperature relation and atom cluster estimation of In–Bi system melts

    International Nuclear Information System (INIS)

    Geng, Haoran; Wang Zhiming; Zhou Yongzhi; Li Cancan

    2012-01-01

    Highlights: ► A testing device was adopted to measure the electrical resistivity of In–Bi system melts. ► A basically linear relation exists between the resistivity and temperature of In_xBi_(100−x) melts in the measured temperature range. ► Based on Novakovic's assumption, the content of InBi atomic clusters in In_xBi_(100−x) melts is estimated with the equation ρ ≈ ρ_InBi·x_InBi + ρ_m·(1 − x_InBi). - Abstract: A testing device for the resistivity of high-temperature melts was adopted to measure the electrical resistivity of In–Bi system melts at different temperatures. It can be concluded from the analysis and calculation of the experimental results that the resistivity of In_xBi_(100−x) (x = 0–100) melts is in a linear relationship with temperature within the experimental temperature range. The resistivity of the melt decreases with increasing content of In. Fair consistency of the resistivity of In–Bi system melts is found between the heating and cooling processes. On the basis of Novakovic's assumption, we approximately estimated the content of InBi atom clusters in In_xBi_(100−x) melts from the resistivity data with the equation ρ ≈ ρ_InBi·x_InBi + ρ_m·(1 − x_InBi). Over the whole composition interval, the content corresponds well with the mole fraction of InBi clusters calculated by Novakovic in the thermodynamic approach. The mole fraction of InBi-type atom clusters in the melts reaches its maximum at the point of stoichiometric composition In_50Bi_50.
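
    Inverting that linear mixing relation gives x_InBi = (ρ - ρ_m)/(ρ_InBi - ρ_m), so the cluster content follows directly from measured resistivities. A one-line worked sketch, with illustrative placeholder values rather than the paper's data:

      # Cluster fraction from the linear resistivity mixing rule.
      def cluster_fraction(rho, rho_inbi, rho_m):
          return (rho - rho_m) / (rho_inbi - rho_m)

      print(cluster_fraction(rho=95.0, rho_inbi=120.0, rho_m=80.0))  # 0.375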

  12. Retrieval with Clustering in a Case-Based Reasoning System for Radiotherapy Treatment Planning

    Science.gov (United States)

    Khussainova, Gulmira; Petrovic, Sanja; Jagannathan, Rupa

    2015-05-01

    Radiotherapy treatment planning aims to deliver a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the tumour surrounding area. This is a trial and error process highly dependent on the medical staff's experience and knowledge. Case-Based Reasoning (CBR) is an artificial intelligence tool that uses past experiences to solve new problems. A CBR system has been developed to facilitate radiotherapy treatment planning for brain cancer. Given a new patient case the existing CBR system retrieves a similar case from an archive of successfully treated patient cases with the suggested treatment plan. The next step requires adaptation of the retrieved treatment plan to meet the specific demands of the new case. The CBR system was tested by medical physicists for the new patient cases. It was discovered that some of the retrieved cases were not suitable and could not be adapted for the new cases. This motivated us to revise the retrieval mechanism of the existing CBR system by adding a clustering stage that clusters cases based on their tumour positions. A number of well-known clustering methods were investigated and employed in the retrieval mechanism. Results using real world brain cancer patient cases have shown that the success rate of the new CBR retrieval is higher than that of the original system.
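
    The revised retrieval can be sketched as cluster-then-search; KMeans below stands in for whichever clustering method the revised system adopts, and cases are reduced to hypothetical 3-D tumour-position coordinates:

      # Cluster archived cases by tumour position, then retrieve the most
      # similar case from the new case's cluster only.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(5)
      cases = rng.normal(size=(120, 3))          # archived tumour positions
      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(cases)

      def retrieve(new_case):
          c = km.predict(new_case[None, :])[0]   # cluster of the new case
          idx = np.flatnonzero(km.labels_ == c)  # search only within it
          d = np.linalg.norm(cases[idx] - new_case, axis=1)
          return idx[d.argmin()]                 # most similar archived case

      print(retrieve(np.array([0.1, -0.2, 0.3])))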

  13. Retrieval with Clustering in a Case-Based Reasoning System for Radiotherapy Treatment Planning

    International Nuclear Information System (INIS)

    Khussainova, Gulmira; Petrovic, Sanja; Jagannathan, Rupa

    2015-01-01

    Radiotherapy treatment planning aims to deliver a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the tumour surrounding area. This is a trial and error process highly dependent on the medical staff's experience and knowledge. Case-Based Reasoning (CBR) is an artificial intelligence tool that uses past experiences to solve new problems. A CBR system has been developed to facilitate radiotherapy treatment planning for brain cancer. Given a new patient case the existing CBR system retrieves a similar case from an archive of successfully treated patient cases with the suggested treatment plan. The next step requires adaptation of the retrieved treatment plan to meet the specific demands of the new case. The CBR system was tested by medical physicists for the new patient cases. It was discovered that some of the retrieved cases were not suitable and could not be adapted for the new cases. This motivated us to revise the retrieval mechanism of the existing CBR system by adding a clustering stage that clusters cases based on their tumour positions. A number of well-known clustering methods were investigated and employed in the retrieval mechanism. Results using real world brain cancer patient cases have shown that the success rate of the new CBR retrieval is higher than that of the original system. (paper)

  14. A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data

    Directory of Open Access Journals (Sweden)

    Alessandro Manzi

    2017-05-01

    Full Text Available Human activity recognition is an important area in computer vision, with its wide range of applications including ambient assisted living. In this paper, an activity recognition system based on skeleton data extracted from a depth camera is presented. The system makes use of machine learning techniques to classify the actions that are described with a set of a few basic postures. The training phase creates several models related to the number of clustered postures by means of a multiclass Support Vector Machine (SVM), trained with Sequential Minimal Optimization (SMO). The classification phase adopts the X-means algorithm to find the optimal number of clusters dynamically. The contribution of the paper is twofold. The first aim is to perform activity recognition employing features based on a small number of informative postures, extracted independently from each activity instance; secondly, it aims to assess the minimum number of frames needed for an adequate classification. The system is evaluated on two publicly available datasets, the Cornell Activity Dataset (CAD-60) and the Telecommunication Systems Team (TST) Fall detection dataset. The number of clusters needed to model each instance ranges from two to four elements. The proposed approach reaches excellent performances using only about 4 s of input data (~100 frames) and outperforms the state of the art when it uses approximately 500 frames on the CAD-60 dataset. The results are promising for the test in real context.
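    The dynamic choice of the number of posture clusters can be illustrated as follows. X-means is not available in scikit-learn, so this hedged sketch scans k over the two-to-four range reported in the paper with a crude BIC-style penalty as a stand-in; the skeleton features are random placeholders.

```python
# Posture-clustering sketch: pick k by a crude BIC-like score over KMeans fits,
# standing in for X-means; cluster centres play the role of informative postures.
import numpy as np
from sklearn.cluster import KMeans

def pick_postures(frames, k_range=(2, 3, 4)):
    best_k, best_score, best_model = None, np.inf, None
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)
        # distortion plus a complexity penalty (rough stand-in for BIC)
        score = km.inertia_ + k * frames.shape[1] * np.log(len(frames))
        if score < best_score:
            best_k, best_score, best_model = k, score, km
    return best_k, best_model.cluster_centers_

frames = np.random.default_rng(1).random((100, 45))  # ~100 frames, 15 joints x 3
k, postures = pick_postures(frames)
print(k, postures.shape)
```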

  15. A Human Activity Recognition System Based on Dynamic Clustering of Skeleton Data.

    Science.gov (United States)

    Manzi, Alessandro; Dario, Paolo; Cavallo, Filippo

    2017-05-11

    Human activity recognition is an important area in computer vision, with its wide range of applications including ambient assisted living. In this paper, an activity recognition system based on skeleton data extracted from a depth camera is presented. The system makes use of machine learning techniques to classify the actions that are described with a set of a few basic postures. The training phase creates several models related to the number of clustered postures by means of a multiclass Support Vector Machine (SVM), trained with Sequential Minimal Optimization (SMO). The classification phase adopts the X-means algorithm to find the optimal number of clusters dynamically. The contribution of the paper is twofold. The first aim is to perform activity recognition employing features based on a small number of informative postures, extracted independently from each activity instance; secondly, it aims to assess the minimum number of frames needed for an adequate classification. The system is evaluated on two publicly available datasets, the Cornell Activity Dataset (CAD-60) and the Telecommunication Systems Team (TST) Fall detection dataset. The number of clusters needed to model each instance ranges from two to four elements. The proposed approach reaches excellent performances using only about 4 s of input data (~100 frames) and outperforms the state of the art when it uses approximately 500 frames on the CAD-60 dataset. The results are promising for the test in real context.

  16. Embedded multi-channel data acquisition system on FPGA for Aditya Tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Rajpal, Rachana, E-mail: rachana@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Mandaliya, Hitesh, E-mail: hitesh@ipr.res.in [ITER, Cadarache (France); Patel, Jignesh, E-mail: jjp@ipr.res.in [ITER, Cadarache (France); Kumari, Praveena, E-mail: praveena@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Gautam, Pramila, E-mail: pramila@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Raulji, Vismaysinh, E-mail: vismay@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Edappala, Praveenlal, E-mail: praveen@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Pujara, H.D, E-mail: pujara@ipr.res [Institute for Plasma Research, Gandhinagar, Gujarat (India); Jha, R., E-mail: jha@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India)

    2016-11-15

    Highlights: • 64-channel data acquisition, interfaced to the PC/104 bus, using a single-board computer. • Integration of all components in single hardware to make it standalone and portable. • Development of application software in Qt on the Linux platform for better performance and lower cost compared to Windows. • Explored and utilized FPGA resources for hardware interfacing. - Abstract: The 64-channel data acquisition board is designed to meet the future demand for acquisition channels for plasma diagnostics. The inherent features of the board are 16-bit resolution, programmable sampling rate up to 200 kS/s/ch and simultaneous acquisition. To make the system embedded and compact, 8-analog-input ADC chips, 4M × 16 bit RAM, Field Programmable Gate Arrays, the PC/104 platform and a single-board computer are used. High-speed timing control signals for all ADCs and RAMs are generated by the FPGA. The system is standalone and portable, and interfaces through Ethernet. The acquisition application is developed in Qt on the Linux platform on the SBC. Due to Ethernet connectivity and onboard processing, the system can be integrated into the Aditya and SST-1 data acquisition systems. The performance of the hardware was tested on Linux and Windows Embedded OS. The paper describes the design, hardware and software architecture, implementation and results of the 64-channel DAQ system.

  17. Embedded multi-channel data acquisition system on FPGA for Aditya Tokamak

    International Nuclear Information System (INIS)

    Rajpal, Rachana; Mandaliya, Hitesh; Patel, Jignesh; Kumari, Praveena; Gautam, Pramila; Raulji, Vismaysinh; Edappala, Praveenlal; Pujara, H.D; Jha, R.

    2016-01-01

    Highlights: • 64-channel data acquisition, interfaced to the PC/104 bus, using a single-board computer. • Integration of all components in single hardware to make it standalone and portable. • Development of application software in Qt on the Linux platform for better performance and lower cost compared to Windows. • Explored and utilized FPGA resources for hardware interfacing. - Abstract: The 64-channel data acquisition board is designed to meet the future demand for acquisition channels for plasma diagnostics. The inherent features of the board are 16-bit resolution, programmable sampling rate up to 200 kS/s/ch and simultaneous acquisition. To make the system embedded and compact, 8-analog-input ADC chips, 4M × 16 bit RAM, Field Programmable Gate Arrays, the PC/104 platform and a single-board computer are used. High-speed timing control signals for all ADCs and RAMs are generated by the FPGA. The system is standalone and portable, and interfaces through Ethernet. The acquisition application is developed in Qt on the Linux platform on the SBC. Due to Ethernet connectivity and onboard processing, the system can be integrated into the Aditya and SST-1 data acquisition systems. The performance of the hardware was tested on Linux and Windows Embedded OS. The paper describes the design, hardware and software architecture, implementation and results of the 64-channel DAQ system.

  18. DB2 9 for Linux, Unix, and Windows database administration upgrade certification study guide

    CERN Document Server

    Sanders, Roger E

    2007-01-01

    Written by one of the world's leading DB2 authors, who is an active participant in the development of the DB2 certification exams, this resource covers everything a database administrator needs to know to pass the DB2 9 for Linux, UNIX, and Windows Database Administration Certification Upgrade exam (Exam 736). This comprehensive study guide discusses all exam topics: server management, data placement, XML concepts, analyzing activity, high availability, database security, and much more. Each chapter contains an extensive set of practice questions along with carefully explained answers. Both information-technology professionals who have experience as database administrators and hold a current DBA certification on version 8 of DB2, and individuals who would like to learn the new features of DB2 9, will benefit from the information in this reference guide.

  19. CLOTU: An online pipeline for processing and clustering of 454 amplicon reads into OTUs followed by taxonomic annotation

    Directory of Open Access Journals (Sweden)

    Shalchian-Tabrizi Kamran

    2011-05-01

    Full Text Available Abstract Background The implementation of high throughput sequencing for exploring biodiversity poses high demands on bioinformatics applications for automated data processing. Here we introduce CLOTU, an online and open access pipeline for processing 454 amplicon reads. CLOTU has been constructed to be highly user-friendly and flexible, since different types of analyses are needed for different datasets. Results In CLOTU, the user can filter out low-quality sequences; trim tags, primers and adaptors; perform clustering of sequence reads; and run BLAST against NCBInr or a customized database in a high performance computing environment. The resulting data may be browsed in a user-friendly manner and easily forwarded to downstream analyses. Although CLOTU is specifically designed for analyzing 454 amplicon reads, other types of DNA sequence data can also be processed. A fungal ITS sequence dataset generated by 454 sequencing of environmental samples is used to demonstrate the utility of CLOTU. Conclusions CLOTU is a flexible and easy-to-use bioinformatics pipeline that includes different options for filtering, trimming, clustering and taxonomic annotation of high throughput sequence reads. Some of these options are not included in comparable pipelines. CLOTU is implemented on a Linux computer cluster and is freely accessible to academic users through the Bioportal web-based bioinformatics service (http://www.bioportal.uio.no).
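    A toy version of the first two pipeline stages (quality filtering and primer trimming) is sketched below. The primer, thresholds and record format are invented for illustration; real 454 pipelines work on FASTQ/SFF data with per-base quality scores rather than bare sequences.

```python
# Toy filter/trim stage in the spirit of CLOTU's first steps; all thresholds
# and the primer sequence are hypothetical placeholders.
def clean_reads(reads, primer="ACGTACGT", min_len=150, max_n_frac=0.02):
    for seq in reads:
        if seq.startswith(primer):
            seq = seq[len(primer):]          # trim the primer
        # keep only reads of decent length with few ambiguous bases
        if len(seq) >= min_len and seq.count("N") / len(seq) <= max_n_frac:
            yield seq

reads = ["ACGTACGT" + "AGCT" * 50, "ACGT" + "N" * 100]
print(len(list(clean_reads(reads))))  # -> 1 (the second read is rejected)
```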

  20. Dynamical transitions in large systems of mean field-coupled Landau-Stuart oscillators: Extensive chaos and cluster states.

    Science.gov (United States)

    Ku, Wai Lim; Girvan, Michelle; Ott, Edward

    2015-12-01

    In this paper, we study dynamical systems in which a large number N of identical Landau-Stuart oscillators are globally coupled via a mean-field. Previously, it has been observed that this type of system can exhibit a variety of different dynamical behaviors. These behaviors include time periodic cluster states in which each oscillator is in one of a small number of groups for which all oscillators in each group have the same state which is different from group to group, as well as a behavior in which all oscillators have different states and the macroscopic dynamics of the mean field is chaotic. We argue that this second type of behavior is "extensive" in the sense that the chaotic attractor in the full phase space of the system has a fractal dimension that scales linearly with N and that the number of positive Lyapunov exponents of the attractor also scales linearly with N. An important focus of this paper is the transition between cluster states and extensive chaos as the system is subjected to slow adiabatic parameter change. We observe discontinuous transitions between the cluster states (which correspond to low dimensional dynamics) and the extensively chaotic states. Furthermore, examining the cluster state, as the system approaches the discontinuous transition to extensive chaos, we find that the oscillator population distribution between the clusters continually evolves so that the cluster state is always marginally stable. This behavior is used to reveal the mechanism of the discontinuous transition. We also apply the Kaplan-Yorke formula to study the fractal structure of the extensively chaotic attractors.
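    A minimal simulation of the system class studied here, N identical Stuart-Landau (Landau-Stuart) oscillators coupled through their mean field, can be written in a few lines; the parameter values below are arbitrary and are not the ones at which the paper's cluster states or extensive chaos occur.

```python
# Forward-Euler integration of N mean-field-coupled Stuart-Landau oscillators:
# dz_j/dt = (1 + i*c2 - (1 + i*c1)|z_j|^2) z_j + K (<z> - z_j)
# Parameters are illustrative, not the paper's.
import numpy as np

N, K, c1, c2, dt, steps = 200, 0.5, 2.0, -1.0, 0.01, 5000
rng = np.random.default_rng(0)
z = rng.standard_normal(N) + 1j * rng.standard_normal(N)

for _ in range(steps):
    mean_field = z.mean()
    dz = (1 + 1j * c2 - (1 + 1j * c1) * np.abs(z) ** 2) * z + K * (mean_field - z)
    z = z + dt * dz

print(np.abs(z.mean()))   # magnitude of the final mean field
```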

  1. MCR Container Tools

    Energy Technology Data Exchange (ETDEWEB)

    2018-01-01

    MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e. shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find/install the necessary libraries either via the runtime errors that result when they are missing or by inspecting the header information of Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address these shortcomings, Docker images based on Community Enterprise Operating System (CentOS) 7, a derivative of Red Hat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where binaries created with the MATLAB Compiler can be executed using a sample Slurm Workload Manager script.

  2. Extracting Aggregation Free Energies of Mixed Clusters from Simulations of Small Systems: Application to Ionic Surfactant Micelles.

    Science.gov (United States)

    Zhang, X; Patel, L A; Beckwith, O; Schneider, R; Weeden, C J; Kindt, J T

    2017-11-14

    Micelle cluster distributions from molecular dynamics simulations of a solvent-free coarse-grained model of sodium octyl sulfate (SOS) were analyzed using an improved method to extract equilibrium association constants from small-system simulations containing one or two micelle clusters at equilibrium with free surfactants and counterions. The statistical-thermodynamic and mathematical foundations of this partition-enabled analysis of cluster histograms (PEACH) approach are presented. A dramatic reduction in computational time for analysis was achieved through a strategy similar to the selector variable method to circumvent the need for exhaustive enumeration of the possible partitions of surfactants and counterions into clusters. Using statistics from a set of small-system (up to 60 SOS molecules) simulations as input, equilibrium association constants for micelle clusters were obtained as a function of both number of surfactants and number of associated counterions through a global fitting procedure. The resulting free energies were able to accurately predict micelle size and charge distributions in a large (560 molecule) system. The evolution of micelle size and charge with SOS concentration as predicted by the PEACH-derived free energies and by a phenomenological four-parameter model fit, along with the sensitivity of these predictions to variations in cluster definitions, are analyzed and discussed.

  3. Evolution of highly compact binary stellar systems in globular clusters

    International Nuclear Information System (INIS)

    Krolik, J.H.; Meiksin, A.; Joss, P.C.

    1984-01-01

    We have calculated the secular evolution of a highly compact binary stellar system, composed of a collapsed object and a low-mass secondary star, in the core of a globular cluster. The binary evolves under the combined influences of (i) gravitational radiation losses from the system, (ii) the evolution of the secondary star, (iii) the resultant gradual mass transfer, if any, from the secondary to the collapsed object, and (iv) occasional encounters with passing field stars. We calculate all these effects in detail, utilizing some simplifying approximations appropriate to low-mass secondaries. The times of encounters with field stars, and the initial parameters specifying those encounters, were chosen by use of a Monte Carlo technique; the subsequent gravitational interactions were calculated utilizing a three-body integrator, and the changes in the binary orbital parameters were thereby determined. We carried out a total of 20 such evolutionary calculations for each of two cluster core densities (1 and 3 × 10^3 stars pc^−3). Each calculation was continued until the binary was disrupted or until 2 × 10^10 yr had elapsed.

  4. Installation of JMTR core management system

    International Nuclear Information System (INIS)

    Imaizumi, Tomomi; Ide, Hiroshi; Naka, Michihiro; Komukai, Bunsaku; Nagao, Yoshiharu

    2013-01-01

    In order to carry out the core management of JMTR quickly and accurately after its return to operation, the authors took the Standard Reactor Analysis Code (SRAC) system and the core management support programs that had been running on a general-purpose mainframe and ported them to a PC (OS: Linux), thereby establishing a new JMTR core management system. For the core analysis, this increased the processing speed, from the check of the core arrangement to the display of the nuclear restriction values, by a factor of about 60 compared with the conventional method. It was confirmed that the differences in calculation results, which originated from the different internal number representations of the computers and were associated with the transfer of each analysis code from the GS21-400 system to PC-Linux, were within a practically allowable level. In the future, this system will be applied to the core analysis of JMTR, as well as to the preparation of operation plans. (A.O.)

  5. Cluster Bulleticity

    OpenAIRE

    Massey, Richard; Kitching, Thomas; Nagai, Daisuke

    2010-01-01

    The unique properties of dark matter are revealed during collisions between clusters of galaxies, such as the bullet cluster (1E 0657−56) and baby bullet (MACS J0025−12). These systems provide evidence for an additional, invisible mass in the separation between the distributions of their total mass, measured via gravitational lensing, and their ordinary ‘baryonic’ matter, measured via its X-ray emission. Unfortunately, the information available from these systems is limited by their rarity. C...

  6. A new Self-Adaptive disPatching System for local clusters

    Science.gov (United States)

    Kan, Bowen; Shi, Jingyan; Lei, Xiaofeng

    2015-12-01

    The scheduler is one of the most important components of a high performance cluster. This paper introduces a self-adaptive dispatching system (SAPS) based on Torque[1] and Maui[2]. It improves cluster resource utilization and the overall speed of tasks, and provides some extra functions for administrators and users. First of all, in order to allow the scheduling of GPUs, a GPU scheduling module based on Torque and Maui has been developed. Second, SAPS analyses the relationship between the number of queueing jobs and the idle job slots, and then tunes the priority of users' jobs dynamically. This means more jobs run and fewer job slots are idle. Third, integrating with the monitoring function, SAPS excludes nodes in error states as detected by the monitor, and returns them to the cluster after the nodes have recovered. In addition, SAPS provides a series of function modules including a batch monitoring management module, a comprehensive scheduling accounting module and a real-time alarm module. The aim of SAPS is to enhance the reliability and stability of Torque and Maui. Currently, SAPS has been running stably on a local cluster at IHEP (Institute of High Energy Physics, Chinese Academy of Sciences), with more than 12,000 CPU cores and 50,000 jobs running each day. Monitoring has shown that resource utilization has been improved by more than 26%, and the management work for both administrators and users has been reduced greatly.
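    The self-adaptive priority idea can be caricatured as follows. This is a hypothetical sketch, not SAPS code: the function name and the proportional-boost rule are invented, and the real system drives Maui/Torque rather than a Python dictionary.

```python
# Hypothetical illustration of the self-adaptive tuning loop: raise the
# priority of users whose jobs queue while slots sit idle, in proportion to
# each user's share of the backlog.
def retune_priorities(queued_jobs_per_user, idle_slots):
    total_queued = sum(queued_jobs_per_user.values())
    if idle_slots == 0 or total_queued == 0:
        return {user: 0 for user in queued_jobs_per_user}
    return {user: round(idle_slots * q / total_queued)
            for user, q in queued_jobs_per_user.items()}

print(retune_priorities({"alice": 30, "bob": 10}, idle_slots=8))  # {'alice': 6, 'bob': 2}
```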

  7. Search for Formation Criteria for Globular Cluster Systems

    Science.gov (United States)

    Nuritdinov, S. N.; Mirtadjieva, K. T.; Tadjibaev, I. U.

    2005-01-01

    Star cluster formation is a major mode of star formation in the extreme conditions of interacting galaxies and violent starbursts. By studying the ages and metallicities of young metal-enhanced star clusters in mergers/merger remnants we can learn about the violent star formation history of these galaxies and eventually about galaxy formation and evolution. We will present a new set of evolutionary synthesis models of our GALEV code, specially developed to account for the gaseous emission of presently forming star clusters, and an advanced tool to compare large model grids with the multi-color broad-band observations presently becoming available in large amounts. Such observations are an economic way to determine the parameters of young star clusters, as will be shown in the presentation. First results on newly born clusters in mergers and starburst galaxies are presented, compared to the well-studied old globulars, and interpreted in the framework of galaxy formation/evolution.

  8. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    International Nuclear Information System (INIS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Quast, Günter; Janczyk, Michael; Von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-01-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  9. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the Super-KEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable amount of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare-metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup, no static partitioning of the cluster into a physical and virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  10. AIDEN: A Density Conscious Artificial Immune System for Automatic Discovery of Arbitrary Shape Clusters in Spatial Patterns

    Directory of Open Access Journals (Sweden)

    Vishwambhar Pathak

    2012-11-01

    Full Text Available Recent efforts in modeling the dynamics of natural immune cells, which led to artificial immune systems (AIS), have ignited contemporary research interest in finding analogies to real-world problems. AIS models have been widely exploited to develop dependable, robust solutions to clustering. Most traditional clustering methods are limited in their capability to detect clusters of arbitrary shapes in a fully unsupervised manner. In this paper, the recognition and communication dynamics of T-cell receptors, the recognizing elements of the innate immune system, have been modeled with a kernel density estimation method. The model has been shown to successfully discover non-spherical clusters in spatial patterns. Modeling the cohesion of the antibodies and pathogens with a 'local influence' measure induces a comprehensive extension of the antibody representation ball (ARB), which in turn corresponds to controlled expansion of clusters and prevents overfitting.
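    The AIDEN implementation itself is not reproduced here; as a stand-in for the density-conscious idea, the sketch below uses DBSCAN, a classic density-based method that likewise discovers arbitrarily shaped clusters without fixing their number in advance.

```python
# Density-based clustering of a non-spherical pattern (two interleaved moons);
# DBSCAN is used purely as an illustrative stand-in for the density-conscious
# clustering goal described in the abstract, not as the AIDEN algorithm.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(len(set(labels) - {-1}), "clusters found")  # -> 2 (noise is labelled -1)
```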

  11. The high performance cluster computing system for BES offline data analysis

    International Nuclear Information System (INIS)

    Sun Yongzhao; Xu Dong; Zhang Shaoqiang; Yang Ting

    2004-01-01

    A high performance cluster computing system (EPCfarm) used for BES offline data analysis is introduced. The setup and characteristics of the hardware and software of EPCfarm are described. PBS, a queue management package, and the performance of EPCfarm are also presented. (authors)

  12. Explicitly-correlated ring-coupled-cluster-doubles theory: Including exchange for computations on closed-shell systems

    Energy Technology Data Exchange (ETDEWEB)

    Hehn, Anna-Sophia; Holzer, Christof; Klopper, Wim, E-mail: klopper@kit.edu

    2016-11-10

    Highlights: • Ring-coupled-cluster-doubles approach now implemented with exchange terms. • Ring-coupled-cluster-doubles approach now implemented with F12 functions. • Szabo–Ostlund scheme (SO2) implemented for use in SAPT. • Fast convergence to the limit of a complete basis. • Implementation in the TURBOMOLE program system. - Abstract: Random-phase-approximation (RPA) methods have proven to be powerful tools in electronic-structure theory, being non-empirical, computationally efficient and broadly applicable to a variety of molecular systems including small-gap systems, transition-metal compounds and dispersion-dominated complexes. Applications are however hindered due to the slow basis-set convergence of the electron-correlation energy with the one-electron basis. As a remedy, we present approximate explicitly-correlated RPA approaches based on the ring-coupled-cluster-doubles formulation including exchange contributions. Test calculations demonstrate that the basis-set convergence of correlation energies is drastically accelerated through the explicitly-correlated approach, reaching 99% of the basis-set limit with triple-zeta basis sets. When implemented in close analogy to early work by Szabo and Ostlund [36], the new explicitly-correlated ring-coupled-cluster-doubles approach including exchange has the perspective to become a valuable tool in the framework of symmetry-adapted perturbation theory (SAPT) for the computation of dispersion energies of molecular complexes of weakly interacting closed-shell systems.

  13. EVIDENCE FOR AN ACCRETION ORIGIN FOR THE OUTER HALO GLOBULAR CLUSTER SYSTEM OF M31

    International Nuclear Information System (INIS)

    Mackey, A. D.; Huxor, A. P.; Ferguson, A. M. N.; Irwin, M. J.; Chapman, S. C.; Tanvir, N. R.; McConnachie, A. W.; Ibata, R. A.; Lewis, G. F.

    2010-01-01

    We use a sample of newly discovered globular clusters from the Pan-Andromeda Archaeological Survey (PAndAS) in combination with previously cataloged objects to map the spatial distribution of globular clusters in the M31 halo. At projected radii beyond ∼30 kpc, where large coherent stellar streams are readily distinguished in the field, there is a striking correlation between these features and the positions of the globular clusters. Adopting a simple Monte Carlo approach, we test the significance of this association by computing the probability that it could be due to the chance alignment of globular clusters smoothly distributed in the M31 halo. We find that the likelihood of this possibility is low, below 1%, and conclude that the observed spatial coincidence between globular clusters and multiple tidal debris streams in the outer halo of M31 reflects a genuine physical association. Our results imply that the majority of the remote globular cluster system of M31 has been assembled as a consequence of the accretion of cluster-bearing satellite galaxies. This constitutes the most direct evidence to date that the outer halo globular cluster populations in some galaxies are largely accreted.
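    The Monte Carlo significance test described above can be caricatured in a few lines: scatter the clusters at random many times and count how often at least the observed number land on stream features by chance. The geometry is reduced to a single hypothetical area fraction, and all numbers below are placeholders, not the paper's measurements.

```python
# Toy chance-alignment test: under a smooth halo distribution, how often do
# >= observed_on_stream of the clusters fall on streams covering a given
# fraction of the halo area? All inputs are invented placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_clusters, n_trials, observed_on_stream = 30, 10_000, 18
stream_fraction = 0.15   # hypothetical fraction of the halo covered by streams

hits = rng.random((n_trials, n_clusters)) < stream_fraction
p_value = np.mean(hits.sum(axis=1) >= observed_on_stream)
print(f"chance-alignment probability ~ {p_value:.4f}")  # tiny for these inputs
```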

  14. Large-Scale Multi-Dimensional Document Clustering on GPU Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Mueller, Frank [North Carolina State University; Zhang, Yongpeng [ORNL; Potok, Thomas E [ORNL

    2010-01-01

    Document clustering plays an important role in data mining systems. Recently, a flocking-based document clustering algorithm has been proposed to solve the problem through a simulation resembling the flocking behavior of birds in nature. This method is superior to other clustering algorithms, including k-means, in the sense that the outcome is not sensitive to the initial state. One limitation of this approach is that the algorithmic complexity is inherently quadratic in the number of documents. As a result, execution time becomes a bottleneck with a large number of documents. In this paper, we assess the benefits of exploiting the computational power of Beowulf-like clusters equipped with contemporary Graphics Processing Units (GPUs) as a means to significantly reduce the runtime of flocking-based document clustering. Our framework scales up to over one million documents processed simultaneously in a sixteen-node GPU cluster. Results are also compared to a four-node cluster with higher-end GPUs. On these clusters, we observe 30X-50X speedups, which demonstrates the potential of GPU clusters to efficiently solve massive data mining problems. Such speedups combined with the scalability potential and accelerator-based parallelization are unique in the domain of document-based data mining, to the best of our knowledge.
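    A serial NumPy toy of one flocking step conveys the O(N^2) kernel that the paper offloads to GPUs: each document moves toward nearby documents with similar content and away from dissimilar ones, so similar documents end up in the same flock. The similarity matrix and all constants here are invented.

```python
# One O(N^2) flocking update for document clustering (toy, serial version).
import numpy as np

rng = np.random.default_rng(0)
n_docs = 100
pos = rng.random((n_docs, 2))                    # document positions on a 2-D plane
sim = rng.random((n_docs, n_docs))
sim = (sim + sim.T) / 2                          # stand-in symmetric similarities

def flock_step(pos, sim, radius=0.2, step=0.05):
    diff = pos[None, :, :] - pos[:, None, :]     # diff[i, j] = pos[j] - pos[i]
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    near = dist < radius                         # only neighbours inside the radius act
    pull = np.where(near, sim - 0.5, 0.0) / dist # attract if similar, repel if not
    return pos + step * (pull[:, :, None] * diff).sum(axis=1)

pos = flock_step(pos, sim)                       # one update of the whole flock
print(pos.shape)
```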

  15. Effect of mesoscopic fluctuations on equation of state in cluster-forming systems

    Directory of Open Access Journals (Sweden)

    A. Ciach

    2012-06-01

    Full Text Available An equation of state for systems of particles self-assembling into aggregates is derived within a mesoscopic theory combining density-functional and field-theoretic approaches. We focus on the effect of mesoscopic fluctuations in the disordered phase. The pressure–volume-fraction isotherms are calculated explicitly for two forms of the short-range-attraction, long-range-repulsion potential. Mesoscopic fluctuations lead to increased pressure in each case, except at very small volume fractions. When large clusters are formed, the mechanical instability of the system appears at a much higher temperature than found in the mean-field approximation. In this case phase separation competes with the formation of periodic phases (colloidal crystals). In the case of small clusters, no mechanical instability associated with separation into dilute and dense phases appears.

  16. Classification Of Cluster Area Forsatellite Image

    Directory of Open Access Journals (Sweden)

    Thwe Zin Phyo

    2015-06-01

    Full Text Available Abstract This paper describes area classification for a Landsat7 satellite image. The main purpose of this system is to classify the area of each cluster contained in a satellite image. To classify this image, the satellite image first needs to be clustered into different land cover types. Clustering is an unsupervised learning method that aims to classify an image into homogeneous regions. This system is implemented based on color features with the K-means unsupervised clustering algorithm. This method does not need to train the image before clustering. The clusters of the satellite image are grouped into a set of three clusters for the Landsat7 satellite image. For this work the combined band 432 from Landsat7 is used as input. A satellite image of the Mandalay area in 2001 was chosen to test the segmentation method. After clustering, a specific range for the three clustered images must be defined in order to obtain the green land, water and urban balance. This system is implemented using the MATLAB programming language.
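    The core of the described pipeline, K-means on per-pixel color features followed by an area count per cluster, can be sketched directly; a random array stands in for the Landsat7 band-432 composite, and the paper's own implementation is in MATLAB rather than Python.

```python
# K-means clustering of pixel colors into three land-cover clusters, then a
# per-cluster pixel count as a proxy for the area of each cluster.
import numpy as np
from sklearn.cluster import KMeans

img = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
pixels = img.reshape(-1, 3).astype(float)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
cluster_map = labels.reshape(64, 64)   # per-pixel land-cover class
print(np.bincount(labels))             # pixel count (area proxy) per cluster
```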

  17. Linked cluster expansions for open quantum systems on a lattice

    Science.gov (United States)

    Biella, Alberto; Jin, Jiasen; Viyuela, Oscar; Ciuti, Cristiano; Fazio, Rosario; Rossini, Davide

    2018-01-01

    We propose a generalization of the linked-cluster expansions to study driven-dissipative quantum lattice models, directly accessing the thermodynamic limit of the system. Our method leads to the evaluation of the desired extensive property onto small connected clusters of a given size and topology. We first test this approach on the isotropic spin-1/2 Hamiltonian in two dimensions, where each spin is coupled to an independent environment that induces incoherent spin flips. Then we apply it to the study of an anisotropic model displaying a dissipative phase transition from a magnetically ordered to a disordered phase. By means of a Padé analysis on the series expansions for the average magnetization, we provide a viable route to locate the phase transition and to extrapolate the critical exponent for the magnetic susceptibility.

  18. MULTIAGENT IMITATION MODEL OF A REGIONAL CONSTRUCTION CLUSTER AS A HETERARCHICAL SYSTEM

    Directory of Open Access Journals (Sweden)

    Anufriev Dmitriy Petrovich

    2018-01-01

    Full Text Available Subject: a regional construction cluster, which is viewed as a complex system territorially localized within the region, consisting of interconnected and complementary enterprises of construction and related industries that are united with local institutions, authorities and cooperating enterprises by heterarchic relations. Research objectives: development of a multi-agent simulation model that allows us to examine the business processes in the regional construction cluster as a complex heterarchical system. Materials and methods: we formulate the mathematical problem of describing processes in a heterarchic system as a special multi-agent queueing network. Conclusions: the article substantiates the application of a decentralized approach based on the agent methodology. Several types of agents that model elementary organizational structures have been developed. We describe the functional core of the multi-agent simulation model characterizing the heterarchic organizational model. Using the Fishman-Kivia criterion, the adequacy of the logical functioning of the developed model was established.

  19. Spatial clustering of pixels of a multispectral image

    Science.gov (United States)

    Conger, James Lynn

    2014-08-19

    A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
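    The scoring step can be sketched with NumPy as follows: for each pixel, the maximum cosine similarity to its 4-connected neighbours is taken as the maximum spectral similarity score. Edge handling (np.roll wraps around the borders) and the subsequent clustering and averaging stages are omitted, and the image is a random placeholder.

```python
# Maximum spectral similarity score per pixel, using cosine similarity to the
# four axis-aligned neighbours; a toy version of the patent's scoring step.
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32, 6))                            # 6-band multispectral cube
unit = img / np.linalg.norm(img, axis=-1, keepdims=True) # unit spectra per pixel

def max_neighbor_similarity(u):
    best = np.full(u.shape[:2], -np.inf)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):    # 4-connected shifts
        shifted = np.roll(u, (dy, dx), axis=(0, 1))      # note: wraps at edges
        best = np.maximum(best, (u * shifted).sum(-1))   # cosine similarity
    return best

scores = max_neighbor_similarity(unit)
print((scores > 0.9).mean())   # fraction of pixels passing a sample threshold
```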

  20. An easy-to-build, low-budget point-of-care ultrasound simulator: from Linux to a web-based solution.

    Science.gov (United States)

    Damjanovic, Domagoj; Goebel, Ulrich; Fischer, Benedikt; Huth, Martin; Breger, Hartmut; Buerkle, Hartmut; Schmutz, Axel

    2017-12-01

    Hands-on training in point-of-care ultrasound (POC-US) should ideally comprise bedside teaching as well as simulated clinical scenarios. High-fidelity phantoms and portable ultrasound simulation systems are commercially available, however, at considerable cost. This limits their suitability for medical schools. A Linux-based software for Emergency Department Ultrasound Simulation (edus2™) was developed by Kulyk and Olszynski in 2011. Its feasibility for POC-US education has been well documented, and it shows good acceptance. An important limitation to an even more widespread use of edus2, however, may be the need for a virtual machine on Windows® systems. Our aim was to adapt the original software toward an HTML-based solution, thus making it affordable and applicable in any simulation setting. We created an HTML browser-based ultrasound simulation application, which reads the input of different sensors and triggers an ultrasound video to be displayed on the respective device. RFID tags, NFC tags, and QR Codes™ have been integrated into training phantoms or were attached to standardized patients. The RFID antenna was hidden in a mock ultrasound probe. The application is independent of the respective device. Our application was used successfully with different trigger/scanner combinations and was readily incorporated into simulated training scenarios. It runs independently of operating systems or electronic devices. This low-cost, browser-based ultrasound simulator is easy to build, very adaptable, and operating-system independent. It has the potential to facilitate POC-US training throughout the world, especially in resource-limited areas.