WorldWideScience

Sample records for linux kernel code

  1. SOFTICE: Facilitating both Adoption of Linux Undergraduate Operating Systems Laboratories and Students' Immersion in Kernel Code

    Alessio Gaspar

    2007-06-01

This paper discusses how Linux clustering and virtual machine technologies can improve undergraduate students' hands-on experience in operating systems laboratories. Like similar projects, SOFTICE relies on User Mode Linux (UML) to provide students with privileged access to a Linux system without creating security breaches on the hosting network. We extend such approaches in two respects. First, we propose to facilitate adoption of Linux-based laboratories by using a load-balancing cluster made of recycled classroom PCs to remotely serve access to virtual machines. Second, we propose a new approach for students to interact with the kernel code.

  2. An approach to improving the structure of error-handling code in the linux kernel

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where...... an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes....
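
The style the guidelines recommend is the kernel's centralized-exit pattern, in which every failure path jumps to cleanup labels at the end of the function. A minimal user-space sketch (function and resource names are illustrative, not taken from the paper):

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal sketch of the centralized-exit style: each failure jumps
     * to a label that undoes exactly the work completed so far. */
    static int setup(void)
    {
        char *buf1, *buf2;
        FILE *f;
        int ret = -1;

        buf1 = malloc(4096);
        if (!buf1)
            goto out;
        buf2 = malloc(4096);
        if (!buf2)
            goto out_free_buf1;
        f = fopen("/tmp/example", "w");
        if (!f)
            goto out_free_buf2;

        /* ... use the resources, then fall through to clean up ... */
        ret = 0;
        fclose(f);
    out_free_buf2:
        free(buf2);
    out_free_buf1:
        free(buf1);
    out:
        return ret;
    }

    int main(void)
    {
        return setup() ? 1 : 0;
    }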

  3. Kernel Korner : The Linux keyboard driver

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  4. Enforcing the use of API functions in Linux code

    Lawall, Julia; Muller, Gilles; Palix, Nicolas Jean-Michel

    2009-01-01

    In the Linux kernel source tree, header files typically define many small functions that have a simple behavior but are critical to ensure readability, correctness, and maintainability. We have observed, however, that some Linux code does not use these functions systematically. In this paper, we...... in the header file include/linux/usb.h....
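
Such helpers are typically one-line static inline functions defined in headers. A hypothetical user-space example in that spirit (modeled loosely on the kernel's drvdata accessors, not copied from include/linux/usb.h):

    #include <stdio.h>

    /* Hypothetical static inline helpers of the kind kernel headers
     * define: trivial bodies, but they keep every call site uniform
     * and document intent. */
    struct device { void *driver_data; };

    static inline void example_set_drvdata(struct device *dev, void *data)
    {
        dev->driver_data = data;
    }

    static inline void *example_get_drvdata(const struct device *dev)
    {
        return dev->driver_data;
    }

    int main(void)
    {
        struct device d;
        int payload = 42;

        example_set_drvdata(&d, &payload);
        printf("%d\n", *(int *)example_get_drvdata(&d));
        return 0;
    }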

  5. Research of Performance Linux Kernel File Systems

    Andrey Vladimirovich Ostroukh

    2015-10-01

The article describes the most common Linux kernel file systems. The study was performed on a typical workstation running GNU/Linux, whose characteristics are given in the article, and the software needed to measure file performance was installed on it. Based on the results, conclusions are drawn and recommendations are given on the use of the file systems, and the best ways to store data are identified and recommended.
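
The article's own benchmark code is not reproduced here; as a sketch of the kind of measurement such a study performs, one can time a synchronized write with clock_gettime:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    /* Minimal sketch: time a 1 MiB synchronized write. A real
     * file-system benchmark would vary sizes and access patterns. */
    int main(void)
    {
        static char buf[1 << 20];
        struct timespec t0, t1;
        int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
            return 1;
        memset(buf, 0xAB, sizeof(buf));

        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (write(fd, buf, sizeof(buf)) != sizeof(buf) || fsync(fd) != 0)
            return 1;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("1 MiB write+fsync: %.3f ms\n",
               (t1.tv_sec - t0.tv_sec) * 1e3 +
               (t1.tv_nsec - t0.tv_nsec) / 1e6);
        close(fd);
        return 0;
    }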

  6. On methods to increase the security of the Linux kernel

    Matvejchikov, I.V.

    2014-01-01

Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described. (In Russian)

  7. The Linux kernel as flexible product-line architecture

    M. de Jonge (Merijn)

    2002-01-01

The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  8. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  9. MARS Code in Linux Environment

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method
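
On Linux, the usual counterpart of a Windows DLL is a shared object loaded at run time through dlopen/dlsym. A generic sketch of that mechanism (library and symbol names are hypothetical, not the actual MARS/CONTAIN interface; link with -ldl):

    #include <dlfcn.h>
    #include <stdio.h>

    /* Generic sketch of run-time loading of a shared object, the Linux
     * analogue of a Windows DLL. "libcontain.so" and "contain_step"
     * are hypothetical names. */
    int main(void)
    {
        void *handle = dlopen("./libcontain.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        int (*contain_step)(double) =
            (int (*)(double))dlsym(handle, "contain_step");
        if (!contain_step) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        contain_step(0.001); /* advance the coupled module by one step */
        dlclose(handle);
        return 0;
    }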

  10. MARS Code in Linux Environment

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  11. Rebootless Linux Kernel Patching with Ksplice Uptrack at BNL

    Hollowell, Christopher; Pryor, James; Smith, Jason

    2012-01-01

Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2,000 hosts running Scientific Linux and Red Hat Enterprise Linux. The use of this software has minimized downtime and improved our security posture. In this paper, we provide an overview of Ksplice's rebootless kernel patch creation/insertion mechanism, and our experiences with Uptrack.

  12. Analysis of Linux kernel as a complex network

    Gao, Yichao; Zheng, Zheng; Qin, Fangyun

    2014-01-01

An operating system (OS) acts as an intermediary between software and hardware in computer-based systems. In this paper, we analyze the core of the typical Linux OS, the Linux kernel, as a complex network to investigate its underlying design principles. It is found that the Linux Kernel Network (LKN) is a directed network whose out-degree follows an exponential distribution while the in-degree follows a power-law distribution. The correlation between topology and functions is also explored, by which we find that LKN is a highly modularized network with 12 key communities. Moreover, we investigate the robustness of LKN under random failures and intentional attacks. The result shows that the failure of the large in-degree nodes providing basic services will do more damage to the whole system. Our work may shed some light on the design of complex software systems.
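
In the usual notation for such degree analyses (λ and γ are fitted constants; the abstract does not give their values), the two reported distributions have the forms:

    P_out(k) ∝ exp(−k/λ)        (out-degree: exponential)
    P_in(k)  ∝ k^(−γ)           (in-degree: power law)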

  13. KNBD: A Remote Kernel Block Server for Linux

    Becker, Jeff

    1999-01-01

I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  14. Free and open source software at CERN: integration of drivers in the Linux kernel

    Gonzalez Cobas, J.D.; Iglesias Gonsalvez, S.; Howard Lewis, J.; Serrano, J.; Vanga, M.; Cota, E.G.; Rubini, A.; Vaga, F.

    2012-01-01

    Most device drivers written for accelerator control systems suffer from a severe lack of portability due to the ad hoc nature of the code, often embodied with intimate knowledge of the particular machine it is deployed in. In this paper we challenge this practice by arguing for the opposite approach: development in the open, which in our case translates into the integration of our code within the Linux kernel. We make our case by describing the upstream merge effort of the tsi148 driver, a critical (and complex) component of the control system. The encouraging results from this effort have then led us to follow the same approach with two more ambitious projects, currently in the works: Linux support for the upcoming FMC boards and a new I/O subsystem. (authors)

  15. Lecture 11: Systemtap : Patching the linux kernel on the fly

    CERN. Geneva

    2013-01-01

The presentation will describe the usage of SystemTap in the CERN Scientific Linux environment. SystemTap is a tool that allows developers and administrators to write and reuse simple scripts to deeply examine the activities of a live Linux system. We will go through the life cycle of a SystemTap module: creation, packaging, deployment. The presentation will focus on how we used it recently at CERN as a workaround to patch a 0-day. Thomas Oulevey is a member of the IT department at CERN, where he is an active member of the Linux team, which supports 9,000 servers, 3,000 desktop systems and more than 5,000 active users. Before CERN he worked at the European Southern Observatory (formerly hosted at CERN), based in Chile, where he maintained the core telescope Linux systems and monitoring infrastructure.

  16. Linux containers networking: performance and scalability of kernel modules

    Claassen, J.; Koning, R.; Grosso, P.; Oktug Badonnel, S.; Ulema, M.; Cavdar, C.; Zambenedetti Granville, L.; dos Santos, C.R.P.

    2016-01-01

    Linux container virtualisation is gaining momentum as lightweight technology to support cloud and distributed computing. Applications relying on container architectures might at times rely on inter-container communication, and container networking solutions are emerging to address this need.

  17. An update on perfmon and the struggle to get into the Linux kernel

    Nowak, Andrzej, E-mail: Andrzej.Nowak@cern.c [CERN openlab (Switzerland)

    2010-04-01

    At CHEP2007 we reported on the perfmon2 subsystem as a tool for interfacing to the PMUs (Performance Monitoring Units) which are found in the hardware of all modern processors (from AMD, Intel, SUN, IBM, MIPS, etc.). The intent was always to get the subsystem into the Linux kernel by default. This paper reports on how progress was made (after long discussions) and will also show the latest additions to the subsystems.

  18. An update on perfmon and the struggle to get into the Linux kernel

    Nowak, Andrzej

    2010-01-01

    At CHEP2007 we reported on the perfmon2 subsystem as a tool for interfacing to the PMUs (Performance Monitoring Units) which are found in the hardware of all modern processors (from AMD, Intel, SUN, IBM, MIPS, etc.). The intent was always to get the subsystem into the Linux kernel by default. This paper reports on how progress was made (after long discussions) and will also show the latest additions to the subsystems.

  19. Fast scalar data buffering interface in Linux 2.6 kernel

    Homs, A.

    2012-01-01

Key instrumentation devices like counter/timers, analog-to-digital converters and encoders provide scalar data input. Many of them allow fast acquisitions, but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non real time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the /sys virtual file-system and hot-plug device support. (author)
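
The two functional groups the abstract describes can be pictured with a small user-space sketch; all types and names here are illustrative, not the driver's actual API:

    #include <stdio.h>

    /* Illustrative sketch of the Hook interface's two functional
     * groups: on each trigger event, every configured data channel
     * is read and stored into a buffer. All names are hypothetical. */
    #define N_CHANNELS 2
    #define N_EVENTS   4

    typedef int (*read_channel_fn)(int channel_id);

    static int read_adc(int id)     { return 1000 + id; } /* stand-in */
    static int read_encoder(int id) { return 2000 + id; } /* stand-in */

    int main(void)
    {
        read_channel_fn channels[N_CHANNELS] = { read_adc, read_encoder };
        int buffer[N_EVENTS][N_CHANNELS];

        for (int ev = 0; ev < N_EVENTS; ev++)        /* trigger events */
            for (int ch = 0; ch < N_CHANNELS; ch++)  /* data channels  */
                buffer[ev][ch] = channels[ch](ch);

        printf("event 0, channel 0: %d\n", buffer[0][0]);
        return 0;
    }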

  20. Covert Android Rootkit Detection: Evaluating Linux Kernel Level Rootkits on the Android Operating System

    2012-06-14

modifies the same kernel memory as the first. Race conditions can be prevented using synchronization primitives (e.g., locks, semaphores) ... exception and provides generic data structures and primitives to encourage code reuse by developers [Bov05]. These structures, that all programmers are
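
As a minimal illustration of the point about synchronization primitives, two threads updating shared state must serialize through a lock:

    #include <pthread.h>
    #include <stdio.h>

    /* Two threads increment a shared counter; the mutex prevents the
     * read-modify-write races the fragment refers to. */
    static long counter;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;

        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter); /* always 2000000 with the lock held */
        return 0;
    }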

  1. WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code

    Lawall, Julia; Brunel, Julien Pierre Manuel; Palix, Nicolas Jean-Michel

    2009-01-01

    the tools on specific kinds of bugs and to relate the results to patterns in the source code. We propose a declarative approach to bug finding in Linux OS code using a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer expresses...

  2. LINK codes TRAC-BF1/PARCSv2.7 in LINUX without external communication interface

    Barrachina, T.; Garcia-Fenoll, M.; Abarca, A.; Miro, R.; Verdu, G.; Concejal, A.; Solar, A.

    2014-01-01

    The TRAC-BF1 code is still widely used by the nuclear industry for safety analysis. The plant models developed using this code are highly validated, so it is advisable to continue improving this code before migrating to another completely different code. The coupling with the NRC neutronic code PARCSv2.7 increases the simulation capabilities in transients in which the power distribution plays an important role. In this paper, the procedure for the coupling of TRAC-BF1 and PARCSv2.7 codes without PVM and in Linux is presented. (Author)

  3. MARMER, a flexible point-kernel shielding code

    Kloosterman, J.L.; Hoogenboom, J.E.

    1990-01-01

    A point-kernel shielding code entitled MARMER is described. It has several options with respect to geometry input, source description and detector point description which extend the flexibility and usefulness of the code, and which are especially useful in spent fuel shielding. MARMER has been validated using the TN12 spent fuel shipping cask benchmark. (author)
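
For context, the point-kernel technique that MARMER and similar codes implement sums, over all source points, the standard kernel for the uncollided flux at distance r from a point source of strength S, corrected for scattered radiation by a build-up factor B (μ is the attenuation coefficient of the shield):

    φ(r) = S · B(μr) · e^(−μr) / (4πr²)

Dose equivalents then follow by applying flux-to-dose conversion factors.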

  4. MARMER, a flexible point-kernel shielding code

    Kloosterman, J.L.; Hoogenboom, J.E. (Interuniversitair Reactor Inst., Delft (Netherlands))

    1990-01-01

    A point-kernel shielding code entitled MARMER is described. It has several options with respect to geometry input, source description and detector point description which extend the flexibility and usefulness of the code, and which are especially useful in spent fuel shielding. MARMER has been validated using the TN12 spent fuel shipping cask benchmark. (author).

  5. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    Rothenstein, W.

    1999-01-01

In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering, when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel and the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for variable σ_s^0(E) at 0 K leads to the correct Doppler-broadened σ_s^T(E) at temperature T.

  6. Local coding based matching kernel method for image classification.

    Yan Song

This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
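
A representative Euclidean local kernel of the kind the abstract says implicitly assumes Gaussian noise is the Gaussian (RBF) kernel between two local descriptors x and y, with bandwidth σ:

    k(x, y) = exp(−‖x − y‖² / (2σ²))

The heavy-tailed statistics of SIFT-like descriptors are what make this Euclidean assumption questionable, which is the gap the LCMK method targets.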

  7. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always provides 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
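
For reference, the normalized compression distance mentioned above is commonly defined, for a compressor with compressed length C(·) and concatenation xy, as:

    NCD(x, y) = [C(xy) − min(C(x), C(y))] / max(C(x), C(y))

Computed with real compressors this quantity often violates the metric axioms, which is part of the motivation for the kernel formulation.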

8. LINK codes TRAC-BF1/PARCSv2.7 in LINUX without external communication interface

    Barrachina, T.; Garcia-Fenoll, M.; Abarca, A.; Miro, R.; Verdu, G.; Concejal, A.; Solar, A.

    2014-07-01

    The TRAC-BF1 code is still widely used by the nuclear industry for safety analysis. The plant models developed using this code are highly validated, so it is advisable to continue improving this code before migrating to another completely different code. The coupling with the NRC neutronic code PARCSv2.7 increases the simulation capabilities in transients in which the power distribution plays an important role. In this paper, the procedure for the coupling of TRAC-BF1 and PARCSv2.7 codes without PVM and in Linux is presented. (Author)

  9. Real Time Linux - The RTOS for Astronomy?

    Daly, P. N.

The BoF was attended by about 30 participants, and a free CD of real time Linux (based upon RedHat 5.2) was available. There was a detailed presentation on the nature of real time Linux and the variants for hard real time: New Mexico Tech's RTL and DIAPM's RTAI. Comparison tables between standard Linux and real time Linux responses to time interval generation and interrupt response latency were presented (see elsewhere in these proceedings). The present recommendations are to use RTL for UP machines running the 2.0.x kernels and RTAI for SMP machines running the 2.2.x kernel. Support, both academic and commercial, is available. Some known limitations were presented and the solutions reported, e.g., debugging and hardware support. The features of RTAI (scheduler, fifos, shared memory, semaphores, message queues and RPCs) were described. Typical performance statistics were presented: Pentium-based oneshot tasks running at > 30 kHz, 486-based oneshot tasks running at ~10 kHz, and periodic timer tasks running in excess of 90 kHz with zero average jitter, peaking at ~13 μs (UP) and ~30 μs (SMP). Some detail on kernel module programming, including coding examples, was presented, showing a typical data acquisition system generating simulated (random) data and writing to a shared memory buffer and a fifo buffer to communicate between real time Linux and user space. All coding examples were complete and tested under RTAI v0.6 and the 2.2.12 kernel. Finally, arguments were raised in support of real time Linux: it is open source, free under the GPL, enables rapid prototyping, has good support, and allows a fully functioning workstation to co-exist with hard real time performance. The counterweights (the negatives) were also discussed: the small number of platforms (x86 and PowerPC only at present), lack of board support, promiscuous root access and the danger of ignorance of real time programming issues. See ftp://orion.tuc.noao.edu/pub/pnd/rtlbof.tgz for the StarOffice overheads
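
The periodic tasks mentioned above can be sketched, outside RTL/RTAI, as a plain POSIX loop with absolute deadlines; this only illustrates the scheme and is not the RTL/RTAI API (whose tasks run in kernel context):

    #include <stdio.h>
    #include <time.h>

    /* Generic periodic-task skeleton using absolute-time sleeps;
     * real time Linux variants provide their own kernel-side APIs. */
    int main(void)
    {
        struct timespec next;
        const long period_ns = 1000000; /* 1 kHz */

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < 5000; i++) {
            next.tv_nsec += period_ns;
            while (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            /* acquire one sample here */
        }
        puts("done");
        return 0;
    }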

  10. Linux bible

    Negus, Christopher

    2015-01-01

    The industry favorite Linux guide, updated for Red Hat Enterprise Linux 7 and the cloud Linux Bible, 9th Edition is the ultimate hands-on Linux user guide, whether you're a true beginner or a more advanced user navigating recent changes. This updated ninth edition covers the latest versions of Red Hat Enterprise Linux 7 (RHEL 7), Fedora 21, and Ubuntu 14.04 LTS, and includes new information on cloud computing and development with guidance on Openstack and Cloudforms. With a focus on RHEL 7, this practical guide gets you up to speed quickly on the new enhancements for enterprise-quality file s

  11. Linux Essentials

    Smith, Roderick W

    2012-01-01

    A unique, full-color introduction to Linux fundamentals Serving as a low-cost, secure alternative to expensive operating systems, Linux is a UNIX-based, open source operating system. Full-color and concise, this beginner's guide takes a learning-by-doing approach to understanding the essentials of Linux. Each chapter begins by clearly identifying what you will learn in the chapter, followed by a straightforward discussion of concepts that leads you right into hands-on tutorials. Chapters conclude with additional exercises and review questions, allowing you to reinforce and measure your underst

  12. Linux real-time framework for fusion devices

    Neto, Andre [Associacao Euratom-IST, Instituto de Plasmas e Fusao Nuclear, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: andre.neto@cfn.ist.utl.pt; Sartori, Filippo; Piccolo, Fabio [Euratom-UKAEA, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Barbalace, Antonio [Euratom-ENEA Association, Consorzio RFX, 35127 Padova (Italy); Vitelli, Riccardo [Dipartimento di Informatica, Sistemi e Produzione, Universita di Roma, Tor Vergata, Via del Politecnico 1-00133, Roma (Italy); Fernandes, Horacio [Associacao Euratom-IST, Instituto de Plasmas e Fusao Nuclear, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)

    2009-06-15

A new framework for the development and execution of real-time codes is currently being developed and commissioned at JET. The foundations of the system are Linux, the Real Time Application Interface (RTAI) and a wise exploitation of the new i386 multi-core processors technology. The driving motivation was the need to find a real-time operating system for the i386 platform able to satisfy JET Vertical Stabilisation Enhancement project requirements: 50 μs cycle time. Even if the initial choice was the VxWorks operating system, it was decided to explore an open source alternative, mostly because of the costs involved in the commercial product. The work started with the definition of a precise set of requirements and milestones to achieve: Linux distribution and kernel versions to be used for the real-time operating system; complete characterization of the Linux/RTAI real-time capabilities; exploitation of the multi-core technology; implementation of all the required and missing features; commissioning of the system. Latency and jitter measurements were compared for Linux and RTAI in both user and kernel-space. The best results were attained using the RTAI kernel solution where the time to reschedule a real-time task after an external interrupt is of 2.35 ± 0.35 μs. In order to run the real-time codes in the kernel-space, a solution to provide user-space functionalities to the kernel modules had to be designed. This novel work provided the most common functions from the standard C library and transparent interaction with files and sockets to the kernel real-time modules. Kernel C++ support was also tested, further developed and integrated in the framework. The work has produced very convincing results so far: complete isolation of the processors assigned to real-time from the Linux non real-time activities, high level of stability over several days of benchmarking operations and values well below 3 μs for task rescheduling after external interrupt. From

  13. Linux real-time framework for fusion devices

    Neto, Andre; Sartori, Filippo; Piccolo, Fabio; Barbalace, Antonio; Vitelli, Riccardo; Fernandes, Horacio

    2009-01-01

    A new framework for the development and execution of real-time codes is currently being developed and commissioned at JET. The foundations of the system are Linux, the Real Time Application Interface (RTAI) and a wise exploitation of the new i386 multi-core processors technology. The driving motivation was the need to find a real-time operating system for the i386 platform able to satisfy JET Vertical Stabilisation Enhancement project requirements: 50 μs cycle time. Even if the initial choice was the VxWorks operating system, it was decided to explore an open source alternative, mostly because of the costs involved in the commercial product. The work started with the definition of a precise set of requirements and milestones to achieve: Linux distribution and kernel versions to be used for the real-time operating system; complete characterization of the Linux/RTAI real-time capabilities; exploitation of the multi-core technology; implementation of all the required and missing features; commissioning of the system. Latency and jitter measurements were compared for Linux and RTAI in both user and kernel-space. The best results were attained using the RTAI kernel solution where the time to reschedule a real-time task after an external interrupt is of 2.35 ± 0.35 μs. In order to run the real-time codes in the kernel-space, a solution to provide user-space functionalities to the kernel modules had to be designed. This novel work provided the most common functions from the standard C library and transparent interaction with files and sockets to the kernel real-time modules. Kernel C++ support was also tested, further developed and integrated in the framework. The work has produced very convincing results so far: complete isolation of the processors assigned to real-time from the Linux non real-time activities, high level of stability over several days of benchmarking operations and values well below 3 μs for task rescheduling after external interrupt. From being the

  14. The Research on Linux Memory Forensics

    Zhang, Jun; Che, ShengBing

    2018-03-01

Memory forensics is a branch of computer forensics. It does not depend on the operating system API, but instead analyzes operating system information from binary memory data. Targeting the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, it proposes a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain system process information from physical memory data and is compatible with multiple versions of the Linux kernel.
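
Once the member offsets are known, extracting the process list from a dump reduces to pointer walking. A much-simplified sketch (the offsets and start address are hypothetical placeholders; the paper derives the real ones from ELF debugging information, and a real tool must also translate kernel virtual addresses to image offsets):

    #include <stdio.h>
    #include <stdint.h>

    /* Much-simplified sketch: follow task_struct.tasks links through a
     * raw memory image and print each task_struct.comm. The offsets
     * below are hypothetical placeholders. */
    #define OFF_TASKS 0x3a0   /* offset of tasks list_head (placeholder) */
    #define OFF_COMM  0x6b0   /* offset of comm[16]        (placeholder) */

    static uint64_t read_u64(FILE *img, uint64_t addr)
    {
        uint64_t v = 0;
        fseek(img, (long)addr, SEEK_SET);
        if (fread(&v, sizeof(v), 1, img) != 1)
            v = 0;
        return v;
    }

    int main(void)
    {
        FILE *img = fopen("memdump.raw", "rb");
        if (!img)
            return 1;
        /* Assume the first task's address is already known. */
        uint64_t init_task = 0x1234000, next = init_task;
        char comm[16];
        int guard = 0;

        do {
            fseek(img, (long)(next + OFF_COMM), SEEK_SET);
            if (fread(comm, 1, 16, img) != 16)
                break;
            comm[15] = '\0';
            printf("%s\n", comm);
            /* tasks.next points at the next task's list_head, so
             * subtract OFF_TASKS to get back to the task_struct. */
            next = read_u64(img, next + OFF_TASKS) - OFF_TASKS;
        } while (next != init_task && ++guard < 1024);

        fclose(img);
        return 0;
    }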

  15. Running Linux

    Dalheimer, Matthias Kalle

    2006-01-01

    The fifth edition of Running Linux is greatly expanded, reflecting the maturity of the operating system and the teeming wealth of software available for it. Hot consumer topics such as audio and video playback applications, groupware functionality, and spam filtering are covered, along with the basics in configuration and management that always made the book popular.

  16. Faults in Linux

    Palix, Nicolas Jean-Michel; Thomas, Gaël; Saha, Suman

    2011-01-01

In 2001, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired a number of development and research efforts on improving the reliability of driver code. Today Linux is used in a much wider range of environments, provides a much wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? Are drivers still a major problem? To answer these questions, we have transported the experiments of Chou et al. to Linux versions 2.6.0 to 2.6.33, released between late 2003 and early 2010. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been....

  17. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Deqiang Fu

    2017-01-01

In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes aspects other than the similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
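
The TF-IDF idea carried over to syntax trees assigns each node type t in program d a weight of the usual form, where tf(t, d) is the frequency of t in d's tree, N is the number of programs in the corpus, and df(t) is the number of trees containing t (the exact weighting used by WASTK may differ in detail):

    w(t, d) = tf(t, d) × log(N / df(t))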

  18. A point kernel shielding code, PKN-HP, for high energy proton incident

    Kotegawa, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-06-01

A point-kernel integral technique code, PKN-HP, and the related thick-target neutron yield data have been developed to calculate neutron and secondary gamma-ray dose equivalents in ordinary concrete and iron shields, in 3-dimensional geometry, for neutrons produced by 100 MeV-10 GeV protons incident on fully stopping-length C, Cu and U-238 targets. Comparisons among the calculation results of the present code, other calculation techniques, and measured values showed the usefulness of the code. (author)

  19. Remote Boot of a Diskless Linux Client for Operating System Integrity

    Allen, Bruce

    2002-01-01

.... The diskless Linux client is organized to provide read-write files over NFS at home, read-only files over NFS for accessing bulky immutable utilities, and some volatile RAM disk files to allow the Linux kernel to boot...

  20. The Linux operating system: An introduction

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  1. A point-kernel shielding code for calculations of neutron and secondary gamma-ray 1cm dose equivalents: PKN

    Kotegawa, Hiroshi; Tanaka, Shun-ichi

    1991-09-01

A point-kernel integral technique code, PKN, and the related data library have been developed to calculate neutron and secondary gamma-ray dose equivalents in water, concrete and iron shields for neutron sources in 3-dimensional geometry. The comparison between calculational results of the present code and those of the 1-dimensional transport code ANISN-JR and the 2-dimensional transport code DOT4.2 showed sufficient accuracy, and the availability of the PKN code has been confirmed. (author)

  2. BrachyTPS -Interactive point kernel code package for brachytherapy treatment planning of gynaecological cancers

    Thilagam, L.; Subbaiah, K.V.

    2008-01-01

Brachytherapy treatment planning systems (TPS) are always recommended to account for the effect of the tissue, applicator and shielding material heterogeneities that exist in intracavitary brachytherapy (ICBT) applicators. Most commercially available brachytherapy TPS software estimates the absorbed dose at a point taking care only of the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction, so the doses they estimate are not very accurate under realistic clinical conditions. In this regard, an interactive point kernel code (BrachyTPS) has been developed to perform independent dose calculations that take the effect of these heterogeneities into account, using the two-region build-up factors proposed by Kalos. As primary input data, the code takes the patient's planning data, including the source specifications, dwell positions and dwell times, and it computes the doses at reference points by dose point kernel formalisms, with multi-layer shield build-up factors accounting for the contributions from scattered radiation. In addition to performing dose distribution calculations, this code package is capable of displaying isodose distribution curves on the patient anatomy images. The primary aim of this study is to validate the developed point kernel code, integrated with treatment planning systems, against other tools which are available in the market. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, the Board of Radiation and Isotope Technology (BRIT) low dose rate (LDR) applicator, the Fletcher Green type LDR applicator and the Fletcher Williamson high dose rate (HDR) applicator, were studied to test the accuracy of the software

  3. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating the reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to that of the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
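
A common Gaussian kernel of the Log-Euclidean family, presumably of the kind used here, compares two covariance (symmetric positive definite) matrices X and Y through their matrix logarithms, with ‖·‖_F the Frobenius norm and σ a bandwidth parameter:

    k(X, Y) = exp(−‖log(X) − log(Y)‖²_F / (2σ²))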

  4. NARMER-1: a photon point-kernel code with build-up factors

    Visonneau, Thierry; Pangault, Laurence; Malouch, Fadhel; Malvagi, Fausto; Dolci, Florence

    2017-09-01

This paper presents an overview of NARMER-1, the new generation of photon point-kernel code developed by the Reactor Studies and Applied Mathematics Unit (SERMA) at the CEA Saclay Center. After a short introduction giving some historical background and the current development context of the code, the paper presents the principles implemented in the calculation and the physical quantities computed, and surveys the generic features: programming language, computer platforms, geometry package, source description, etc. Moreover, specific and recent features are also detailed: exclusion sphere, tetrahedral meshes, parallel operation. Some points about verification and validation are then presented. Finally, we present some tools that help the user with operations like visualization and pre-processing.

  5. Error quantification of the axial nodal diffusion kernel of the DeCART code

    Cho, J. Y.; Kim, K. S.; Lee, C. C.

    2006-01-01

This paper quantifies the transport effects involved in the axial nodal diffusion kernel of the DeCART code. The transport effects are itemized into three effects: the homogenization, the diffusion, and the nodal effects. A five-pin model consisting of four fuel pins and one non-fuel pin is used to quantify the transport effects. The transport effects are analyzed for three problems, the single pin (SP), guide tube (GT) and control rod (CR) problems, obtained by replacing the non-fuel pin with a fuel pin, a guide tube and a control rod pin, respectively. The homogenization and diffusion effects are estimated to be about -4 and -50 pcm for the eigenvalue, and less than 2 % for the node power. The nodal effect on the eigenvalue is evaluated to be about -50 pcm in the SP and GT problems, and +350 pcm in the CR problem. Regarding the node power, this effect induces about a 3 % error in the SP and GT problems, and about a 20 % error in the CR problem. The large power error in the CR problem is due to the plane thickness, and it can be decreased by using an adaptive plane size. From the error quantification, it is concluded that the homogenization and the diffusion effects are not controllable if DeCART maintains the diffusion kernel for the axial solution, but the nodal effect is controllable by introducing the adaptive plane size scheme. (authors)

  6. A Quantitative Analysis of Variability Warnings in Linux

    Melo, Jean; Flesborg, Elvis; Brabrand, Claus

    2015-01-01

In order to get insight into challenges with quality in highly-configurable software, we analyze one of the largest open source projects, the Linux kernel, and quantify basic properties of configuration-related warnings. We automatically analyze more than 20 thousand valid and distinct random configurations, in a computation that lasted more than a month. We count and classify a total of 400,000 warnings to get insight into the distribution of warning types and the location of the warnings. We run the analysis on both a stable and an unstable version of the Linux kernel. The results show that Linux contains...

  7. Understanding Collateral Evolution in Linux Device Drivers

    Padioleau, Yoann; Lawall, Julia Laetitia; Muller, Gilles

    2006-01-01

With no tools to help in this process, collateral evolution is time consuming and error prone. In this paper, we present a qualitative and quantitative assessment of collateral evolution in Linux device driver code. We provide a taxonomy of evolutions and collateral evolutions, and use an automated patch-analysis tool that we have developed to measure the number of evolutions and collateral evolutions that affect device drivers between Linux versions 2.2 and 2.6. In particular, we find that from one version of Linux to the next, collateral evolutions can account for up to 35% of the lines modified in such code.

  8. Linux System Administration

    Adelstein, Tom

    2007-01-01

    If you're an experienced system administrator looking to acquire Linux skills, or a seasoned Linux user facing a new challenge, Linux System Administration offers practical knowledge for managing a complete range of Linux systems and servers. The book summarizes the steps you need to build everything from standalone SOHO hubs, web servers, and LAN servers to load-balanced clusters and servers consolidated through virtualization. Along the way, you'll learn about all of the tools you need to set up and maintain these working environments. Linux is now a standard corporate platform with user

  9. Linux Desktop Pocket Guide

    Brickner, David

    2005-01-01

    While Mac OS X garners all the praise from pundits, and Windows XP attracts all the viruses, Linux is quietly being installed on millions of desktops every year. For programmers and system administrators, business users, and educators, desktop Linux is a breath of fresh air and a needed alternative to other operating systems. The Linux Desktop Pocket Guide is your introduction to using Linux on five of the most popular distributions: Fedora, Gentoo, Mandriva, SUSE, and Ubuntu. Despite what you may have heard, using Linux is not all that hard. Firefox and Konqueror can handle all your web bro

  10. Beginning Ubuntu Linux

    Raggi, Emilio; Channelle, Andy; Parsons, Trevor; Van Vugt, Sander

    2010-01-01

    Ubuntu Linux is the fastest growing Linux-based operating system, and Beginning Ubuntu Linux, Fifth Edition teaches all of us - including those who have never used Linux - how to use it productively, whether you come from Windows or the Mac or the world of open source. Beginning Ubuntu Linux, Fifth Edition shows you how to take advantage of the newest Ubuntu release, Lucid Lynx. Based on the best-selling previous edition, Emilio Raggi maintains a fine balance between teaching Ubuntu and introducing new features. Whether you aim to use it in the home or in the office, you'll be introduced to th

  11. Tuning Linux to meet real time requirements

    Herbel, Richard S.; Le, Dang N.

    2007-04-01

There is a desire to use Linux in military systems. Customers are requesting that contractors use open source to the maximum possible extent in contracts. Linux is probably the operating system of choice to meet this need. It is widely used. It is free. It is royalty free, and, best of all, it is completely open source. However, there is a problem. Linux was not originally built to be a real time operating system. There are many places where interrupts can and will be blocked for an indeterminate amount of time. There have been several attempts to bridge this gap. One of them is RTLinux, which attempts to build a microkernel underneath Linux. The microkernel handles all interrupts and then passes them up to the Linux operating system. This does ensure good interrupt latency; however, it is not free [1]. Another is RTAI, which provides a similar type of interface; however, the PowerPC platform, which is widely used in the real time embedded community, was stated as "recovering" [2]. Thus this is not suited for military usage. This paper provides a method for tuning a standard Linux kernel so it can meet the real time requirements of an embedded system.
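
Independent of which real-time variant is chosen, the standard POSIX knobs for reducing the latency of a user task are a real-time scheduling class and locked memory; a minimal sketch:

    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Minimal sketch: put the calling process in the SCHED_FIFO
     * real-time class and pin its memory so page faults cannot add
     * latency. Requires root (or the corresponding capability). */
    int main(void)
    {
        struct sched_param sp = { .sched_priority = 80 };

        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");
            return 1;
        }

        /* ... time-critical loop would run here ... */
        puts("running with SCHED_FIFO priority 80, memory locked");
        return 0;
    }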

  12. 42 Variability Bugs in the Linux Kernel

    Abal, Iago; Brabrand, Claus; Wasowski, Andrzej

    2014-01-01

    Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research...... provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights into the nature and occurrence of variability bugs in a large C software system, and shows in what ways variability affects and increases the complexity of software bugs....

  13. 40 Variability Bugs in the Linux Kernel

    Abal Rivas, Iago; Brabrand, Claus; Wasowski, Andrzej

    2014-01-01

Feature-sensitive verification is a recent field that pursues the effective analysis of the exponential number of variants of a program family. Today researchers lack examples of concrete bugs induced by variability, and occurring in real large-scale software. Such a collection of bugs...... the outcome of our analysis into a database. In addition, we provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights about the nature and occurrence of variability bugs in a large C software system, and shows in what ways...

  14. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    Schumacher, F.; Friederich, W.

    2015-12-01

We present the modularized software package ASKI, a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model: one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward-code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full

  15. BLINDAGE: A neutron and gamma-ray transport code for shieldings with the removal-diffusion technique coupled with the point-kernel technique

    Fanaro, L.C.C.B.

    1984-01-01

The BLINDAGE computer code was developed for radiation transport (neutron and gamma-ray) calculations. The code uses the removal-diffusion method for neutron transport and the point-kernel technique with build-up factors for gamma rays. The results obtained with the BLINDAGE code are compared with those obtained with the ANISN and SABINE computer codes. (Author) (In Portuguese)

  16. Linux utilities cookbook

    Lewis, James Kent

    2013-01-01

A cookbook-style guide packed with examples and illustrations, it offers organized learning through recipes and step-by-step instructions. The book is designed so that you can pick exactly what you need, when you need it. Written for anyone who would like to become familiar with Linux, this book is perfect for those migrating from Windows to Linux: it will save you time and money by showing exactly how and where to begin working with Linux and troubleshooting, in easy steps.

  17. Linux Networking Cookbook

    Schroder, Carla

    2008-01-01

    If you want a book that lays out the steps for specific Linux networking tasks, one that clearly explains the commands and configurations, this is the book for you. Linux Networking Cookbook is a soup-to-nuts collection of recipes that covers everything you need to know to perform your job as a Linux network administrator. You'll dive straight into the gnarly hands-on work of building and maintaining a computer network

  18. Pro Linux System Administration

    Turnbull, James

    2009-01-01

We can all be Linux experts, provided we invest the time in learning the craft of Linux administration. Pro Linux System Administration makes it easy for small to medium-sized businesses to enter the world of zero-cost software running on Linux and covers all the distros you might want to use, including Red Hat, Ubuntu, Debian, and CentOS. Authors and systems infrastructure experts James Turnbull, Peter Lieverdink, and Dennis Matotek take a layered, component-based approach to open source business systems, while training system administrators as the builders of business infrastructure. If

  19. Ubuntu Linux toolbox

    Negus, Christopher

    2012-01-01

    This bestseller from Linux guru Chris Negus is packed with an array of new and revised material. As a longstanding bestseller, Ubuntu Linux Toolbox has taught you how to get the most out of Ubuntu, the world's most popular Linux distribution. With this eagerly anticipated new edition, Christopher Negus returns with a host of new and expanded coverage on tools for managing file systems, ways to connect to networks, techniques for securing Ubuntu systems, and a look at the latest Long Term Support (LTS) release of Ubuntu, all aimed at getting you up and running with Ubuntu Linux quickly.

  20. Minimalist's linux cluster

    Choi, Chang-Yeong; Kim, Jeong-Hyun; Kim, Seyong

    2004-01-01

    Using barebone PC components and NICs, we construct a Linux cluster which has a 2-dimensional mesh structure. This cluster has a smaller footprint, is less expensive, and uses less power than a conventional Linux cluster. Here, we report our experience in building such a machine and discuss our current lattice project on the machine.

  1. Embedded Linux platform for data acquisition systems

    Patel, Jigneshkumar J.; Reddy, Nagaraj; Kumari, Praveena; Rajpal, Rachana; Pujara, Harshad; Jha, R.; Kalappurakkal, Praveen

    2014-01-01

    Highlights: • The design and development of a data acquisition system on an FPGA-based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA-based systems. • Hardware logic IP core and its Linux device driver development for the external peripheral to interface it with the FPGA-based system. - Abstract: This scalable hardware–software system is designed and developed to explore the emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using the Open Source Embedded Linux Operating System on a programmable hardware platform such as an FPGA. The idea was to identify a platform that is customizable, flexible and scalable enough to support the data acquisition system requirements. To do this, we selected an FPGA-based reconfigurable and scalable hardware platform to design the system, with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high speed data transactions. The proposed hardware–software platform using an FPGA and Embedded Linux OS offers a single chip solution with a processor and peripherals such as an ADC interface controller, Gigabit Ethernet controller and memory controller, amongst other peripherals. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507, which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used Linux Kernel version 2.6.34 with BSP support for the ML507 platform, downloaded from the Xilinx [1] GIT server. The cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx
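
    For orientation, a Linux driver for such an ADC IP core typically exposes the hardware as a character device; the kernel-module sketch below shows the general shape under the 2.6-era API, with the register base, offsets and device name being hypothetical placeholders, not details from the paper.

        /* Minimal sketch of a character driver for a memory-mapped ADC IP
         * core.  All register addresses and names are hypothetical; the
         * real driver described in the paper is not public. */
        #include <linux/module.h>
        #include <linux/fs.h>
        #include <linux/miscdevice.h>
        #include <linux/io.h>
        #include <linux/uaccess.h>

        #define ADC_BASE 0x83C00000 /* assumed physical base of the core */
        #define ADC_DATA 0x0        /* assumed data register offset      */

        static void __iomem *adc_regs;

        static ssize_t adc_read(struct file *f, char __user *buf,
                                size_t len, loff_t *off)
        {
            u32 sample = ioread32(adc_regs + ADC_DATA); /* latch a sample */
            if (len < sizeof(sample))
                return -EINVAL;
            if (copy_to_user(buf, &sample, sizeof(sample)))
                return -EFAULT;
            return sizeof(sample);
        }

        static const struct file_operations adc_fops = {
            .owner = THIS_MODULE,
            .read  = adc_read,
        };

        static struct miscdevice adc_dev = {
            .minor = MISC_DYNAMIC_MINOR,
            .name  = "adc0",            /* hypothetical device node name */
            .fops  = &adc_fops,
        };

        static int __init adc_init(void)
        {
            adc_regs = ioremap(ADC_BASE, 0x1000); /* map the IP registers */
            if (!adc_regs)
                return -ENOMEM;
            return misc_register(&adc_dev);
        }

        static void __exit adc_exit(void)
        {
            misc_deregister(&adc_dev);
            iounmap(adc_regs);
        }

        module_init(adc_init);
        module_exit(adc_exit);
        MODULE_LICENSE("GPL");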

  2. Embedded Linux platform for data acquisition systems

    Patel, Jigneshkumar J., E-mail: jjp@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Reddy, Nagaraj, E-mail: nagaraj.reddy@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India); Kumari, Praveena, E-mail: praveena@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Rajpal, Rachana, E-mail: rachana@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Pujara, Harshad, E-mail: pujara@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Jha, R., E-mail: rjha@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Kalappurakkal, Praveen, E-mail: praveen.k@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India)

    2014-05-15

    Highlights: • The design and development of a data acquisition system on an FPGA-based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA-based systems. • Hardware logic IP core and its Linux device driver development for the external peripheral to interface it with the FPGA-based system. - Abstract: This scalable hardware–software system is designed and developed to explore the emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using the Open Source Embedded Linux Operating System on a programmable hardware platform such as an FPGA. The idea was to identify a platform that is customizable, flexible and scalable enough to support the data acquisition system requirements. To do this, we selected an FPGA-based reconfigurable and scalable hardware platform to design the system, with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high speed data transactions. The proposed hardware–software platform using an FPGA and Embedded Linux OS offers a single chip solution with a processor and peripherals such as an ADC interface controller, Gigabit Ethernet controller and memory controller, amongst other peripherals. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507, which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used Linux Kernel version 2.6.34 with BSP support for the ML507 platform, downloaded from the Xilinx [1] GIT server. The cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx

  3. Linux Server Security

    Bauer, Michael D

    2005-01-01

    Linux consistently appears high up in the list of popular Internet servers, whether it's for the Web, anonymous FTP, or general services such as DNS and delivering mail. But security is the foremost concern of anyone providing such a service. Any server experiences casual probe attempts dozens of times a day, and serious break-in attempts with some frequency as well. This highly regarded book, originally titled Building Secure Servers with Linux, combines practical advice with a firm knowledge of the technical tools needed to ensure security. The book focuses on the most common use of Linux--

  4. Setup of the development tools for a small-sized controller built in a robot using Linux

    Lee, Jae Cheol; Jun, Hyeong Seop; Choi, Yu Rak; Kim, Jae Hee

    2004-03-01

    This report explains how to set up practical development tools for robot control software programming. Well-constituted development tools make a programmer more productive and a program more reliable. We ported a proven operating system to the target board (our robot's controller). We selected open source Linux as the operating system, because it is free, reliable, flexible and widely used. First, we set up the host computer with Linux and installed a cross compiler on it. Then we ported Linux to the target board, connected it to the host computer with Ethernet, and set up NFS on both the host and the target, so that the target board can use the host computer's hard disk as its own disk. Next, we installed a gdb server on the target board and a gdb client and DDD on the host computer, so the target program can be debugged on the host computer in a graphical environment. Finally, we patched the target board's Linux kernel with another one which has real-time capability. In this way, we obtain a real-time embedded hardware controller for a robot with convenient software development tools. All source programs are edited and compiled on the host computer, and the executable code resides in the NFS-mounted directory, which can be accessed from the target board's directory. We can execute and debug the code by logging into the target through the Ethernet or the serial line

  5. Reprint of "Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency".

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-08-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating the reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and the graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method.
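
    For reference, a commonly used Log-Euclidean Gaussian kernel for symmetric positive-definite covariance matrices (our paraphrase of the standard definition, not necessarily the exact kernel of the paper) is

        k_{LE}(X, Y) = \exp\!\Big(-\frac{\lVert \log X - \log Y \rVert_F^2}{2\sigma^2}\Big),

    where log(.) is the matrix logarithm, which maps the manifold of covariance matrices to its tangent space, and ||.||_F is the Frobenius norm; sparse coding is then carried out with this kernel replacing the Euclidean inner products, and the resulting reconstruction error serves as the saliency measure.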

  6. Abstract of talk for Silicon Valley Linux Users Group

    Clanton, Sam

    2003-01-01

    The use of Linux for research at NASA Ames is discussed. Topics include: work with the Atmospheric Physics branch on software for a spectrometer to be used in the CRYSTAL-FACE mission this summer; work in the Neuroengineering Lab with code IC, including an introduction to the extension-of-the-human-senses project, the advantages of using Linux for real-time biological data processing, algorithms utilized on a Linux system, goals of the project, slides of people wearing Neuroscan caps, and the progress that has been made and how Linux has helped.

  7. A simplified computer code based on point Kernel theory for calculating radiation dose in packages of radioactive material

    1986-03-01

    A study on radiation dose control in packages of radioactive waste from nuclear facilities, hospitals and industries, such as sources of Ra-226, Co-60, Ir-192 and Cs-137, is presented. The MAPA and MAPAM computer codes, based on point-kernel theory for calculating doses for several source-shielding configurations, were developed with the aim of assuring safe transport conditions for these sources. The code was validated for point sources against the values provided by NCRP for the thicknesses of lead and concrete shieldings that limit the dose to 100 mrem/hr at several distances from the source to the detector. Validation for non-point sources was carried out by experimentally measuring the radiation dose from packages developed by the Brazilian CNEN/S.P. for removing the sources. (M.C.K.) [pt

  8. Kali Linux CTF blueprints

    Buchanan, Cameron

    2014-01-01

    Taking a highly practical approach and a playful tone, Kali Linux CTF Blueprints provides step-by-step guides to setting up vulnerabilities, in-depth guidance on exploiting them, and a variety of advice and ideas for building and customising your own challenges. If you are a penetration testing team leader or individual who wishes to challenge yourself or your friends in the creation of penetration testing assault courses, this is the book for you. The book assumes a basic level of penetration skills and familiarity with the Kali Linux operating system.

  9. Linux Security Cookbook

    Barrett, Daniel J; Byrnes, Robert G

    2003-01-01

    Computer security is an ongoing process, a relentless contest between system administrators and intruders. A good administrator needs to stay one step ahead of any adversaries, which often involves a continuing process of education. If you're grounded in the basics of security, however, you won't necessarily want a complete treatise on the subject each time you pick up a book. Sometimes you want to get straight to the point. That's exactly what the new Linux Security Cookbook does. Rather than provide a total security solution for Linux computers, the authors present a series of easy-to-follow

  10. PERCEVAL v4.0: a new PC code for gamma radiation studies based upon most recent development about point kernel

    Bindel, Laurent; Clouet, Laurent; Castanier, Eric; Bonnet, Jerome; Fleury, Guillaume; Vermuse, Manuel; Gamess, Andre; Lejeune, Eric [Societe Generale pour les techniques Nouvelles, Saint Quentin en Yvelines (France)

    2000-03-01

    The present paper presents the capabilities of a new code named PERCEVAL v4.0, based on the new point kernel described in an accompanying issue. Two linked codes named SPECTRE_G and GRAPH_3D are part of the code package, in order to establish the energetic source term and to visualize the three-dimensional scene, respectively. (author)

  11. Kali Linux social engineering

    Singh, Rahul

    2013-01-01

    This book is a practical, hands-on guide to learning and performing SET attacks with multiple examples. Kali Linux Social Engineering is for penetration testers who want to use BackTrack in order to test for social engineering vulnerabilities, or for those who wish to master the art of social engineering attacks.

  12. Embedded Linux in het onderwijs; Embedded Linux in education

    Dr Ruud Ermers

    2008-01-01

    Embedded Linux is being introduced as the embedded operating system at more and more large companies. Within the Technical Informatics programme of Fontys Hogeschool ICT, Embedded Linux has been introduced in cooperation with the Architecture of Embedded Systems research group. As a field, Embedded Linux is

  13. BSD Portals for LINUX 2.0

    McNab, A. David; Woo, Alex (Technical Monitor)

    1999-01-01

    Portals, an experimental feature of 4.4BSD, extend the file system name space by exporting certain open() requests to a user-space daemon. A portal daemon is mounted into the file name space as if it were a standard file system. When the kernel resolves a pathname and encounters a portal mount point, the remainder of the path is passed to the portal daemon. Depending on the portal "pathname" and the daemon's configuration, some type of open(2) is performed. The resulting file descriptor is passed back to the kernel, which eventually returns it to the user, to whom it appears that a "normal" open has occurred. A proxy portalfs file system is responsible for kernel interaction with the daemon. The overall effect is that the portal daemon performs an open(2) on behalf of the kernel, possibly hiding substantial complexity from the calling process. One particularly useful application is implementing a connection service that allows simple scripts to open network sockets. This paper describes the implementation of portals for LINUX 2.0.
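
    The central mechanism, handing an already-open descriptor to another process, is the same one a user-space daemon uses over Unix-domain sockets; the generic POSIX sketch below (not code from the portal implementation) shows the sending side.

        /* Send an open file descriptor over a Unix-domain socket with
         * SCM_RIGHTS -- the user-space analogue of a portal daemon handing
         * an opened descriptor back.  Generic POSIX sketch, not portal code. */
        #include <sys/socket.h>
        #include <sys/uio.h>
        #include <string.h>

        int send_fd(int sock, int fd)
        {
            char dummy = '*';                     /* must send >= 1 byte */
            struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
            char ctrl[CMSG_SPACE(sizeof(int))];
            struct msghdr msg = {
                .msg_iov = &iov, .msg_iovlen = 1,
                .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
            };
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type  = SCM_RIGHTS;        /* payload is an fd */
            cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
            return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
        }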

  14. A real-time data transmission method based on Linux for physical experimental readout systems

    Cao Ping; Song Kezhu; Yang Junfeng

    2012-01-01

    In a typical physical experimental instrument, such as a fusion or particle physics application, the readout system generally implements an interface between the data acquisition (DAQ) system and the front-end electronics (FEE). The key task of a readout system is to read, pack, and forward the data from the FEE to the back-end data concentration center in real time. To guarantee real-time performance, the VxWorks operating system (OS) is widely used in readout systems. However, VxWorks is not an open-source OS, which gives it many disadvantages. With the development of multi-core processors and new scheduling algorithms, Linux exhibits performance in real-time applications similar to that of VxWorks, and it has been used successfully even for some hard real-time systems. Discussions and evaluations of real-time Linux solutions as a possible replacement for VxWorks therefore arise naturally. In this paper, a real-time transmission method based on Linux is introduced. To reduce the number of transfer cycles for large amounts of data, a large block of contiguous memory is allocated as a DMA transfer buffer by slightly modifying the Linux kernel (version 2.6) source code. To increase the throughput for network transmission, the user software is designed to run in parallel. To achieve high performance in real-time data transfer from hardware to software, memory-mapping techniques must be used to avoid unnecessary data copying. A simplified readout system is implemented with 4 readout modules in a PXI crate. This system can support up to 48 MB/s data throughput from the front-end hardware to the back-end concentration center through a Gigabit Ethernet connection. There are no restrictions, hardware or software, on the use of this method, which means that it can be easily migrated to other interrupt-related applications.
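
    From the user side, the zero-copy step described above typically looks like the sketch below; the device node name and buffer size are hypothetical examples, not values from the paper.

        /* Map a driver-allocated contiguous DMA buffer into user space so
         * the readout task consumes data without extra copies.  The device
         * node name and buffer size are hypothetical examples. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/mman.h>

        #define BUF_SIZE (64 * 1024 * 1024) /* assumed DMA buffer size */

        int main(void)
        {
            int fd = open("/dev/readout0", O_RDONLY); /* hypothetical node */
            if (fd < 0) { perror("open"); return 1; }

            /* The driver's mmap handler remaps its contiguous buffer here,
             * so reads below touch the DMA memory directly (no copying). */
            void *buf = mmap(NULL, BUF_SIZE, PROT_READ, MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            /* ... hand 'buf' to the parallel sender threads ... */

            munmap(buf, BUF_SIZE);
            close(fd);
            return 0;
        }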

  15. Communication to Linux users

    IT Department

    We would like to inform you that the aging “phone” Linux command will stop working on lxplus on 30 November 2009 and on lxbatch on 4 January 2010; it is replaced by the new “phonebook” command, currently available on SLC4 and SLC5 Linux. As the new “phonebook” command has different syntax and output formats from the “phone” command, please update and test all scripts currently using “phone” before the above dates. You can refer to the article published on the IT Service Status Board, under the Service Changes section. Please send any comments to it-dep-phonebook-feedback@cern.ch Best regards, IT-UDS User Support Section

  16. Calculation of electron and isotopes dose point kernels with FLUKA Monte Carlo code for dosimetry in nuclear medicine therapy

    Mairani, A; Valente, M; Battistoni, G; Botta, F; Pedroli, G; Ferrari, A; Cremonesi, M; Di Dia, A; Ferrari, M; Fasso, A

    2011-01-01

    Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic...
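
    For orientation, DPKs are often reported in scaled form; one standard convention (stated here for the reader, not necessarily the exact normalization used in the paper) is

        F(\xi) = \frac{4\pi \rho r^2 R_{CSDA}\, D(r)}{E_0}, \qquad \xi = r / R_{CSDA},

    where D(r) is the absorbed dose per emitted particle at radius r in a medium of density \rho and E_0 is the initial electron energy, so that the integral of F over \xi gives the locally absorbed fraction of E_0.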

  17. Comparison of electron dose-point kernels in water generated by the Monte Carlo codes, PENELOPE, GEANT4, MCNPX, and ETRAN.

    Uusijärvi, Helena; Chouin, Nicolas; Bernhardt, Peter; Ferrer, Ludovic; Bardiès, Manuel; Forssell-Aronsson, Eva

    2009-08-01

    Point kernels describe the energy deposited at a certain distance from an isotropic point source and are useful for nuclear medicine dosimetry. They can be used for absorbed-dose calculations for sources of various shapes and are also a useful tool when comparing different Monte Carlo (MC) codes. The aim of this study was to compare point kernels calculated by using the mixed MC code, PENELOPE (v. 2006), with point kernels calculated by using the condensed-history MC codes, ETRAN, GEANT4 (v. 8.2), and MCNPX (v. 2.5.0). Point kernels for electrons with initial energies of 10, 100, and 500 keV and 1 MeV were simulated with PENELOPE. Spherical shells were placed around an isotropic point source at distances from 0 to 1.2 times the continuous-slowing-down-approximation range (R(CSDA)). Detailed (event-by-event) simulations were performed for electrons with initial energies of less than 1 MeV. For 1-MeV electrons, multiple scattering was included for energy losses less than 10 keV. Energy losses greater than 10 keV were simulated in a detailed way. The point kernels generated were used to calculate cellular S-values for monoenergetic electron sources. The point kernels obtained by using PENELOPE and ETRAN were also used to calculate cellular S-values for the high-energy beta-emitter, 90Y, the medium-energy beta-emitter, 177Lu, and the low-energy electron emitter, 103mRh. These S-values were also compared with the Medical Internal Radiation Dose (MIRD) cellular S-values. The mean difference between the cellular S-values for monoenergetic electrons was 1.4%, 2.5%, and 6.9% for ETRAN, GEANT4, and MCNPX, respectively, compared to PENELOPE, omitting the S-values for activity distributed on the cell surface for 10-keV electrons. The largest difference between the cellular S-values for the radionuclides, between PENELOPE and ETRAN, was seen for 177Lu (1.2%). There were large differences between the MIRD cellular S-values and those obtained from

  18. Neutron shielding point kernel integral calculation code for personal computer: PKN-pc

    Kotegawa, Hiroshi; Sakamoto, Yukio; Nakane, Yoshihiro; Tomita, Ken-ichi; Kurosawa, Naohiro.

    1994-07-01

    A personal computer version of the PKN code, PKN-pc, has been developed to calculate neutron and secondary gamma-ray 1 cm depth dose equivalents in water, ordinary concrete and iron for neutron sources. The characteristics of the PKN code are the ability to calculate dose equivalents in multi-layer three-dimensional systems, described with two-dimensional surfaces, for monoenergetic neutron sources from 0.01 to 14.9 MeV as well as 252Cf fission and 241Am-Be neutron sources, quickly and easily. In addition to these features, PKN-pc processes interactive input and produces graphical system configurations and graphical results easily. (author)

  19. Kali Linux cookbook

    Pritchett, Willie

    2013-01-01

    A practical, cookbook-style guide with numerous chapters and recipes explaining penetration testing. The cookbook-style recipes allow you to go directly to your topic of interest if you are an expert using this book as a reference, or to follow topics throughout a chapter to gain in-depth knowledge if you are a beginner. This book is ideal for anyone who wants to get up to speed with Kali Linux. It would also be an ideal book to use as a reference for seasoned penetration testers.

  20. Radiation transport simulation in gamma irradiator systems using EGS4 Monte Carlo code and dose mapping calculations based on point kernel technique

    Raisali, G.R.

    1992-01-01

    A series of computer codes based on the point-kernel technique and also on the Monte Carlo method have been developed. These codes perform radiation transport calculations for irradiator systems having Cartesian, cylindrical and mixed geometries. For the Monte Carlo calculations, the computer code EGS4 has been applied to a radiation-processing-type problem, accompanied by a specific user code. The set of codes developed includes GCELLS, DOSMAPM, and DOSMAPC2, which simulate the radiation transport in gamma irradiator systems having cylindrical, Cartesian, and mixed geometries, respectively. The program DOSMAP3, based on the point-kernel technique, has also been developed for dose rate mapping calculations in carrier-type gamma irradiators. Another computer program, CYLDETM, a user code for EGS4, has also been developed to simulate dose variations near the interface of heterogeneous media in gamma irradiator systems. In addition, a system of computer codes, PRODMIX, has been developed which calculates the absorbed dose in products with different densities. Validation studies of the calculated results against experimental dosimetry have been performed and good agreement has been obtained

  1. Calculation of electron and isotopes dose point kernels with FLUKA Monte Carlo code for dosimetry in nuclear medicine therapy.

    Botta, F; Mairani, A; Battistoni, G; Cremonesi, M; Di Dia, A; Fassò, A; Ferrari, A; Ferrari, M; Paganelli, G; Pedroli, G; Valente, M

    2011-07-01

    The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. The FLUKA outcomes have been compared to PENELOPE v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (ETRAN, GEANT4, MCNPX) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous-slowing-down-approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%-97% of the particle energy is deposited), FLUKA and PENELOPE agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The

  2. Developing and Benchmarking Native Linux Applications on Android

    Batyuk, Leonid; Schmidt, Aubrey-Derrick; Schmidt, Hans-Gunther; Camtepe, Ahmet; Albayrak, Sahin

    Smartphones are becoming increasingly popular as more and more smartphone platforms emerge. Special attention has been gained by the open source platform Android, which was presented by the Open Handset Alliance (OHA), whose members include Google, Motorola, and HTC. Android uses a Linux kernel and a stripped-down userland with a custom Java VM set on top. The resulting system joins the advantages of both environments, while third parties are intended to develop only Java applications at the moment.

  3. Diskless Linux Cluster How-To

    Shumaker, Justin L

    2005-01-01

    Diskless Linux clustering is not yet a turn-key solution. The process of configuring a cluster of diskless Linux machines requires many modifications to the stock Linux operating system before the nodes can boot cleanly...

  4. Getting priorities straight: improving Linux support for database I/O

    Hall, Christoffer; Bonnet, Philippe

    2005-01-01

    The Linux 2.6 kernel supports asynchronous I/O as a result of propositions from the database industry. This is a positive evolution but is it a panacea? In the context of the Badger project, a collaboration between MySQL AB and University of Copenhagen, ...

  5. PMICALC: an R code-based software for estimating post-mortem interval (PMI) compatible with Windows, Mac and Linux operating systems.

    Muñoz-Barús, José I; Rodríguez-Calvo, María Sol; Suárez-Peñaranda, José M; Vieira, Duarte N; Cadarso-Suárez, Carmen; Febrero-Bande, Manuel

    2010-01-30

    In legal medicine the correct determination of the time of death is of utmost importance. Recent advances in estimating the post-mortem interval (PMI) have made use of vitreous humour chemistry in conjunction with linear regression, but the results are questionable. In this paper we present PMICALC, an R-code-based freeware package which estimates the PMI in cadavers of recent death by measuring the concentrations of potassium ([K+]), hypoxanthine ([Hx]) and urea ([U]) in the vitreous humor, using two different regression models: Additive Models (AM) and the Support Vector Machine (SVM), which offer more flexibility than the previously used linear regression. The results from both models are better than those published to date and can give a numerical expression of the PMI with confidence intervals and graphic support within 20 min. The program also takes into account the cause of death.

  6. Real time kernel performance monitoring with SystemTap

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  7. Using Vega Linux Cluster at Reactor Physics Dept

    Zefran, B.; Jeraj, R.; Skvarc, J.; Glumac, B.

    1999-01-01

    Experience using a Linux-based cluster for reactor physics calculations is presented in this paper. Special attention is paid to the MCNP code in this environment and to practical guidelines on how to prepare and use the parallel version of the code. Our results of a time comparison study are presented for two sets of inputs. The results are promising, and the speedup factor achieved on the Linux cluster agrees with previous tests on other parallel systems. We also tested tools for the parallelization of other programs used at our department. (author)

  8. Shielding calculations and collective dose estimations with the point-kernel code VISIPLAN® for the example of the project ZENT

    Boehlke, S.; Niegoth, H.

    2012-01-01

    In the nuclear power plant Leibstadt (KKL), large components will be dismantled during the next years and stored for final disposal within the interim storage facility ZENT at the NPP site. Before the construction of ZENT, appropriate estimations of the local dose rate inside and outside the building and of the collective dose for normal operation have to be performed. The shielding calculations are based on the properties of the stored components and radiation sources and on the concepts for working place requirements. The installation of control and monitoring areas will depend on these calculations. For the determination of the shielding potential of concrete walls and steel doors with the defined boundary conditions, point-kernel codes like MICROSHIELD® are used. Complex problems cannot be modeled with this code. Therefore the point-kernel code VISIPLAN® was developed for the determination of the local dose distribution functions in 3D models. The possibility of motion sequence inputs allows an optimization of collective dose estimations for the operational phases of a nuclear facility.

  9. Locally orderless registration code

    2012-01-01

    This is the code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64 bit on Mac, Linux and Windows.

  10. Enabling rootless Linux containers in multi-user environments. The udocker tool

    Gomes, Jorge; David, Mario; Alves, Luis; Martins, João; Pina, João [Laboratorio de Instrumentacao e Fisica Experimental de Particulas (LIP), Lisboa (Portugal); Bagnaschi, Emanuele [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Campos, Isabel; Lopez-Garcia, Alvaro; Orviz, Pablo [IFCA, Consejo Superior de Investigaciones Cientificas-CSIC, Santander (Spain)

    2017-11-15

    Containers are increasingly used as a means to distribute and run Linux services and applications. In this paper we describe the architectural design and implementation of udocker, a tool to execute Linux containers in user mode, and we describe a few practical applications for a range of scientific codes meeting different requirements: from single-core execution to MPI parallel execution and execution on GPGPUs.

  11. Enabling rootless Linux containers in multi-user environments. The udocker tool

    Gomes, Jorge; David, Mario; Alves, Luis; Martins, João; Pina, João; Bagnaschi, Emanuele; Campos, Isabel; Lopez-Garcia, Alvaro; Orviz, Pablo

    2017-11-01

    Containers are increasingly used as a means to distribute and run Linux services and applications. In this paper we describe the architectural design and implementation of udocker, a tool to execute Linux containers in user mode, and we describe a few practical applications for a range of scientific codes meeting different requirements: from single-core execution to MPI parallel execution and execution on GPGPUs.

  12. SmPL: A Domain-Specific Language for Specifying Collateral Evolutions in Linux Device Drivers

    Padioleau, Yoann; Lawall, Julia Laetitia; Muller, Gilles

    2007-01-01

    identifying the affected files and modifying all of the code fragments in these files that in some way depend on the changed interface. We have studied the collateral evolution problem in the context of Linux device drivers. Currently, collateral evolutions in Linux are mostly done manually using a text...

  13. Implementation of the On-the-fly Encryption for the Linux OS Based on Certified CPS

    Alexander Mikhailovich Korotin

    2013-02-01

    Full Text Available The article is devoted to tools for on-the-fly encryption and a method to implement such a tool for the Linux OS based on a certified CPS. The idea is to modify the existing tool named eCryptfs. Russian cryptographic algorithms will be used in the user and kernel modes.

  14. Channel Bonding in Linux Ethernet Environment using Regular Switching Hub

    Chih-wen Hsueh

    2004-06-01

    Full Text Available Bandwidth plays an important role in the quality of service of most network systems. Many technologies have been developed to increase host bandwidth in a LAN environment, and most of them need special hardware support, such as a switching hub that supports the IEEE Link Aggregation standard. In this paper, we propose a Linux solution to increase the bandwidth between hosts with multiple network adapters connected to a regular switching hub. The approach is implemented as two Linux kernel modules in a LAN environment, without modification to the hardware or to the operating systems on the host machines. Packets are dispatched to the bonding network adapters for transmission. The proposed approach is backward compatible, flexible and transparent to users, and only one IP address is needed for multiple bonding network adapters. Evaluation experiments in TCP and UDP transmission show bandwidth gains proportional to the number of network adapters. The approach is suitable for large-scale LAN systems with high bandwidth requirements, such as clustering systems.

  15. Analysing the Linux kernel feature model changes using FMDiff

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistently updating both the implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  16. Analysing the Linux kernel feature model changes using FMDiff

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistently updating both the implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  17. Linux all-in-one for dummies

    Dulaney, Emmett

    2014-01-01

    Eight minibooks in one volume cover every important aspect of Linux and everything you need to know to pass level-1 certification. Linux All-in-One For Dummies explains everything you need to get up and running with the popular Linux operating system. Written in the friendly and accessible For Dummies style, the book is ideal for new and intermediate Linux users, as well as anyone studying for level-1 Linux certification. The eight minibooks inside cover the basics of Linux, interacting with it, networking issues, Internet services, administration, security, scripting, and level-1 certification. C

  18. Parallelization characteristics of a three-dimensional whole-core code DeCART

    Cho, J. Y.; Joo, H.K.; Kim, H. Y.; Lee, J. C.; Jang, M. H.

    2003-01-01

    Neutron transport calculation for a three-dimensional whole core requires not only a huge amount of computing time but also huge memory. Therefore, whole-core codes such as DeCART need both parallel computation and distributed memory capabilities. This paper implements such parallel capabilities, based on MPI grouping and memory distribution, in the DeCART code, and then evaluates the performance by solving the C5G7 three-dimensional benchmark and a simplified three-dimensional SMART core problem. In the C5G7 problem with 24 CPUs, a maximum speedup of 22 is obtained on an IBM Regatta machine and of 21 on a Linux cluster for the MOC kernel, which indicates good parallel performance of the DeCART code. The simplified SMART problem, which needs about 11 GBytes of memory with one processor, requires only about 940 MBytes per processor when the memory is distributed, which means that the DeCART code can now solve large core problems on affordable Linux clusters
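
    As a rough sketch of the MPI grouping and memory distribution idea (generic MPI code, not DeCART source), the ranks can be split into plane groups with MPI_Comm_split, with each rank allocating only its share of the problem:

        /* Generic sketch of MPI grouping for domain decomposition: split
         * MPI_COMM_WORLD into plane groups so each rank stores only its own
         * slice of the core.  Illustration only, not DeCART source code. */
        #include <mpi.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            int nplanes = 4;              /* assumed number of plane groups */
            int plane   = rank % nplanes; /* group this rank belongs to     */

            MPI_Comm plane_comm;          /* communicator of one group      */
            MPI_Comm_split(MPI_COMM_WORLD, plane, rank, &plane_comm);

            /* Distributed memory: each rank holds only its share of the
             * flux array, so per-process memory shrinks roughly as 1/size. */
            long cells_total = 1000000;   /* assumed global mesh size */
            long cells_local = cells_total / size + 1;
            double *flux = calloc(cells_local, sizeof(double));

            /* ... MOC sweeps inside plane_comm, axial coupling across groups ... */

            free(flux);
            MPI_Comm_free(&plane_comm);
            MPI_Finalize();
            return 0;
        }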

  19. Analyzing Security-Enhanced Linux Policy Specifications

    Archer, Myla

    2003-01-01

    NSA's Security-Enhanced (SE) Linux enhances Linux by providing a specification language for security policies and a Flask-like architecture with a security server for enforcing policies defined in the language...

  20. Shielding calculations and collective dose estimations with the point-kernel code VISIPLAN® for the example of the project ZENT; Abschirmberechnungen und Kollektivdosisabschaetzungen mit dem Punkt-Kern-Code VISIPLAN® am Beispiel des Projektes ZENT

    Boehlke, S.; Niegoth, H. [STEAG Energy Services GmbH, Essen (Germany). Nuclear Technologies; Stalder, I. [Kernkraftwerk Leibstadt AG, Leibstadt (Switzerland)

    2012-11-01

    In the nuclear power plant Leibstadt (KKL), large components will be dismantled during the next years and stored for final disposal within the interim storage facility ZENT at the NPP site. Before the construction of ZENT, appropriate estimations of the local dose rate inside and outside the building and of the collective dose for normal operation have to be performed. The shielding calculations are based on the properties of the stored components and radiation sources and on the concepts for working place requirements. The installation of control and monitoring areas will depend on these calculations. For the determination of the shielding potential of concrete walls and steel doors with the defined boundary conditions, point-kernel codes like MICROSHIELD® are used. Complex problems cannot be modeled with this code. Therefore the point-kernel code VISIPLAN® was developed for the determination of the local dose distribution functions in 3D models. The possibility of motion sequence inputs allows an optimization of collective dose estimations for the operational phases of a nuclear facility.

  1. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Daniel Pierre Bovet

    2008-01-01

    Full Text Available Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever-increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high-responsiveness, open-source hard real-time operating system for multiprocessor systems, capable of providing high real-time performance while keeping the code simple and not impacting the performance of the rest of the system. Moreover, ASMP-LINUX does not require code changes or application recompiling/relinking. In order to assess the performance of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  2. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Betti Emiliano

    2008-01-01

    Full Text Available Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever-increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high-responsiveness, open-source hard real-time operating system for multiprocessor systems, capable of providing high real-time performance while keeping the code simple and not impacting the performance of the rest of the system. Moreover, ASMP-LINUX does not require code changes or application recompiling/relinking. In order to assess the performance of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  3. Super computer made with Linux cluster

    Lee, Jeong Hun; Oh, Yeong Eun; Kim, Jeong Seok

    2002-01-01

    This book consists of twelve chapters, which introduce supercomputers made with Linux clusters. The contents of this book are: Linux clusters, the principles of clusters, the design of a Linux cluster, general things for Linux, building up a terminal server and client, a Beowulf cluster with Debian GNU/Linux, a cluster system with Red Hat, monitoring systems, application programming with MPI, set-up and installation of application programming with PVM, PVM programming and XPVM, application programming with openPBS including its composition, installation and set-up, and GRID with the GRID system, GSI, GRAM, MDS, their installation and the use of the toolkit

  4. Linux Command Line and Shell Scripting Bible

    Blum, Richard

    2011-01-01

    The authoritative guide to Linux command line and shell scripting, completely updated and revised (it's not a guide to Linux as a whole, just to scripting). The Linux command line allows you to type specific Linux commands directly to the system so that you can easily manipulate files and query system resources, thereby permitting you to automate commonly used functions and even schedule those programs to run automatically. This new edition is packed with new and revised content, reflecting the many changes to new Linux versions, including coverage of alternative shells to the default bash shel

  5. ARTiS, an Asymmetric Real-Time Scheduler for Linux on Multi-Processor Architectures

    Piel, Éric; Marquet, Philippe; Soula, Julien; Osuna, Christophe; Dekeyser, Jean-Luc

    2005-01-01

    The ARTiS system is a real-time extension of the GNU/Linux scheduler dedicated to SMP (Symmetric Multi-Processor) systems. It allows mixing High Performance Computing and real-time workloads. ARTiS exploits the SMP architecture to guarantee the preemption of a processor when the system has to schedule a real-time task. The implementation is available as a modification of the Linux kernel, especially focusing on (but not restricted to) the IA-64 architecture. The basic idea of ARTiS is to assign a selected set
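
    The effect ARTiS achieves inside the scheduler can be approximated from user space on a stock kernel, which helps make the idea concrete; the sketch below uses standard Linux calls and is not ARTiS code.

        /* Pin the calling task to one reserved CPU and give it a FIFO
         * real-time priority -- a user-space approximation of dedicating
         * processors to RT tasks.  Concept illustration only; ARTiS itself
         * works inside the kernel scheduler. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(1, &set);             /* CPU 1 assumed reserved for RT */
            if (sched_setaffinity(0, sizeof(set), &set) != 0)
                perror("sched_setaffinity");

            struct sched_param sp = { .sched_priority = 50 };
            if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
                perror("sched_setscheduler"); /* needs CAP_SYS_NICE/root */

            /* ... latency-sensitive loop runs here, shielded from
             *     best-effort load on the other processors ... */
            return 0;
        }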

  6. Membangun Sistem Linux Mandrake Minimal Menggunakan Inisial Disk Ram

    Wagito, Wagito

    2006-01-01

    A minimal Linux system is commonly used for special systems like routers, gateways, Linux installers and diskless Linux systems. A minimal Linux system is a Linux system that uses only a few of all the Linux capabilities. Mandrake Linux, as one Linux distribution, is able to provide a minimal Linux system. RAM is a computer resource that is especially used as main memory. Part of RAM's function can be changed into a disk, called a RAM disk. This RAM disk can be used to run the Linux system. This

  7. MEMBANGUN SISTEM LINUX MANDRAKE MINIMAL MENGGUNAKAN INISIAL DISK RAM

    Wagito, Wagito

    2009-01-01

    A minimal Linux system is commonly used for special systems like routers, gateways, Linux installers and diskless Linux systems. A minimal Linux system is a Linux system that uses only a few of all the Linux capabilities. Mandrake Linux, as one Linux distribution, is able to provide a minimal Linux system. RAM is a computer resource that is especially used as main memory. Part of RAM's function can be changed into a disk, called a RAM disk. This RAM disk can be used to run the Linux system...

  8. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    Radhakrishnan, G.

    2003-01-01

    Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm³ and 3.6 g/cm³ are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometry, comes in handy for the shield design optimization of peripheral shields. QAD-CGGP requires a database for the buildup factor data, and it contains only ordinary concrete of density 2.3 g/cm³. In order to extend the database to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV, with the thickness ranging from 1 mean free path (mfp) to 40 mfp. A fit of the buildup factor data to Capo's formula, compatible with QAD-CGGP, has been attempted

  9. Climate tools in mainstream Linux distributions

    McKinstry, Alastair

    2015-04-01

    Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that are normally ignorable. How tools are hardened, and what changes are required to handle security upgrades, are described. Secondly, enabling libraries and components (e.g. Python modules) to be integrated requires planning by their writers: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.

  10. Research and implementation of intelligent gateway driver layer based on Linux bus

    ZHANG Jian

    2016-10-01

    Full Text Available Currently, in the field of smart homes, no organization has yet proposed a unified protocol standard. That different vendors' devices have different communication modes and protocol standards increases the complexity and the limitations of heterogeneous gateway software framework design. In this paper, a series of interfaces provided by the Linux kernel is used, and a virtual bus is registered under Linux. The physical device drivers are able to connect to the virtual bus. The detailed designs of the communication protocols are placed in the underlying adapters, making the integration of heterogeneous networks more natural. At the same time, designing the intelligent gateway system driver layer on a Linux bus makes the application layer more unified and logically clear, and makes hardware access to the network more convenient and distinct.
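
    On the kernel side, registering such a virtual bus amounts to filling in a bus_type and calling bus_register(); the minimal sketch below uses invented names and is a generic illustration, not the gateway's driver code.

        /* Minimal registration of a virtual bus for protocol adapters.
         * Generic kernel-module sketch with invented names; not the
         * gateway code from the paper. */
        #include <linux/module.h>
        #include <linux/device.h>
        #include <linux/string.h>

        static int home_match(struct device *dev, struct device_driver *drv)
        {
            /* Bind device and driver when their names agree; real adapters
             * would match on vendor or protocol identifiers instead. */
            return strcmp(dev_name(dev), drv->name) == 0;
        }

        static struct bus_type home_bus = {
            .name  = "homebus",
            .match = home_match,
        };

        static int __init home_bus_init(void)
        {
            return bus_register(&home_bus); /* shows up as /sys/bus/homebus */
        }

        static void __exit home_bus_exit(void)
        {
            bus_unregister(&home_bus);
        }

        module_init(home_bus_init);
        module_exit(home_bus_exit);
        MODULE_LICENSE("GPL");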

  11. Quality of service on Linux for the Atlas TDAQ event building network

    Yasu, Y.; Manabe, A.; Fujii, H.; Watase, Y.; Nagasaka, Y.; Hasegawa, Y.; Shimojima, M.; Nomachi, M.

    2001-01-01

    Congestion control for packets sent on a network is important for DAQ systems that contain an event builder using switching network technologies. Quality of Service (QoS) is a technique for congestion control. Recent Linux releases provide QoS in the kernel to manage network traffic. The authors have analyzed the packet loss and packet distribution for the event builder prototype of the Atlas TDAQ system, using PC/Linux with a Gigabit Ethernet network as the testbed. The results showed that QoS using CBQ and TBF eliminated packet loss on UDP/IP transfers, while best-effort UDP/IP transfers suffered heavy packet loss. The results also showed that the QoS overhead was small. The authors conclude that QoS on Linux performs efficiently for TCP/IP and UDP/IP and will play an important role in the Atlas TDAQ system
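
    On the sending side, an application can steer its traffic into such CBQ/TBF classes by tagging each socket's packets; the sketch below is a generic Linux socket-API illustration, not the Atlas testbed code.

        /* Tag a UDP socket so kernel queueing disciplines (e.g. CBQ classes
         * configured with tc) can classify its traffic.  Generic sketch,
         * not the Atlas event-builder code. */
        #include <sys/socket.h>
        #include <stdio.h>

        int make_tagged_socket(int prio)
        {
            int s = socket(AF_INET, SOCK_DGRAM, 0);
            if (s < 0) { perror("socket"); return -1; }
            /* SO_PRIORITY sets skb->priority, which qdisc filters match on. */
            if (setsockopt(s, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
                perror("setsockopt(SO_PRIORITY)");
            return s;
        }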

  12. Reuse of the compact nuclear simulator software under PC with Linux

    Cha, K. H.; Park, J. C.; Kwon, K. C.; Lee, G. Y.

    2000-01-01

    This study set out to reuse the source programs of a nuclear simulator on a PC with Open Source Software (OSS) and to extend their applicability. The source programs of the Compact Nuclear Simulator (CNS), which has been operated for institutional research and training in KAERI, were reused and implemented in a Linux PC environment with the aim of supporting the study. A PC with a 500 MHz processor and the Linux 2.2.5-22 kernel was utilized for the reuse implementation, which was investigated for some applications through functional testing of its main functions as interfaced with the compact control panels of the current CNS. The development and upgrade of small-scale simulators, the establishment of process simulation on PCs, and the development of prototype predictive simulation can effectively be enabled with this experience, though the reuse implementation was limited to porting only the CNS programs to a PC with Linux

  13. Kali Linux assuring security by penetration testing

    Ali, Shakeel; Allen, Lee

    2014-01-01

    Written as an interactive tutorial, this book covers the core of Kali Linux with real-world examples and step-by-step instructions to provide professional guidelines and recommendations for you. The book is designed in a simple and intuitive manner that allows you to explore the whole Kali Linux testing process or study parts of it individually. If you are an IT security professional who has a basic knowledge of Unix/Linux operating systems, including an awareness of information security factors, and want to use Kali Linux for penetration testing, then this book is for you.

  14. AliEnFS - a Linux File System for the AliEn Grid Services

    Peters, Andreas J.; Saiz, P.; Buncic, P.

    2003-01-01

    Among the services offered by the AliEn (ALICE Environment, http://alien.cern.ch) Grid framework there is a virtual file catalogue to allow transparent access to distributed data sets using various file transfer protocols. alienfs (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user-space file system framework (Open Source, http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual F...

  15. The Free Software Movement and the GNU/Linux Operating System

    CERN. Geneva

    2003-01-01

    Richard Stallman will speak about the purpose, goals, philosophy, methods, status, and future prospects of the GNU operating system, which in combination with the kernel Linux is now used by an estimated 17 to 20 million users worldwide. Biography: Richard Stallman is the founder of the GNU Project, launched in 1984 to develop the free operating system GNU (an acronym for ''GNU's Not Unix''), and thereby give computer users the freedom that most of them have lost. GNU is free software: everyone is free to copy it and redistribute it, as well as to make changes either large or small. Today, Linux-based variants of the GNU system, based on the kernel Linux developed by Linus Torvalds, are in widespread use. There are estimated to be some 20 million users of GNU/Linux systems today. Richard Stallman is the principal author of the GNU Compiler Collection, a portable optimizing compiler which was designed to support diverse architectures and multiple languages. The compiler now supports over 30 different architect...

  16. Kali Linux wireless penetration testing essentials

    Alamanni, Marco

    2015-01-01

    This book is targeted at information security professionals, penetration testers and network/system administrators who want to get started with wireless penetration testing. No prior experience with Kali Linux and wireless penetration testing is required, but familiarity with Linux and basic networking concepts is recommended.

  17. Getting Priorities Straight: Improving Linux Support for Database I/O

    Hall, Christoffer; Bonnet, Philippe

    2005-01-01

    The Linux 2.6 kernel supports asynchronous I/O as a result of propositions from the database industry. This is a positive evolution but is it a panacea? In the context of the Badger project, a collaboration between MySQL AB and University of Copenhagen, we evaluate how MySQL/InnoDB can best take advantage of Linux asynchronous I/O and how Linux can help MySQL/InnoDB best take advantage of the underlying I/O bandwidth. This is a crucial problem for the increasing number of MySQL servers deployed for very large database applications. In this paper, we first show that the conservative I/O submission policy used by InnoDB (as well as Oracle 9.2) leads to an under-utilization of the available I/O bandwidth. We then show that introducing prioritized asynchronous I/O in Linux will allow MySQL/InnoDB and the other Linux databases to fully utilize the available I/O bandwidth using a more aggressive I...
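
    For orientation, the Linux 2.6 native AIO interface under discussion is driven roughly as follows from user space (a minimal libaio sketch with an assumed file name, not InnoDB's I/O layer):

        /* One asynchronous read with Linux native AIO (libaio): submit,
         * overlap other work, then reap the completion.  Minimal sketch,
         * not InnoDB's I/O layer.  Compile with: cc aio.c -laio */
        #define _GNU_SOURCE
        #include <libaio.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            io_context_t ctx = 0;
            if (io_setup(8, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

            int fd = open("datafile", O_RDONLY | O_DIRECT); /* assumed file */
            if (fd < 0) { perror("open"); return 1; }

            void *buf;
            if (posix_memalign(&buf, 4096, 4096)) return 1; /* O_DIRECT alignment */

            struct iocb cb, *cbs[1] = { &cb };
            io_prep_pread(&cb, fd, buf, 4096, 0); /* read 4 KiB at offset 0 */
            if (io_submit(ctx, 1, cbs) != 1) { fprintf(stderr, "io_submit failed\n"); return 1; }

            /* ... the caller is free to do useful work here: that is the point ... */

            struct io_event ev;
            if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) { fprintf(stderr, "io_getevents failed\n"); return 1; }
            printf("read completed: %ld bytes\n", (long)ev.res);

            io_destroy(ctx);
            return 0;
        }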

  18. Modeling Security-Enhanced Linux Policy Specifications for Analysis (Preprint)

    Archer, Myla; Leonard, Elizabeth; Pradella, Matteo

    2003-01-01

    Security-Enhanced (SE) Linux is a modification of Linux initially released by NSA in January 2001 that provides a language for specifying Linux security policies and, as in the Flask architecture, a security server...

  19. Linux command line and shell scripting bible

    Blum, Richard

    2014-01-01

    Talk directly to your system for a faster workflow with automation capability. Linux Command Line and Shell Scripting Bible is your essential Linux guide. With detailed instruction and abundant examples, this book teaches you how to bypass the graphical interface and communicate directly with your computer, saving time and expanding capability. This third edition incorporates thirty pages of new functional examples that are fully updated to align with the latest Linux features. Beginning with command line fundamentals, the book moves into shell scripting and shows you the practical application

  20. Web penetration testing with Kali Linux

    Muniz, Joseph

    2013-01-01

    Web Penetration Testing with Kali Linux contains various penetration testing methods using BackTrack that will be used by the reader. It contains clear step-by-step instructions with lots of screenshots. It is written in an easy-to-understand language which will further simplify the understanding for the user. "Web Penetration Testing with Kali Linux" is ideal for anyone who is interested in learning how to become a penetration tester. It will also help users who are new to Kali Linux and want to learn the features and differences in Kali versus Backtrack, and seasoned penetration testers

  1. The Linux command line a complete introduction

    Shotts, William E

    2012-01-01

    You've experienced the shiny, point-and-click surface of your Linux computer—now dive below and explore its depths with the power of the command line. The Linux Command Line takes you from your very first terminal keystrokes to writing full programs in Bash, the most popular Linux shell. Along the way you'll learn the timeless skills handed down by generations of gray-bearded, mouse-shunning gurus: file navigation, environment configuration, command chaining, pattern matching with regular expressions, and more.

  2. LPI Linux Certification in a Nutshell

    Haeder, Adam; Pessanha, Bruno; Stanger, James

    2010-01-01

    Linux deployment continues to increase, and so does the demand for qualified and certified Linux system administrators. If you're seeking a job-based certification from the Linux Professional Institute (LPI), this updated guide will help you prepare for the technically challenging LPIC Level 1 Exams 101 and 102. The third edition of this book is a meticulously researched reference to these exams, written by trainers who work closely with LPI. You'll find an overview of each exam, a summary of the core skills you need, review questions and exercises, as well as a study guide, a practice test,

  3. Aplicación de RT-Linux en el control de motores de pasos. Parte II; Application of RT-Linux in the Control of Stepper Motors, Part II

    Ernesto Duany Renté

    2011-02-01

    This work complements the one presented earlier, "Application of RT-Linux in the Control of Stepper Motors. First Part", relating the acquisition and control tasks so that the resulting system is as accurate as possible. The techniques employed are real-time techniques that take advantage of the possibilities of the RT-Linux microkernel and the free software distributed with Unix/Linux operating systems. The signals are obtained by means of an AD converter and displayed on screen using Gnuplot.

  4. Kali Linux wireless penetration testing beginner's guide

    Ramachandran, Vivek

    2015-01-01

    If you are a security professional, pentester, or anyone interested in getting to grips with wireless penetration testing, this is the book for you. Some familiarity with Kali Linux and wireless concepts is beneficial.

  5. Two-factor Authorization in Linux

    L. S. Nosov

    2010-03-01

    Identification and authentication in the Linux OS, realized both with an external USB device and with a custom PAM module, are considered, using as an example the answering of a control question (an enigma).
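
    The PAM approach means the "enigma" check lives in a small shared object that the PAM library dispatches to. A sketch of such a module follows; the question and expected answer are hard-coded placeholders, not the authors' implementation (a real module would look them up per user):

        /* Sketch of a challenge-question ("enigma") PAM module. Build as a
         * shared object and reference it from /etc/pam.d/<service>. */
        #include <security/pam_modules.h>
        #include <security/pam_ext.h>
        #include <stdlib.h>
        #include <string.h>

        int pam_sm_authenticate(pam_handle_t *pamh, int flags,
                                int argc, const char **argv)
        {
            char *resp = NULL;
            /* pam_prompt() asks via the application's conversation function;
               PAM_PROMPT_ECHO_OFF hides the typed answer. */
            int r = pam_prompt(pamh, PAM_PROMPT_ECHO_OFF, &resp,
                               "What was the name of your first teacher? ");
            if (r != PAM_SUCCESS || resp == NULL)
                return PAM_AUTH_ERR;
            int ok = (strcmp(resp, "hypothetical-answer") == 0);  /* placeholder */
            free(resp);
            return ok ? PAM_SUCCESS : PAM_AUTH_ERR;
        }

        int pam_sm_setcred(pam_handle_t *pamh, int flags,
                           int argc, const char **argv)
        {
            return PAM_SUCCESS;   /* nothing extra to establish */
        }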

  6. The Linux farm at the RCF

    Chan, A.W.; Hogue, R.W.; Throwe, T.G.; Yanuklis, T.A.

    2001-01-01

    A description of the Linux Farm at the RHIC Computing Facility (RCF) is presented. The RCF is a dedicated data processing facility for RHIC, which became operational in the summer of 2000 at Brookhaven National Laboratory

  7. OS X and iOS Kernel Programming

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  8. Replacing OSE with Real Time capable Linux

    Boman, Simon; Rutgersson, Olof

    2009-01-01

    For many years OSE has been a commonly used operating system, with real-time extensions, in embedded systems. But in recent decades, Linux has grown and become a competitor to established operating systems and, in recent years, even as an operating system with real-time extensions. With this in mind, ÅF was interested in replacing the quite expensive OSE with some distribution of the open-source Linux on a PowerPC MPC8360. Therefore, the purpose of this thesis is to implement Linu...

  9. Mastering Kali Linux for advanced penetration testing

    Beggs, Robert W

    2014-01-01

    This book provides an overview of the kill chain approach to penetration testing, and then focuses on using Kali Linux to provide examples of how this methodology is applied in the real world. After describing the underlying concepts, step-by-step examples are provided that use selected tools to demonstrate the techniques. If you are an IT professional or a security consultant who wants to maximize the success of your network testing using some of the advanced features of Kali Linux, then this book is for you. This book will teach you how to become an expert in the pre-engagement, management,

  10. Python for Unix and Linux system administration

    Gift, Noah

    2007-01-01

    Python is an ideal language for solving problems, especially in Linux and Unix networks. With this pragmatic book, administrators can review various tasks that often occur in the management of these systems, and learn how Python can provide a more efficient and less painful way to handle them. Each chapter in Python for Unix and Linux System Administration presents a particular administrative issue, such as concurrency or data backup, and presents Python solutions through hands-on examples. Once you finish this book, you'll be able to develop your own set of command-line utilities with Pytho

  11. Linux software for large topology optimization problems

    ...evolving product, which allows a parallel solution of the PDE, it lacks the important feature that the matrix-generation part of the computations is localized to each processor. This is well known to be critical for obtaining a useful speedup on a Linux cluster, and it motivates the search for a COMSOL-like package for large topology optimization problems. One candidate for such software, developed for Linux by Sandia Nat'l Lab in the USA, is the Sundance system. Sundance also uses a symbolic representation of the PDE, and a scalable numerical solution is achieved by employing the underlying Trilinos...

  12. Putting Priors in Mixture Density Mercer Kernels

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
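
    The construction can be summarized in one formula. In a common formulation consistent with the abstract (a sketch, not necessarily the authors' exact notation), an ensemble of M bootstrapped mixture models is trained and the kernel averages the probability that two points fall into the same mixture component:

        K(x, x') = \frac{1}{M} \sum_{m=1}^{M} \sum_{c=1}^{C_m} P(c \mid x, \theta_m)\, P(c \mid x', \theta_m)

    Each term is an inner product of posterior-probability vectors, so K is symmetric positive semidefinite and hence a valid Mercer kernel; the Bayesian priors enter through the mixture parameters \theta_m.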

  13. LPIC-2 Linux Professional Institute Certification Study Guide Exams 201 and 202

    Smith, Roderick W

    2011-01-01

    The first book to cover the LPIC-2 certification. Linux allows developers to update source code freely, making it an excellent, low-cost, secure alternative to other, more expensive operating systems. It is for this reason that the demand for IT professionals to have an LPI certification is so strong. This study guide provides unparalleled coverage of the LPIC-2 objectives for exams 201 and 202. Clear and concise coverage examines all Linux administration topics while practical, real-world examples enhance your learning process. On the CD, you'll find the Sybex Test Engine, electronic flash

  14. Hard Real-Time Linux for Off-The-Shelf Multicore Architectures

    Radder, Dirk

    2015-01-01

    This document describes the research results that were obtained from the development of a real-time extension for the Linux operating system. The paper describes a full extension of the kernel, which enables hard real-time performance on a 64-bit x86 architecture. In the first part of this study, real-time systems are categorized and concepts of real-time operating systems are introduced to the reader. In addition, numerous well-known real-time operating systems are considered. QNX Neutrino, ...

  15. Linux malware incident response an excerpt from malware forensic field guide for Linux systems

    Malin, Cameron H; Aquilina, James M

    2013-01-01

    Linux Malware Incident Response is a "first look" at the Malware Forensics Field Guide for Linux Systems, exhibiting the first steps in investigating Linux-based incidents. The Syngress Digital Forensics Field Guides series includes companions for any digital and computer forensic investigator and analyst. Each book is a "toolkit" with checklists for specific tasks, case studies of difficult situations, and expert analyst tips. This compendium of tools for computer forensics analysts and investigators is presented in a succinct outline format with cross-references to suppleme

  16. FLUKA-LIVE: an embedded framework for enabling a computer to execute FLUKA under the control of a Linux OS

    Cohen, A.; Battistoni, G.; Mark, S.

    2008-01-01

    This paper describes a Linux-based OS framework for integrating the FLUKA Monte Carlo software (currently distributed only for Linux) into a CD-ROM, resulting in a complete environment for a scientist to edit, link and run FLUKA routines, without the need to install a UNIX/Linux operating system. The building process includes generating from scratch a complete operating system distribution which will, when operative, build all necessary components for successful operation of the FLUKA software and libraries. Various source packages, as well as the latest kernel sources, are freely available from the Internet. These sources are used to create a functioning Linux system that integrates several core utilities in line with the main idea: enabling FLUKA to act as if it were running under a popular Linux distribution or even a proprietary UNIX workstation. On boot-up a file system will be created and the contents of the CD will be uncompressed and completely loaded into RAM, after which the presence of the CD is no longer necessary, and it can be removed for use on a second computer. The system can operate on any i386 PC as long as it can boot from a CD

  17. RTOS kernel in portable electrocardiograph

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  18. RTOS kernel in portable electrocardiograph

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
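
    The migration from cyclic code to kernel-managed tasks described above follows the usual uC/OS-II pattern: each activity becomes a task with its own stack and priority, created before the scheduler starts. A minimal sketch, in which the stack size, priority and 250 Hz sampling rate are illustrative values rather than figures from the paper:

        /* uC/OS-II sketch: ECG sampling as a periodic task instead of a
         * cyclic loop. "includes.h" is the conventional uC/OS-II umbrella
         * header; values below are illustrative. */
        #include "includes.h"

        #define SAMPLE_TASK_PRIO  4
        #define STK_SIZE          128

        static OS_STK SampleStk[STK_SIZE];

        static void SampleTask(void *pdata)
        {
            (void)pdata;
            for (;;) {
                /* read_adc()/process_sample() stand in for the device's
                   actual acquisition path (hypothetical helpers):
                   process_sample(read_adc()); */
                OSTimeDly(OS_TICKS_PER_SEC / 250);   /* ~250 Hz sampling */
            }
        }

        void main(void)
        {
            OSInit();                                /* initialize the kernel */
            OSTaskCreate(SampleTask, (void *)0,
                         &SampleStk[STK_SIZE - 1],   /* top of stack */
                         SAMPLE_TASK_PRIO);
            OSStart();                               /* hand control to uC/OS-II */
        }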

  19. RELAP5-3D developmental assessment: Comparison of version 4.2.1i on Linux and Windows

    Bayless, Paul D. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-06-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.2i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  20. RELAP5-3D Developmental Assessment. Comparison of Version 4.3.4i on Linux and Windows

    Bayless, Paul David

    2015-01-01

    Figures have been generated comparing the parameters used in the developmental assessment of the RELAP5-3D code, version 4.3i, compiled on Linux and Windows platforms. The figures, which are the same as those used in Volume III of the RELAP5-3D code manual, compare calculations using the semi-implicit solution scheme with available experiment data. These figures provide a quick, visual indication of how the code predictions differ between the Linux and Windows versions.

  1. Using Linux PCs in DAQ applications

    Ünel, G; Beck, H P; Cetin, S A; Conka, T; Crone, G J; Fernandes, A; Francis, D; Joosb, M; Lehmann, G; López, J; Mailov, A A; Mapelli, Livio P; Mornacchi, Giuseppe; Niculescu, M; Petersen, J; Tremblet, L J; Veneziano, Stefano; Wildish, T; Yasu, Y

    2000-01-01

    The ATLAS Data Acquisition/Event Filter "-1" (DAQ/EF-1) project provides the opportunity to explore the use of commodity hardware (PCs) and Open Source Software (Linux) in DAQ applications. In DAQ/EF-1 there is an element called the LDAQ which is responsible for providing local run-control, error-handling and reporting for a number of read-out modules in front-end crates. This element is also responsible for providing event data for monitoring and for the interface with the global control and monitoring system (Back-End). We present the results of an evaluation of the Linux operating system made in the context of DAQ/EF-1, where there are no strong real-time requirements. We also report on our experience in implementing the LDAQ on a VMEbus-based PC (the VMIVME-7587) and a desktop PC linked to VMEbus with a Bit3 interface, both running Linux. We then present the problems encountered during the integration with VMEbus, the status of the LDAQ implementation and draw some conclusions on the use of Linux in DAQ applica...

  2. Embedded Linux projects using Yocto project cookbook

    González, Alex

    2015-01-01

    If you are an embedded developer learning about embedded Linux with some experience with the Yocto project, this book is the ideal way to become proficient and broaden your knowledge with examples that are immediately applicable to your embedded developments. Experienced embedded Yocto developers will find new insight into working methodologies and ARM specific development competence.

  3. Linux Incident Response Volatile Data Analysis Framework

    McFadden, Matthew

    2013-01-01

    Cyber incident response is an emphasized subject area in cybersecurity in information technology with increased need for the protection of data. Due to ongoing threats, cybersecurity imposes many challenges and requires new investigative response techniques. In this study a Linux Incident Response Framework is designed for collecting volatile data…

  4. Superiority of CT imaging reconstruction on Linux OS

    Lin Shaochun; Yan Xufeng; Wu Tengfang; Luo Xiaomei; Cai Huasong

    2010-01-01

    Objective: To compare the speed of CT reconstruction using the Linux and Windows OS. Methods: A Shepp-Logan head phantom at different pixel sizes was projected to obtain the sinogram, which was then reconstructed using the inverse Fourier transformation, filtered back projection and the Radon transformation on both the Linux and Windows OS. Results: CT image reconstruction using the Linux operating system was significantly faster and more efficient than under Windows. Conclusion: CT image reconstruction using the Linux operating system is more efficient. (authors)
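
    The reconstruction workload being timed is dominated by filtered back projection, whose parallel-beam form is (standard textbook formulation, not the authors' exact discretization):

        f(x, y) = \int_{0}^{\pi} (p_{\theta} * h)(x\cos\theta + y\sin\theta)\, d\theta

    where p_\theta is the Radon-transform projection at angle \theta and h is the ramp filter, realized via the inverse Fourier transform of |\omega| P_\theta(\omega); timing this inner loop on each OS is what the comparison above measures.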

  5. ZIO: The Ultimate Linux I/O Framework

    Gonzalez Cobas, J D; Rubini, A; Nellaga, S; Vaga, F

    2014-01-01

    ZIO (with Z standing for "The Ultimate I/O" Framework) was developed for CERN with the specific needs of physics labs in mind, which are poorly addressed in the mainstream Linux kernel. ZIO provides a framework for industrial, high-bandwidth, high-channel-count I/O device drivers (digitizers, function generators, timing devices like TDCs) with performance, generality and scalability as design goals. Among its features, it offers abstractions for:
    • both input and output channels, and channel sets
    • run-time selection of trigger types
    • run-time selection of buffer types
    • sysfs-based configuration
    • char devices for data and metadata
    • a socket interface (PF_ZIO) as an alternative to char devices
    In this paper, we discuss the design and implementation of ZIO, and describe representative cases of driver development for typical and exotic applications: drivers for the FMC (FPGA Mezzanine Card, see [1]) boards developed at CERN like the FMC ADC 100Msps digitizer, FMC TDC timestamp counter, and FMC DEL ...

  6. Preparing a scientific manuscript in Linux: Today's possibilities and limitations.

    Tchantchaleishvili, Vakhtang; Schmitto, Jan D

    2011-10-22

    An increasing number of scientists are enthusiastic about using free, open-source software for their research purposes. The authors' specific goal was to examine whether a Linux-based operating system with open-source software packages would allow the preparation of a submission-ready scientific manuscript without the need to use proprietary software. Preparation and editing of scientific manuscripts is possible using Linux and open-source software. This letter to the editor describes the key steps in preparing a publication-ready scientific manuscript in a Linux-based operating system, and discusses the necessary software components. This manuscript was created using Linux and open-source programs for Linux.

  7. Distributed MDSplus database performance with Linux clusters

    Minor, D.H.; Burruss, J.R.

    2006-01-01

    The staff at the DIII-D National Fusion Facility, operated for the USDOE by General Atomics, are investigating the use of grid computing and Linux technology to improve performance in our core data management services. We are in the process of converting much of our functionality to cluster-based and grid-enabled software. One of the most important pieces is a new distributed version of the MDSplus scientific data management system that is presently used to support fusion research in over 30 countries worldwide. To improve data handling performance, the staff is investigating the use of Linux clusters for both data clients and servers. The new distributed capability will result in better load balancing between these clients and servers, and more efficient use of network resources resulting in improved support of the data analysis needs of the scientific staff

  8. UNIX and Linux system administration handbook

    Nemeth, Evi; Hein, Trent R; Whaley, Ben; Mackin, Dan; Garnett, James; Branca, Fabrizio; Mouat, Adrian

    2018-01-01

    Now fully updated for today’s Linux distributions and cloud environments, it details best practices for every facet of system administration, including storage management, network design and administration, web hosting and scale-out, automation, configuration management, performance analysis, virtualization, DNS, security, management of IT service organizations, and much more. For modern system and network administrators, this edition contains indispensable new coverage of cloud deployments, continuous delivery, Docker and other containerization solutions, and much more.

  9. IP Security für Linux

    Parthey, Mirko

    2001-01-01

    Using the Internet for security-critical applications requires cryptographic protection, for which IP Security (IPsec) defines suitable protocols. This thesis gives an overview of IPsec. An IPsec implementation for Linux (FreeS/WAN) is examined with regard to extensibility and practical usability.

  10. Modelling of HTR (High Temperature Reactor Pebble-Bed 10 MW to Determine Criticality as A Variations of Enrichment and Radius of the Fuel (Kernel With the Monte Carlo Code MCNP4C

    Hammam Oktajianto

    2014-12-01

    Gas-cooled nuclear reactors are Generation IV reactors which have been receiving significant attention due to many desired characteristics such as inherent safety, modularity, relatively low cost, short construction period, and easy financing. The high temperature reactor (HTR) pebble-bed, one type of gas-cooled reactor concept, is getting attention. In the HTR pebble-bed design, the radius and enrichment of the fuel kernel are the key parameters that can be chosen freely to determine the desired value of criticality. This paper models the HTR pebble-bed 10 MW and determines the enrichment and radius of the fuel (kernel) needed to reach criticality. The TRISO coated fuel particles were modelled explicitly and distributed in the fuelled region of the fuel pebbles using a Simple-Cubic (SC) lattice. The pebble-bed balls and moderator balls were distributed in the core zone using a Body-Centred Cubic lattice, assuming fresh fuel, with fuel enrichments of 7-17% in 1% steps and fuel kernel radii of 175-300 µm in 25 µm steps. The geometrical model of the full reactor is obtained by using the lattice and universe facilities provided by MCNP4C. The details of the model are discussed with the necessary simplifications. Criticality calculations were conducted with the Monte Carlo transport code MCNP4C and the continuous-energy nuclear data library ENDF/B-VI. From the calculation results it can be concluded that the enrichment and radius combinations that achieve a critical condition are: enrichments of 15-17% at a radius of 200 µm, 13-17% at a radius of 225 µm, 12-15% at a radius of 250 µm, 11-14% at a radius of 275 µm and 10-13% at a radius of 300 µm, so these enrichments and kernel radii can be considered for the HTR 10 MW. Keywords: MCNP4C, HTR, enrichment, radius, criticality

  11. Control Transfer in Operating System Kernels

    1994-05-13

    ...microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the ... review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was ... critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead.

  12. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose robust kernel covariance operator (robust kernel CO) and robust kernel cross-covariance operator (robust kern...

  13. Approximate kernel competitive learning.

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
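
    The kernel-matrix bottleneck comes from the distance computation itself: in feature space, the distance from \phi(x) to the prototype (mean) of a cluster C expands entirely into kernel evaluations (the standard kernel k-means-style identity, sketched here):

        \| \phi(x) - \mu_C \|^2 = k(x, x) - \frac{2}{|C|} \sum_{i \in C} k(x, x_i) + \frac{1}{|C|^2} \sum_{i, j \in C} k(x_i, x_j)

    Every assignment step touches rows of the N x N kernel matrix, which is why it normally has to be precomputed and stored; AKCL's sampling replaces these sums with approximations computed in a subspace.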

  14. Cleaning up a GNU/Linux operating system

    Oblak , Denis

    2018-01-01

    The aim of the thesis is to develop an application for cleaning up the Linux operating system that would be able to function on most distributions. The theoretical part discusses cleaning of the Linux operating system, which frees up disk space and allows the system to function better. The cleaning techniques and the existing tools for Linux are systematically reviewed and presented. The following part examines the cleaning of the Windows and MacOS operating systems. The thesis also compares all...

  15. Parallelization of a three-dimensional whole core transport code DeCART

    Jin Young, Cho; Han Gyu, Joo; Ha Yong, Kim; Moon-Hee, Chang [Korea Atomic Energy Research Institute, Yuseong-gu, Daejon (Korea, Republic of)

    2003-07-01

    A parallelization of the DeCART (deterministic core analysis based on ray tracing) code is presented that reduces the tremendous computing time and memory required in three-dimensional whole-core transport calculations. The parallelization employs the concept of MPI grouping as well as a mixed MPI/OpenMP scheme. Since most of the computing time and memory are used in the MOC (method of characteristics) and multi-group CMFD (coarse mesh finite difference) calculations in DeCART, variables and subroutines related to these two modules are the primary targets for parallelization. Specifically, the ray tracing module was parallelized using a planar domain decomposition scheme and an angular domain decomposition scheme. The parallel performance of the DeCART code is evaluated by solving a rodded variation of the C5G7MOX three-dimensional benchmark problem and a simplified three-dimensional SMART PWR core problem. In the C5G7MOX problem with 24 CPUs, a maximum speedup of 21 is obtained on an IBM Regatta machine and 22 on a Linux cluster in the MOC kernel, which indicates good parallel performance of the DeCART code. In the simplified SMART problem, the memory requirement of about 11 GBytes in the single-processor case is reduced to 940 MBytes with 24 processors, which means that the DeCART code can now solve large core problems with affordable Linux clusters. (authors)
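
    The MPI-grouping plus MPI/OpenMP mixed scheme can be pictured with a generic skeleton: split the world communicator into planar-domain groups, and let OpenMP threads sweep different characteristic angles within each group. This is an illustrative sketch, not DeCART's actual source; the group size and domain mapping are assumptions:

        /* Hybrid MPI/OpenMP skeleton in the style of the decomposition
         * described above. Compile with: mpicc -fopenmp ... */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, nprocs;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

            /* MPI grouping: split ranks into planar-domain groups,
               here hypothetically 4 ranks per axial plane. */
            int plane = rank / 4;
            MPI_Comm plane_comm;
            MPI_Comm_split(MPI_COMM_WORLD, plane, rank, &plane_comm);

            /* Angular decomposition inside a plane: each OpenMP thread
               sweeps a different characteristic angle. */
            #pragma omp parallel
            {
                int angle = omp_get_thread_num();
                /* trace_rays(plane, angle);  -- MOC sweep placeholder */
                printf("rank %d, plane %d, angle %d\n", rank, plane, angle);
            }

            MPI_Comm_free(&plane_comm);
            MPI_Finalize();
            return 0;
        }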

  16. Compact PCI/Linux platform in FTU slow control system

    Iannone, F.; Centioli, C.; Panella, M.; Mazza, G.; Vitale, V.; Wang, L.

    2004-01-01

    In large fusion experiments, such as tokamak devices, there is a common trend for slow control systems. Because of the complexity of the plants, the so-called 'Standard Model' (SM) of slow control has been adopted on several tokamak machines. This model is based on a three-level hierarchical control: 1) High-Level Control (HLC) with a supervisory function; 2) Medium-Level Control (MLC) to interface and concentrate I/O field equipment; 3) Low-Level Control (LLC) with hard real-time I/O functions, often managed by PLCs. The FTU (Frascati Tokamak Upgrade) control system, designed with SM concepts, has undergone several stages of development over its fifteen years of operation. The latest evolution was inevitable, due to the obsolescence of the MLC CPUs, based on VME-MOTOROLA 68030 with the OS9 operating system. A large amount of C code was developed for that platform to route the data flow from the LLC, which consists of 24 Westinghouse Numalogic PC-700 PLCs with about 8000 field-points, to the HLC, based on a commercial Object-Oriented Real-Time database on an Alpha/Compaq Tru64 platform. Therefore, the authors had to look for cost-effective solutions, and finally a CompactPCI Intel x86 platform with the Linux operating system was chosen. A software port has been done, taking into account the differences between the OS9 and Linux operating systems in terms of inter-process/network communication and the multi-port serial I/O driver. This paper describes the hardware/software architecture of the new MLC system, emphasizing the reliability and low costs of the open-source solutions. Moreover, the huge number of software packages available in the open-source environment will assure less painful maintenance, and will open the way to further improvements of the system itself. (authors)

  17. Digital signal processing with kernel methods

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  18. Measuring performance of Linux hypervisors

    Chierici, A.; Veraldi, R.; Salomoni, D.

    2009-01-01

    Virtualisation is a now proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way people make computations and implement services. Recently, all major software producers (e.g., Microsoft and Red Hat) developed or acquired virtualisation technologies. Our institute (http://www.CNAF.INFN.it) is a Tier 1 for experiments carried out at the Large Hadron Collider at CERN (http://lhc.web.CERN.ch/lhc/) and is experiencing several benefits from virtualisation technologies, like improved fault tolerance, efficient hardware resource usage and increased security. Currently, the virtualisation solution we adopted is Xen, which is well supported by the Scientific Linux distribution, widely used by the High-Energy Physics (HEP) community. Since Scientific Linux is based on Red Hat ES, we felt the need to investigate the performance and usability differences with the new KVM technology, recently acquired by Red Hat. The case study of this work is the Tier 2 site for the LHCb experiment hosted at our institute; all major grid elements of this Tier 2 run smoothly on Xen virtual machines. We will investigate the impact on performance and stability that a migration to KVM would entail on the Tier 2 site, as well as the effort required by a system administrator to perform the migration.

  19. Optimized Kernel Entropy Components.

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
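
    The criterion behind KECA, and thus behind OKECA's refinement, fits in one line: the quadratic Renyi entropy estimate decomposes over the kernel eigenpairs, so features are ranked by their entropy contribution (standard KECA formulation, sketched here):

        \hat{V}(p) = \frac{1}{N^2} \mathbf{1}^{\top} K \mathbf{1} = \frac{1}{N^2} \sum_{i=1}^{N} \left( \sqrt{\lambda_i}\, \mathbf{e}_i^{\top} \mathbf{1} \right)^2, \qquad K = E \Lambda E^{\top}

    KECA keeps the eigenpairs (\lambda_i, \mathbf{e}_i) with the largest terms in the sum; OKECA adds an ICA-style rotation to the decomposition so that the same entropy is compacted into fewer features.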

  20. Subsampling Realised Kernels

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
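
    For reference, the realised kernel class under discussion has the flat-top form introduced in the authors' earlier work (sketched here; x_j denotes high-frequency returns and \gamma_h the h-th realised autocovariance):

        K(X) = \gamma_0 + \sum_{h=1}^{H} k\!\left( \frac{h-1}{H} \right) (\gamma_h + \gamma_{-h}), \qquad \gamma_h = \sum_{j} x_j\, x_{j-h}

    Subsampling averages this estimator over shifted sampling grids; the paper's result is that the average changes nothing asymptotically for kinked kernels, and inflates the asymptotic variance for smooth ones such as the Parzen kernel.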

  1. RKRD: Runtime Kernel Rootkit Detection

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  2. RTSPM: real-time Linux control software for scanning probe microscopy.

    Chandrasekhar, V; Mehta, M M

    2013-01-01

    Real time computer control is an essential feature of scanning probe microscopes, which have become important tools for the characterization and investigation of nanometer scale samples. Most commercial (and some open-source) scanning probe data acquisition software uses digital signal processors to handle the real time data processing and control, which adds to the expense and complexity of the control software. We describe here scan control software that uses a single computer and a data acquisition card to acquire scan data. The computer runs an open-source real time Linux kernel, which permits fast acquisition and control while maintaining a responsive graphical user interface. Images from a simulated tuning-fork based microscope as well as a standard topographical sample are also presented, showing some of the capabilities of the software.
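
    The pattern such a system relies on can be sketched with the standard POSIX real-time calls that a real-time Linux kernel honors: a SCHED_FIFO thread woken at absolute deadlines. The 1 ms period and priority below are illustrative values, not figures from the paper:

        /* Periodic real-time loop sketch: SCHED_FIFO plus absolute-time
         * clock_nanosleep(), the usual pattern on (PREEMPT_)RT Linux.
         * Requires root or CAP_SYS_NICE. */
        #include <sched.h>
        #include <time.h>
        #include <stdio.h>

        #define PERIOD_NS 1000000L   /* 1 ms, illustrative */

        int main(void)
        {
            struct sched_param sp = { .sched_priority = 80 };
            if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
                perror("sched_setscheduler");    /* needs privileges */

            struct timespec next;
            clock_gettime(CLOCK_MONOTONIC, &next);
            for (int i = 0; i < 1000; i++) {
                /* acquire_sample(); update_feedback();  -- placeholders */
                next.tv_nsec += PERIOD_NS;
                while (next.tv_nsec >= 1000000000L) {   /* normalize */
                    next.tv_nsec -= 1000000000L;
                    next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            }
            return 0;
        }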

  3. Iterative software kernels

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  4. Validation of Born Traveltime Kernels

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of the traveltime compare to various theoretical predictions in a given regime.
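
    The comparison is between two linear relations for the traveltime shift (schematic forms consistent with the abstract, with c the background wavespeed and \delta c its perturbation):

        \delta T_{\mathrm{ray}} = -\int_{\mathrm{ray}} \frac{\delta c(\mathbf{x})}{c^2(\mathbf{x})}\, ds, \qquad \delta T_{\mathrm{Born}} = \iiint K(\mathbf{x})\, \frac{\delta c(\mathbf{x})}{c(\mathbf{x})}\, d^3\mathbf{x}

    The Born kernel K vanishes on the geometric ray itself (the doughnut hole), so the two expressions agree only when \delta c / c is smooth over the width of the Fresnel zone, which is exactly the regime the experiments above map out.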

  5. Classification With Truncated Distance Kernel.

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying the TL1 kernel a promising nonlinear kernel for classification tasks.
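
    The kernel itself is compact enough to state (the published TL1 form; \rho is the truncation parameter):

        K(\mathbf{u}, \mathbf{v}) = \max\{\rho - \|\mathbf{u} - \mathbf{v}\|_1,\; 0\}

    Inside an \ell_1 ball of radius \rho the kernel is affine in the coordinates, which produces the local linearity described above; outside it is exactly zero, which is also why the kernel is indefinite rather than positive semidefinite.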

  6. Fedora Bible 2010 Edition Featuring Fedora Linux 12

    Negus, Christopher

    2010-01-01

    The perfect companion for mastering the latest version of Fedora. As a free, open source Linux operating system sponsored by Red Hat, Fedora can either be a stepping stone to Enterprise or used as a viable operating system for those looking for frequent updates. Written by veteran authors of perennial bestsellers, this book serves as an ideal companion for Linux users and offers a thorough look at the basics of the new Fedora 12. Step-by-step instructions make the Linux installation simple while clear explanations walk you through best practices for taking advantage of the desktop interface. Y

  7. Web application security analysis using the Kali Linux operating system

    BABINCEV IVAN M.; VULETIC DEJAN V.

    2016-01-01

    The Kali Linux operating system is described, as well as its purpose and possibilities. The groups of tools that Kali Linux contains are listed together with the methods of their functioning, as well as the possibility to install and use tools that are not an integral part of Kali. The final part shows practical testing of web applications using the tools from the Kali Linux operating system. The paper thus shows a part of the possibilities of this operating system in analysing web applications ...

  8. Shell Scripting Expert Recipes for Linux, Bash and more

    Parker, Steve

    2011-01-01

    A compendium of shell scripting recipes that can immediately be used, adjusted, and applied The shell is the primary way of communicating with the Unix and Linux systems, providing a direct way to program by automating simple-to-intermediate tasks. With this book, Linux expert Steve Parker shares a collection of shell scripting recipes that can be used as is or easily modified for a variety of environments or situations. The book covers shell programming, with a focus on Linux and the Bash shell; it provides credible, real-world relevance, as well as providing the flexible tools to get started

  9. Pro Oracle database 11g RAC on Linux

    Shaw, Steve

    2010-01-01

    Pro Oracle Database 11g RAC on Linux provides full-life-cycle guidance on implementing Oracle Real Application Clusters in a Linux environment. Real Application Clusters, commonly abbreviated as RAC, is Oracle's industry-leading architecture for scalable and fault-tolerant databases. RAC allows you to scale up and down by simply adding and subtracting inexpensive Linux servers. Redundancy provided by those multiple, inexpensive servers is the basis for the failover and other fault-tolerance features that RAC provides. Written by authors well-known for their talent with RAC, Pro Oracle Database

  10. Kernels for structured data

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  11. Software infrastructure progress in the RAVEN code

    Cogliati, Joshua J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Permann, Cody J. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-03-01

    The milestones have been achieved. RAVEN has been migrated to GitLab, which adds new abilities for code review and management. Standalone RAVEN framework packages have been created for OSX and two Linux distributions.

  12. Locally linear approximation for Kernel methods : the Railway Kernel

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general-purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  13. Development of a portable Linux-based ECG measurement and monitoring system.

    Tan, Tan-Hsu; Chang, Ching-Su; Huang, Yung-Fa; Chen, Yung-Fu; Lee, Cheng

    2011-08-01

    This work presents a portable Linux-based electrocardiogram (ECG) signal measurement and monitoring system. The proposed system consists of an ECG front end and an embedded Linux platform (ELP). The ECG front end digitizes 12-lead ECG signals acquired from electrodes and then delivers them to the ELP via a universal serial bus (USB) interface for storage, signal processing, and graphic display. The proposed system can be installed anywhere (e.g., offices, homes, healthcare centers and ambulances) to allow people to self-monitor their health conditions at any time. The proposed system also enables remote diagnosis via the Internet. Additionally, the system has a 7-in. interactive TFT-LCD touch screen that enables users to execute various functions, such as scaling single-lead or multiple-lead ECG waveforms. The effectiveness of the proposed system was verified by using a commercial 12-lead ECG signal simulator and in vivo experiments. In addition to its portability, the proposed system is license-free, as Linux, an open-source OS, was utilized during software development. The cost-effectiveness of the system significantly enhances its practical application for personal healthcare.

  14. Application of the Linux cluster for exhaustive window haplotype analysis using the FBAT and Unphased programs.

    Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun

    2008-05-28

    Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and consequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Analysis of both consecutive and combinational window haplotypes was conducted with the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute nodes, FBAT jobs performed about 14.4-15.9 times faster, while Unphased jobs performed 1.1-18.6 times faster, compared to the accumulated computation duration.

  15. Infecting Windows, Linux & Mac in one go

    Computer Security Team

    2012-01-01

    Still love bashing on Windows as you believe it is an insecure operating system? Hold on a second! Just recently, a vulnerability has been published for Java 7.   It affects Windows/Linux PCs and Macs, Internet Explorer, Safari and Firefox. In fact, it affects all computers that have enabled the Java 7 plug-in in their browser (Java 6 and earlier is not affected). Once you visit a malicious website (and there are plenty already out in the wild), your computer is infected… That's "Game Over" for you.      And this is not the first time. For a while now, attackers have not been targeting the operating system itself, but rather aiming at vulnerabilities inherent in e.g. your Acrobat Reader, Adobe Flash or Java programmes. All these are standard plug-ins added into your favourite web browser which make your web-surfing comfortable (or impossible when you un-install them). A single compromised web-site, however, is sufficient to prob...

  16. Evolution of Linux operating system network

    Xiao, Guanping; Zheng, Zheng; Wang, Haoqin

    2017-01-01

    Linux operating system (LOS) is a sophisticated man-made system and one of the most ubiquitous operating systems. However, there is little research on the structure and functionality evolution of LOS from the prospective of networks. In this paper, we investigate the evolution of the LOS network. 62 major releases of LOS ranging from versions 1.0 to 4.1 are modeled as directed networks in which functions are denoted by nodes and function calls are denoted by edges. It is found that the size of the LOS network grows almost linearly, while clustering coefficient monotonically decays. The degree distributions are almost the same: the out-degree follows an exponential distribution while both in-degree and undirected degree follow power-law distributions. We further explore the functionality evolution of the LOS network. It is observed that the evolution of functional modules is shown as a sequence of seven events (changes) succeeding each other, including continuing, growth, contraction, birth, splitting, death and merging events. By means of a statistical analysis of these events in the top 4 largest components (i.e., arch, drivers, fs and net), it is shown that continuing, growth and contraction events occupy more than 95% events. Our work exemplifies a better understanding and describing of the dynamics of LOS evolution.

  17. Data-variant kernel analysis

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  18. Porting oxbash to linux and its application in SD-shell calculations

    Suman, H.; Suleiman, S.

    1998-01-01

    Oxbash, a code for nuclear structure calculations within the shell model approach, was ported to Linux, a UNIX clone for PCs. Because the code version we had contained many faults, deep corrective actions had to be undertaken. This was done through intensive use of UNIX utilities like sed, nm and make, in addition to proper shell script programming. Our version contained calls to missing subroutines; some of these were included from C and f90 libraries, while others had to be written separately. All these actions were organized and automated through a robust system of Makefiles. Finally the code was tested and applied to nuclei with 18 and 20 nucleons. (author)

  19. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for Bioinformatics have paved the way for the portability of the Bioinformatics workbench in a platform-independent manner. Moreover, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays an advanced customizable configuration of Fedora, with data persistence accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  20. Real-time data collection in Linux: a case study.

    Finney, S A

    2001-05-01

    Multiuser UNIX-like operating systems such as Linux are often considered unsuitable for real-time data collection because of the potential for indeterminate timing latencies resulting from preemptive scheduling. In this paper, Linux is shown to be fully adequate for precisely controlled programming with millisecond resolution or better. The Linux system calls that subserve such timing control are described and tested and then utilized in a MIDI-based program for tapping and music performance experiments. The timing of this program, including data input and output, is shown to be accurate at the millisecond level. This demonstrates that Linux, with proper programming, is suitable for real-time experiment software. In addition, the detailed description and test of both the operating system facilities and the application program itself may serve as a model for publicly documenting programming methods and software performance on other operating systems.
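
    The tests described boil down to measuring how far actual wake-ups drift from requested ones. A self-contained sketch of such a probe using today's POSIX clock interface (the original article used the system calls available in 2001):

        /* Sleep-jitter probe: request a fixed sleep repeatedly and record
         * the worst-case overshoot of the wake-up time. */
        #include <time.h>
        #include <stdio.h>

        static long diff_ns(struct timespec a, struct timespec b)
        {
            return (b.tv_sec - a.tv_sec) * 1000000000L
                 + (b.tv_nsec - a.tv_nsec);
        }

        int main(void)
        {
            struct timespec req = { 0, 10000000L };   /* ask for 10 ms */
            long worst = 0;
            for (int i = 0; i < 100; i++) {
                struct timespec t0, t1;
                clock_gettime(CLOCK_MONOTONIC, &t0);
                nanosleep(&req, NULL);
                clock_gettime(CLOCK_MONOTONIC, &t1);
                long over = diff_ns(t0, t1) - req.tv_nsec;  /* overshoot */
                if (over > worst)
                    worst = over;
            }
            printf("worst-case oversleep: %.3f ms\n", worst / 1e6);
            return 0;
        }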

  1. After the first five years: central Linux support at DESY

    Knut Woller; Thorsten Kleinwort; Peter Jung

    2001-01-01

    The authors will describe how Linux is embedded into DESY's unix computing, their support concept and policies, tools used and developed, and the challenges which they are facing now that the number of supported PCs is rapidly approaching one thousand

  2. Linux, OpenBSD, and Talisker: A Comparative Complexity Analysis

    Smith, Kevin

    2002-01-01

    .... Rigorous engineering principles are applicable across a broad range of systems. The purpose of this study is to analyze and compare three operating systems, including two general-purpose operating systems (Linux and OpenBSD...

  3. Linux vallutab arvutiilma / Scott Handy ; interv. Kristjan Otsmann

    Handy, Scott

    2000-01-01

    S. Handy, marketing director for Linux solutions in IBM's software group, predicts that in three to four years the free Linux operating system will run on as many computers as the Windows operating system

  4. Vabavarana levitatav Linux alles viib end massidesse / Erik Aru

    Aru, Erik

    2004-01-01

    The free operating system Linux is finding ever wider use around the world. Reliability and resistance to viruses are considered the operating system's strengths. Sidebars: Toshiba's battle over the future of DVD. For a response see Maaleht, 16 Dec., p. 12

  5. Assessment of VME-PCI Interfaces with Linux Drivers

    Schossmater, K; CERN. Geneva

    2000-01-01

    This report summarises the performance measurements and experiences recorded by testing three commercial VME-PCI interfaces with their Linux drivers. These interfaces are manufactured by Wiener, National Instruments and SBS Bit 3. The C programs developed read and write VME memory in different transfer modes via these interfaces. A dual-processor HP Kayak XA-s workstation was used, running the CERN-certified Red Hat Linux 6.1.
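
    A typical user-space access pattern for such interfaces is to mmap a VME window exposed by the driver and then read and write it directly. The sketch below illustrates the idea only; the device node name and offset are hypothetical, since each vendor's driver defines its own interface:

        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            /* Hypothetical device node for a VME address window. */
            int fd = open("/dev/vme_a24", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }
            size_t len = 0x1000;
            volatile uint32_t *win =
                mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (win == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
            win[0] = 0xdeadbeefu;                            /* single-cycle write */
            printf("read back: 0x%08x\n", (unsigned)win[0]); /* single-cycle read */
            munmap((void *)win, len);
            close(fd);
            return 0;
        }

    Block-transfer (DMA) modes, which such reports also benchmark, normally go through vendor-specific ioctl or library calls rather than plain mmap.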

  6. Research on application of VME based embedded linux

    Ji Xiaolu; Ye Mei; Zhu Kejun; Li Xiaonan; Wang Yifang

    2010-01-01

    This paper describes the feasibility and realization of using embedded Linux in the DAQ readout system of a high energy physics experiment. In the first part, the hardware and software framework is introduced. Emphasis is then placed on the key technologies used during system realization. The development is based on the VME bus and the vme_universe driver. Finally, the test result is presented: the readout system works well under an embedded Linux OS. (authors)

  7. Argonne National Lab gets Linux network teraflop cluster

    2003-01-01

    "Linux NetworX, Salt Lake City, Utah, has delivered an Evolocity II (E2) Linux cluster to Argonne National Laboratory that is capable of performing more than one trillion calculations per second (1 teraFLOP). The cluster, named "Jazz" by Argonne, is designed to provide optimum performance for multiple disciplines such as chemistry, physics and reactor engineering and will be used by the entire scientific community at the Lab" (1 page).

  8. A package of Linux scripts for the parallelization of Monte Carlo simulations

    Badal, Andreu; Sempau, Josep

    2006-09-01

    Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators—such as RANLUX, RANECU or the Mersenne Twister—can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ~5×10 and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, makes it possible to run PENELOPE in parallel easily, without requiring specific libraries or significant alterations of the
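
    For readers unfamiliar with RANECU, here is a minimal C sketch of this style of combined MLCG, using L'Ecuyer's well-known constants with Schrage's factorization to avoid overflow. The seed values are illustrative; in the clonEasy scheme each clone would receive its own disjoint pair produced by seedsMLCG:

        #include <stdio.h>

        /* Per-clone seeds; in practice each clone gets a disjoint pair. */
        static long s1 = 12345, s2 = 67890;

        double ranecu(void) {
            long k, z;
            k = s1 / 53668;                /* Schrage: a*s mod m without overflow */
            s1 = 40014 * (s1 - k * 53668) - k * 12211;
            if (s1 < 0) s1 += 2147483563;
            k = s2 / 52774;
            s2 = 40692 * (s2 - k * 52774) - k * 3791;
            if (s2 < 0) s2 += 2147483399;
            z = s1 - s2;                   /* combine the two streams */
            if (z < 1) z += 2147483562;
            return z * 4.656613057392e-10; /* scale by ~1/m1 into (0,1) */
        }

        int main(void) {
            for (int i = 0; i < 5; i++) printf("%f\n", ranecu());
            return 0;
        }

    Disjoint sequences are obtained by jumping each clone's seeds ahead by a fixed number of steps, which for an MLCG reduces to modular exponentiation of the multiplier; this is exactly the bookkeeping seedsMLCG automates.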

  9. Realized kernels in practice

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
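
    As a reminder of the object being computed (sketched from memory, so notation may differ slightly from the paper), a flat-top realized kernel built from high-frequency returns $x_j$ has the form

        $$ K(X) = \gamma_0 + \sum_{h=1}^{H} k\!\left(\tfrac{h-1}{H}\right)(\gamma_h + \gamma_{-h}), \qquad \gamma_h = \sum_j x_j\, x_{j-h}, $$

    where $k(\cdot)$ is a weight function with $k(0)=1$ and $k(1)=0$, and the bandwidth $H$ controls how many autocovariances are used to absorb market-microstructure noise.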

  10. Adaptive metric kernel regression

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
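
    Concretely (a generic sketch, not the paper's exact notation), the estimator is a Nadaraya-Watson regressor whose kernel distance is measured in an adapted metric:

        $$ \hat{f}(x) = \frac{\sum_{i=1}^{n} y_i\, K\!\big(\lVert x - x_i \rVert_M / h\big)}{\sum_{i=1}^{n} K\!\big(\lVert x - x_i \rVert_M / h\big)}, \qquad \lVert u \rVert_M = \sqrt{u^\top M u}, $$

    where a diagonal $M$ weights each input dimension and is tuned by minimising the cross-validated error, so irrelevant dimensions receive small weights.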

  11. Adaptive Metric Kernel Regression

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  12. Kernel methods for deep learning

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
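
    For orientation, the simplest member of this family (a sketch from memory of the step-function activation case) depends only on the angle between inputs:

        $$ k_0(x, y) = 1 - \frac{\theta}{\pi}, \qquad \theta = \arccos\!\left(\frac{x \cdot y}{\lVert x \rVert\, \lVert y \rVert}\right), $$

    which corresponds to an infinitely wide one-layer network of threshold units; higher-order members multiply in factors of $\lVert x \rVert\,\lVert y \rVert$ and further angular terms, and composing such kernels mimics depth.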

  13. Evaluation of mosix-Linux farm performances in GRID environment

    Barone, F.; Rosa, M.de; Rosa, R.de.; Eleuteri, A.; Esposito, R.; Mastroserio, P.; Milano, L.; Taurino, F.; Tortone, G.

    2001-01-01

    The MOSIX extensions to the Linux operating system allow the creation of high-performance Linux farms and an excellent integration of the several CPUs of the farm, whose computational power can be further increased and made more effective by networking them within the GRID environment. Following this strategy, the authors started to perform computational tests using two independent farms within the GRID environment. In particular, the authors performed a preliminary evaluation of the distributed computing efficiency with a MOSIX Linux farm in the simulation of gravitational-wave data analysis from coalescing binaries. To this end, two different techniques were compared: the classical matched filter technique and one of its possible evolutions, based on a global optimisation technique

  14. A General Purpose High Performance Linux Installation Infrastructure

    Wachsmann, Alf

    2002-01-01

    With more and more, and larger and larger, Linux clusters, the question arises how to install them. This paper addresses the question by proposing a solution using only standard software components. The installation infrastructure scales well to a large number of nodes. It is also usable for installing desktop machines or diskless Linux clients; it is thus not designed for cluster installations in particular, but is nevertheless highly performant. The infrastructure uses PXE as the network boot component on the nodes, and DHCP and TFTP servers to deliver IP addresses and a bootloader to all nodes. It then uses kickstart to install Red Hat Linux over NFS. We have implemented this installation infrastructure at SLAC with our given server hardware and installed a 256-node cluster in 30 minutes. This paper presents the measurements from this installation and discusses the bottlenecks in our installation

  15. Linux OS integrated modular avionics application development framework with apex API of ARINC653 specification

    Anna V. Korneenkova

    2017-01-01

    Full Text Available The framework provides tools for developing integrated modular avionics (IMA) applications that can be launched on the target platform LynxOS-178 without modifying their source code. Using the framework helps students build skills for developing modern avionics modules, and deepens their knowledge and competencies in the field of technical creativity. The article describes the architecture and implementation of the Linux OS framework for developing applications for ARINC 653 compliant operating systems. The proposed approach reduces ARINC 653 application development costs and gives a unified tool to implement OS-vendor-independent code that meets the specification. To achieve import substitution, the free and open-source Linux OS is used as the environment for developing IMA applications. The proposed framework is applicable as a tool for developing IMA applications and for building the following competencies: the ability to master techniques of using software to solve practical problems; the ability to develop components of hardware and software systems and databases using modern tools and programming techniques; the ability to match hardware and software tools in information and automated systems; the readiness to apply the fundamentals of informatics and programming to designing, constructing and testing software products; and the readiness to apply basic methods and tools of software development, with knowledge of various software development technologies. A sketch of the APEX interface such applications target is given below.
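
    As a taste of the APEX interface defined by ARINC 653 (a hedged sketch: the service and type names follow the specification's conventions, but exact headers and field sets vary by vendor, and the period values here are illustrative), a periodic process is typically created and started like this:

        #include "apex.h"   /* vendor-specific APEX header; name varies */

        /* Body of a periodic IMA process. */
        static void worker(void) {
            RETURN_CODE_TYPE rc;
            for (;;) {
                /* ... one frame of avionics work ... */
                PERIODIC_WAIT(&rc);              /* suspend until next release */
            }
        }

        void partition_init(void) {
            PROCESS_ATTRIBUTE_TYPE attr = {0};
            PROCESS_ID_TYPE pid;
            RETURN_CODE_TYPE rc;
            attr.ENTRY_POINT   = (SYSTEM_ADDRESS_TYPE)worker;
            attr.PERIOD        = 100000000;      /* 100 ms, in nanoseconds */
            attr.TIME_CAPACITY = 100000000;
            attr.BASE_PRIORITY = 10;
            CREATE_PROCESS(&attr, &pid, &rc);    /* APEX service calls */
            START(pid, &rc);
        }

    The point of the framework described above is that code written against this interface can be compiled and exercised on ordinary Linux before being rebuilt for LynxOS-178.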

  16. A case study in application I/O on Linux clusters

    Ross, R.; Nurmi, D.; Cheng, A.; Zingale, M.

    2001-01-01

    A critical but often ignored component of system performance is the I/O system. Today's applications expect a great deal from underlying storage systems and software, and both high performance distributed storage and high level interfaces have been developed to fill these needs. In this paper they discuss the I/O performance of a parallel scientific application on a Linux cluster, the FLASH astrophysics code. This application relies on three I/O software components to provide high performance parallel I/O on Linux clusters: the Parallel Virtual File System (PVFS), the ROMIO MPI-IO implementation, and the Hierarchical Data Format (HDF5) library. First they discuss the roles played by each of these components in providing an I/O solution. Next they discuss the FLASH I/O benchmark and point out its relevance. Following this they examine the performance of the benchmark, and through instrumentation of both the application and underlying system software code they discover the location of major software bottlenecks. They work around the most inhibiting of these bottlenecks, showing substantial performance improvement. Finally they point out similarities between the inefficiencies found here and those found in message passing systems, indicating that research in the message passing field could be leveraged to solve similar problems in high-level I/O interfaces
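
    To make the software stack concrete, the sketch below shows the pattern such an application uses to write one dataset collectively through HDF5's MPI-IO driver (a minimal sketch assuming an HDF5 build with parallel support; the dataset name and sizes are illustrative, and error checking is omitted):

        #include <hdf5.h>
        #include <mpi.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* Open the file through the MPI-IO virtual file driver. */
            hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
            H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
            hid_t file = H5Fcreate("checkpoint.h5", H5F_ACC_TRUNC,
                                   H5P_DEFAULT, fapl);

            /* One shared 1-D dataset; each rank owns a 1024-element slice. */
            hsize_t count = 1024, dims = (hsize_t)size * count;
            hsize_t offset = (hsize_t)rank * count;
            hid_t space = H5Screate_simple(1, &dims, NULL);
            hid_t dset  = H5Dcreate(file, "density", H5T_NATIVE_DOUBLE, space,
                                    H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

            double buf[1024];
            for (int i = 0; i < 1024; i++) buf[i] = rank + i * 1e-4;

            hid_t mspace = H5Screate_simple(1, &count, NULL);
            H5Sselect_hyperslab(space, H5S_SELECT_SET, &offset, NULL,
                                &count, NULL);
            hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
            H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE); /* collective I/O */
            H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, space, dxpl, buf);

            H5Pclose(dxpl); H5Sclose(mspace); H5Dclose(dset);
            H5Sclose(space); H5Fclose(file); H5Pclose(fapl);
            MPI_Finalize();
            return 0;
        }

    Whether the collective write is actually fast depends on the layers underneath, which is precisely where the paper locates its bottlenecks.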

  17. Multivariate realised kernels

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...

  18. Kernel bundle EPDiff

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  19. Kernel structures for Clouds

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  20. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive-definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
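
    One natural form for such an ensemble kernel (a hedged sketch consistent with the description above, not necessarily the paper's exact definition) averages the agreement of posterior class memberships over M mixture models:

        $$ K(x, y) = \frac{1}{M} \sum_{m=1}^{M} \sum_{c=1}^{C_m} P_m(c \mid x)\, P_m(c \mid y), $$

    which is positive semi-definite because each term is an inner product of posterior-probability vectors, so Mercer's condition holds by construction.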

  1. Servidor Linux para conexiones seguras de una LAN a Internet

    Escartín Vigo, José Antonio

    2005-01-01

    This document describes the implementation of a GNU/Linux server, specifying and solving the main problems an administrator encounters when putting a server into operation. It teaches how to configure a GNU/Linux server, describing the main services used to share files, web pages, mail and others covered later. The Webmin configuration tool, detailed in one of the final chapters, is independent...

  2. Design and Implementation of Linux Access Control Model

    Wei Xiaomeng; Wu Yongbin; Zhuo Jingchuan; Wang Jianyun; Haliqian Mayibula

    2017-01-01

    In this paper, the design and implementation of an access control model for the Linux system are discussed in detail. The design is based on the RBAC model, combined with the inherent characteristics of the Linux system, and adds support for process and role transitions. The core idea of the model is that files are divided into different categories, and the access authority for each category is distributed among several roles. Roles are then assigned to users of the system, and a user's role can transition from one to another by running an executable file. A sketch of the idea is given below.
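
    The following self-contained C sketch illustrates the category/role idea described in the abstract; all names and the transition rule are hypothetical, invented for illustration rather than taken from the paper:

        #include <stdbool.h>
        #include <string.h>

        typedef enum { CAT_SYSTEM, CAT_CONFIG, CAT_USER_DATA, CAT_COUNT } category;
        enum { PERM_NONE = 0, PERM_READ = 1, PERM_WRITE = 2 };

        typedef struct {
            const char *name;
            int access[CAT_COUNT];         /* permission bits per file category */
        } role;

        static const role roles[] = {
            { "admin",    { PERM_READ | PERM_WRITE, PERM_READ | PERM_WRITE,
                            PERM_READ | PERM_WRITE } },
            { "operator", { PERM_READ, PERM_READ | PERM_WRITE, PERM_READ } },
            { "guest",    { PERM_NONE, PERM_NONE, PERM_READ } },
        };

        /* Does the user's current role grant the requested access? */
        bool allowed(const role *r, category c, int want) {
            return (r->access[c] & want) == want;
        }

        /* Role transition on exec: a designated executable switches roles. */
        const role *transition(const role *cur, const char *exe) {
            if (strcmp(exe, "/usr/bin/raise-priv") == 0 &&
                strcmp(cur->name, "operator") == 0)
                return &roles[0];          /* operator -> admin */
            return cur;
        }

    In the real system these checks would live in the kernel's permission hooks rather than in user space.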

  3. First experiences with large SAN storage and Linux

    Wezel, Jos van; Marten, Holger; Verstege, Bernhard; Jaeger, Axel

    2004-01-01

    The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing. The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs. This article describes the design, implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes. Presented are some throughput measurements of one of the largest Linux-based parallel storage systems in the world

  4. Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center

    Guillebeau, P. L.

    2004-01-01

    The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interfaces necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources; this paper concentrates on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms, with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper addresses the Linux migration study approach, including the proof of concept, the criticality of customer buy-in, the importance of beginning with POSIX-compliant code, and the need for a smooth transition while maintaining operations. It focuses on the development approach, explaining the software lifecycle, and covers other aspects of development including phased implementation, interim milestones, and metrics measurement and reporting mechanisms. It also addresses the testing approach, covering all levels of testing including development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with Unix servers, are included.

  5. Viscosity kernel of molecular fluids

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    The density, temperature, and chain-length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have, by contrast, less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3-6 atomic diameters, which means...

  6. Salus: Kernel Support for Secure Process Compartments

    Raoul Strackx

    2015-01-01

    Full Text Available Consumer devices are increasingly being used to perform security and privacy critical tasks. The software used to perform these tasks is often vulnerable to attacks, due to bugs in the application itself or in included software libraries. Recent work proposes the isolation of security-sensitive parts of applications into protected modules, each of which can be accessed only through a predefined public interface. But most parts of an application can be considered security-sensitive at some level, and an attacker who is able to gain in-application level access may be able to abuse services from protected modules. We propose Salus, a Linux kernel modification that provides a novel approach for partitioning processes into isolated compartments sharing the same address space. Salus significantly reduces the impact of insecure interfaces and vulnerable compartments by enabling compartments (1) to restrict the system calls they are allowed to perform, (2) to authenticate their callers and callees and (3) to enforce that they can only be accessed via unforgeable references. We describe the design of Salus, report on a prototype implementation and evaluate it in terms of security and performance. We show that Salus provides a significant security improvement with a low performance overhead, without relying on any non-standard hardware support.
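
    Stock Linux already offers a coarse version of capability (1) through seccomp; the sketch below is therefore an analogy to Salus's system-call restriction, not Salus itself. In strict mode only read, write, _exit and sigreturn remain available, and any other system call kills the process:

        #include <linux/seccomp.h>
        #include <stdio.h>
        #include <sys/prctl.h>
        #include <unistd.h>

        int main(void) {
            printf("entering strict seccomp mode\n");
            fflush(stdout);
            if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
                perror("prctl");
                return 1;
            }
            const char msg[] = "write() is still allowed\n";
            write(STDOUT_FILENO, msg, sizeof msg - 1);
            /* An open() here would now terminate the process with SIGKILL. */
            _exit(0);
        }

    Salus generalizes this kind of policy to per-compartment granularity within a single address space.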

  7. Teaching Hands-On Linux Host Computer Security

    Shumba, Rose

    2006-01-01

    In the summer of 2003, a project to augment and improve the teaching of information assurance courses was started at IUP. Thus far, ten hands-on exercises have been developed. The exercises described in this article, and presented in the appendix, are based on actions required to secure a Linux host. Publicly available resources were used to…

  8. Drowning in PC Management: Could a Linux Solution Save Us?

    Peters, Kathleen A.

    2004-01-01

    Short on funding and IT staff, a Western Canada library struggled to provide adequate public computing resources. Staff turned to a Linux-based solution that supports up to 10 users from a single computer, and blends Web browsing and productivity applications with session management, Internet filtering, and user authentication. In this article,…

  9. Fedora Linux A Complete Guide to Red Hat's Community Distribution

    Tyler, Chris

    2009-01-01

    Whether you are running the stable version of Fedora Core or bleeding-edge Rawhide releases, this book has something for every level of user. The modular, lab-based approach not only shows you how things work--but also explains why--and provides you with the answers you need to get up and running with Fedora Linux.

  10. Linux Adventures on a Laptop. Computers in Small Libraries

    Roberts, Gary

    2005-01-01

    This article discusses the pros and cons of open source software, such as Linux. It asserts that despite the technical difficulties of installing and maintaining this type of software, ultimately it is helpful in terms of knowledge acquisition and as a beneficial investment librarians can make in themselves, their libraries, and their patrons.…

  11. LHC@home online tutorial for Linux users - recording

    CERN. Geneva

    2016-01-01

    A step-by-step online tutorial for LHC@home by Karolina Bozek. It contains detailed instructions for Linux users on how to join this volunteer computing project. This 5-minute recording, linked from http://lhcathome.web.cern.ch/join-us, shows the commands to copy and paste for installing BOINC and VirtualBox.

  12. Argonne Natl Lab receives TeraFLOP Cluster Linux NetworX

    2002-01-01

    " Linux NetworX announced today it has delivered an Evolocity II (E2) Linux cluster to Argonne National Laboratory that is capable of performing more than one trillion calculations per second (1 teraFLOP)" (1/2 page).

  13. Variable Kernel Density Estimation

    Terrell, George R.; Scott, David W.

    1992-01-01

    We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
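
    For reference, the sample-point version of a variable-bandwidth estimator in d dimensions attaches a bandwidth to each observation:

        $$ \hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h_i^{\,d}}\, K\!\left(\frac{x - X_i}{h_i}\right), $$

    while the balloon version instead lets the bandwidth depend on the estimation point, replacing $h_i$ by $h(x)$; the paper's comparison is between such schemes.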

  14. Steerability of Hermite Kernel

    Yang, Bo; Flusser, Jan; Suk, Tomáš

    2013-01-01

    Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords: Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf

  15. Scuba: scalable kernel-based gene prioritization.

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge, however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba integrates also a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
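
    The combination at the heart of any multiple kernel learning method, Scuba's included, is a weighted sum of base kernels, one per data source:

        $$ K(x, y) = \sum_{k=1}^{R} \eta_k\, K_k(x, y), \qquad \eta_k \ge 0, $$

    where the weights $\eta_k$ are learned; Scuba's particular contribution is choosing them by optimizing the margin distribution in a semi-supervised, scalable way, as described above.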

  16. Feasibility study of BES data processing and physics analysis on a PC/Linux platform

    Rong Gang; He Kanglin; Zhao Jiawei; Heng Yuekun; Zhang Chun

    1999-01-01

    The authors report a feasibility study of off-line BES data processing (data reconstruction and detector simulation) on a PC/Linux platform and an application of the PC/Linux system in D/Ds physics analysis. The authors compared the results obtained from the PC/Linux with those from an HP workstation. It shows that the PC/Linux platform can do BES offline data analysis as well as a UNIX workstation does, while being more powerful and economical

  17. Linear and kernel methods for multivariate change detection

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    Iteratively reweighted multivariate alteration detection (IR-MAD), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  18. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868

  19. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  20. A Linux Workstation for High Performance Graphics

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.
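
    The "recompile and run" claim rests on API compatibility: any OpenGL program, like the minimal GLUT example below, should build unchanged against the work-alike library (a generic illustration, not code from the project):

        #include <GL/glut.h>

        /* Draw one immediate-mode triangle. */
        static void display(void) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_TRIANGLES);
            glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
            glEnd();
            glFlush();
        }

        int main(int argc, char **argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(400, 400);
            glutCreateWindow("triangle");
            glutDisplayFunc(display);
            glutMainLoop();            /* enters the event loop; never returns */
            return 0;
        }

    Whether the triangle is rasterized by SGI silicon or a commodity accelerator is invisible at this level, which is exactly the point.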

  1. The definition of kernel Oz

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  2. O Linux e a perspectiva da dádiva

    Renata Apgaua

    2004-06-01

    Full Text Available This work's goal is to analyze the appearance and consolidation of the Linux operating system in a context marked by the hegemony of commercial operating systems, with Windows/Microsoft as the paradigmatic example. The creator of Linux chose to open its source code and offer it free of charge on the Internet. Since then, people from various parts of the world have participated in its development. This study therefore seeks to analyze the features of this space of sociability, where the exchanges point to a logic other than that of the market. The proposal of comprehending the social ties of the Linux universe through the perspective of the gift leads to another discussion, which will also deserve attention in this study, namely the contemporary relevance of the gift. Re-readings of Mauss, by Godbout and Caillé, indicate that the gift, in its "system of transformations", is present in contemporary societies, but not only in the social interstices, as Mauss himself had claimed.

  3. 7 CFR 981.7 - Edible kernel.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  4. 7 CFR 981.408 - Inedible kernel.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  5. 7 CFR 981.8 - Inedible kernel.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  6. Multivariate realised kernels

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...... which has these three properties which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...

  7. Clustering via Kernel Decomposition

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix be created from the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.

  8. Installing, Running and Maintaining Large Linux Clusters at CERN

    Bahyl, V; van Eldik, Jan; Fuchs, Ulrich; Kleinwort, Thorsten; Murth, Martin; Smith, Tim; Bahyl, Vladimir; Chardi, Benjamin; Eldik, Jan van; Fuchs, Ulrich; Kleinwort, Thorsten; Murth, Martin; Smith, Tim

    2003-01-01

    Having built up Linux clusters to more than 1000 nodes over the past five years, we already have practical experience confronting some of the LHC scale computing challenges: scalability, automation, hardware diversity, security, and rolling OS upgrades. This paper describes the tools and processes we have implemented, working in close collaboration with the EDG project [1], especially with the WP4 subtask, to improve the manageability of our clusters, in particular in the areas of system installation, configuration, and monitoring. In addition to the purely technical issues, providing shared interactive and batch services which can adapt to meet the diverse and changing requirements of our users is a significant challenge. We describe the developments and tuning that we have introduced on our LSF based systems to maximise both responsiveness to users and overall system utilisation. Finally, this paper will describe the problems we are facing in enlarging our heterogeneous Linux clusters, the progress we have ...

  9. Operational Numerical Weather Prediction systems based on Linux cluster architectures

    Pasqui, M.; Baldi, M.; Gozzini, B.; Maracchi, G.; Giuliani, G.; Montagnani, S.

    2005-01-01

    The progress in weather forecasting and atmospheric science has always been closely linked to the improvement of computing technology. In order to have more accurate weather forecasts and climate predictions, more powerful computing resources are needed, in addition to more complex and better-performing numerical models. To meet such a large computing demand, powerful workstations or massively parallel systems have been used. In the last few years, parallel architectures based on the Linux operating system have been introduced and have become popular, representing true high-performance, low-cost systems. In this work the Linux cluster experience gained at the Laboratory for Meteorology and Environmental Analysis (LaMMA-CNR-IBIMET) is described, and tips and performance figures are analysed

  10. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  11. ISAC EPICS on Linux: the march of the penguins

    Richards, J.; Nussbaumer, R.; Rapaz, S.; Waters, G.

    2012-01-01

    The DC linear accelerators of the ISAC radioactive beam facility at TRIUMF do not impose rigorous timing constraints on the control system. Therefore a real-time operating system is not essential for device control. The ISAC Control System is completing a move to the use of the Linux operating system for hosting all EPICS IOCs. The IOC platforms include GE-Fanuc VME based CPUs for control of most optics and diagnostics, rack mounted servers for supervising PLCs, small desktop PCs for GPIB and RS232 instruments, as well as embedded ARM processors controlling CAN-bus devices that provide a suitcase sized control system. This article focuses on the experience of creating a customized Linux distribution for front-end IOC deployment. Rationale, a road-map of the process, and efficiency advantages in personnel training and system management realized by using a single OS will be discussed. (authors)

  12. A embedded Linux system based on PowerPC

    Ye Mei; Zhao Jingwei; Chu Yuanping

    2006-01-01

    The authors introduce an embedded Linux system based on PowerPC, as well as the basic method of building it. The goal of the system is to provide a test system for VMEbus devices; it can also be used to set up small data acquisition and control systems. Two types of compiler are provided by the development system, according to the features of the system and of the PowerPC. The article first introduces some typical embedded operating systems and compares the features of the different systems. It then discusses in detail how to build an embedded Linux system and the key techniques involved. Finally, a successful read-write example is given, based on the test system. (authors)

  13. LightNVM: The Linux Open-Channel SSD Subsystem

    Bjørling, Matias; Gonzalez, Javier; Bonnet, Philippe

    2017-01-01

    resource utilization. We propose that SSD management trade-offs should be handled through Open-Channel SSDs, a new class of SSDs that give hosts control over their internals. We present our experience building LightNVM, the Linux Open-Channel SSD subsystem. We introduce a new Physical Page Address I/O interface... to limit read latency variability and that it can be customized to achieve predictable I/O latencies.

  14. Diversifying the Department of Defense Network Enterprise with Linux

    2010-03-01

    protection of DoD infrastructure. In the competitive marketplace, strategy is defined as a firm's theory on how it gains high levels of performance... practice of discontinuing support to legacy systems. Microsoft also needs to convey that it is in the user's best interest to upgrade the operating... stockholders, Microsoft acknowledged recent notable competitors in the marketplace threatening their long-time monopolistic enterprise. Linux (a popular

  15. A camac data acquisition system based on PC-Linux

    Ribas, R.V.

    2002-01-01

    A multi-parametric data acquisition system for nuclear physics experiments, using CAMAC instrumentation on a personal computer with the Linux operating system, is described. The system is very reliable, inexpensive and is capable of handling event rates up to 4-6 k events/s. In the present version, the maximum number of parameters to be acquired is limited only by the number of CAMAC modules that can be fitted in one CAMAC crate

  16. FTAP: a Linux-based program for tapping and music experiments.

    Finney, S A

    2001-02-01

    This paper describes FTAP, a flexible data collection system for tapping and music experiments. FTAP runs on standard PC hardware with the Linux operating system and can process input keystrokes and auditory output with reliable millisecond resolution. It uses standard MIDI devices for input and output and is particularly flexible in the area of auditory feedback manipulation. FTAP can run a wide variety of experiments, including synchronization/continuation tasks (Wing & Kristofferson, 1973), synchronization tasks combined with delayed auditory feedback (Aschersleben & Prinz, 1997), continuation tasks with isolated feedback perturbations (Wing, 1977), and complex alterations of feedback in music performance (Finney, 1997). Such experiments have often been implemented with custom hardware and software systems, but with FTAP they can be specified by a simple ASCII text parameter file. FTAP is available at no cost in source-code form.

  17. A Linux cluster for between-pulse magnetic equilibrium reconstructions and other processor bound analyses

    Peng, Q.; Groebner, R. J.; Lao, L. L.; Schachter, J.; Schissel, D. P.; Wade, M. R.

    2001-01-01

    A 12-processor Linux PC cluster has been installed to perform between-pulse magnetic equilibrium reconstructions during tokamak operations using the EFIT code written in Fortran. The MPICH package, implementing the Message Passing Interface (MPI), is employed by EFIT for data distribution and communication. The new system calculates equilibria eight times faster than the previous system, yielding a complete equilibrium time history on a 25 ms time scale 4 min after the pulse ends. A graphical interface is provided for users to control the time resolution and the type of EFITs. The next analysis to benefit from the cluster is CERQUICK, written in IDL, for ion temperature profile analysis. The plan is to expand the cluster so that a full profile analysis (Te, Ti, ne, Vr, Zeff) can be made available between pulses, which lays the groundwork for kinetic EFIT and/or ONETWO power balance analyses
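
    The problem is naturally data-parallel: each time slice of the pulse can be reconstructed independently. A hedged MPI sketch of this round-robin distribution follows (the reconstruction function is a placeholder for the actual EFIT invocation):

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            const int nslices = 200;       /* e.g. a pulse sampled every 25 ms */
            double done = 0.0;
            for (int t = rank; t < nslices; t += size) {
                /* reconstruct_equilibrium(t);  placeholder for the real work */
                done += 1.0;
            }
            double total = 0.0;
            MPI_Reduce(&done, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
            if (rank == 0)
                printf("processed %.0f of %d time slices\n", total, nslices);
            MPI_Finalize();
            return 0;
        }

    With independent slices spread over 12 processors, a speedup of roughly the order reported above is what one would expect once I/O and load imbalance are accounted for.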

  18. Dugong: a Docker image, based on Ubuntu Linux, focused on reproducibility and replicability for bioinformatics analyses.

    Menegidio, Fabiano B; Jabes, Daniela L; Costa de Oliveira, Regina; Nunes, Luiz R

    2018-02-01

    This manuscript introduces and describes Dugong, a Docker image based on Ubuntu 16.04, which automates the installation of more than 3500 bioinformatics tools (along with their respective libraries and dependencies) in alternative computational environments. The software operates through a user-friendly XFCE4 graphical interface that allows software management and installation by users not fully familiarized with the Linux command line, and provides the Jupyter Notebook to assist in the delivery and exchange of consistent and reproducible protocols and results across laboratories, assisting in the development of open science projects. Source code and instructions for local installation are available at https://github.com/DugongBioinformatics, under the MIT open source license.

  19. The performance analysis of linux networking - packet receiving

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and end systems should be able to provide the capabilities to support high bandwidth, sustained, end-to-end data transmission. Recent trends in technology are showing that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, instead of the network, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process. Key factors that affect Linux systems network performance are analyzed.
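
    One of the tunables in that receive path is the per-socket buffer between the kernel's protocol processing and the application. A small illustration of inspecting it with the standard sockets API (the 4 MB figure is arbitrary):

        #include <stdio.h>
        #include <sys/socket.h>

        int main(void) {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            if (fd < 0) { perror("socket"); return 1; }

            int want = 4 * 1024 * 1024;     /* ask for a 4 MB receive buffer */
            setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &want, sizeof want);

            int got; socklen_t len = sizeof got;
            getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &got, &len);
            /* Linux doubles the request and caps it at net.core.rmem_max. */
            printf("requested %d bytes, granted %d bytes\n", want, got);
            return 0;
        }

    If this buffer fills because the application cannot drain it fast enough, packets are dropped inside the end system rather than in the network, which is precisely the effect the paper's model captures.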

  20. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the

  1. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly

  2. Global Polynomial Kernel Hazard Estimation

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  3. Perbandingan proxy pada linux dan windows untuk mempercepat browsing website

    Dafwen Toresa

    2017-05-01

    Full Text Available At present many organizations, whether educational, governmental, or private companies, try to limit their users' access to the Internet, on the grounds that the available bandwidth begins to feel slow when many users browse. Speeding up browsing access is therefore a major concern, and proxy server technology addresses it. Deploying a proxy server requires considering the operating system on the server and the tool used, and it is not yet known on which operating system the best performance is achieved. It is therefore necessary to analyse proxy server performance on two different operating systems: Linux with the Squid tool and Windows with the Winroute tool. This study compares browsing speed from client computers, using Mozilla Firefox as the browser on the clients. Two client computers were used, each performing five trials of accessing/browsing the target web sites through the proxy server. From the tests performed, it is concluded that a proxy server on Linux with Squid gives faster browsing from clients than a proxy server on Windows with Winroute, using the same web browser on different client computers. Keywords: Proxy, Bandwidth, Browsing, Squid, Winroute
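
    Measurements of this kind can be scripted; a hedged C sketch using libcurl fetches a page through a proxy and reports the total transfer time (the URL and proxy address are placeholders):

        #include <curl/curl.h>
        #include <stdio.h>

        /* Discard the body; we only want the timing. */
        static size_t sink(void *p, size_t s, size_t n, void *u) {
            (void)p; (void)u;
            return s * n;
        }

        int main(void) {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL *c = curl_easy_init();
            curl_easy_setopt(c, CURLOPT_URL, "http://example.com/");
            curl_easy_setopt(c, CURLOPT_PROXY, "http://192.168.0.1:3128");
            curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, sink);
            if (curl_easy_perform(c) == CURLE_OK) {
                double t;
                curl_easy_getinfo(c, CURLINFO_TOTAL_TIME, &t);
                printf("fetch took %.3f s\n", t);
            }
            curl_easy_cleanup(c);
            curl_global_cleanup();
            return 0;
        }

    Repeating the fetch shows the cache effect directly: a second request for the same page should complete noticeably faster when the proxy serves it from its cache.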

  4. A compact kernel for the calculus of inductive constructions

    This work describes the new kernel for the Calculus of Inductive Constructions (CIC) implemented inside the Matita Interactive Theorem Prover. The design of the new kernel has been completely revisited since the first release, resulting in a remarkably compact implementation of about 2300 lines of OCaml code. The work ...

  5. Memory Analysis of the KBeast Linux Rootkit: Investigating Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    2015-06-01

    ... examine how a computer forensic investigator/incident handler, without specialised computer memory or software reverse engineering skills, can successfully ... memory images and malware, this new series of reports will be directed at those who must analyse Linux malware-infected memory images. The skills ...

  6. Robotic intelligence kernel

    Bruemmer, David J [Idaho Falls, ID

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative, and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  7. Wilson and Domainwall Kernels on Oakforest-PACS

    Kanamori, Issaku; Matsufuru, Hideo

    2018-03-01

    We report the performance of Wilson and Domainwall kernels on a new Intel Xeon Phi Knights Landing based machine named Oakforest-PACS, which is co-hosted by the University of Tokyo and the University of Tsukuba and is currently the fastest in Japan. This machine uses Intel Omni-Path for the internode network. We compare the performance of several types of implementation, including one that makes use of the Grid library. The code is incorporated into the code set Bridge++.

  8. Mixture Density Mercer Kernels: A Method to Learn Kernels

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  9. STATUS OF THE LINUX PC CLUSTER FOR BETWEEN-PULSE DATA ANALYSES AT DIII-D

    PENG, Q; GROEBNER, R.J; LAO, L.L; SCHACHTER, J.; SCHISSEL, D.P; WADE, M.R.

    2001-08-01

    OAK-B135 Some analyses that survey experimental data are carried out at a sparse sample rate between pulses during tokamak operation and/or completed as a batch job overnight, because the complete analysis on a single fast workstation cannot fit in the narrow time window between two pulses. Scientists therefore miss the opportunity to use these results to guide experiments quickly. With a dedicated Beowulf-type cluster at a cost less than that of a workstation, these analyses can be accomplished between pulses and the analyzed data made available to the research team during tokamak operation. A Linux PC cluster comprising 12 processors was installed at the DIII-D National Fusion Facility in CY00 and expanded to 24 processors in CY01 to automatically perform between-pulse magnetic equilibrium reconstructions using the EFIT code written in Fortran, CER analyses using the CERQUICK code written in IDL, and full profile fitting analyses (ne, Te, Ti, Vr, Zeff) using the IDL code ZIPFIT. This paper reports the current status of the system and discusses some problems and concerns raised during the implementation and expansion of the system.

  10. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods, namely least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
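
    The kernel "trick" referred to above can be made concrete in a few lines of linear algebra: in its dual form, ridge regression only ever touches the kernel (Gram) matrix. The sketch below is not the KRMM package (which is in R); it is a minimal Python/NumPy rendering of RKHS regression with a Gaussian kernel, with all names and toy data invented for illustration.

        import numpy as np

        def gaussian_kernel(A, B, sigma=1.0):
            # Gram matrix of pairwise Gaussian (RBF) kernel evaluations
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def krr_fit(X, y, lam=1e-2, sigma=1.0):
            # Dual form of ridge regression: alpha = (K + lam*I)^(-1) y
            K = gaussian_kernel(X, X, sigma)
            return np.linalg.solve(K + lam * np.eye(len(X)), y)

        def krr_predict(X_train, alpha, X_new, sigma=1.0):
            return gaussian_kernel(X_new, X_train, sigma) @ alpha

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 5))                        # e.g. marker genotypes
        y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=100)   # epistatic toy signal
        alpha = krr_fit(X, y)
        print(krr_predict(X, alpha, X[:3]))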

  11. Computer Security Laboratory with Kali Linux

    Gutiérrez Benito, Fernando

    2014-01-01

    A computer security laboratory using the Kali Linux distribution, an operating system dedicated to computer security auditing. Tools specialized in the various fields of security are employed, such as nmap, Metasploit, w3af, John the Ripper and Aircrack-ng. The aim is for students to understand the need to build secure applications, and for the laboratory to serve as a basis for those who wish to continue in the field of computer security. Grado en Ingeniería T...

  12. Documenting and automating collateral evolutions in Linux device drivers

    Padioleau, Yoann; Hansen, René Rydhof; Lawall, Julia

    2008-01-01

    ... Manually performing such collateral evolutions is time-consuming and unreliable, and has led to errors when modifications have not been done consistently. In this paper, we present an automatic program transformation tool, Coccinelle, for documenting and automating device driver collateral evolutions ... programmer. We have evaluated our approach on 62 representative collateral evolutions that were previously performed manually in Linux 2.5 and 2.6. On a test suite of over 5800 relevant driver files, the semantic patches for these collateral evolutions update over 93% of the files completely...

  13. Millisecond accuracy video display using OpenGL under Linux.

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.

  14. Construction of a Linux based chemical and biological information system.

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a Web-based easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and Zope application server for the web interface. We chose Linux as the main platform, however, almost every component can be used under various operating systems.

  15. A PC-Linux-based data acquisition system for the STAR TOFp detector

    Liu Zhixu; Liu Feng; Zhang Bingyun

    2003-01-01

    Commodity hardware running the open source operating system Linux is playing various important roles in the field of high energy physics. This paper describes the PC-Linux-based Data Acquisition System of STAR TOFp detector. It is based on the conventional solutions with front-end electronics made of NIM and CAMAC modules controlled by a PC running Linux. The system had been commissioned into the STAR DAQ system, and worked successfully in the second year of STAR physics runs

  16. Ubuntu Linux Toolbox 1000 + Commands for Ubuntu and Debian Power Users

    Negus, Christopher

    2008-01-01

    In this handy, compact guide, you'll explore a ton of powerful Ubuntu Linux commands while you learn to use Ubuntu Linux as the experts do: from the command line. Try out more than 1,000 commands to find and get software, monitor system health and security, and access network resources. Then, apply the skills you learn from this book to use and administer desktops and servers running Ubuntu, Debian, and KNOPPIX or any other Linux distribution.

  17. 7 CFR 981.9 - Kernel weight.

    2010-01-01

    7 CFR 981.9 (Agriculture Regulations of the Department of Agriculture, Agricultural Marketing Service, Marketing Agreements, Regulating Handling, Definitions), § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  18. 7 CFR 51.2295 - Half kernel.

    2010-01-01

    7 CFR 51.2295 (Standards for Shelled English Walnuts (Juglans Regia), Definitions), § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off.

  19. A kernel version of spatial factor analysis

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini provide an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
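
    A minimal sketch of the kernel PCA computation described here, assuming NumPy: the Gram matrix is double-centred (mean-centring in the implicit feature space) and its leading eigenvectors, scaled by the square roots of the eigenvalues, give the nonlinear component scores. Function names and toy data are illustrative only.

        import numpy as np

        def rbf_gram(X, sigma=1.0):
            # Gram matrix of pairwise Gaussian kernel evaluations
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def kernel_pca_scores(K, n_components=2):
            # Double-centre the Gram matrix: mean-centring in feature space
            n = K.shape[0]
            J = np.eye(n) - np.full((n, n), 1.0 / n)
            Kc = J @ K @ J
            vals, vecs = np.linalg.eigh(Kc)           # ascending eigenvalues
            vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder to descending
            keep = np.maximum(vals[:n_components], 0.0)
            return vecs[:, :n_components] * np.sqrt(keep)

        X = np.random.default_rng(0).normal(size=(50, 3))
        print(kernel_pca_scores(rbf_gram(X)).shape)   # (50, 2)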

  20. kernel oil by lipolytic organisms

    USER

    2010-08-02

    Aug 2, 2010 ... Rancidity of extracted cashew oil was observed with cashew kernels stored at 70, 80 and 90% ... method of the American Oil Chemists' Society AOCS (1978) using glacial ... changes occur and volatile products are formed that are.

  1. The integral first collision kernel method for gamma-ray skyshine analysis [Skyshine; Gamma-ray; First collision kernel; Monte Carlo calculation]

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems and the results were compared with a number of other commonly used codes. Our results were generally in good agreement with others but required only a small fraction of the computation time of the Monte Carlo calculations. The scheme of the IFCK method for dealing with various source collimation geometries is also presented in this study.

  2. Multivariate and semiparametric kernel regression

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
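
    The Nadaraya-Watson estimator mentioned above is the degree-zero case of local polynomial fitting: each prediction is a kernel-weighted average of the responses. A minimal one-dimensional sketch with a Gaussian kernel, assuming NumPy (names and data invented):

        import numpy as np

        def nadaraya_watson(x_train, y_train, x_eval, h=0.1):
            # Gaussian kernel weights; each fit is a weighted mean of responses
            w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
            return (w * y_train).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 1.0, 200)
        y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=200)
        print(nadaraya_watson(x, y, np.array([0.25, 0.5, 0.75])))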

  3. Notes on the gamma kernel

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.

  4. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
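
    A toy sketch of single-kernel superposition on a 2D grid, assuming NumPy: the dose is accumulated by centring an isotropic radial kernel on every source voxel. The kernel used here is an invented exponential/inverse-square stand-in, not the EGS4-generated or biexponentially parametrized kernels of the paper.

        import numpy as np

        def superpose_dose(source, radial_kernel, voxel=1.0):
            # Accumulate dose by centring an isotropic point kernel on each source voxel
            ny, nx = source.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            dose = np.zeros((ny, nx))
            for sy, sx in zip(*np.nonzero(source)):
                r = np.hypot((yy - sy) * voxel, (xx - sx) * voxel)
                dose += source[sy, sx] * radial_kernel(r)
            return dose

        # Invented stand-in kernel: exponential attenuation with inverse-square spread
        kernel = lambda r: np.exp(-0.1 * r) / np.maximum(r, 0.5) ** 2
        src = np.zeros((64, 64))
        src[32, 32] = 1.0                  # single point source
        print(superpose_dose(src, kernel).sum())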

  5. Protein fold recognition using geometric kernel data fusion.

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
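
    One geometry-inspired matrix mean of the kind the abstract contrasts with convex linear combinations is the log-Euclidean mean: average the matrix logarithms of the (positive definite) kernel matrices and exponentiate back. A sketch assuming NumPy/SciPy, with jitter added to keep the Gram matrices positive definite; this is an illustration, not the authors' MATLAB implementation.

        import numpy as np
        from scipy.linalg import expm, logm

        def log_euclidean_mean(kernels):
            # Matrix-geometric combination: exponentiate the average of matrix logs,
            # in contrast to a convex linear combination sum_i w_i * K_i
            L = sum(logm(K) for K in kernels) / len(kernels)
            return expm(L).real

        rng = np.random.default_rng(0)
        X1, X2 = rng.normal(size=(20, 5)), rng.normal(size=(20, 3))
        jitter = 1e-6 * np.eye(20)        # keep the Gram matrices positive definite
        K1, K2 = X1 @ X1.T + jitter, X2 @ X2.T + jitter
        print(log_euclidean_mean([K1, K2]).shape)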

  6. GSI operation software: migration from OpenVMS TO Linux

    Huhmann, R.; Froehlich, G.; Juelicher, S.; Schaa, V.R.W.

    2012-01-01

    The current operation software at GSI, controlling the linac, beam transfer lines, synchrotron and storage ring, has been developed over a period of more than two decades using OpenVMS on Alpha-Workstations. The GSI accelerator facilities will serve as an injector chain for the new FAIR accelerator complex for which a control system is currently developed. To enable reuse and integration of parts of the distributed GSI software system, in particular the linac operation software, within the FAIR control system, the corresponding software components must be migrated to Linux. Inter-operability with FAIR controls applications is achieved by adding a generic middle-ware interface accessible from Java applications. For porting applications to Linux a set of libraries and tools has been developed covering the necessary OpenVMS system functionality. Currently, core applications and services are already ported or rewritten and functionally tested but not in operational usage. This paper presents the current status of the project and concepts for putting the migrated software into operation. (authors)

  7. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    Hargrove, Paul H; Duell, Jason C

    2006-01-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to "fault precursors" (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters.

  8. Implementing Journaling in a Linux Shared Disk File System

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  9. Migration of alcator C-Mod computer infrastructure to Linux

    Fredian, T.W.; Greenwald, M.; Stillerman, J.A.

    2004-01-01

    The Alcator C-Mod fusion experiment at MIT in Cambridge, Massachusetts has been operating for twelve years. The data handling for the experiment during most of this period was based on MDSplus running on a cluster of VAX and Alpha computers using the OpenVMS operating system. While the OpenVMS operating system provided a stable reliable platform, the support of the operating system and the software layered on the system has deteriorated in recent years. With the advent of extremely powerful low cost personal computers and the increasing popularity and robustness of the Linux operating system a decision was made to migrate the data handling systems for C-Mod to a collection of PC's running Linux. This paper will describe the new system configuration, the effort involved in the migration from OpenVMS, the results of the first run campaign under the new configuration and the impact the switch may have on the rest of the MDSplus community

  10. FIREWALL AND SYSTEM SECURITY APPLIED TO LINUX

    Rodrigo Ribeiro

    2017-04-01

    Full Text Available Given the evolution of the internet worldwide, it has become necessary to invest in information security; several important concepts concerning computer networks and their evolution point to the emergence of new vulnerabilities. The main objective of this work is to show that, by using free software such as Linux and its tools, it is possible to build a scenario that is secure against certain attacks, through tests in controlled environments using architectures exercised in real time, and to assess their potential for use based on an original survey of the subject. From this starting point, it was possible to recognize the wide adoption of these security mechanisms, validating the efficiency of the studied tools in mitigating attacks on computer networks. The defense systems of the Linux platform are extremely efficient and meet the objective of protecting a network from improper access.

  11. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; its inner product, with the kernel matrix embedded, is available and suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  12. CompTIA Linux+ Complete Study Guide (Exams LX0-101 and LX0-102)

    Smith, Roderick W

    2010-01-01

    Prepare for CompTIA's Linux+ Exams. As the Linux server and desktop markets continue to grow, so does the need for qualified Linux administrators. CompTIA's Linux+ certification (Exams LX0-101 and LX0-102) includes the very latest enhancements to the popular open source operating system. This detailed guide not only covers all key exam topics—such as using Linux command-line tools, understanding the boot process and scripts, managing files and file systems, managing system security, and much more—it also builds your practical Linux skills with real-world examples. Inside, you'll find:. Full co

  13. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  14. PERI - auto-tuning memory-intensive kernels for multicore

    Williams, S; Carter, J; Oliker, L; Shalf, J; Yelick, K; Bailey, D; Datta, K

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4x improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
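
    For readers unfamiliar with the first of these kernels, the sketch below shows the reference (untuned) sparse matrix-vector product over the compressed sparse row (CSR) format; it is exactly this memory-bound loop nest that auto-tuners specialize per platform with blocking, unrolling, and SIMD variants. Pure Python/NumPy illustration, not the authors' generated code.

        import numpy as np

        def spmv_csr(data, indices, indptr, x):
            # y = A @ x with A stored in compressed sparse row (CSR) form
            y = np.zeros(len(indptr) - 1)
            for row in range(len(y)):
                lo, hi = indptr[row], indptr[row + 1]
                y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
            return y

        # 3x3 example matrix: [[4,0,1],[0,2,0],[3,0,5]]
        data = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
        indices = np.array([0, 2, 1, 0, 2])
        indptr = np.array([0, 2, 3, 5])
        print(spmv_csr(data, indices, indptr, np.array([1.0, 1.0, 1.0])))  # [5. 2. 8.]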

  15. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  16. Kernel versions of some orthogonal transformations

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...

  17. An Approximate Approach to Automatic Kernel Selection.

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  18. Model Selection in Kernel Ridge Regression

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...
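
    The small-grid cross-validation recommended here can be expressed compactly with off-the-shelf tools. A sketch using scikit-learn's KernelRidge and GridSearchCV (real APIs, but the grid values and toy data are invented, and only the Gaussian/RBF kernel is shown, since the Sinc kernel is not built in):

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

        # Small grid over the ridge penalty and the kernel scale; for scikit-learn's
        # RBF kernel, gamma = 1 / (2 * sigma^2)
        grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0], "gamma": [0.1, 0.5, 1.0, 2.0]}
        search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
        search.fit(X, y)
        print(search.best_params_)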

  19. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    Seager, M.

    2007-01-01

    Given the growing need for 'capability' systems as well, the budget demands are extreme and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  20. Big Data demonstrator using Hadoop to build a Linux cluster for log data analysis using R

    Torbensen, Rune Sonnich; Top, Søren

    2017-01-01

    This article walks through the steps to create a Hadoop Linux cluster in the cloud and outlines how to analyze device log data via an example in the R programming language.

  1. Application of instrument platform based embedded Linux system on intelligent scaler

    Wang Jikun; Yang Run'an; Xia Minjian; Yang Zhijun; Li Lianfang; Yang Binhua

    2011-01-01

    This paper designs an instrument platform based on an embedded Linux system and a peripheral circuit; by designing a Linux device driver and an application program based on Qt Embedded, the various functions of the intelligent scaler are realized. The system architecture is very reasonable, so stability, expansibility and the level of integration are increased, and the development cycle is shortened greatly. (authors)

  2. Automating the Port of Linux to the VirtualLogix Hypervisor using Semantic Patches

    Armand, Francois; Muller, Gilles; Lawall, Julia Laetitia

    2008-01-01

    ... of Linux to the VLX hypervisor. Coccinelle provides a notion of semantic patches, which are more abstract than standard patches, and thus are potentially applicable to a wider range of OS versions. We have applied this approach in the context of Linux versions 2.6.13, 2.6.14, and 2.6.15, for the ARM...

  3. Linux thin-client conversion in a large cardiology practice: initial experience.

    Echt, Martin P; Rosen, Jordan

    2004-01-01

    Capital Cardiology Associates (CCA) is a single-specialty cardiology practice with offices in New York and Massachusetts. In 2003, CCA converted its IT system from a Microsoft-based network to a Linux network employing Linux thin-client technology with overall positive outcomes.

  4. Integral equations with contrasting kernels

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
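
    To see the contrasting weighting at work numerically, the equation can be discretized with the trapezoidal rule and solved by marching forward in t. A sketch assuming NumPy, with the two kernels from the abstract and an invented forcing a(t):

        import numpy as np

        def solve_volterra(a, C, t):
            # x(t) = a(t) - int_0^t C(t,s) x(s) ds, trapezoidal rule, marching in t
            h = t[1] - t[0]
            x = np.zeros_like(t)
            x[0] = a(t[0])                  # the integral vanishes at t = 0
            for i in range(1, len(t)):
                w = np.full(i + 1, h)
                w[0] = w[-1] = h / 2.0      # trapezoid weights on nodes t_0 .. t_i
                rhs = a(t[i]) - np.dot(w[:-1] * C(t[i], t[:i]), x[:i])
                x[i] = rhs / (1.0 + w[-1] * C(t[i], t[i]))
            return x

        t = np.linspace(0.0, 10.0, 1001)
        a = lambda s: np.sin(s)                           # invented forcing
        C_star = lambda ti, s: np.log(np.e + (ti - s))    # growing weight
        D_star = lambda ti, s: 1.0 / (1.0 + (ti - s))     # decaying weight
        print(solve_volterra(a, C_star, t)[-1], solve_volterra(a, D_star, t)[-1])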

  5. Kernel learning algorithms for face recognition

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition, and also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new...

  6. Model selection for Gaussian kernel PCA denoising

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; here we augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
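
    A simplified rendering of the permutation idea, assuming NumPy: eigenvalues of the centred Gram matrix of the data are compared against those obtained after independently permuting each feature column, and components exceeding the 95th-percentile null are retained. This sketch fixes the kernel scale, whereas kPA as proposed also tunes it; all names and data are invented.

        import numpy as np

        def rbf_gram(X, sigma=1.0):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def centred_eigvals(K):
            # Eigenvalues of the double-centred Gram matrix, largest first
            n = K.shape[0]
            J = np.eye(n) - np.full((n, n), 1.0 / n)
            return np.sort(np.linalg.eigvalsh(J @ K @ J))[::-1]

        def kernel_parallel_analysis(X, sigma=1.0, n_perm=20, seed=0):
            # Retain components whose eigenvalue beats the 95th percentile of a
            # null built by independently permuting every feature column
            rng = np.random.default_rng(seed)
            obs = centred_eigvals(rbf_gram(X, sigma))
            null = np.array([
                centred_eigvals(rbf_gram(
                    np.column_stack([rng.permutation(c) for c in X.T]), sigma))
                for _ in range(n_perm)])
            return int((obs > np.percentile(null, 95, axis=0)).sum())

        X = np.random.default_rng(1).normal(size=(60, 4))
        print(kernel_parallel_analysis(X))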

  7. EMBEDDED LINUX BASED ALBUM BROWSER SYSTEM AT MUSIC STORES

    Suryadiputra Liawatimena

    2009-01-01

    Full Text Available The goal of this research is the creation of an album browser system for music stores based on embedded Linux. It is expected that this system will help the promotion of the music store and make the customers' activity at the store simpler and easier. The system uses NFS for networking, a database system, ripping software, and GUI development. The research method used laboratory experiments to test the system's hardware, a TPC-57 (Touch Panel Computer 5.7" with SA2410 ARM-9 Medallion CPU Module), and its software, developed with Qtopia Core. The results of the research are: 1. the database query process works properly; 2. the audio data buffering process works properly. From these experimental results, it can be concluded that the system is ready to be implemented and used in music stores.

  8. Impact on TRMM Products of Conversion to Linux

    Stocker, Erich Franz; Kwiatkowski, John

    2008-01-01

    In June 2008, TRMM data processing will be assumed by the Precipitation Processing System (PPS). This change will also mean a change in the hardware production environment from an SGI 32 bit IRIX processing environment to a Linux (Beowulf) 64 bit processing environment. This change of platform and operating system addressing (32 to 64) has some influence on data values in the TRMM data products. This paper will describe the transition architecture and scheduling. It will also provide an analysis of what the nature of the product differences will be. It will demonstrate that the differences are not scientifically significant and are generally not visible. However, they are not always identical with those which the SGI would produce.

  9. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    Ujjwal Maulik

    Full Text Available Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation, or the dependency on a hard threshold in the similarity measure, in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with the corrections based on a modified symmetry based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. Similar performance improvement is also found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry based correction achieves superior performance for unsupervised remote homolog detection even in multi-domain and promiscuous domain proteins from Genolevures database families, with better biological relevance. Source code is available upon request: sarkar@labri.fr.
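
    For orientation, the generic spectral-clustering step on a precomputed kernel (affinity) matrix looks as follows; this is the standard normalized-Laplacian recipe, not the paper's combined local-alignment kernels or symmetry-based correction. Assumes NumPy and scikit-learn; toy data invented.

        import numpy as np
        from sklearn.cluster import KMeans

        def spectral_clusters(K, n_clusters=2):
            # Treat the kernel matrix as graph affinities: normalised Laplacian,
            # smallest eigenvectors, row-normalise, then k-means (Ng et al. recipe)
            d = K.sum(axis=1)
            L = np.eye(len(K)) - K / np.sqrt(np.outer(d, d))
            _, vecs = np.linalg.eigh(L)
            U = vecs[:, :n_clusters]
            U /= np.linalg.norm(U, axis=1, keepdims=True)
            return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-2, 0.5, (30, 2)), rng.normal(2, 0.5, (30, 2))])
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        print(spectral_clusters(np.exp(-d2)))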

  10. Semi-Supervised Kernel PCA

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA is a hybrid of least...... squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  11. Calculation of dose point kernels for five radionuclides used in radio-immunotherapy

    Okigaki, S.; Ito, A.; Uchida, I.; Tomaru, T.

    1994-01-01

    With the recent interest in radioimmunotherapy, attention has been given to the calculation of dose distributions from beta rays and monoenergetic electrons in tissue. The dose distribution around a point source of a beta-ray-emitting radioisotope is referred to as a beta dose point kernel. Beta dose point kernels for five radionuclides appropriate for radioimmunotherapy, 131I, 186Re, 32P, 188Re, and 90Y, were calculated by the Monte Carlo method using the EGS4 code system. The present results were compared with published experimental data and other calculations. The accuracy and precision of the beta dose point kernels are discussed. (author)

  12. Model selection in kernel ridge regression

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  13. Real-time Linux operating system for plasma control on FTU--implementation advantages and first experimental results

    Vitale, V.; Centioli, C.; Iannone, F.; Mazza, G.; Panella, M.; Pangione, L.; Podda, S.; Zaccarian, L.

    2004-01-01

    In this paper, we report on the experiment carried out at the Frascati Tokamak Upgrade (FTU) on the porting of the plasma control system (PCS) from a LynxOS architecture to an open source Linux real-time architecture. The old LynxOS system was implemented on a VME/PPC604r embedded controller guaranteeing successful plasma position, density and current control. The new RTAI-Linux operating system has shown to easily adapt to the VME hardware via a VME/INTELx86 embedded controller. The advantages of the new solution versus the old one are not limited to the reduced cost of the new architecture (based on the open-source characteristic of the RTAI architecture) but also enhanced by the response time of the real-time system which, also through an optimization of the real-time code, has been reduced from 150 μs (LynxOS) to 70 μs (RTAI). The new real-time operating system is also shown to be suitable for new extended control activities, whose implementation is also possible based on the reduced duty cycle duration, which leaves space for the real-time implementation of nonlinear control laws. We report here on recent experiments related to the optimization of the coupling between additional radiofrequency power and plasma

  14. Real-time Linux operating system for plasma control on FTU--implementation advantages and first experimental results

    Vitale, V. E-mail: vitale@frascati.enea.it; Centioli, C.; Iannone, F.; Mazza, G.; Panella, M.; Pangione, L.; Podda, S.; Zaccarian, L

    2004-06-01

    In this paper, we report on the experiment carried out at the Frascati Tokamak Upgrade (FTU) on the porting of the plasma control system (PCS) from a LynxOS architecture to an open source Linux real-time architecture. The old LynxOS system was implemented on a VME/PPC604r embedded controller guaranteeing successful plasma position, density and current control. The new RTAI-Linux operating system has shown to easily adapt to the VME hardware via a VME/INTELx86 embedded controller. The advantages of the new solution versus the old one are not limited to the reduced cost of the new architecture (based on the open-source characteristic of the RTAI architecture) but also enhanced by the response time of the real-time system which, also through an optimization of the real-time code, has been reduced from 150 μs (LynxOS) to 70 μs (RTAI). The new real-time operating system is also shown to be suitable for new extended control activities, whose implementation is also possible based on the reduced duty cycle duration, which leaves space for the real-time implementation of nonlinear control laws. We report here on recent experiments related to the optimization of the coupling between additional radiofrequency power and plasma.

  15. Multiple Kernel Learning with Data Augmentation

    2016-11-22

    Khanh Nguyen et al., JMLR: Workshop and Conference Proceedings 63:49–64, ACML 2016. The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to ...

  16. A kernel version of multivariate alteration detection

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  17. Low latency protocol for transmission of measurement data from FPGA to Linux computer via 10 Gbps Ethernet link

    Zabolotny, W.M.

    2015-01-01

    This paper presents FADE-10G, an integrated solution for modern multichannel measurement systems. Its main aim is low-latency, reliable transmission of measurement data from FPGA-based front-end electronic boards (FEBs) to a computer-based node in the Data Acquisition System (DAQ), using a standard Ethernet 1 Gbps or 10 Gbps link. In addition to the transmission of data, the system allows the user to reliably send simple control commands from the DAQ to the FEB and to receive responses. The aim of the work is to provide a simple base solution which can be adapted by the end user to his or her particular needs. Therefore, the emphasis is put on minimal consumption of FPGA resources in the FEB and minimal CPU load in the DAQ computer. The open-source implementation of the FPGA IP core and the Linux kernel driver, published under a permissive license, facilitates modification and reuse of the solution. The system has been successfully tested in real hardware, both with 1 Gbps and 10 Gbps links.

  18. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  19. Cold moderator scattering kernels

    MacFarlane, R.E.

    1989-01-01

    New thermal-scattering-law files in ENDF format have been developed for solid methane, liquid methane, liquid ortho- and para-hydrogen, and liquid ortho- and para-deuterium using up-to-date models that include such effects as incoherent elastic scattering in the solid, diffusion and hindered vibrations and rotations in the liquids, and spin correlations for the hydrogen and deuterium. These files were generated with the new LEAPR module of the NJOY Nuclear Data Processing System. Other modules of this system were used to produce cross sections for these moderators in the correct format for the continuous-energy Monte Carlo code (MCNP) being used for cold-moderator-design calculations at the Los Alamos Neutron Scattering Center (LANSCE). 20 refs., 14 figs

  20. Complex use of cottonseed kernels

    Glushenkova, A I

    1977-01-01

    A review with 41 references is made of the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality, and the technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and bagasse is described.

  1. Kernel regression with functional response

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.

  2. GRIM : Leveraging GPUs for Kernel integrity monitoring

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  3. Paramecium: An Extensible Object-Based Kernel

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  4. Local Observed-Score Kernel Equating

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  5. Veto-Consensus Multiple Kernel Learning

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  6. Tri-Laboratory Linux Capacity Cluster 2007 SOW

    Seager, M

    2007-03-22

    However, given the growing need for 'capability' systems as well, the budget demands are extreme and new, more cost-effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.

  7. The RAppArmor Package: Enforcing Security Policies in R Using Dynamic Sandboxing on Linux

    Jeroen Ooms

    2013-11-01

    Full Text Available The increasing availability of cloud computing and scientific super computers brings great potential for making R accessible through public or shared resources. This allows us to efficiently run code requiring lots of cycles and memory, or embed R functionality into, e.g., systems and web services. However some important security concerns need to be addressed before this can be put in production. The prime use case in the design of R has always been a single statistician running R on the local machine through the interactive console. Therefore the execution environment of R is entirely unrestricted, which could result in malicious behavior or excessive use of hardware resources in a shared environment. Properly securing an R process turns out to be a complex problem. We describe various approaches and illustrate potential issues using some of our personal experiences in hosting public web services. Finally we introduce the RAppArmor package: a Linux based reference implementation for dynamic sandboxing in R on the level of the operating system.
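
    The operating-system mechanism underlying such sandboxing can be illustrated outside R. The sketch below (Python, using the standard resource and subprocess modules on Linux) applies hard rlimit caps to a child process before it runs untrusted code; RAppArmor additionally confines file and network access through AppArmor profiles, which this toy example does not attempt.

        import resource
        import subprocess
        import sys

        def run_limited(code, cpu_seconds=2, mem_bytes=256 * 2**20):
            # Apply hard OS-level limits in the child just before it executes
            def set_limits():
                resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
                resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
            return subprocess.run([sys.executable, "-c", code],
                                  preexec_fn=set_limits,
                                  capture_output=True, text=True, timeout=30)

        print(run_limited("print(sum(range(10**6)))").stdout)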

  8. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Senyue Zhang

    2016-01-01

    Full Text Available According to the characteristic that the kernel function of an extreme learning machine (ELM) and its performance have a strong correlation, a novel extreme learning machine based on a generalized triangle Hermitian kernel function was proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for extreme learning machines. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which can greatly shorten the computational time of parameter optimization and retain more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
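
    A minimal Python sketch of the kernel-ELM machinery described above. The closed-form solution is the standard kernel ELM; the paper's mixed triangular/Hermite-Dirichlet kernel is replaced by an illustrative placeholder product kernel with a natural-number parameter n (the placeholder is an assumption, not the paper's formula):

        import numpy as np

        def placeholder_kernel(X, Z, n=2):
            # Product kernel stand-in: triangular factor times an oscillatory
            # factor, with natural-number parameter n (illustrative only).
            d = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2)
            tri = np.maximum(0.0, 1.0 - d / n)
            return tri * np.cos(n * d)

        def kernel_elm_fit(X, T, C=10.0, n=2):
            # Standard kernel ELM: beta = (I/C + K)^{-1} T
            K = placeholder_kernel(X, X, n)
            return np.linalg.solve(np.eye(len(X)) / C + K, T)

        def kernel_elm_predict(Xnew, X, beta, n=2):
            return placeholder_kernel(Xnew, X, n) @ beta

        X = np.random.randn(50, 3)
        T = np.sin(X[:, 0:1])                   # toy regression target
        beta = kernel_elm_fit(X, T)
        pred = kernel_elm_predict(X[:5], X, beta)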

  9. Cleaning the GNU/Linux operating system (Čiščenje operacijskega sistema GNU/Linux)

    OBLAK, DENIS

    2018-01-01

    The goal of this thesis is to develop an application that helps clean the Linux operating system and works in most distributions. The theoretical part discusses cleaning the Linux operating system, which frees disk space and enables better system performance. Cleaning techniques and existing tools for the Linux operating system are systematically reviewed and theoretically presented. Next, cleaning of the Windows and MacOS operating systems is presented. At the same time, ...

  10. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Wigner functions defined with Laplace transform kernels.

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America
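
    In symbols, and hedging on the paper's exact conventions (normalization and signs below are assumptions), the construction amounts to replacing the Fourier kernel of the traditional Wigner function with a Laplace kernel whose variable s may be complex:

        W(x,p) = \int E\!\left(x + \tfrac{x'}{2}\right) E^{*}\!\left(x - \tfrac{x'}{2}\right) e^{-i p x'}\, dx'
        \;\longrightarrow\;
        W_L(x,s) = \int E\!\left(x + \tfrac{x'}{2}\right) E^{*}\!\left(x - \tfrac{x'}{2}\right) e^{-s x'}\, dx' ,

    so evanescent components, which carry complex momenta, remain representable in the phase space.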

  12. Debugging Nondeterministic Failures in Linux Programs through Replay Analysis

    Shakaiba Majeed

    2018-01-01

    Full Text Available Reproducing a failure is the first and most important step in debugging because it enables us to understand the failure and track down its source. However, many programs are susceptible to nondeterministic failures that are hard to reproduce, which makes debugging extremely difficult. We first address the reproducibility problem by proposing an OS-level replay system for a uniprocessor environment that can capture and replay the nondeterministic events needed to reproduce a failure in Linux interactive and event-based programs. We then present an analysis method, called replay analysis, based on the proposed record and replay system to diagnose concurrency bugs in such programs. The replay analysis method uses a combination of static analysis, dynamic tracing during replay, and delta debugging to identify failure-inducing memory access patterns that lead to concurrency failure. The experimental results show that the presented record and replay system has low recording overhead and hence can be safely used in production systems to catch rarely occurring bugs. We also present a few concurrency bug case studies from real-world applications to prove the effectiveness of the proposed bug diagnosis framework.

  13. Multi-terabyte EIDE disk arrays running Linux RAID5

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; Petravick, D.L.

    2004-01-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important
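
    The parity argument in the abstract can be made concrete with a short Python sketch: RAID-5 parity is the XOR of the data blocks in a stripe, so any single lost block is recoverable, while two lost blocks leave two unknowns:

        import functools, operator

        stripes = [b"\x01\x02", b"\x0a\x0b", b"\x10\x20"]   # blocks on 3 disks
        # Parity block: byte-wise XOR across the data blocks of the stripe.
        parity = bytes(functools.reduce(operator.xor, col)
                       for col in zip(*stripes))

        # Disk 1 fails: XOR of the surviving blocks and the parity restores it.
        restored = bytes(a ^ b ^ p
                         for a, b, p in zip(stripes[0], stripes[2], parity))
        assert restored == stripes[1]
        # With two failed disks the remaining XOR has two unknowns,
        # which is why RAID-5 cannot recover from multiple disk failures.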

  14. Multi-terabyte EIDE disk arrays running Linux RAID5

    Sanders, D.A.; Cremaldi, L.M.; Eschenburg, V.; Godang, R.; Joy, M.D.; Summers, D.J.; /Mississippi U.; Petravick, D.L.; /Fermilab

    2004-11-01

    High-energy physics experiments are currently recording large amounts of data and in a few years will be recording prodigious quantities of data. New methods must be developed to handle this data and make analysis at universities possible. Grid Computing is one method; however, the data must be cached at the various Grid nodes. We examine some storage techniques that exploit recent developments in commodity hardware. Disk arrays using RAID level 5 (RAID-5) include both parity and striping. The striping improves access speed. The parity protects data in the event of a single disk failure, but not in the case of multiple disk failures. We report on tests of dual-processor Linux Software RAID-5 arrays and Hardware RAID-5 arrays using a 12-disk 3ware controller, in conjunction with 250 and 300 GB disks, for use in offline high-energy physics data analysis. The price of IDE disks is now less than $1/GB. These RAID-5 disk arrays can be scaled to sizes affordable to small institutions and used when fast random access at low cost is important.

  15. Credit scoring analysis using kernel discriminant

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, a normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
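
    A minimal sketch of such a kernel discriminant rule in Python on synthetic data (bandwidths, priors, and features are illustrative, not the paper's): each class density is estimated with an Epanechnikov kernel, and an applicant is assigned the class with the larger prior-weighted density.

        import numpy as np
        from sklearn.neighbors import KernelDensity

        rng = np.random.default_rng(0)
        good = rng.normal(0.0, 1.0, size=(200, 2))   # repaying applicants
        bad = rng.normal(1.5, 1.0, size=(100, 2))    # defaulting applicants

        kde_good = KernelDensity(kernel="epanechnikov", bandwidth=0.5).fit(good)
        kde_bad = KernelDensity(kernel="epanechnikov", bandwidth=0.5).fit(bad)
        prior_good, prior_bad = 2 / 3, 1 / 3

        applicant = np.array([[0.4, 0.2]])
        # score_samples returns log densities; add log priors and compare.
        s_good = kde_good.score_samples(applicant)[0] + np.log(prior_good)
        s_bad = kde_bad.score_samples(applicant)[0] + np.log(prior_bad)
        print("grant credit" if s_good > s_bad else "reject")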

  16. Testing Infrastructure for Operating System Kernel Development

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.

  17. Kernel parameter dependence in spatial factor analysis

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence … of the kernel width. The 2,097 samples, each covering on average 5 km², are analyzed chemically for the content of 41 elements.
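
    A brief sketch of the kernel PCA idea referred to above, using scikit-learn on synthetic data; the loop over gamma mimics the paper's concern with kernel-width dependence (data and parameter values are illustrative):

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(1)
        t = rng.uniform(0, 2 * np.pi, 300)
        # Noisy ring: a nonlinear structure that ordinary PCA cannot unfold.
        X = np.column_stack([np.cos(t), np.sin(t)])
        X += 0.05 * rng.standard_normal((300, 2))

        for gamma in (0.1, 1.0, 10.0):     # kernel-width dependence
            kpca = KernelPCA(n_components=2, kernel="rbf", gamma=gamma)
            scores = kpca.fit_transform(X)
            print(gamma, scores.std(axis=0))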

  18. Development of a laboratory model of SSSC using RTAI on Linux ...

    ... capability to Linux General Purpose Operating System (GPOS) over and above the capabilities of non ... Introduction. Power transfer ... of a controller prototyping environment is Matlab/Simulink/Real-time Workshop software, which can be ...

  19. Supporting the Secure Halting of User Sessions and Processes in the Linux Operating System

    Brock, Jerome

    2001-01-01

    .... Only when a session must be reactivated are its processes returned to a runnable state. This thesis presents an approach for adding this "secure halting" functionality to the Linux operating system...

  20. [Study for lung sound acquisition module based on ARM and Linux].

    Lu, Qiang; Li, Wenfeng; Zhang, Xixue; Li, Junmin; Liu, Longqing

    2011-07-01

    An acquisition module with ARM and Linux at its core was developed. This paper presents the hardware configuration and the software design. It is shown that the module can extract human lung sounds reliably and effectively.

  1. Parallel Processing Performance Evaluation of Mixed T10/T100 Ethernet Topologies on Linux Pentium Systems

    Decato, Steven

    1997-01-01

    ... performed on relatively inexpensive off the shelf components. Alternative network topologies were implemented using 10 and 100 megabit-per-second Ethernet cards under the Linux operating system on Pentium based personal computer platforms...

  2. Linux helps you stay independent (Linux aitab olla sõltumatu) / Jon Hall; interviewed by Kristjan Otsmann

    Hall, Jon

    2002-01-01

    Estonia should use more software based on open source code, because this would make Estonia less dependent on foreign software producers, Linux International head Jon Hall said in an interview with Postimees.

  3. WYSIWIB: A Declarative Approach to Finding Protocols and Bugs in Linux Code

    Lawall, Julia Laetitia; Brunel, Julien Pierre Manuel; Hansen, Rene Rydhof

    2008-01-01

    the tools to be able to find specific kinds of bugs. In this paper, we propose a declarative approach based on a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer is able to express specifications for protocol and bug finding using...

  4. WYSIWIB: A Declarative Approach to Finding API Protocols and Bugs in Linux Code

    Lawall, Julia; Palix, Nicolas

    2009-01-01

    the tools to be able to find specific kinds of bugs. In this paper, we propose a declarative approach based on a control-flow based program search engine. Our approach is WYSIWIB (What You See Is Where It Bugs), since the programmer is able to express specifications for protocol and bug finding using...

  5. Kernel Bayesian ART and ARTMAP.

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, called ARTMAP, is a powerful tool for classification. Among several improvements, such as the Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach for high-dimensional data and large numbers of samples requires high computational cost, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional space. The simulation experiments show that KBA has a more outstanding self-organizing capability than BA, and KBAM provides superior classification ability to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Calculation of the Kernel scattering for thermal neutrons in H2O e D2O

    Leal, L.C.; Assis, J.T. de

    1981-01-01

    A computer code, using the Nelkin and Butler models for the calculation of the scattering kernel, was developed. Calculations of the thermal neutron flux in a homogeneous and infinite medium with a 1/v absorber in 30 energy groups were done and compared with experimental data. The reactor parameters calculated by the Hammer code (in the original version and with the new library generated by the authors' code) are presented. (E.G) [pt

  7. The NIMROD Code

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  8. Theory of reproducing kernels and applications

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  9. A unified and comprehensible view of parametric and kernel methods for genomic prediction with application to rice

    Laval Jacquin

    2016-08-01

    Full Text Available One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression (i.e., the genomic best linear unbiased predictor (GBLUP)) and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel trick concept, exploited by kernel methods in the context of epistatic genetic architectures, over the parametric frameworks used by conventional methods. Several parametric and kernel methods (the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression) were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP, and RKHS regression, with a Gaussian, Laplacian, polynomial, or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
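
    The GBLUP/RKHS connection can be sketched compactly: both reduce to kernel ridge regression, differing only in the kernel. A hedged Python illustration on synthetic marker data (KRMM itself is an R package; nothing below comes from it):

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(2)
        # Marker matrix with genotypes coded 0/1/2, and a toy phenotype.
        M = rng.integers(0, 3, size=(150, 400)).astype(float)
        beta = rng.normal(0, 0.1, size=400)
        y = M @ beta + rng.normal(0, 1.0, size=150)

        # Linear kernel on markers: GBLUP-like prediction.
        gblup_like = KernelRidge(alpha=1.0, kernel="linear").fit(M, y)
        # Gaussian kernel: RKHS regression, able to capture epistasis.
        rkhs = KernelRidge(alpha=1.0, kernel="rbf", gamma=1e-3).fit(M, y)
        print(gblup_like.score(M, y), rkhs.score(M, y))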

  10. Convergence of barycentric coordinates to barycentric kernels

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  11. Convergence of barycentric coordinates to barycentric kernels

    Kosinka, Jiří; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  12. Kernel principal component analysis for change detection

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA … with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.

  13. Partial Deconvolution with Inaccurate Blur Kernel.

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  14. Process for producing metal oxide kernels and kernels so obtained

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists of adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, then dispersing the solution thus obtained drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the product capable of extracting the anions. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles [fr

  15. VISPA - Visual Physics Analysis on Linux, Mac OS X and Windows

    Brodski, M.; Erdmann, M.; Fischer, R.; Hinzmann, A.; Klimkovich, T.; Mueller, G.; Muenzer, T.; Steggemann, J.; Winchen, T.

    2009-01-01

    Modern physics analysis is an iterative task consisting of prototyping, executing and verifying the analysis procedure. For supporting scientists in each step of this process, we developed VISPA: a toolkit based on graphical and textual elements for visual physics analysis. Unlike many other analysis frameworks VISPA runs on Linux, Windows and Mac OS X. VISPA can be used in any experiment with serial data flow. In particular, VISPA can be connected to any high energy physics experiment. Furthermore, datatypes for the usage in astroparticle physics have recently been successfully included. An analysis on the data is performed in several steps, each represented by an individual module. While modules e.g. for file input and output are already provided, additional modules can be written by the user with C++ or the Python language. From individual modules, the analysis is designed by graphical connections representing the data flow. This modular concept assists the user in fast prototyping of the analysis and improves the reusability of written source code. The execution of the analysis can be performed directly from the GUI, or on any supported computer in batch mode. Therefore the analysis can be transported from the laptop to other machines. The recently improved GUI of VISPA is based on a plug-in mechanism. Besides components for the development and execution of physics analysis, additional plug-ins are available for the visualization of e.g. the structure of high energy physics events or the properties of cosmic rays in an astroparticle physics analysis. Furthermore plug-ins have been developed to display and edit configuration files of individual experiments from within the VISPA GUI. (author)

  16. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic

  17. On the Automatic Evolution of an OS Kernel Using Temporal Logic and AOP

    Åberg, Rickard; Lawall, Julia Laetitia; Sudholt, Mario

    2003-01-01

    aspect-oriented programming, temporal logic, process scheduling, Linux, domain-specific languages

  18. Hilbertian kernels and spline functions

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but more generally for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  19. CompTIA Linux+ study guide exam LX0-103 and exam LX0-104

    Bresnahan, Christine

    2015-01-01

    CompTIA Authorized Linux+ prep. CompTIA Linux+ Study Guide is your comprehensive study guide for the Linux+ Powered by LPI certification exams. With complete coverage of 100% of the objectives on both exam LX0-103 and exam LX0-104, this study guide provides clear, concise information on all aspects of Linux administration, with a focus on the latest version of the exam. You'll gain the insight of examples drawn from real-world scenarios, with detailed guidance and authoritative coverage of key topics, including GNU and Unix commands, system operation, system administration, system services, secu

  20. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  1. Mitigation of artifacts in rtm with migration kernel decomposition

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  2. Optimization of Finite-Differencing Kernels for Numerical Relativity Applications

    Roberto Alfieri

    2018-05-01

    Full Text Available A simple optimization strategy for the computation of 3D finite-differencing kernels on many-core architectures is proposed. The 3D finite-differencing computation is split direction-by-direction and exploits two levels of parallelism: in-core vectorization and multi-threaded shared-memory parallelization. The main application of this method is to accelerate the high-order stencil computations in numerical relativity codes. Our proposed method provides substantial speedup in computations involving tensor contractions and 3D stencil calculations on different processor microarchitectures, including Intel Knights Landing.
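
    A minimal Python/numpy sketch of the direction-by-direction splitting for a second-order 3D Laplacian stencil; the kernels in the paper are higher-order and vectorized in-core, so this only illustrates the structure:

        import numpy as np

        def laplacian_3d(u, h=1.0):
            # Apply one 1D second-difference stencil per direction and sum.
            lap = np.zeros_like(u)
            core = (slice(1, -1),) * 3
            for axis in range(3):
                lo = [slice(1, -1)] * 3; lo[axis] = slice(0, -2)
                hi = [slice(1, -1)] * 3; hi[axis] = slice(2, None)
                lap[core] += (u[tuple(lo)] - 2.0 * u[core]
                              + u[tuple(hi)]) / h**2
            return lap

        u = np.random.rand(64, 64, 64)
        print(laplacian_3d(u).shape)   # threads would split the outer axis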

  3. Ranking Support Vector Machine with Kernel Approximation

    Kai Chen

    2017-01-01

    Full Text Available Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  4. Ranking Support Vector Machine with Kernel Approximation.

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
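
    Of the two approximations, random Fourier features are easy to sketch: an explicit feature map z(x) whose inner products approximate the RBF kernel, so a linear ranker trained on z(x) avoids the kernel matrix entirely. A hedged numpy illustration:

        import numpy as np

        def rff(X, n_features=500, gamma=0.5, seed=0):
            # z(x)^T z(y) approximates k(x, y) = exp(-gamma * ||x - y||^2).
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
            b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

        X = np.random.randn(100, 10)
        Z = rff(X)
        K_approx = Z @ Z.T
        K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None, :]) ** 2, axis=2))
        # The error shrinks as n_features grows.
        print(np.abs(K_approx - K_exact).max())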

  5. Sentiment classification with interpolated information diffusion kernels

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  6. Evolution kernel for the Dirac field

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  7. Panel data specifications in nonparametric kernel regression

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  8. Improving the Bandwidth Selection in Kernel Equating

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
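
    For reference, the classical Silverman rule of thumb that the proposal builds on fits in a few lines of Python; this is the textbook rule, not the paper's kernel-equating adaptation:

        import numpy as np

        def silverman_bandwidth(x):
            # h = 0.9 * min(sigma, IQR / 1.34) * n^(-1/5)
            x = np.asarray(x, dtype=float)
            n = x.size
            sigma = x.std(ddof=1)
            iqr = np.subtract(*np.percentile(x, [75, 25]))
            return 0.9 * min(sigma, iqr / 1.34) * n ** (-1 / 5)

        scores = np.random.default_rng(3).normal(50, 10, size=1000)
        print(silverman_bandwidth(scores))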

  9. Metabolic network prediction through pairwise rational kernels.

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy

  10. Implementing Discretionary Access Control with Time Character in Linux and Performance Analysis

    TAN Liang; ZHOU Ming-Tian

    2006-01-01

    DAC (Discretionary Access Control) is access control based on the ownership relation between subject and object: the subject can discretionarily decide who, and by what methods, may access the objects it owns. In this paper, the system time is treated as a basic security element. The DAC_T (Discretionary Access Control Policy with Time Character) is presented and formalized. DAC_T resolves the problem of letting the subject discretionarily decide who may access its objects, and when. DAC_T is then implemented on Linux based on GFAC (General Framework for Access Control), and the algorithm is put forward. Finally, a performance analysis of DAC_T_Linux is carried out. It is shown that DAC_T_Linux not only realizes time constraints between subject and object but also remains acceptable even though its performance is somewhat decreased.
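
    A toy Python sketch of attaching a time character to a discretionary grant (entirely hypothetical; it does not reflect the DAC_T_Linux implementation or the GFAC interfaces):

        from datetime import datetime, time

        # Owner-granted rights, each valid only inside a time window.
        acl = {
            ("alice", "report.txt", "read"): (time(9, 0), time(17, 0)),
        }

        def dac_t_check(subject, obj, right, now=None):
            window = acl.get((subject, obj, right))
            if window is None:
                return False                   # no discretionary grant at all
            t = (now or datetime.now()).time()
            return window[0] <= t <= window[1] # grant valid only in the window

        print(dac_t_check("alice", "report.txt", "read",
                          datetime(2024, 1, 1, 10, 30)))  # True: inside window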

  11. Research on applications of ARM-LINUX embedded systems in manufacturing the nuclear equipment

    Nguyen Van Sy; Phan Luong Tuan; Nguyen Xuan Vinh; Dang Quang Bao

    2016-01-01

    A new microprocessor system, the ARM processor with the open-source Linux operating system, is studied with the objective of applying ARM-Linux embedded systems to manufacturing nuclear equipment. We use a vendor development board to learn and build the workflow for an embedded system; based on this knowledge, we design an embedded-system motherboard that interfaces with peripherals (buttons and LEDs) through GPIO and connects to a GM counting system via an RS232 interface. The results of this study are: i) procedures for working with embedded systems: customizing, building, and installing the embedded operating system and configuring the development tools on the host computer; ii) an ARM-Linux embedded motherboard that interfaces with the peripherals and the GM counting system, displaying the counts from the GM counting system on the touch screen. (author)
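
    On the host side, reading counts from a GM counting system over RS232 can be sketched with pyserial; the port name, baud rate, and line format below are assumptions, not details from the paper:

        import serial  # pip install pyserial

        # Open the serial link to the GM counting system (parameters assumed).
        with serial.Serial("/dev/ttyS0", baudrate=9600, timeout=1.0) as port:
            for _ in range(10):
                line = port.readline().decode("ascii", errors="ignore").strip()
                if line:
                    print("counts:", line)  # would be drawn on the touch screen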

  12. PKI, Gamma Radiation Reactor Shielding Calculation by Point-Kernel Method

    Li Chunhuai; Zhang Liwu; Zhang Yuqin; Zhang Chuanxu; Niu Xihua

    1990-01-01

    1 - Description of program or function: This code calculates gamma-ray radiation shielding problems in geometric space. 2 - Method of solution: PKI uses a point kernel integration technique, describes the radiation shielding geometric space by using a geometric space configuration method and coordinate conversion, and makes use of the calculation results of reactor primary shielding and of the flow regularity of coolant in the loop system
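
    For an isotropic point source, the point-kernel technique reduces to attenuating the uncollided flux along the line of sight. A minimal Python sketch with illustrative values (buildup factor taken as 1; all numbers are assumptions):

        import math

        def point_kernel_flux(S, mu, r, buildup=1.0):
            # flux = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2)
            return S * buildup * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

        S = 1.0e9    # source strength (photons/s), illustrative
        mu = 0.06    # linear attenuation coefficient (1/cm), illustrative
        for r in (50.0, 100.0, 200.0):   # distances through the shield, in cm
            print(r, point_kernel_flux(S, mu, r))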

  13. Isolation of a kernel oleoyl-ACP thioesterase gene from the oil palm ...

    We have isolated a cDNA clone from the developing kernel of the oil palm Elaeis guineensis which encodes a thioesterase enzyme. Its highest homology was to the Brassica napus oleoyl-ACP thioesterase with which it had 72% homology at the nucleotide level, over the coding region examined, and 83% identity (90% ...

  14. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost- performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine- grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc. (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  15. A Stochastic Proof of the Resonant Scattering Kernel and its Applications for Gen IV Reactors Type

    Becker, B.; Dagan, R.; Broeders, C.H.M.; Lohnert, G.

    2008-01-01

    Monte Carlo codes such as MCNP are widely accepted as a near-reference standard for reactor analysis. A Monte Carlo code should therefore use as few approximations as possible in order to produce 'experimental-level' calculations. In this study we deal with one of the most problematic approximations made in MCNP, in which the resonances are ignored for the secondary neutron energy distribution, namely the change of the energy and angular direction of the neutron after interaction with a heavy isotope with pronounced resonances. The endeavour of accounting for the influence of the resonances on the scattering kernel goes back to 1944, when E. Wigner and J. Wilkins developed the first temperature-dependent scattering kernel. However, only in 1998 was the full analytical solution for the double differential resonant-dependent scattering kernel suggested by W. Rothenstein and R. Dagan. An independent stochastic approach is presented for the first time to confirm the above analytical kernel with a completely different methodology. Moreover, by subtly manipulating the scattering subroutine COLIDN of MCNP, it is shown that this very subroutine, as well as the relevant explanation in the MCNP manual, is to some extent inappropriate. The impact of this improved resonance-dependent scattering kernel on diverse types of reactors, in particular for the Generation IV innovative core design HTR, is shown to be significant. (authors)

  16. The OKE Corral : Code organisation and reconfiguration at runtime using active linking

    Bos, Herbert; Samwel, Bart

    2002-01-01

    The OKE Corral is an active network environment which allows third-party active code to configure an active node’s code organisation at any level, including the kernel. Using the safety properties of an open kernel environment and a simple ‘Click-like’ software model, third parties are able to load

  17. Malware Memory Analysis of the Jynx2 Linux Rootkit (Part 1): Investigating a Publicly Available Linux Rootkit Using the Volatility Memory Analysis Framework

    2014-10-01

    …analysis techniques is outside the scope of this work, as it requires a comprehensive study of operating system internals and software reverse engineering… 2 Peripheral concerns. 2.1 Why examine Linux memory images or make them available? After extensively searching the available public

  18. Linux toys II 9 Cool New Projects for Home, Office, and Entertainment

    Negus, Christopher

    2006-01-01

    Builds on the success of the original Linux Toys (0-7645-2508-5) and adds projects using different Linux distributions. All-new toys in this edition include a car computer system with built-in entertainment and navigation features, bootable movies, a home surveillance monitor, a LEGO Mindstorms robot, and a weather mapping station. Introduces small business opportunities with an Internet radio station and Internet café projects. Companion Web site features specialized hardware drivers, software interfaces, music and game software, project descriptions, and discussion forums. Includes a CD-ROM with scr

  19. Utilizing the Linux Operating System for Data Security in E-commerce (Memanfaatkan Sistem Operasi Linux Untuk Keamanan Data Pada E-commerce)

    Isnania

    2012-01-01

    E-commerce is one of the major channels for carrying out transactions, and security is a vital issue for protecting customer data and transactions. To realize an e-commerce process, a reliable operating system (OS) must be prepared to secure the transaction path, along with a dynamic database back end that provides the catalog of products to be sold online. For the technology, we can adopt the open source technologies that are all available on Linux. On Linux it's too bundle...

  20. PhyLIS: A Simple GNU/Linux Distribution for Phylogenetics and Phyloinformatics

    Robert C. Thomson

    2009-01-01

    Full Text Available PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  1. PhyLIS: a simple GNU/Linux distribution for phylogenetics and phyloinformatics.

    Thomson, Robert C

    2009-07-30

    PhyLIS is a free GNU/Linux distribution that is designed to provide a simple, standardized platform for phylogenetic and phyloinformatic analysis. The operating system incorporates most commonly used phylogenetic software, which has been pre-compiled and pre-configured, allowing for straightforward application of phylogenetic methods and development of phyloinformatic pipelines in a stable Linux environment. The software is distributed as a live CD and can be installed directly or run from the CD without making changes to the computer. PhyLIS is available for free at http://www.eve.ucdavis.edu/rcthomson/phylis/.

  2. An Internal Data Non-hiding Type Real-time Kernel and its Application to the Mechatronics Controller

    Yoshida, Toshio

    For mechatronics equipment controllers that control robots and machine tools, high-speed motion control processing is essential. The software system of such a controller, like other embedded systems, is composed of three software layers (a real-time kernel layer, a middleware layer, and an application software layer) on dedicated hardware. The application layer at the top is composed of many tasks, and the application function of the system is realized by cooperation between these tasks. In this paper we propose an internal data non-hiding type real-time kernel in which customizing the task control is possible only by changing the program code on the task side, without any changes in the program code of the real-time kernel. Reducing the overhead caused by the real-time kernel's task control is necessary to speed up the motion control of mechatronics equipment, and this requires customizing the task control function. We developed the internal data non-hiding type real-time kernel ZRK to evaluate this method and applied it to the control of a multi-system automatic lathe. The speed-up of task cooperation processing was confirmed by combined task control processing in the task-side program code using ZRK.

  3. Bayesian Kernel Mixtures for Counts.

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.

  4. Mercure IV code application to the external dose computation from low and medium level wastes

    Tomassini, T.

    1985-01-01

    In the present work the external dose from low- and medium-level wastes is calculated using the MERCURE IV code. The code uses the Monte Carlo method to integrate multigroup line-of-sight attenuation kernels.

  5. Research on offense and defense technology for iOS kernel security mechanism

    Chu, Sijun; Wu, Hao

    2018-04-01

    iOS is a strong and widely used mobile device system; its annual profits make up about 90% of the total profits of all mobile phone brands. Though it is famous for its security, there have been many attacks on the iOS operating system, such as the Trident APT attack in 2016. So it is important to research the iOS security mechanism, understand its weaknesses, and put forward a targeted protection and security check framework. By studying these attacks and previous jailbreak tools, we can see that an attacker could only run ROP code and gain kernel read and write permissions based on the ROP after exploiting kernel- and user-layer vulnerabilities. However, the iOS operating system is still protected by the code signing mechanism, the sandbox mechanism, and the not-writable mechanism of the system's disk area. This is far from the steady, long-lasting control that attackers expect. Before iOS 9, breaking these security mechanisms was usually done by modifying the kernel's important data structures and security mechanism code logic. However, after iOS 9, the kernel integrity protection mechanism was added to the 64-bit operating system and none of the previous methods were adapted to the new versions of iOS [1]. But this does not mean that attackers cannot break through. Therefore, based on the analysis of the vulnerability of the KPP security mechanism, this paper implements two possible breakthrough methods of the kernel security mechanism for iOS 9 and iOS 10. Meanwhile, we propose a defense method based on kernel integrity detection and sensitive API call detection to defend against the breakthrough methods mentioned above. We also carry out experiments to prove that this method can prevent and detect attack attempts or intruders effectively and in a timely manner.

  6. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    Sonoda, Jun; Yamaki, Kota

    We develop an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis, and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It is also easy to install/uninstall packages and to enable/disable init daemons. When we rebuild a Live Linux CD using our system, the number of operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in an information processing education class at our college. From a questionnaire survey of the 43 students who used the Live Linux CD, we find that the Live Linux is useful for about 80 percent of the students. From these results, we conclude that our system can easily and automatically rebuild a useful Live Linux in a short time.

  7. Anisotropic hydrodynamics with a scalar collisional kernel

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2↔2 scattering kernel in scalar λϕ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  8. Study on the scattering law and scattering kernel of hydrogen in zirconium hydride

    Jiang Xinbiao; Chen Wei; Chen Da; Yin Banghua; Xie Zhongsheng

    1999-01-01

    The nuclear analytical model of calculating scattering law and scattering kernel for the uranium zirconium hybrid reactor is described. In the light of the acoustic and optic model of zirconium hydride, its frequency distribution function f(ω) is given and the scattering law of hydrogen in zirconium hydride is obtained by GASKET. The scattering kernel σ l (E 0 →E) of hydrogen bound in zirconium hydride is provided by the SMP code in the standard WIMS cross section library. Along with this library, WIMS is used to calculate the thermal neutron energy spectrum of fuel cell. The results are satisfied

  9. Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.

    van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim

    2018-05-21

    Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.
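
    As background (not taken from the abstract itself), the pairwise forces that a DPD kernel such as Jdpd must evaluate are conventionally written, following Groot and Warren, as

        \mathbf F^{C}_{ij} = a_{ij}\,(1 - r_{ij}/r_c)\,\hat{\mathbf r}_{ij}, \qquad
        \mathbf F^{D}_{ij} = -\gamma\, w^{D}(r_{ij})\,(\hat{\mathbf r}_{ij} \cdot \mathbf v_{ij})\,\hat{\mathbf r}_{ij}, \qquad
        \mathbf F^{R}_{ij} = \sigma\, w^{R}(r_{ij})\,\theta_{ij}\,\Delta t^{-1/2}\,\hat{\mathbf r}_{ij},

    where w^D = (w^R)² and σ² = 2γk_BT enforce the fluctuation-dissipation balance; the conservative, dissipative, and random terms are exactly the per-pair work that a parallelized force calculation distributes.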

  10. Automatic performance tuning of parallel and accelerated seismic imaging kernels

    Haberdar, Hakan

    2014-01-01

    With the increased complexity and diversity of mainstream high performance computing systems, significant effort is required to tune parallel applications in order to achieve the best possible performance for each particular platform. This task is becoming more and more challenging and requires a larger set of skills. Automatic performance tuning is becoming a must for optimizing applications such as Reverse Time Migration (RTM), widely used in seismic imaging for oil and gas exploration. An empirical-search-based auto-tuning approach is applied to the MPI communication operations of the parallel isotropic and tilted transverse isotropic kernels. The application of auto-tuning using the Abstract Data and Communication Library improved the performance of the MPI communications as well as developer productivity by providing a higher level of abstraction. Keeping productivity in mind, we opted for pragma-based programming for accelerated computation on the latest accelerator architectures, such as GPUs, using the fairly new OpenACC standard. The same auto-tuning approach is also applied to the OpenACC-accelerated seismic code to optimize the compute-intensive kernel of the Reverse Time Migration application. The application of this technique resulted in improved performance of the original code and the ability to adapt to different execution environments.
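
    The empirical-search idea is simple to sketch: time each candidate configuration on the target platform and keep the best. The snippet below is a minimal, generic illustration of such a tuner; the parameter names and the run_kernel benchmark hook are hypothetical and not part of the Abstract Data and Communication Library:

        import itertools
        import time

        def timed_run(run_kernel, cfg):
            start = time.perf_counter()
            run_kernel(**cfg)
            return time.perf_counter() - start

        def autotune(run_kernel, param_space, repeats=3):
            """Empirically search a parameter space for the fastest configuration.

            run_kernel:  callable taking keyword parameters, executing the kernel once.
            param_space: dict mapping parameter name -> list of candidate values.
            """
            best_cfg, best_time = None, float("inf")
            names = list(param_space)
            for values in itertools.product(*(param_space[n] for n in names)):
                cfg = dict(zip(names, values))
                # Take the best of several runs to reduce timing noise.
                elapsed = min(timed_run(run_kernel, cfg) for _ in range(repeats))
                if elapsed < best_time:
                    best_cfg, best_time = cfg, elapsed
            return best_cfg, best_time

        # Hypothetical usage: tune message chunk size and number of OpenACC gangs.
        # best, t = autotune(run_kernel, {"chunk_size": [4096, 65536], "num_gangs": [64, 256]})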

  11. MAGNETOHYDRODYNAMIC EQUATIONS (MHD) GENERATION CODE

    Francisco Frutos Alfaro

    2017-04-01

    Full Text Available A program that generates Fortran and C code for the full magnetohydrodynamic equations is presented. The program uses the free computer algebra system software REDUCE. This software has a package called EXCALC, which is an exterior calculus program. The advantage of this program is that it can be modified to include another complex metric or spacetime. The output of this program is modified by means of a Linux script, which creates a new REDUCE program to manipulate the magnetohydrodynamic equations to obtain a code that can be used as a seed for a magnetohydrodynamic code for numerical applications. As an example, we present part of the output of our programs for Cartesian coordinates and how to do the discretization.

  12. Convolutional Neural Network on Embedded Linux System-on-Chip: A Methodology and Performance Benchmark

    2016-05-01

    ...two NVIDIA GTX580 GPUs [3]. Therefore, for this initial work, we decided to concentrate on small networks and small datasets until the methods are...

  13. Web application for monitoring mainframe computer, Linux operating systems and application servers

    Dimnik, Tomaž

    2016-01-01

    This work presents the idea and the realization of a web application for monitoring the operation of a mainframe computer, servers with the Linux operating system, and application servers. The web application is intended for administrators of these systems as an aid to better understand the current state, load, and operation of the individual components of the server systems.

  14. NSC KIPT Linux cluster for computing within the CMS physics program

    Levchuk, L.G.; Sorokin, P.V.; Soroka, D.V.

    2002-01-01

    The architecture of the NSC KIPT specialized Linux cluster constructed for carrying out work on CMS physics simulations and data processing is described. The configuration of the portable batch system (PBS) on the cluster is outlined. Capabilities of the cluster in its current configuration to perform CMS physics simulations are pointed out

  15. Design of software platform based on linux operating system for γ-spectrometry instrument

    Hong Tianqi; Zhou Chen; Zhang Yongjin

    2008-01-01

    This paper describes the design of a γ-spectrometry instrument software platform based on the S3C2410A processor with an ARM920T core; emphasis is placed on analyzing the integrated application of the embedded Linux operating system, the YAFFS file system, and the Qt/Embedded GUI development library. It presents a new software platform for portable γ-measurement instruments. (authors)

  16. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    In this paper, we use a higher-order hybrid Gaussian kernel in a meshsize boosting algorithm for kernel density estimation. Bias reduction is guaranteed in this scheme, as in other existing schemes, but the higher-order hybrid Gaussian kernel is used instead of the regular fixed kernels. A numerical verification of this scheme ...
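
    For readers unfamiliar with the building block, a plain fixed-bandwidth Gaussian kernel density estimate (the baseline that the hybrid, higher-order scheme above modifies) can be written in a few lines; this is a generic sketch, not the authors' boosting algorithm:

        import numpy as np

        def gaussian_kde(x_eval, samples, h):
            """Fixed-bandwidth Gaussian KDE: f_hat(x) = (1/nh) * sum_i phi((x - x_i)/h)."""
            u = (x_eval[:, None] - samples[None, :]) / h
            phi = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
            return phi.sum(axis=1) / (len(samples) * h)

        # Example: estimate a density from 500 standard-normal draws.
        rng = np.random.default_rng(0)
        data = rng.standard_normal(500)
        grid = np.linspace(-4, 4, 200)
        density = gaussian_kde(grid, data, h=0.3)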

  17. NLO corrections to the Kernel of the BKP-equations

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3 → 3 kernel, computed in the tree approximation.

  18. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  19. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  20. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  1. 7 CFR 51.1441 - Half-kernel.

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  2. 7 CFR 51.2296 - Three-fourths half kernel.

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  3. 7 CFR 981.401 - Adjusted kernel weight.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  4. 7 CFR 51.1403 - Kernel color classification.

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  5. Parsimonious Wavelet Kernel Extreme Learning Machine

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm with kernel extreme learning machine (KELM). In the wavelet analysis, bases that are localized in time and frequency were used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in "frequency-rich" signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved on the synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
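
    A commonly cited admissible wavelet kernel, which combines a cosine with a Gaussian envelope, is sketched below for illustration; the PWKELM parsimonious selection step itself is not reproduced here, and the dilation parameter a is a free choice:

        import numpy as np

        def wavelet_kernel(x, z, a=1.0):
            """Translation-invariant wavelet kernel:
            K(x, z) = prod_i cos(1.75*(x_i - z_i)/a) * exp(-(x_i - z_i)^2 / (2*a^2))."""
            d = (np.asarray(x) - np.asarray(z)) / a
            return float(np.prod(np.cos(1.75 * d) * np.exp(-0.5 * d**2)))

        # In a kernel ELM, the hidden layer is replaced by the kernel matrix
        # K[i, j] = wavelet_kernel(X[i], X[j]), and the output weights solve a
        # regularized linear system over that matrix.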

  6. Ensemble Approach to Building Mercer Kernels

    National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...

  7. Kernel Learning of Histogram of Local Gabor Phase Patterns for Face Recognition

    Bineng Zhong

    2008-06-01

    Full Text Available This paper proposes a new face recognition method, named kernel learning of histogram of local Gabor phase pattern (K-HLGPP), which is based on Daugman’s method for iris recognition and the local XOR pattern (LXP) operator. Unlike traditional Gabor usage exploiting the magnitude part in face recognition, we encode the Gabor phase information for face classification by the quadrant bit coding (QBC) method. Two schemes are proposed for face recognition. One is based on the nearest-neighbor classifier with chi-square as the similarity measurement, and the other performs kernel discriminant analysis for HLGPP (K-HLGPP) using histogram intersection and Gaussian-weighted chi-square kernels. The comparative experiments show that K-HLGPP achieves a higher recognition rate than other well-known face recognition systems on the large-scale standard FERET, FERET200, and CAS-PEAL-R1 databases.

  8. Feature selection and multi-kernel learning for sparse representation on a manifold

    Wang, Jim Jing-Yan

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.

  9. Feature selection and multi-kernel learning for sparse representation on a manifold.

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Uranium kernel formation via internal gelation

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  11. Quantum tomography, phase-space observables and generalized Markov kernels

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  12. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) was determined by titration. The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C), and 17.51 g I2/100 g for Refined Bleached Deodorized Palm Kernel Oil (A), Refined Bleached Deodorized Palm Kernel ...

  13. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs, and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
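
    The log-Euclidean Gaussian kernel itself is easy to state: map each SPD covariance matrix through the matrix logarithm and apply a Gaussian kernel on the resulting symmetric matrices. A minimal sketch, assuming SciPy's matrix logarithm and a user-chosen width sigma (the sparse-coding stage built on top of this kernel is not shown):

        import numpy as np
        from scipy.linalg import logm

        def covariance_descriptor(epoch):
            """SPD covariance matrix of one multichannel EEG epoch (channels x samples)."""
            x = epoch - epoch.mean(axis=1, keepdims=True)
            c = x @ x.T / (x.shape[1] - 1)
            return c + 1e-6 * np.eye(c.shape[0])  # regularize to keep it positive definite

        def log_euclidean_gaussian_kernel(c1, c2, sigma=1.0):
            """K(C1, C2) = exp(-||logm(C1) - logm(C2)||_F^2 / (2*sigma^2))."""
            diff = logm(c1) - logm(c2)
            return float(np.exp(-np.linalg.norm(diff, "fro") ** 2 / (2 * sigma**2)))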

  14. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.

  15. High-speed data acquisition with the Solaris and Linux operating systems

    Zilker, M.; Heimann, P.

    2000-01-01

    In this paper, we discuss whether Solaris and Linux are suitable for data acquisition systems in soft real time conditions. As an example we consider a plasma diagnostic (Mirnov coils), which collects data for a complete plasma discharge of about 10 s from up to 72 channels. Each ADC-Channel generates a data stream of 4 MB/s. To receive these data streams an eight-channel Hotlink PCI interface board was designed. With a prototype system using Solaris and the driver developed by us we investigate important properties of the operating system such as the I/O performance and scheduling of processes. We compare the Solaris operating system on the Ultra Sparc platform with Linux on the Intel platform. Finally, some points of user program development are mentioned to show how the application can make the most efficient use of the underlying high-speed I/O system

  16. [Making a low cost IPSec router on Linux and the assessment for practical use].

    Amiki, M; Horio, M

    2001-09-01

    We installed Linux and FreeS/WAN on a PC/AT compatible machine to make an IPSec router. We measured ping and ftp times within the university and between the university and the external network. Between the university and the external network (the Internet), there were no differences. We therefore concluded that the CPU load is not significant on low-speed networks, because packets exchanged via the Internet are small, or because the compression used by the VPN offsets the cost of encoding and decoding. On the other hand, within the university, the IPSec router's performance dropped by about 20-30% compared with normal IP communication, but this is not a serious problem for practical use. Recently, VPN machines have become cheaper, but they do not provide sufficient functionality to create a fundamental VPN environment. Therefore, if one wants a fundamental VPN environment at a low cost, we believe one should select a VPN router on Linux.

  17. MySQL databases as part of the Online Business, using a platform based on Linux

    Ion-Sorin STROE

    2011-09-01

    Full Text Available The Internet is a business development environment that has major advantages over the traditional environment. From a financial standpoint, the initial investment is much reduced and, in terms of yield, the chances of success are considerably higher. Developing an online business also depends on the manager’s ability to use the best solutions, sustainable over the long term. The current trend is to decrease the costs of the technical platform by adopting open-source licensed products. Such a platform is based on a Linux operating system and a database system based on the MySQL product. This article aims to answer two basic questions: “Can a platform based on Linux and MySQL handle the demands of an online business?” and “Does adopting such a solution increase profitability?”

  18. A PC parallel port button box provides millisecond response time accuracy under Linux.

    Stewart, Neil

    2006-02-01

    For psychologists, it is sometimes necessary to measure people's reaction times to the nearest millisecond. This article describes how to use the PC parallel port to receive signals from a button box to achieve millisecond response time accuracy. The workings of the parallel port, the corresponding port addresses, and a simple Linux program for controlling the port are described. A test of the speed and reliability of button box signal detection is reported. If the reader is moderately familiar with Linux, this article should provide sufficient instruction for him or her to build and test his or her own parallel port button box. This article also describes how the parallel port could be used to control an external apparatus.

  19. The Linux based distributed data acquisition system for the ISTRA+ experiment

    Filin, A.; Inyakin, A.; Novikov, V.; Obraztsov, V.; Smirnov, N.; Vlassov, E.; Yuschenko, O.

    2001-01-01

    The DAQ hardware of the ISTRA+ experiment consists of the VME system crate that contains two PCI-VME bridges interfacing two PCs with VME, an external interrupts receiver, the readout controller for dedicated front-end electronics, the readout controller buffer memory module, the VME-CAMAC interface, and additional control modules. The DAQ computing consists of 6 PCs running the Linux operating system and linked into a LAN. The first PC serves the external interrupts and acquires the data from the front-end electronics. The second one is the slow control computer. The remaining PCs host the monitoring and data analysis software. The Linux-based DAQ software provides external interrupt processing and data acquisition, recording, and distribution between the monitoring and data analysis tasks running on the DAQ PCs. The monitoring programs are based on two packages for data visualization: a home-written one and the ROOT system. MySQL is used as the DAQ database.

  20. Aflatoxin contamination of developing corn kernels.

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. The stage of growth and the location of kernels on corn ears were found to be important factors in the process of kernel infection with A. flavus and A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein, and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein contents were reduced in the case of both pathogens. Shoot length, seedling fresh weight, and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected. Their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus and A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred, and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus and A. parasiticus and aflatoxin production.

  1. Analog forecasting with dynamics-adapted kernels

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
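
    In its simplest form, kernel analog forecasting replaces Lorenz's single best match with a similarity-weighted average over all historical analogs. The sketch below illustrates that core step with a Gaussian kernel on delay-embedded states; the delay length q, bandwidth eps, and lead time are illustrative choices, and the dynamics-adapted refinements of the paper are omitted:

        import numpy as np

        def delay_embed(series, q):
            """Takens delay-coordinate map: row t holds (x_t, x_{t-1}, ..., x_{t-q+1})."""
            return np.stack([series[q - 1 - j : len(series) - j] for j in range(q)], axis=1)

        def analog_forecast(history, x_now, lead, q=5, eps=0.5):
            """Kernel-weighted ensemble of analogs for a scalar time series."""
            emb = delay_embed(history, q)
            emb = emb[: len(emb) - lead]            # keep analogs whose future is known
            d2 = ((emb - x_now) ** 2).sum(axis=1)   # squared distance to current state
            w = np.exp(-d2 / eps)
            w /= w.sum()
            futures = history[q - 1 + lead : q - 1 + lead + len(emb)]
            return float(w @ futures)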

  2. Classification of Hyperspectral Images Using Kernel Fully Constrained Least Squares

    Jianjun Liu

    2017-11-01

    Full Text Available As a widely used classifier, sparse representation classification (SRC) has shown good performance for hyperspectral image classification. Recent works have highlighted that it is the collaborative representation mechanism under SRC that makes SRC a highly effective technique for classification purposes. If the dimensionality and the discrimination capacity of a test pixel are high, other norms (e.g., the ℓ2-norm) can be used to regularize the coding coefficients, besides the sparsity-inducing ℓ1-norm. In this paper, we show that in the kernel space the nonnegative constraint can also play the same role, and thus suggest the investigation of kernel fully constrained least squares (KFCLS) for hyperspectral image classification. Furthermore, in order to improve the classification performance of KFCLS by incorporating spatial-spectral information, we investigate two kinds of spatial-spectral methods using two regularization strategies: (1) the coefficient-level regularization strategy, and (2) the class-level regularization strategy. Experimental results conducted on four real hyperspectral images demonstrate the effectiveness of the proposed KFCLS, and show which strategy incorporates spatial-spectral information efficiently in the regularization framework.
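
    For context, the non-kernel FCLS problem that the above generalizes solves min ||Ea - x||² subject to a ≥ 0 and sum(a) = 1, where the columns of E hold endmember spectra. A standard trick enforces the sum-to-one constraint approximately by appending a weighted row of ones and solving a single non-negative least-squares problem. A linear-space sketch follows; the kernelized version in the paper replaces these inner products with kernel evaluations:

        import numpy as np
        from scipy.optimize import nnls

        def fcls(endmembers, pixel, delta=1e3):
            """Fully constrained least squares unmixing of one pixel.

            endmembers: (bands, m) matrix of endmember spectra.
            pixel:      (bands,) observed spectrum.
            delta:      weight on the sum-to-one row; larger enforces it harder.
            """
            bands, m = endmembers.shape
            a_mat = np.vstack([endmembers, delta * np.ones((1, m))])
            b_vec = np.concatenate([pixel, [delta]])
            abundances, _ = nnls(a_mat, b_vec)
            return abundances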

  3. genepop'007: a complete re-implementation of the genepop software for Windows and Linux.

    Rousset, François

    2008-01-01

    This note summarizes developments of the genepop software since its first description in 1995, and in particular those new to version 4.0: an extended input format, several estimators of neighbourhood size under isolation by distance, new estimators and confidence intervals for null allele frequency, and less important extensions to previous options. genepop now runs under Linux as well as under Windows, and can be entirely controlled by batch calls. © 2007 The Author.

  4. Empirical tests of Zipf's law mechanism in open source Linux distribution.

    Maillart, T; Sornette, D; Spaeth, S; von Krogh, G

    2008-11-21

    Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.

  5. Disk cloning program 'Dolly+' for system management of PC Linux cluster

    Atsushi Manabe

    2001-01-01

    Dolly+ is a Linux application program for cloning files and disk partition images from one PC to many others. By using several techniques, such as a logical ring connection, multithreading, and pipelining, it achieves high performance and scalability. For example, under typical conditions, installation on a hundred PCs takes almost the same time as installation on two PCs. Together with the Intel PXE and the RedHat kickstart, automatic and very fast system installation and upgrading can be performed.

  6. Prediction of protein subcellular localization using support vector machine with the choice of proper kernel

    Al Mehedi Hasan

    2017-07-01

    subcellular localization prediction to find out which kernel is the best for SVM. We have evaluated our system on a combined dataset containing 5447 single-localized proteins (originally published as part of the Höglund dataset and 3056 multi-localized proteins (originally published as part of the DBMLoc set. This dataset was used by Briesemeister et al. in their extensive comparison of multilocalization prediction system. The experimental results indicate that the system based on SVM with the Laplace kernel, termed LKLoc, not only achieves a higher accuracy than the system using other kernels but also shows significantly better results than those obtained from other top systems (MDLoc, BNCs, YLoc+. The source code of this prediction system is available upon request.
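
    As an illustration of choosing the proper kernel, an SVM with a Laplace (Laplacian) kernel can be assembled from standard scikit-learn pieces via a precomputed kernel matrix. This generic sketch is not the LKLoc system itself, and the feature extraction from protein sequences is assumed to have been done already:

        import numpy as np
        from sklearn.metrics.pairwise import laplacian_kernel
        from sklearn.svm import SVC

        def train_laplace_svm(X_train, y_train, gamma=0.1, C=1.0):
            """SVM with Laplace kernel K(x, z) = exp(-gamma * ||x - z||_1)."""
            K = laplacian_kernel(X_train, X_train, gamma=gamma)
            return SVC(kernel="precomputed", C=C).fit(K, y_train)

        def predict_laplace_svm(clf, X_train, X_test, gamma=0.1):
            # Test kernel rows pair each test sample with every training sample.
            K_test = laplacian_kernel(X_test, X_train, gamma=gamma)
            return clf.predict(K_test)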

  7. The Classification of Diabetes Mellitus Using Kernel k-means

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Kernel k-means uses kernel learning and so can handle data that are not linearly separable, which is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well, and much better than SOM.
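
    Kernel k-means needs no explicit feature map: the squared distance from φ(x_i) to a cluster mean expands entirely in Gram-matrix entries. A compact sketch of the assignment-update loop, assuming a precomputed kernel matrix K (the iteration count and seed are arbitrary choices):

        import numpy as np

        def kernel_kmeans(K, k, iters=50, seed=0):
            """Cluster with distances ||phi(x_i) - mu_c||^2 =
            K_ii - 2*mean_{j in c} K_ij + mean_{j,l in c} K_jl."""
            n = K.shape[0]
            labels = np.random.default_rng(seed).integers(k, size=n)
            for _ in range(iters):
                dist = np.zeros((n, k))
                for c in range(k):
                    idx = np.flatnonzero(labels == c)
                    if idx.size == 0:
                        dist[:, c] = np.inf   # empty cluster: never chosen
                        continue
                    dist[:, c] = (np.diag(K)
                                  - 2.0 * K[:, idx].mean(axis=1)
                                  + K[np.ix_(idx, idx)].mean())
                new_labels = dist.argmin(axis=1)
                if np.array_equal(new_labels, labels):
                    break
                labels = new_labels
            return labels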

  8. Object classification and detection with context kernel descriptors

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  9. Low Cost Multisensor Kinematic Positioning and Navigation System with Linux/RTAI

    Baoxin Hu

    2012-09-01

    Full Text Available Despite its popularity, the development of an embedded real-time multisensor kinematic positioning and navigation system discourages many researchers and developers due to its complicated hardware environment setup and time consuming device driver development. To address these issues, this paper proposed a multisensor kinematic positioning and navigation system built on Linux with Real Time Application Interface (RTAI, which can be constructed in a fast and economical manner upon popular hardware platforms. The authors designed, developed, evaluated and validated the application of Linux/RTAI in the proposed system for the integration of the low cost MEMS IMU and OEM GPS sensors. The developed system with Linux/RTAI as the core of a direct geo-referencing system provides not only an excellent hard real-time performance but also the conveniences for sensor hardware integration and real-time software development. A software framework is proposed in this paper for a universal kinematic positioning and navigation system with loosely-coupled integration architecture. In addition, general strategies of sensor time synchronization in a multisensor system are also discussed. The success of the loosely-coupled GPS-aided inertial navigation Kalman filter is represented via post-processed solutions from road tests.

  10. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on nonlinear kernel trick, which can be novelly used to treat high-dimensional and complex biological data before undergoing classification processes such as protein subcellular localization. Kernel parameters make a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, to select the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection depending on the fact that the differences between reconstruction errors of edge normal samples and those of interior normal samples should be maximized for certain suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on the efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  11. The computer code system for reactor radiation shielding in design of nuclear power plant

    Li Chunhuai; Fu Shouxin; Liu Guilian

    1995-01-01

    The computer code system used in reactor radiation shielding design of nuclear power plants includes source term codes, discrete ordinates transport codes, Monte Carlo and albedo Monte Carlo codes, kernel integration codes, an optimization code, a temperature field code, a skyshine code, coupling calculation codes, and some processing codes for data libraries. This computer code system offers a satisfactory variety of codes and complete data libraries. It is widely used in reactor radiation shielding design and safety analysis of nuclear power plants and other nuclear facilities.

  12. Design and Achievement of User Interface Automation Testing of Linux Based on Element Tree of DogTail

    Yuan Wen-Chao

    2017-01-01

    Full Text Available As Linux becomes more popular around the world, the open-source nature of its software encourages automated UI testing with a unified testing framework. UI software testing can guarantee the rationality of the user interface of Linux and the accuracy of the UI's widgets. In order to be freed from fuzzy and repetitive manual testing and to improve efficiency, this paper implements automated testing of the UI under Linux and proposes a method to identify and test UI widgets under Linux based on the element tree of the Dogtail automation testing framework. Using this method, and aiming at the Red Hat Subscription Manager product under Red Hat Enterprise Linux, we design an automation test plan for this series of product dialogs. Many test runs indicate that this plan can identify UI widgets accurately and rationally, describe the structure of the software clearly, avoid software errors, and improve the efficiency of the software. It can also be used in internationalization testing for checking translations during software internationalization.
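
    The element-tree idea is easy to make concrete: under Dogtail, the desktop is the root of an accessibility tree, applications and widgets are its nodes, and a test walks the tree by widget name and role. The sketch below is written against the commonly documented Dogtail calls (run, root.application, child, click); the application name and widget labels here are hypothetical stand-ins, to be read off the real element tree of the product under test:

        # Minimal Dogtail sketch; requires an AT-SPI-enabled desktop session.
        from dogtail.utils import run
        from dogtail.tree import root

        run('subscription-manager-gui')                     # hypothetical launcher name
        app = root.application('subscription-manager-gui')

        # Walk the element tree by name and role, then act as a user would.
        dialog = app.child(roleName='dialog')
        dialog.child('Register', roleName='push button').click()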

  13. Kernel abortion in maize. II. Distribution of 14C among kernel carbohydrates

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30° and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30° and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose.

  14. Fluidization calculation on nuclear fuel kernel coating

    Sukarsono; Wardaya; Indra-Suryawan

    1996-01-01

    The fluidization of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone; on top of the cone there was a cylinder. The diameter of the cylinder used for fluidization was 2 cm, and the diameter at the upper part of the cylinder was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities for various kernel diameters, and the porosity and bed height for various stream gas velocities, were calculated. The calculation was done with a BASIC program.

  15. Reduced multiple empirical kernel learning machine.

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, it is the first to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially during testing. Finally, the experimental results show that RMEKLM is both efficient and effective in terms of complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
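
    For reference, the empirical kernel mapping (EKM) named above is conventionally built from the eigendecomposition K = QΛQᵀ of the n×n training Gram matrix as

        \varphi_e(x) = \Lambda^{-1/2}\, Q^{\mathsf T}\, \bigl( k(x, x_1), \ldots, k(x, x_n) \bigr)^{\mathsf T},

    which sends every sample into an explicit n-dimensional Euclidean space whose dot products on the training set reproduce K, so ordinary linear machines can be run on the mapped data; the Gauss elimination step described above then extracts a reduced basis from these mapped vectors.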

  16. Computing the sparse matrix vector product using block-based kernels without zero padding on processors with AVX-512 instructions

    Bérenger Bramas

    2018-04-01

    Full Text Available The sparse matrix-vector product (SpMV) is a fundamental operation in many scientific applications from various fields. The High Performance Computing (HPC) community has therefore continuously invested a lot of effort to provide an efficient SpMV kernel on modern CPU architectures. Although it has been shown that block-based kernels help to achieve high performance, they are difficult to use in practice because of the zero padding they require. In the current paper, we propose new kernels using the AVX-512 instruction set, which makes it possible to use a blocking scheme without any zero padding in the matrix memory storage. We describe mask-based sparse matrix formats and their corresponding SpMV kernels highly optimized in assembly language. Considering that the optimal blocking size depends on the matrix, we also provide a method to predict the best kernel to be used, utilizing a simple interpolation of results from previous executions. We compare the performance of our approach to that of the Intel MKL CSR kernel and the CSR5 open-source package on a set of standard benchmark matrices. We show that we can achieve significant improvements in many cases, both for sequential and for parallel executions. Finally, we provide the corresponding code in an open source library, called SPC5.
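
    As a reference point for what the optimized kernels compute, the plain CSR sparse matrix-vector product is shown below in a few lines of Python; the mask-based AVX-512 block formats of the paper are register-level refinements of exactly this loop:

        import numpy as np

        def spmv_csr(values, col_idx, row_ptr, x):
            """y = A @ x for a CSR matrix.

            values:  nonzero entries, stored row by row.
            col_idx: column index of each nonzero.
            row_ptr: row_ptr[i]:row_ptr[i+1] slices row i out of values/col_idx.
            """
            n_rows = len(row_ptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):
                lo, hi = row_ptr[i], row_ptr[i + 1]
                y[i] = np.dot(values[lo:hi], x[col_idx[lo:hi]])
            return y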

  17. Code Cactus

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across the plates, is computed for imposed conditions of pressure drop or flow rate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  18. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  19. Variable kernel density estimation in high-dimensional feature spaces

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  20. Influence of differently processed mango seed kernel meal on ...

    Influence of differently processed mango seed kernel meal on performance response of west African ... and TD( consisted spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ...

  1. Linear and kernel methods for multi- and hypervariate change detection

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    . Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
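
    The kernel-trick bookkeeping described above reduces, in code, to building a Gram matrix, centering it in feature space, and eigendecomposing it. A generic kernel-PCA sketch of those steps follows (Gaussian kernel, width chosen ad hoc); broadly speaking, the MAF/MNF variants additionally involve spatially shifted copies of the data:

        import numpy as np

        def rbf_gram(X, sigma):
            """Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2*sigma^2))."""
            sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-sq / (2 * sigma**2))

        def kernel_pca(X, n_components=2, sigma=1.0):
            n = len(X)
            K = rbf_gram(X, sigma)
            J = np.eye(n) - np.ones((n, n)) / n     # centering in feature space
            Kc = J @ K @ J
            vals, vecs = np.linalg.eigh(Kc)         # ascending eigenvalues
            vals, vecs = vals[::-1], vecs[:, ::-1]
            alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
            return Kc @ alphas                      # projections of the training samples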

  2. Kernel methods in orthogonalization of multi- and hypervariate data

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  3. Mitigation of artifacts in RTM with migration kernel decomposition

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents us with opportunities for improving the quality of RTM images.

  4. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  5. Relationship between attenuation coefficients and dose-spread kernels

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
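
    Concretely, the convolution referred to above computes the dose from the terma T (total energy released per unit mass, obtained from the primary fluence through the attenuation coefficient, e.g. T = (μ/ρ)Ψ for a monoenergetic beam) smeared by a dose-spread kernel A:

        D(\mathbf r) = \int T(\mathbf r')\, A(\mathbf r - \mathbf r')\, d^3 r'.

    Energy conservation then corresponds to the kernel integrating to unity, ∫ A(r) d³r = 1, which is the kind of check against attenuation coefficients explored in the paper.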

  6. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-μm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-μm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-μm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing the capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  7. Consistent Estimation of Pricing Kernels from Noisy Price Data

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative, then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: ε-entropy, non-parametric estimation, pricing kernel, inverse problems.

  8. Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States); Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-04-20

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable code on different platforms such as CPUs, graphics processing units (GPUs), digital signal processors (DSPs), and field-programmable gate arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time, as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured as the compute kernel throughput, an upper bound to the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.

  9. Quantum logic in dagger kernel categories

    Heunen, C.; Jacobs, B.P.F.

    2009-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  10. Quantum logic in dagger kernel categories

    Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.

    2011-01-01

    This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial

  11. Symbol recognition with kernel density matching.

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
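
    As a rough illustration of the record's approach, the Python sketch below represents two hypothetical 2D point clouds as kernel densities and estimates their Kullback-Leibler divergence by Monte Carlo; the point clouds, bandwidths and sample sizes are all illustrative, and the paper's orientation-search step is omitted.

      import numpy as np
      from scipy.stats import gaussian_kde

      # Two hypothetical symbols given as 2D point clouds (rows: x and y).
      rng = np.random.default_rng(1)
      sym_a = rng.normal(0.0, 1.0, size=(2, 500))
      sym_b = rng.normal(0.2, 1.1, size=(2, 500))

      kde_a = gaussian_kde(sym_a)   # 2D kernel density of symbol A
      kde_b = gaussian_kde(sym_b)   # 2D kernel density of symbol B

      # Monte Carlo estimate of KL(a || b) = E_a[log p_a(x) - log p_b(x)].
      samples = kde_a.resample(2000)
      kl = np.mean(np.log(kde_a(samples)) - np.log(kde_b(samples)))
      print(f"estimated KL divergence: {kl:.4f}")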

  12. Flexible Scheduling in Multimedia Kernels: An Overview

    Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.

    1999-01-01

    Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed on the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment where we can make a considerable profit by a better and more

  13. Reproducing kernel Hilbert spaces of Gaussian priors

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  14. A synthesis of empirical plant dispersal kernels

    Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.

    2017-01-01

    Roč. 105, č. 1 (2017), s. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords : dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016

  15. Analytic continuation of weighted Bergman kernels

    Engliš, Miroslav

    2010-01-01

    Roč. 94, č. 6 (2010), s. 622-650 ISSN 0021-7824 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * analytic continuation * Toeplitz operator Subject RIV: BA - General Mathematics Impact factor: 1.450, year: 2010 http://www.sciencedirect.com/science/article/pii/S0021782410000942

  16. On convergence of kernel learning estimators

    Norkin, V.I.; Keyzer, M.A.

    2009-01-01

    The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability

  17. Analytic properties of the Virasoro modular kernel

    Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)]

    2017-06-15

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)

  18. Kernel based subspace projection of hyperspectral images

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  19. Kernel Temporal Differences for Neural Decoding

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD(λ) can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
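
    The following Python sketch conveys the flavour of kernel temporal difference learning, though it is a stripped-down kernel TD(0) value estimator rather than the paper's full KTD(λ) with eligibility traces; the state dynamics, reward and kernel width are invented for illustration.

      import numpy as np

      def rbf(x, c, width=0.5):
          return np.exp(-np.sum((x - c) ** 2) / (2 * width ** 2))

      # Online kernel TD(0): V(x) = sum_i alpha_i * k(x, center_i).
      centers, alphas = [], []
      gamma, eta = 0.9, 0.2            # discount factor and learning rate

      def value(x):
          return sum(a * rbf(x, c) for a, c in zip(alphas, centers))

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 1, size=2)    # hypothetical 2-D "neural state"
      for t in range(200):
          x_next = np.clip(x + rng.normal(0, 0.1, 2), 0, 1)
          reward = -np.linalg.norm(x_next - 0.5)   # toy reward: stay central
          td_error = reward + gamma * value(x_next) - value(x)
          centers.append(x.copy())     # every visited state becomes a center
          alphas.append(eta * td_error)            # functional gradient step
          x = x_next
      print("V(center) ~", value(np.array([0.5, 0.5])))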

  20. Scattering kernels and cross sections working group

    Russell, G.; MacFarlane, B.; Brun, T.

    1998-01-01

    Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics

  1. Enhanced gluten properties in soft kernel durum wheat

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  2. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    Estimated error of ± 0.18 and ± 0.2 are envisaged while applying the models for predicting palm kernel and sesame oil colours respectively. Keywords: Palm kernel, Sesame, Palm kernel, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  3. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  4. 7 CFR 981.60 - Determination of kernel weight.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  5. 21 CFR 176.350 - Tamarind seed kernel powder.

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  6. End-use quality of soft kernel durum wheat

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  7. Heat kernel analysis for Bessel operators on symmetric cones

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  8. A Fast and Simple Graph Kernel for RDF

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
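
    A minimal sketch of the path-counting idea, under the assumption that each RDF instance has already been extracted as a small tree whose root is renamed to a common ROOT token; the toy trees and depth bound below are illustrative. The kernel is the dot product of the two path-count vectors, so it is positive semidefinite by construction.

      from collections import Counter

      def paths_from(tree, node, depth):
          # Enumerate label paths of length <= depth starting at node;
          # tree maps a node label to a list of (edge_label, child_label) pairs.
          yield (node,)
          if depth > 0:
              for edge, child in tree.get(node, []):
                  for tail in paths_from(tree, child, depth - 1):
                      yield (node, edge) + tail

      def path_kernel(tree_a, tree_b, depth=2):
          # Count paths in each instance tree and take the dot product.
          ca = Counter(paths_from(tree_a, "ROOT", depth))
          cb = Counter(paths_from(tree_b, "ROOT", depth))
          return sum(ca[p] * cb[p] for p in ca.keys() & cb.keys())

      # Two tiny hypothetical RDF instance trees, roots renamed to "ROOT".
      t1 = {"ROOT": [("rdf:type", "Person"), ("knows", "inst2")],
            "inst2": [("rdf:type", "Person")]}
      t2 = {"ROOT": [("rdf:type", "Person"), ("worksAt", "org1")]}
      print(path_kernel(t1, t2))   # shared paths -> kernel value 2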

  9. 7 CFR 981.61 - Redetermination of kernel weight.

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  10. Single pass kernel k-means clustering method

    This paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
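
    For reference, a plain (not the record's single-pass) kernel k-means in Python: distances to cluster means are computed entirely from a precomputed Gram matrix K, since ||phi(x) - mu_c||^2 = K(x,x) - 2*mean_{j in c} K(x,j) + mean_{i,j in c} K(i,j). The RBF kernel and toy blobs are illustrative.

      import numpy as np

      def kernel_kmeans(K, k, iters=20, seed=0):
          # K: (n, n) precomputed Gram matrix; returns cluster labels.
          n = K.shape[0]
          labels = np.random.default_rng(seed).integers(0, k, size=n)
          for _ in range(iters):
              dist = np.empty((n, k))
              for c in range(k):
                  mask = labels == c
                  if not mask.any():
                      dist[:, c] = np.inf      # keep empty clusters empty
                      continue
                  within = K[np.ix_(mask, mask)].mean()
                  dist[:, c] = np.diag(K) - 2 * K[:, mask].mean(axis=1) + within
              new = dist.argmin(axis=1)
              if np.array_equal(new, labels):
                  break
              labels = new
          return labels

      # Toy data: two Gaussian blobs under an RBF kernel.
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
      K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
      print(kernel_kmeans(K, 2))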

  11. Considerations on absorbed dose estimates based on different β-dose point kernels in internal dosimetry

    Uchida, Isao; Yamada, Yasuhiko; Yamashita, Takashi; Okigaki, Shigeyasu; Oyamada, Hiyoshimaru; Ito, Akira.

    1995-01-01

    In radiotherapy with radiopharmaceuticals, accurate estimation of the three-dimensional (3-D) distribution of absorbed dose is important in specifying the activity to be administered to patients, so as to deliver a prescribed absorbed dose to target volumes without exceeding the toxicity limit of normal tissues in the body. A calculation algorithm for this purpose has already been developed by the authors. An accurate 3-D distribution of absorbed dose based on the algorithm is given by convolution of the 3-D dose matrix for a unit cubic voxel containing unit cumulated activity, which is obtained by transforming a dose point kernel into a 3-D cubic dose matrix, with the 3-D cumulated activity distribution given at the same voxel size. However, the beta-dose point kernels affecting accurate estimates of the 3-D absorbed dose distribution have differed among investigators. The purpose of this study is to elucidate how different beta-dose point kernels in water influence the estimates of the absorbed dose distribution obtained with the authors' dose point kernel convolution method. Computer simulations were performed using the MIRD thyroid and lung phantoms under the assumption of a uniform activity distribution of 32P. Using beta-dose point kernels derived from Monte Carlo simulations (EGS-4 or ACCEPT computer code), the differences among the point kernels gave little difference in the mean and maximum absorbed dose estimates for the MIRD phantoms used. In the estimates of mean and maximum absorbed doses calculated using different cubic voxel sizes (4x4x4 mm and 8x8x8 mm) for the MIRD thyroid phantom, the maximum absorbed doses for the 4x4x4 mm voxel were estimated to be approximately 7% greater than for the 8x8x8 mm voxel. This was found for every beta-dose point kernel used in this study. On the other hand, the percentage difference of the mean absorbed doses between the two voxel sizes for each beta-dose point kernel was less than approximately 0.6%. (author)
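
    The convolution step described above can be sketched in a few lines of Python; the point kernel below is a crude, made-up isotropic falloff rather than a physical beta-dose point kernel, and the activity map is a toy sphere, so only the mechanics of the method are illustrated.

      import numpy as np
      from scipy.signal import fftconvolve

      # Made-up isotropic dose point kernel on a 9x9x9 voxel stencil.
      g = np.indices((9, 9, 9)) - 4
      r2 = (g ** 2).sum(axis=0).astype(float)
      r2[4, 4, 4] = 0.5                 # tame the singularity at the origin
      kernel = 1.0 / r2
      kernel /= kernel.sum()            # normalise to unit deposited dose

      # Toy cumulated-activity map: a uniform sphere of active voxels.
      act = np.zeros((40, 40, 40))
      zz, yy, xx = np.indices(act.shape) - 20
      act[(xx ** 2 + yy ** 2 + zz ** 2) <= 8 ** 2] = 1.0

      # Absorbed dose = 3-D convolution of activity with the point kernel.
      dose = fftconvolve(act, kernel, mode="same")
      print("max dose:", dose.max(), "mean dose in sphere:", dose[act > 0].mean())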

  12. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  13. Code system BCG for gamma-ray skyshine calculation

    Ryufuku, Hiroshi; Numakunai, Takao; Miyasaka, Shun-ichi; Minami, Kazuyoshi.

    1979-03-01

    A code system, BCG, has been developed for conveniently and efficiently calculating gamma-ray skyshine doses using the transport calculation codes ANISN and DOT and the point-kernel calculation codes G-33 and SPAN. To simplify input to the system, the input forms for these codes are unified, twelve geometric patterns are introduced to define material regions, and standard data are available as a library. To treat complex arrangements of source and shield, the codes can further be used in succession, such that the results from one code may be used as input data to the same or another code. (author)
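
    The point-kernel method underlying codes such as G-33 and SPAN can be sketched as a sum of attenuated inverse-square contributions over source points. The Python below is a generic illustration, not the BCG system itself; the attenuation coefficient, the crude build-up factor and the geometry are all invented.

      import numpy as np

      def point_kernel_dose(sources, detector, mu=0.006):
          # D = sum_i S_i * B(r) * exp(-mu r) / (4 pi r^2); B is a crude
          # linear build-up factor standing in for tabulated data.
          d = 0.0
          for pos, strength in sources:
              r = np.linalg.norm(np.asarray(pos, float) - np.asarray(detector, float))
              b = 1.0 + mu * r
              d += strength * b * np.exp(-mu * r) / (4 * np.pi * r ** 2)
          return d

      # One hypothetical point source 50 m from the detector (lengths in cm).
      print(point_kernel_dose([((0, 0, 0), 1e9)], (5000, 0, 0)))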

  14. Kernel based orthogonalization for change detection in hyperspectral images

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  15. A laser optical method for detecting corn kernel defects

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  16. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  17. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Mohammed D. ABDULMALIK

    2008-06-01

    Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some areas of weakness (flaws) that can be attacked by malicious code, leading to compromise of the kernel.

  18. Difference between standard and quasi-conformal BFKL kernels

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  19. Real-time head movement system and embedded Linux implementation for the control of power wheelchairs.

    Nguyen, H T; King, L M; Knight, G

    2004-01-01

    Mobility has become very important for our quality of life. A loss of mobility due to an injury is usually accompanied by a loss of self-confidence. For many individuals, independent mobility is an important aspect of self-esteem. Head movement is a natural form of pointing and can be used to directly replace the joystick whilst still allowing for similar control. Through the use of embedded LINUX and artificial intelligence, a hands-free head movement wheelchair controller has been designed and implemented successfully. This system provides for severely disabled users an effective power wheelchair control method with improved posture, ease of use and attractiveness.

  20. Implementação de um sistema SIP para o sistema operacional Linux

    Davison Gonzaga da Silva

    2003-01-01

    Abstract: This work presents the implementation of a VoIP system using the SIP protocol. The SIP system was developed for Linux, using the C++ language together with the Qt library. The SIP system is composed of three basic entities: the SIP terminal, the proxy, and the registration server. The SIP terminal is the entity responsible for establishing SIP sessions with other SIP terminals. For the SIP terminal, a sound-card access library was developed, which allows the modi...

  1. Implementasi Manajemen Bandwidth Dengan Disiplin Antrian Hierarchical Token Bucket (HTB Pada Sistem Operasi Linux

    Muhammad Nugraha

    2016-09-01

    An important problem in Internet networking is the exhaustion of resources and bandwidth by some users while other users do not receive proper service. To overcome this problem, traffic control and a bandwidth management system need to be implemented in the router. In this research, the author implements the Hierarchical Token Bucket algorithm as a queueing discipline (qdisc) to manage bandwidth accurately, so that each user gets a proper share. The result of this research is a cheap and efficient bandwidth management setup, using the Hierarchical Token Bucket qdisc on the Linux operating system, that is able to manage users as desired.
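
    A minimal sketch of an HTB setup of the kind described, driven from Python via the iproute2 "tc" tool (root privileges required); the interface name, rates and client address are illustrative, not the paper's configuration.

      import subprocess

      def tc(*args):
          # Thin wrapper over the iproute2 "tc" command (root required).
          subprocess.run(["tc", *args], check=True)

      IF = "eth0"                            # hypothetical interface
      tc("qdisc", "add", "dev", IF, "root", "handle", "1:", "htb", "default", "30")
      # Parent class carrying the whole link rate.
      tc("class", "add", "dev", IF, "parent", "1:", "classid", "1:1",
         "htb", "rate", "10mbit", "ceil", "10mbit")
      # Two user classes that may borrow up to the ceiling when the link is idle.
      tc("class", "add", "dev", IF, "parent", "1:1", "classid", "1:10",
         "htb", "rate", "6mbit", "ceil", "10mbit")
      tc("class", "add", "dev", IF, "parent", "1:1", "classid", "1:30",
         "htb", "rate", "4mbit", "ceil", "10mbit")
      # Steer one (hypothetical) client's traffic into class 1:10.
      tc("filter", "add", "dev", IF, "protocol", "ip", "parent", "1:", "prio", "1",
         "u32", "match", "ip", "src", "192.168.1.10/32", "flowid", "1:10")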

  2. Programación de LEGO MindStorms bajo GNU/Linux

    Matellán Olivera, Vicente; Heras Quirós, Pedro de las; Centeno González, José; González Barahona, Jesús

    2002-01-01

    GNU/Linux on a personal computer is the free option preferred by many application developers, but it is also a very popular development platform for other systems, including robot programming; in particular, it is very well suited to playing with the LEGO Mindstorms. In this article we present the two most widespread options for programming these toys: NQC and LegOS. NQC is a reduced version of C that allows rapid development of programs ...

  3. The visual and remote analyzing software for a Linux-based radiation information acquisition system

    Fan Zhaoyang; Zhang Li; Chen Zhiqiang

    2003-01-01

    Visual and remote analysis software for radiation information, which has the merits of universality and reliability, has been developed based on the Linux operating system and the TCP/IP network protocol. The software is applied to visual debugging and real-time monitoring of a high-speed radiation information acquisition system, so that safe, direct and timely control can be assured. The paper expounds the design of the software, which provides a reference for other software with the same purpose in similar systems

  4. DB2 9 for Linux, Unix, and Windows database administration certification study guide

    Sanders, Roger E

    2007-01-01

    In DB2 9 for Linux, UNIX, and Windows Database Administration Certification Study Guide, Roger E. Sanders, one of the world's leading DB2 authors and an active participant in the development of IBM's DB2 certification exams, covers everything a reader needs to know to pass the DB2 9 UDB DBA Certification Test (731). This comprehensive study guide steps you through all of the topics that are covered on the test, including server management, data placement, database access, analyzing DB2 activity, DB2 utilities, high availability, security, and much more. Each chapter contains an extensive set of p

  6. ClusterControl: a web interface for distributing and monitoring bioinformatics applications on a Linux cluster.

    Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko

    2004-03-22

    ClusterControl is a web interface to simplify distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl

  7. [Design of an embedded stroke rehabilitation apparatus system based on Linux computer engineering].

    Zhuang, Pengfei; Tian, XueLong; Zhu, Lin

    2014-04-01

    A realization project for an electrical stimulator aimed at the motor dysfunction of stroke is proposed in this paper. Based on neurophysiological biofeedback, this system, using an ARM9 S3C2440 as the core processor, integrates collection and display of surface electromyography (sEMG) signals, as well as neuromuscular electrical stimulation (NMES), into one system. By embedding a Linux system, the project is able to use Qt/Embedded as a graphical interface design tool to accomplish the design of the stroke rehabilitation apparatus. Experiments showed that this system worked well.

  8. Using a Linux Cluster for Parallel Simulations of an Active Magnetic Regenerator Refrigerator

    Petersen, T.F.; Pryds, N.; Smith, A.

    2006-01-01

    This paper describes the implementation of a Comsol Multiphysics model on a Linux computer cluster. The magnetic refrigerator (MR) is a special type of refrigerator with the potential to reduce the energy consumption of household refrigeration by a factor of two or more. To conduct numerical analysis.... The coupled set of equations and the transient convergence towards the final steady state mean that the model has an excessive solution time. To make parametric studies practical, the developed model was implemented on a cluster to allow parallel simulations, which has decreased the solution time...

  9. Audio Arduino - an ALSA (Advanced Linux Sound Architecture) audio driver for FTDI-based Arduinos

    Dimitrov, Smilen; Serafin, Stefania

    2011-01-01

    be considered to be a system that encompasses design decisions on both hardware and software levels - decisions that also demand a certain understanding of the architecture of the target PC operating system. This project outlines how an Arduino Duemilanove board (containing a USB interface chip manufactured by the Future Technology Devices International Ltd [FTDI] company) can be demonstrated to behave as a full-duplex, mono, 8-bit 44.1 kHz soundcard, through an implementation of: a PC audio driver for ALSA (Advanced Linux Sound Architecture); a matching program for the Arduino's ATmega microcontroller - and nothing more...

  10. Feasibility study of BES data off-line processing and D/Ds physics analysis on a PC/Linux platform

    Rong Gang; He Kanglin; Heng Yuekun; Zhang Chun; Liu Huaimin; Cheng Baosen; Yan Wuguang; Mai Jimao; Zhao Haiwen

    2000-01-01

    The authors report a feasibility study of BES data off-line processing (BES data off-line reconstruction and Monte Carlo simulation) and D/Ds physics analysis on a PC/Linux platform. The authors compared the results obtained on PC/Linux with those from an HP/UNIX workstation. The comparison shows that the PC/Linux platform can perform BES data off-line analysis as well as the HP/UNIX workstation, and is much more powerful and economical

  11. Linux: Hacia una revolución silenciosa de la sociedad de la información

    Pascuale Sofia

    2004-01-01

    This article attempts to demonstrate the global qualities of the new LINUX operating system at the technical level, and to reveal the change it is bringing about in the economic sector and in the cultural world. This is done by means of a comparative analysis between commercial operating systems (Microsoft) and Open Source (LINUX). Today's world is characterized by radical and rapid changes, which occur most frequently in the computing sector. Currently in this sector, and specifically in the field of software, LINUX is the new operating system that is changing the world of computing. The work follows exploratory methodological guidelines, because the literature on the advances of Linux is scarce; it therefore synthesizes an extensive body of work (conferences, lectures at universities, business associations, among others) carried out by the authors since the LINUX product became known and used by a small elite of technicians.

  12. Analytic scattering kernels for neutron thermalization studies

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results

  13. Quantized kernel least mean square algorithm.

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
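
    A compact Python rendering of the QKLMS idea described above: inputs falling within the quantization radius eps of an existing center update that center's coefficient instead of growing the network. The step size, kernel width and toy regression data are illustrative.

      import numpy as np

      def qklms(xs, ys, eta=0.5, sigma=1.0, eps=0.3):
          # Quantized KLMS: inputs within radius eps of an existing center
          # update that center's coefficient instead of growing the network.
          centers, alphas = [], []

          def predict(x):
              return sum(a * np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))
                         for a, c in zip(alphas, centers))

          for x, y in zip(xs, ys):
              err = y - predict(x)
              if centers:
                  d = [np.linalg.norm(x - c) for c in centers]
                  j = int(np.argmin(d))
                  if d[j] <= eps:
                      alphas[j] += eta * err   # quantize onto closest center
                      continue
              centers.append(x.copy())         # otherwise allocate a new center
              alphas.append(eta * err)
          return centers, alphas

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(400, 1))
      y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 400)
      centers, alphas = qklms(X, y)
      print("network size:", len(centers), "of", len(X), "inputs")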

  14. Kernel-based tests for joint independence

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test...
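
    The biased empirical estimator of dHSIC has a short closed form, sketched below in Python with Gaussian kernels; the unit bandwidths and toy data are illustrative, and the crude permutation comparison stands in for the paper's formal tests.

      import numpy as np

      def gaussian_gram(x, sigma=1.0):
          x = np.asarray(x, dtype=float)
          if x.ndim == 1:
              x = x[:, None]
          sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
          return np.exp(-sq / (2 * sigma ** 2))

      def dhsic(*variables):
          # Biased empirical dHSIC for d equal-length samples.
          grams = [gaussian_gram(v) for v in variables]
          term1 = np.mean(np.prod(grams, axis=0))
          term2 = np.prod([g.mean() for g in grams])
          term3 = 2 * np.mean(np.prod([g.mean(axis=1) for g in grams], axis=0))
          return term1 + term2 - term3

      rng = np.random.default_rng(0)
      a = rng.normal(size=300)
      b = a + rng.normal(scale=0.3, size=300)   # b depends on a
      c = rng.normal(size=300)                  # c is independent
      print("dependent triple scores higher:",
            dhsic(a, b, c) > dhsic(a, rng.permutation(b), c))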

  15. Wilson Dslash Kernel From Lattice QCD Optimization

    Joo, Balint [Jefferson Lab, Newport News, VA]; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA]; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India]; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India]

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on the regular Xeon architecture as well.

  16. Study on the Calculation of Pebble-Bed Reactor Multiplication Factor As a Function of Fuel Kernel Radius at Various Enrichments

    Zuhair; Suwoto

    2009-01-01

    The main characteristics of a PBR come from the utilization of coated-particle fuel dispersed in pebble fuel elements. Because of vibration, fuel kernels can group into clusters, and in these cases the neutronic characteristics of the pebble fuel change significantly. In this study, a cluster is modeled as a structural form consisting of uniform cubic cells with eight neighboring TRISO particles. Neutronic characteristics were investigated by calculating the pebble-bed reactor multiplication factor as a function of fuel kernel radius at various enrichments. The calculation results using the MCNP5 code with the ENDF/B-VI neutron library show that the k-eff value depends on the average fuel kernel radius and reaches its minimum when all kernels have the same radius, i.e. 0.0280 cm. With this radius, the total kernel surface area achieves its maximum value. The dependence of k-eff on fuel kernel radius decreases as the uranium enrichment increases. However, the k-eff value is not affected by fuel kernel radius when the uranium is 100% enriched. From these results, it can be concluded that, besides uranium enrichment, the selection of fuel kernel radius should be considered thoroughly in designing a PBR, since this parameter has a significant influence on the neutronic characteristics of the reactor. (author)
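
    The link stated above between a uniform kernel radius and maximal total surface area can be checked numerically: for a fixed number of kernels and fixed total fuel volume, the sum of r^2 is maximised when all radii are equal (by concavity of t^(2/3)). The Python below uses the quoted 0.0280 cm radius but otherwise invented numbers.

      import numpy as np

      def total_surface(radii):
          return 4 * np.pi * np.sum(np.asarray(radii) ** 2)

      n, r0 = 1000, 0.0280                    # kernel count, uniform radius (cm)
      v_total = n * (4 / 3) * np.pi * r0 ** 3 # total fuel volume held fixed

      rng = np.random.default_rng(0)
      spread = r0 * rng.uniform(0.5, 1.5, n)  # a dispersed radius distribution
      # Rescale so the dispersed kernels contain the same total fuel volume.
      spread *= (v_total / ((4 / 3) * np.pi * np.sum(spread ** 3))) ** (1 / 3)

      print("uniform radii:", total_surface(np.full(n, r0)))
      print("spread radii :", total_surface(spread))   # always smaller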

  17. Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System

    List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.

    2004-01-01

    The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case, and hub, and its effects on both rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analyses on commodity PCs running the Linux operating system.

  18. Kualitas Jaringan Pada Jaringan Virtual Local Area Network (VLAN Yang Menerapkan Linux Terminal Server Project (LTSP

    Lipur Sugiyanta

    2017-12-01

    A Virtual Local Area Network (VLAN) is a technique in computer networking for creating several different networks that still form a single local network, not limited to a physical location the way a LAN is, while the Linux Terminal Server Project (LTSP) is a terminal-server technique that can multiply workstations using only a single Linux server. In building a computer network, several things need attention, one of which is the quality of the network being built. This research aims to determine the influence of the number of clients on network quality, based on the delay and packet loss parameters, in a VLAN network that implements LTSP. The research therefore uses a qualitative method, observing the standard applied in the research, namely the International Telecommunication Union - Telecommunication (ITU-T) standard. The implementation uses Ubuntu Desktop 14.04 LTS as the server operating system. Based on the results found, it can be concluded that the more clients served by the server, the lower the network quality, as measured by the Quality of Service (QoS) parameters used, namely delay and packet loss.

  19. Real-time data acquisition and feedback control using Linux Intel computers

    Penaflor, B.G.; Ferron, J.R.; Piglowski, D.A.; Johnson, R.D.; Walker, M.L.

    2006-01-01

    This paper describes the experiences of the DIII-D programming staff in adapting Linux based Intel computing hardware for use in real-time data acquisition and feedback control systems. Due to the highly dynamic and unstable nature of magnetically confined plasmas in tokamak fusion experiments, real-time data acquisition and feedback control systems are in routine use with all major tokamaks. At DIII-D, plasmas are created and sustained using a real-time application known as the digital plasma control system (PCS). During each experiment, the PCS periodically samples data from hundreds of diagnostic signals and provides these data to control algorithms implemented in software. These algorithms compute the necessary commands to send to various actuators that affect plasma performance. The PCS consists of a group of rack mounted Intel Xeon computer systems running an in-house customized version of the Linux operating system tailored specifically to meet the real-time performance needs of the plasma experiments. This paper provides a more detailed description of the real-time computing hardware and custom developed software, including recent work to utilize dual Intel Xeon equipped computers within the PCS

  20. CompactPCI/Linux platform for medium level control system on FTU

    Wang, L.; Centioli, C.; Iannone, F.; Panella, M.; Mazza, G.; Vitale, V.

    2004-01-01

    In large fusion experiments, such as tokamak devices, there are common trends in slow control systems. Because of the complexity of the plants, several tokamaks adopt the so-called 'standard model' (SM) based on a three-level hierarchical control: (i) high level control (HLC) - the supervisor; (ii) medium level control (MLC) - the I/O field equipment interface and concentration units; and (iii) low level control (LLC) - the programmable logic controllers (PLC). The FTU control system was designed with SM concepts and, in its 15-year life cycle, it underwent several developments. The latest evolution was mandatory, due to the obsolescence of the MLC CPUs, based on VME/Motorola 68030 with the OS9 operating system. Therefore, we had to look for cost-effective solutions, and we chose a CompactPCI-Intel x86 platform with the Linux operating system. A software port has been done taking into account the differences between the OS9 and Linux operating systems in terms of inter-process/network communications and the I/O multi-port serial driver. This paper describes the hardware/software architecture of the new MLC system, emphasising the reliability and the low costs of the open source solutions. Moreover, the huge amount of software packages available in the open source environment will assure less painful maintenance, and will open the way to further improvements of the system itself

  1. Lin4Neuro: a customized Linux distribution ready for neuroimaging analysis.

    Nemoto, Kiyotaka; Dan, Ippeita; Rorden, Christopher; Ohnishi, Takashi; Tsuzuki, Daisuke; Okamoto, Masako; Yamashita, Fumio; Asada, Takashi

    2011-01-25

    A variety of neuroimaging software packages have been released from various laboratories worldwide, and many researchers use these packages in combination. Though most of these software packages are freely available, some people find them difficult to install and configure because they are mostly based on UNIX-like operating systems. We developed a live USB-bootable Linux package named "Lin4Neuro." This system includes popular neuroimaging analysis tools. The user interface is customized so that even Windows users can use it intuitively. The boot time of this system was only around 40 seconds. We performed a benchmark test of inhomogeneity correction on three-dimensional T1-weighted MRI scans of 10 subjects. The processing speed of USB-booted Lin4Neuro was as fast as that of the same package installed on a hard disk drive. We also installed Lin4Neuro on virtualization software that emulates the Linux environment on a Windows-based operating system. Although the processing speed was slower than under the other conditions, it remained comparable. With Lin4Neuro in hand, one can access neuroimaging software packages easily and immediately focus on analyzing data. Lin4Neuro can be a good primer for beginners in neuroimaging analysis or students who are interested in neuroimaging analysis. It also provides a practical means of sharing analysis environments across sites.

  2. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    A combine harvester usually works in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses USB cameras to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The video data are compressed using the JPEG image compression standard, and the monitoring picture is transferred to a remote monitoring center over the network for long-range monitoring and management. The paper first describes the necessity of the system, then briefly introduces the realization of the hardware and software, and then describes in detail the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. At the end of the paper, equipment installation and commissioning on a combine harvester and the system tests with their results are presented. In the tests, the remote video monitoring system for the combine harvester achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.

  3. PsyToolkit: a software package for programming psychological experiments using Linux.

    Stoet, Gijsbert

    2010-11-01

    PsyToolkit is a set of software tools for programming psychological experiments on Linux computers. Given that PsyToolkit is freely available under the GNU General Public License, open source, and designed such that it can easily be modified and extended for individual needs, it is suitable not only for technically oriented Linux users, but also for students, researchers on small budgets, and universities in developing countries. The software includes a high-level scripting language, a library for the programming language C, and a questionnaire presenter. The software easily integrates with other open source tools, such as the statistical software package R. PsyToolkit is designed to work with external hardware (including IoLab and Cedrus response keyboards and two common digital input/output boards) and to support millisecond timing precision. Four in-depth examples explain the basic functionality of PsyToolkit. Example 1 demonstrates a stimulus-response compatibility experiment. Example 2 demonstrates a novel mouse-controlled visual search experiment. Example 3 shows how to control light emitting diodes using PsyToolkit, and Example 4 shows how to build a light-detection sensor. The last two examples explain the electronic hardware setup such that they can even be used with other software packages.

  4. A Kernel for Protein Secondary Structure Prediction

    Guermeur , Yann; Lifchitz , Alain; Vert , Régis

    2004-01-01

    Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...

  5. Scalar contribution to the BFKL kernel

    Gerasimov, R. E.; Fadin, V. S.

    2010-01-01

    The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A large cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed. The reason for this cancellation is discovered. This reason has a common nature for particles with any spin. Understanding this reason makes it possible to obtain the total contribution without the complicated calculations that are necessary for finding the separate pieces.

  6. Weighted Bergman Kernels for Logarithmic Weights

    Engliš, Miroslav

    2010-01-01

    Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/

  7. Heat kernels and zeta functions on fractals

    Dunne, Gerald V

    2012-01-01

    On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  8. An integrated genetic data environment (GDE)-based LINUX interface for analysis of HIV-1 and other microbial sequences.

    De Oliveira, T; Miller, R; Tarin, M; Cassol, S

    2003-01-01

    Sequence databases encode a wealth of information needed to develop improved vaccination and treatment strategies for the control of HIV and other important pathogens. To facilitate effective utilization of these datasets, we developed a user-friendly GDE-based LINUX interface that reduces input/output file formatting. GDE was adapted to the Linux operating system, bioinformatics tools were integrated with microbe-specific databases, and up-to-date GDE menus were developed for several clinically important viral, bacterial and parasitic genomes. Each microbial interface was designed for local access and contains Genbank, BLAST-formatted and phylogenetic databases. GDE-Linux is available for research purposes by direct application to the corresponding author. Application-specific menus and support files can be downloaded from (http://www.bioafrica.net).

  9. Exploiting graph kernels for high performance biomedical relation extraction.

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  10. Identification of Fusarium damaged wheat kernels using image analysis

    Ondřej Jirsa

    2011-01-01

    Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive, and due to its subjective approach it can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
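
    A sketch of the RGBH-plus-LDA pipeline in Python with scikit-learn; the pixel statistics for "healthy" and "damaged" kernels below are synthetic stand-ins for real kernel images, so the script only illustrates the feature extraction and discriminant analysis steps.

      import colorsys
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def rgbh(pixels):
          # Mean R, G, B over the kernel area plus the hue of the mean colour.
          r, g, b = pixels.mean(axis=0)
          h, _, _ = colorsys.rgb_to_hsv(r, g, b)
          return [r, g, b, h]

      # Synthetic kernels: healthy ones reddish-yellow, scabby ones pale/chalky.
      rng = np.random.default_rng(0)
      healthy = [rgbh(np.clip(rng.normal([0.80, 0.60, 0.30], 0.05, (200, 3)), 0, 1))
                 for _ in range(40)]
      damaged = [rgbh(np.clip(rng.normal([0.85, 0.80, 0.65], 0.05, (200, 3)), 0, 1))
                 for _ in range(40)]
      X = np.array(healthy + damaged)
      y = np.array([0] * 40 + [1] * 40)

      lda = LinearDiscriminantAnalysis().fit(X, y)
      print("training accuracy:", lda.score(X, y))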

  11. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-increasing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
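
    The non-incremental NPT itself is essentially a factorisation of the Gram matrix: write K = Y^T Y through an eigendecomposition and use the columns of Y as explicit coordinates. The Python below sketches this batch step (the starting point that the paper makes incremental); the RBF kernel and data are illustrative.

      import numpy as np

      def npt_coordinates(K, tol=1e-10):
          # Factor the Gram matrix K = Y^T Y and return explicit coordinates Y.
          w, V = np.linalg.eigh(K)
          keep = w > tol                              # drop null directions
          return (V[:, keep] * np.sqrt(w[keep])).T    # shape (dims, n)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(50, 3))
      K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)   # RBF Gram

      Y = npt_coordinates(K)
      # Any linear method applied to the columns of Y is now a kernel method.
      print("K recovered:", np.allclose(Y.T @ Y, K, atol=1e-8))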

  12. Accelerating Scientific Applications using High Performance Dense and Sparse Linear Algebra Kernels on GPUs

    Abdelfattah, Ahmad

    2015-01-15

    performance experiments show improvements ranging from 10% up to more than fourfold speedups against competitive GPU MVM approaches. Performance impacts on high-level numerical libraries and a computational astronomy application are highlighted, since such memory-bound kernels are often located in the innermost levels of the software chain. The excellent performance obtained in this work has led to the adoption of the code in NVIDIA's widely distributed cuBLAS library.

  13. KINETIC-J: A computational kernel for solving the linearized Vlasov equation applied to calculations of the kinetic, configuration space plasma current for time harmonic wave electric fields

    Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.

    2018-04-01

    We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation, with application to calculating the kinetic plasma response (current) to an applied time-harmonic wave electric field. This code addresses the need for a configuration-space evaluation of the plasma current, to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.

  14. Using Real Time Workshop for rapid and reliable control implementation in the Frascati Tokamak Upgrade Feedback Control System running under RTAI-GNU/Linux

    Centioli, C.; Iannone, F.; Ledauphin, M.; Panella, M.; Pangione, L.; Podda, S.; Vitale, V.; Zaccarian, L.

    2005-01-01

    The Feedback Control System running at FTU has been recently ported from a commercial platform (based on LynxOS) to an open-source GNU/Linux-based RTAI-LXRT platform, thereby obtaining significant performance and cost improvements. Based on the new open-source platform, it is now possible to experiment with novel control strategies aimed at improving the robustness and accuracy of the feedback control. Nevertheless, the implementation of control ideas still requires a great deal of coding of the control algorithms which, if carried out manually, is prone to coding errors and therefore time consuming, both in the development phase and in the subsequent validation tests consisting of dedicated experiments carried out on FTU. In this paper, we report on recent developments based on Mathworks' Simulink and Real Time Workshop (RTW) packages to obtain a user-friendly environment where the real-time code implementing novel control algorithms can be easily generated, tested and validated. Thanks to this new tool, the control designer only needs to specify the block diagram of the control task (namely, a high-level and functional description of the new algorithm under consideration), and the corresponding real-time code generation and testing is completely automated without any need for dedicated experiments. In the paper, the work carried out to adapt the Real Time Workshop to our RTAI-LXRT context will be illustrated. A necessary re-organization of the previous real-time software, aimed at incorporating the code coming from the adapted RTW, will also be discussed. Moreover, we will report on a performance comparison between the code obtained using the automated RTW-based procedure and the hand-written, appropriately optimised C code; at the moment, a preliminary performance comparison consisting of dummy algorithms has shown that the code automatically generated from RTW is faster (by about 30%) than the manually written one. This preliminary result combined with the

  15. LPIC-1 Linux Professional Institute certification study guide exam 101-400 and exam 102-400

    Bresnahan, Christine

    2015-01-01

    Thorough LPIC-1 exam prep, with complete coverage and bonus study tools. The LPIC-1 Study Guide is your comprehensive source for the popular Linux Professional Institute Certification Level 1 exam, fully updated to reflect the changes in the latest version of the exam. With 100% coverage of the objectives for both LPI 101 and LPI 102, this book provides clear and concise information on all Linux administration topics and practical examples drawn from real-world experience. Authoritative coverage of key exam topics includes GNU and UNIX commands, devices, file systems, the file system hierarchy, and user interfaces

  16. A real-time computer simulation of nuclear simulator software using standard PC hardware and Linux environments

    Cha, K. H.; Kweon, K. C.

    2001-01-01

    A feasibility study applying standard PC hardware and Real-Time Linux to the real-time computer simulation of software for a nuclear simulator is presented in this paper. The feasibility prototype was established with the existing software of the Compact Nuclear Simulator (CNS). Through the real-time implementation in the feasibility prototype, we have found that this approach can enable computer-based predictive simulation, owing both to the remarkable improvement in real-time performance and to the reduced effort required for real-time implementation under standard PC hardware and Real-Time Linux environments
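
    The sketch below illustrates the timing skeleton such a prototype rests on: a fixed-period loop that releases one simulation frame per period and counts deadline overruns. The period is invented and the CNS models are elided; on stock Linux a plain sleep() gives only soft real-time behaviour, which is precisely what an RT-Linux executive tightens.

        import time

        # Illustrative 20 ms frame period; the CNS models themselves are not shown.
        PERIOD = 0.02
        N_FRAMES = 100
        deadline_misses = 0
        next_release = time.monotonic()

        for frame in range(N_FRAMES):
            # ... advance the plant/simulator model by one PERIOD here ...
            next_release += PERIOD
            slack = next_release - time.monotonic()
            if slack > 0:
                time.sleep(slack)      # plain sleep(): only soft real time on stock Linux
            else:
                deadline_misses += 1   # overrun; an RT kernel aims to make this rare and bounded

        print(f"missed {deadline_misses} of {N_FRAMES} deadlines")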

  17. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability in the data. We therefore propose to use so-called kernel versions of these transforms; the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.
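
    A minimal sketch of the kernel PCA step referred to above: project onto the leading eigenvectors of the centred Gram matrix of a Gaussian (RBF) kernel. The data here is synthetic random noise standing in for per-pixel NIR spectra, and the kernel width is an arbitrary choice.

        import numpy as np

        def kernel_pca(X, n_components=2, gamma=0.02):
            """Kernel PCA: eigendecomposition of the centred RBF Gram matrix."""
            sq = np.sum(X**2, axis=1)
            K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
            n = K.shape[0]
            one = np.ones((n, n)) / n
            Kc = K - one @ K - K @ one + one @ K @ one   # centre in feature space
            vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
            idx = np.argsort(vals)[::-1][:n_components]  # keep the leading components
            return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

        X = np.random.rand(200, 50)      # 200 "pixels", 50 "bands" of synthetic data
        print(kernel_pca(X).shape)       # (200, 2) nonlinear projection scores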

  18. Proof and implementation of the stochastic formula for ideal gas, energy dependent scattering kernel

    Becker, B.; Dagan, R.; Lohnert, G.

    2009-01-01

    The ideal gas scattering kernel for heavy nuclei with pronounced resonances was developed [Rothenstein, W., Dagan, R., 1998. Ann. Nucl. Energy 25, 209-222], proved and implemented [Rothenstein, W., 2004. Ann. Nucl. Energy 31, 9-23] in the data processing code NJOY [MacFarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91, LA-12740-M], from which the scattering probability tables were prepared [Dagan, R., 2005. Ann. Nucl. Energy 32, 367-377]. Those tables were introduced into the well-known MCNP code [X-5 Monte Carlo Team. MCNP - A General Monte Carlo N-Particle Transport Code, version 5, LA-UR-03-1987] via the 'mt' input cards, in the same manner as is done for light nuclei in the thermal energy range. In this study we present an alternative methodology for solving the double differential, energy dependent scattering kernel that is based solely on stochastic considerations as far as the scattering probabilities are concerned. The solution scheme is based on an alternative rejection scheme suggested by Rothenstein [Rothenstein, W., ENS conference 1994, Tel Aviv]. Comparison with the above-mentioned analytical (probability S(α,β)-tables) approach confirms that the suggested rejection scheme provides accurate results. The bias due to the enhanced number of rejections during the sampling procedure is shown to lie within 1-2 standard deviations for all practical cases that were analysed.
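
    For readers unfamiliar with rejection-based sampling of the free-gas kernel, the sketch below shows the classic constant-cross-section scheme: the target law, proportional to the relative speed times the Maxwellian, is obtained by sampling a (v_n + V)-weighted proposal and thinning with an accept/reject test. The enhanced scheme analysed in the paper extends this weight with the resonant, energy-dependent cross section, which is what produces the additional rejections whose bias is quantified above; that variant is not reproduced here.

        import numpy as np

        def sample_target(v_n, v_p, rng):
            """Free-gas sampling for a neutron of speed v_n; v_p = sqrt(2kT/M).
            Target law ~ v_rel * M(V); the proposal (v_n + V) * M(V) is sampled as
            a mixture, then thinned by the rejection step v_rel / (v_n + V)."""
            vbar = 2.0 * v_p / np.sqrt(np.pi)            # mean Maxwellian speed
            while True:
                if rng.random() < v_n / (v_n + vbar):    # branch ~ M(V)
                    V = np.linalg.norm(rng.normal(0.0, v_p / np.sqrt(2.0), 3))
                else:                                    # branch ~ V * M(V)
                    V = v_p * np.sqrt(-np.log(rng.random()) - np.log(rng.random()))
                mu = 2.0 * rng.random() - 1.0            # isotropic direction cosine
                v_rel = np.sqrt(v_n**2 + V**2 - 2.0 * v_n * V * mu)
                if rng.random() * (v_n + V) < v_rel:     # accept, else resample
                    return V, mu, v_rel

        rng = np.random.default_rng(42)
        print(sample_target(v_n=2200.0, v_p=2500.0, rng=rng))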

  19. GUI2QAD, Graphical Interface for QAD-CGPIC, Point Kernel for Shielding Calculations

    2001-01-01

    1 - Description of program or function: GUI2QAD is an aid in preparing input for the included QAD-CGPIC program, which is based on CCC-493/QAD-CGGP and PICTURE. QAD-CGPIC, included in this distribution, is a Fortran code for neutron and gamma-ray shielding calculations by the point kernel method. Provision is available to interactively view the geometry of the system. QAD-CG calculates fast-neutron and gamma-ray penetration through various shield configurations defined by combinatorial geometry specifications. The code can use the ANS-6.4.3 1990 buildup factor compilation (26 materials). 2 - Methods: The QAD-CGPIC code is based on the point kernel method and provides the option of selecting either GP or Capo's buildup factors. 3 - Restrictions on the complexity of the problem: Details of restrictions and limitations are available in the RSICC code manual CCC-493/QAD-CGGP. Because CCC-493 was obsoleted by CCC-645/QAD-CGGP-A, the CCC-493 documentation is not online but is included with this package. This package includes a graphical user interface to facilitate use
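
    The point kernel method itself reduces, for a monoenergetic point source, to flux = S * B(mu*t) * exp(-mu*t) / (4*pi*R^2), where the buildup factor B corrects the uncollided line-of-sight attenuation for scattered photons. The sketch below evaluates this with a Taylor-form buildup factor as a simpler stand-in for the GP and Capo forms the code offers; all coefficients are placeholders, not values from the QAD-CGPIC libraries.

        import numpy as np

        S = 3.7e10    # source strength [photons/s]; 1 Ci, monoenergetic assumed
        mu = 0.06     # linear attenuation coefficient of the shield [1/cm]
        t = 20.0      # shield thickness along the ray [cm]
        R = 100.0     # source-to-detector distance [cm]

        def buildup_taylor(mfp, A=10.0, a1=-0.10, a2=0.05):
            """Taylor-form buildup B = A*exp(-a1*x) + (1-A)*exp(-a2*x);
            the coefficients here are placeholders, not fitted library values."""
            return A * np.exp(-a1 * mfp) + (1.0 - A) * np.exp(-a2 * mfp)

        mfp = mu * t  # shield thickness in mean free paths
        flux = S * buildup_taylor(mfp) * np.exp(-mfp) / (4.0 * np.pi * R**2)
        print(f"point kernel flux: {flux:.3e} photons/(cm^2 s)")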

  20. Kernel based eigenvalue-decomposition methods for analysing ham

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    ... methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel based methods have many useful factors and are able to capture the subtle differences in the images; this is illustrated in Figure 1. A comparison of the most useful factor of PCA and of kernel based PCA, respectively, is shown in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct than the traditional factor. After the orthogonal transformation, a simple thresholding ...
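
    For reference, the traditional (linear) MAF transform that the kernel versions generalize can be written as a generalized eigenproblem: minimize the variance of spatial differences relative to the total variance. The sketch below does this on synthetic data standing in for the ham images; the kernelized MAF/MNF developed in the paper is not reproduced here.

        import numpy as np
        from scipy.linalg import eigh

        # Random noise standing in for a multispectral ham image (H x W x bands).
        img = np.random.rand(64, 64, 5)
        X = img.reshape(-1, img.shape[2])
        X = X - X.mean(axis=0)

        # Covariance of the data and of its horizontal spatial differences.
        d = (img[:, 1:, :] - img[:, :-1, :]).reshape(-1, img.shape[2])
        cov = np.cov(X, rowvar=False)
        cov_d = np.cov(d, rowvar=False)

        # MAF directions w solve cov_d w = lambda cov w; the smallest lambda has
        # the highest spatial autocorrelation, so eigh's ascending order is right.
        vals, vecs = eigh(cov_d, cov)
        maf = X @ vecs                   # factors, most autocorrelated first
        print(vals[:3])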