WorldWideScience

Sample records for super computer system

  1. NETL Super Computer

    Data.gov (United States)

    Federal Laboratory Consortium — The NETL Super Computer was designed for performing engineering calculations that apply to fossil energy research. It is one of the world’s larger supercomputers,...

  2. Computational mission analysis and conceptual system design for super low altitude satellite

    Institute of Scientific and Technical Information of China (English)

    Ming Xu; Jinlong Wang; Nan Zhou

    2014-01-01

This paper deals with system engineering and design methodology for super low altitude satellites from the viewpoint of computational mission analysis. Because imaging instruments, such as the camera focus and the image element of the charge coupled device (CCD), advance only slightly, flying the satellite in a lower-altitude orbit is an innovative and economical way to improve the camera's resolution. DFH-3, the mature satellite bus developed by the Chinese Academy of Space Technology, is employed to define the mass and power budgets for the computational mission analysis and the detailed engineering design of super low altitude satellites. An effective iterative algorithm is proposed to solve the ergodic representation of feasible mass and power budgets at the flight altitude under constraints. Boundaries of mass or power exist for every altitude: the upper boundary is derived from the maximum power, while the minimum thrust force holds the lower boundary before the power reaches its initial value. Furthermore, an analytical algorithm is employed to numerically investigate the coverage percentage over altitude, so that the nominal altitude can be selected from all feasible altitudes based on both the mass and power budgets and the repetitive ground traces. The local time at the descending node is chosen for the nominal sun-synchronous orbit based on the average evaluation function. After the key orbital elements are determined from the computational mission analysis, the detailed engineering design of the configuration and other subsystems, such as power, telemetry, telecontrol and communication (TT&C), and the attitude determination and control system (ADCS), is performed based on the benchmark bus; some improvements to the bus are also implemented to accommodate flight at a super low altitude. Two operation strategies, a drag-free closed-loop mode and an on/off open-loop mode, are presented to maintain the satellite…

  3. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries, such as Los Alamos, CERN and the Rutherford laboratory, but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  4. Super computer made with Linux cluster

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong Hun; Oh, Yeong Eun; Kim, Jeong Seok

    2002-01-15

This book consists of twelve chapters introducing a supercomputer built with a Linux cluster. The contents cover the Linux cluster, the principle of clustering, the design of a Linux cluster, Linux fundamentals, building up a terminal server and client, a Beowulf cluster on Debian GNU/Linux, a cluster system with Red Hat, a monitoring system, application programming with MPI (set-up and installation), application programming with PVM (PVM programming and XPVM), application programming with OpenPBS (composition, installation and set-up), and GRID (the GRID system, GSI, GRAM, MDS, and installation and use of the toolkit).

  5. Virtualizing Super-Computation On-Board Uas

    Science.gov (United States)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and the power consumed.
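As a rough illustration of the tile-per-core work distribution such a parallelization strategy implies, the hedged Python sketch below splits one frame into 36 tiles and runs a hypothetical brightness-threshold detector on each tile in parallel. The real on-board implementation would target the TILE-Gx36 natively; the detector, threshold and image size here are illustrative assumptions, not the paper's code.

```python
# Hedged sketch only: a generic tile-per-core detection pass, written in Python
# for illustration. Detector, threshold and image size are hypothetical.
from multiprocessing import Pool
import numpy as np

def detect_hotspots(tile_with_offset):
    """Return (row, col) coordinates of bright pixels in one tile."""
    (r0, c0), tile = tile_with_offset
    rows, cols = np.nonzero(tile > 200)          # illustrative brightness threshold
    return [(r0 + r, c0 + c) for r, c in zip(rows, cols)]

def split_into_tiles(frame, n_rows=6, n_cols=6):
    """Split a frame into n_rows * n_cols tiles (36 tiles, one per core)."""
    h, w = frame.shape
    tiles = []
    for i in range(n_rows):
        for j in range(n_cols):
            r0, c0 = i * h // n_rows, j * w // n_cols
            r1, c1 = (i + 1) * h // n_rows, (j + 1) * w // n_cols
            tiles.append(((r0, c0), frame[r0:r1, c0:c1]))
    return tiles

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (1200, 1600), dtype=np.uint16)  # stand-in payload image
    with Pool(processes=36) as pool:                                  # one worker per core
        detections = pool.map(detect_hotspots, split_into_tiles(frame))
    print(sum(len(d) for d in detections), "candidate pixels found")
```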

  6. Two-Component Super AKNS Equations and Their Finite-Dimensional Integrable Super Hamiltonian System

    OpenAIRE

    Jing Yu; Jingwei Han

    2014-01-01

Starting from a matrix Lie superalgebra, a two-component super AKNS system is constructed. By making use of the mono-nonlinearization technique of Lax pairs, we find that the obtained two-component super AKNS system is a finite-dimensional integrable super Hamiltonian system. Its Lax representation and $r$-matrix are also given in this paper.

  7. Two-Component Super AKNS Equations and Their Finite-Dimensional Integrable Super Hamiltonian System

    Directory of Open Access Journals (Sweden)

    Jing Yu

    2014-01-01

Starting from a matrix Lie superalgebra, a two-component super AKNS system is constructed. By making use of the mono-nonlinearization technique of Lax pairs, we find that the obtained two-component super AKNS system is a finite-dimensional integrable super Hamiltonian system. Its Lax representation and r-matrix are also given in this paper.

  8. Computer protection plan for the Superconducting Super Collider Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, S.

    1992-04-15

The purpose of this document is to describe the current unclassified computer security program practices, policies and procedures for the Superconducting Super Collider Laboratory (SSCL). This document includes or references all related policies and procedures currently implemented throughout the SSCL. The document also includes security practices that are planned for when the facility is fully operational.

  9. Second invariant for two-dimensional classical super systems

    Indian Academy of Sciences (India)

    S C Mishra; Roshan Lal; Veena Mishra

    2003-10-01

    Construction of superpotentials for two-dimensional classical super systems (for ≥ 2) is carried out. Some interesting potentials have been studied in their super form and also their integrability.

  10. The SuperNova Early Warning System

    OpenAIRE

    Scholberg, K.

    2008-01-01

    A core collapse in the Milky Way will produce an enormous burst of neutrinos in detectors world-wide. Such a burst has the potential to provide an early warning of a supernova's appearance. I will describe the nature of the signal, the sensitivity of current detectors, and SNEWS, the SuperNova Early Warning System, a network designed to alert astronomers as soon as possible after the detected neutrino signal.

  11. Transfer function characteristics of super resolving systems

    Science.gov (United States)

    Milster, Tom D.; Curtis, Craig H.

    1992-01-01

Signal quality in an optical storage device greatly depends on the optical system transfer function used to write and read data patterns. The problem is similar to the analysis of scanning optical microscopes. Hopkins and Braat have analyzed write-once-read-many (WORM) optical data storage devices. Herein, transfer function analysis of magneto-optic (MO) data storage devices is discussed with respect to improving transfer-function characteristics. Several authors have described improving the transfer function as super resolution. However, none have thoroughly analyzed the MO optical system and the effects of the medium. Both the optical system transfer function and the effects of the medium are discussed in this development.

  12. Computer systems

    Science.gov (United States)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  13. COMPUTATION OF SUPER-CONVERGENT NODAL STRESSES OF TIMOSHENKO BEAM ELEMENTS BY EEP METHOD

    Institute of Scientific and Technical Information of China (English)

    王枚; 袁驷

    2004-01-01

The newly proposed element energy projection (EEP) method has been applied to the computation of super-convergent nodal stresses of Timoshenko beam elements. General formulas based on the element projection theorem were derived, and illustrative numerical examples using two typical elements were given. Both the analysis and the examples show that the EEP method also works very well for problems with vector function solutions. The EEP method gives super-convergent nodal stresses, which are well comparable to the nodal displacements in terms of both convergence rate and error magnitude. In addition, it can overcome the "shear locking" difficulty for stresses even when the displacements are badly affected. This research paves the way for application of the EEP method to general one-dimensional systems of ordinary differential equations.

  14. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    Science.gov (United States)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
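The coarse-to-fine scheme summarized above lends itself to a very short sketch. The following is a minimal, illustrative Python version (not the authors' code): each low-resolution frame is aligned with an assumed-known sub-pixel shift, upsampled by cubic interpolation, and the refined image is taken as the pixel-wise median of the stack.

```python
# Minimal sketch, assuming registration shifts are already estimated elsewhere.
import numpy as np
from scipy.ndimage import shift as subpixel_shift, zoom

def coarse_to_fine_sr(frames, shifts, scale=2):
    """frames: list of 2-D arrays; shifts: per-frame (dy, dx) relative to the reference frame."""
    upsampled = []
    for frame, (dy, dx) in zip(frames, shifts):
        aligned = subpixel_shift(frame, (-dy, -dx), order=3)   # cubic sub-pixel alignment
        upsampled.append(zoom(aligned, scale, order=3))        # coarsely super-resolved frame
    return np.median(np.stack(upsampled, axis=0), axis=0)      # pixel-wise median = refined image
```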

  15. Final Report: Super Instruction Architecture for Scalable Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, Beverly Ann [University of Florida; Bartlett, Rodney [University of Florida; Deumens, Erik [University of Florida

    2013-12-23

The most advanced methods for reliable and accurate computation of the electronic structure of molecular and nano systems are the coupled-cluster techniques. These high-accuracy methods help us to understand, for example, how biological enzymes operate and contribute to the design of new organic explosives. The ACES III software provides a modern, high-performance implementation of these methods optimized for high performance parallel computer systems, ranging from small clusters typical in individual research groups, through larger clusters available in campus and regional computer centers, all the way to high-end petascale systems at national labs, including exploiting GPUs if available. This project enhanced the ACES III software package and used it to study interesting scientific problems.

  16. Large-scale integrated super-computing platform for next generation virtual drug discovery.

    Science.gov (United States)

    Mitchell, Wayne; Matsumoto, Shunji

    2011-08-01

Traditional drug discovery starts by experimentally screening chemical libraries to find hit compounds that bind to protein targets, modulating their activity. Subsequent rounds of iterative chemical derivatization and rescreening are conducted to enhance the potency, selectivity, and pharmacological properties of hit compounds. Although computational docking of ligands to targets has been used to augment the empirical discovery process, its historical effectiveness has been limited because of the poor correlation of ligand dock scores and experimentally determined binding constants. Recent progress in super-computing, coupled to theoretical insights, allows the calculation of the Gibbs free energy, and therefore accurate binding constants, for unusually large ligand-receptor systems. This advance extends the potential of virtual drug discovery. A specific embodiment of the technology, integrating de novo, abstract fragment based drug design, sophisticated molecular simulation, and the ability to calculate thermodynamic binding constants with unprecedented accuracy, is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
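The step from a computed Gibbs free energy to a binding constant rests on the standard thermodynamic relation ΔG = RT·ln(Kd). A small worked example follows; the ΔG value is hypothetical and chosen only for scale.

```python
# Worked illustration of ΔG = RT*ln(Kd); the binding free energy is illustrative.
import math

R = 1.987e-3                     # gas constant, kcal/(mol*K)
T = 298.15                       # temperature, K
delta_G = -9.5                   # hypothetical computed binding free energy, kcal/mol

Kd = math.exp(delta_G / (R * T))                    # dissociation constant, mol/L
print(f"Kd = {Kd:.2e} M (about {Kd * 1e9:.0f} nM)")  # ≈ 1.1e-7 M, i.e. ~110 nM
```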

  17. Super-honeycomb lattice: A hybrid fermionic and bosonic system

    CERN Document Server

    Zhong, Hua; Zhu, Yi; Zhang, Da; Li, Changbiao; Zhang, Yanpeng; Li, Fuli; Belić, Milivoj R; Xiao, Min

    2016-01-01

    We report on transport properties of the super-honeycomb lattice, the band structure of which possesses a flat band and Dirac cones, according to the tight-binding approximation. This super-honeycomb model combines the honeycomb lattice and the Lieb lattice and displays the properties of both. The super-honeycomb lattice also represents a hybrid fermionic and bosonic system, which is rarely seen in nature. By choosing the phases of input beams properly, the flat-band mode of the super-honeycomb will be excited and the input beams will exhibit a strong localization during propagation. On the other hand, if the modes of Dirac cones of the super-honeycomb lattice are excited, one will observe conical diffraction. Furthermore, if the input beam is properly chosen to excite a sublattice of the super-honeycomb lattice and the modes of Dirac cones with different pseudospins, e.g., the three-beam interference pattern, the pseudospin-mediated vortices will be observed.

  18. Computer simulations of phase field drops on super-hydrophobic surfaces

    Science.gov (United States)

    Fedeli, Livio

    2017-09-01

    We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Discretization performs through cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code achieves three-dimensional, realistic computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.

  19. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, the general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten…

  20. APES-based procedure for super-resolution SAR imagery with GPU parallel computing

    Science.gov (United States)

    Jia, Weiwei; Xu, Xiaojian; Xu, Guangyao

    2015-10-01

The amplitude and phase estimation (APES) algorithm is widely used in modern spectral analysis. Compared with the conventional fast Fourier transform (FFT), APES yields lower sidelobes and narrower spectral peaks. However, in synthetic aperture radar (SAR) imaging of large scenes, it is difficult to apply APES directly to super-resolution radar image processing without parallel computation, because of its great amount of calculation. In this paper, a procedure is proposed to achieve target extraction and parallel computing of APES for super-resolution SAR imaging. Numerical experiments are carried out on a Tesla K40C with a 745 MHz GPU clock rate and 2880 CUDA cores. Results for SAR images with GPU parallel computing show that the parallel APES is remarkably more efficient than the CPU-based implementation at the same super-resolution.
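For readers unfamiliar with APES, the following is a compact, forward-only 1-D version of the estimator (after Li and Stoica), given as a hedged reference sketch only; the paper's SAR-specific target extraction and GPU/CUDA parallelization are not reproduced here.

```python
# Forward-only 1-D APES sketch; filter length M and frequency grid are user choices.
import numpy as np

def apes_1d(x, M, freqs):
    """Amplitude estimates of signal x at angular frequencies `freqs` (rad/sample)."""
    x = np.asarray(x, dtype=complex)
    L = len(x) - M + 1
    Y = np.stack([x[l:l + M] for l in range(L)], axis=1)   # M x L snapshot matrix
    R = (Y @ Y.conj().T) / L                                # sample covariance
    amps = []
    for w in freqs:
        a = np.exp(1j * w * np.arange(M))                   # steering vector
        g = (Y @ np.exp(-1j * w * np.arange(L))) / L        # filtered data vector
        Q = R - np.outer(g, g.conj())                       # noise + interference covariance
        Qi_a = np.linalg.solve(Q, a)
        Qi_g = np.linalg.solve(Q, g)
        amps.append((a.conj() @ Qi_g) / (a.conj() @ Qi_a))  # APES amplitude at w
    return np.array(amps)
```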

  1. Probing Linear Collider Final Focus Systems in SuperKEKB

    CERN Document Server

    Thrane, Paul Conrad Vaagen

    2017-01-01

A challenge for future linear collider final focus systems is the large chromaticity produced by the final quadrupoles. SuperKEKB will be correcting high levels of chromaticity using the traditional scheme, which has also been proposed for the CLIC FFS. We present early simulation results indicating that lowering β*_y in the SuperKEKB Low Energy Ring might be possible given on-axis injection and low bunch current, opening the possibility of testing chromaticity correction beyond the FFTB level, similar to the ILC and approaching that of CLIC. CLIC-Note-1077

  2. Dynamical Constraints on Outer Planets in Super-Earth Systems

    OpenAIRE

    Read, Matthew J.; Wyatt, Mark C.

    2015-01-01

    This paper considers secular interactions within multi-planet systems. In particular we consider dynamical evolution of known planetary systems resulting from an additional hypothetical planet on an eccentric orbit. We start with an analytical study of a general two-planet system, showing that a planet on an elliptical orbit transfers all of its eccentricity to an initially circular planet if the two planets have comparable orbital angular momenta. Application to the single Super-Earth system...

  3. SuperMacLang: Development of an Authoring System.

    Science.gov (United States)

    Frommer, Judith; Foelsche, Otmar K. E.

    1999-01-01

    Describes the development of "SuperMacLang, the 1990s version of the MacLang authoring system. An analysis of various features of the program explains the ways in which certain aspects of collaboration and funding affected developer and programming decisions. (Author/VWL)

  4. Super heat pump energy accumulation system

    Energy Technology Data Exchange (ETDEWEB)

    1989-08-20

The SHP is a project commissioned to NEDO as part of MITI's Moonlight Project and has been under development since 1985. This report introduces mainly the practical results (the trial operation study of the 100 kW class bench scale plant) for fiscal year 1988 and the present situation of SHP technical development. Further, this report introduces estimates of the carbon dioxide reduction and energy saving effects with respect to global warming. In the bench scale experiment, a 100 kW class compression heat pump of super high performance and a 10 Mcal class high-density chemical energy storage technique operating between a higher temperature (100°C or more) and a cooler temperature (10°C or less) were established. The energy saving effect for business, industry and cooling energy in Japan by SHP is estimated to be 205 kl (oil)/year in 2000, and the CO2 reduction effect is estimated to be about 820,000 tons/year. 2 refs., 4 figs.

  5. CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC

    Science.gov (United States)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2014-06-01

The Monte Carlo (MC) method has distinct advantages in simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High fidelity simulation with the MC method coupled with multi-physics simulation has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated by using a series of benchmarking cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear systems.
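SuperMC itself is a large CAD-based, hybrid MC-deterministic code; purely as a hedged illustration of the underlying Monte Carlo idea, the toy example below estimates uncollided transmission through a homogeneous slab by sampling exponential free-flight distances and compares the result with the analytic answer exp(-Σt·d). The cross-section and thickness are illustrative assumptions.

```python
# Toy Monte Carlo transmission estimate; not SuperMC, just the sampling idea.
import numpy as np

rng = np.random.default_rng(42)
sigma_t = 0.5            # total macroscopic cross-section, 1/cm (illustrative)
thickness = 4.0          # slab thickness, cm (illustrative)
n_particles = 1_000_000

paths = rng.exponential(1.0 / sigma_t, size=n_particles)   # sampled free-flight distances
transmitted = np.count_nonzero(paths > thickness)

print("MC estimate:", transmitted / n_particles)
print("Analytic   :", np.exp(-sigma_t * thickness))        # exp(-Σt·d) ≈ 0.135
```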

  6. Super-resolution for scanning light stimulation systems

    Science.gov (United States)

    Bitzer, L. A.; Neumann, K.; Benson, N.; Schmechel, R.

    2016-09-01

Super-resolution (SR) is a technique used in digital image processing to overcome the resolution limitation of imaging systems. In this process, a single high resolution image is reconstructed from multiple low resolution images. SR is commonly used for CCD and CMOS (Complementary Metal-Oxide-Semiconductor) sensor images, as well as for medical applications, e.g., magnetic resonance imaging. Here, we demonstrate that super-resolution can be applied to scanning light stimulation (LS) systems, which are commonly used to obtain space-resolved electro-optical parameters of a sample. For our purposes, the Projection Onto Convex Sets (POCS) algorithm was chosen and modified to suit the needs of LS systems. To demonstrate the SR adaptation, an Optical Beam Induced Current (OBIC) LS system was used. The POCS algorithm was optimized by means of OBIC short circuit current measurements on a multicrystalline solar cell, resulting in a mean square error reduction of up to 61% and improved image quality.

  7. Computationally efficient image restoration and super-resolution algorithms for real-time implementation

    Science.gov (United States)

    Sundareshan, Malur K.

    2002-07-01

Computational complexity is a major impediment to the real-time implementation of image restoration and super-resolution algorithms. Although powerful restoration algorithms have been developed within the last few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution gains in order to meaningfully perform detection and recognition tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture mega-pixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and super-resolution algorithms is of significant practical interest and will be the primary focus of this paper. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate pre-processing and post-processing steps together with the super-resolution iterations in order to tailor optimized overall processing sequences for imagery data of specific formats. Three distinct methods for tailoring a pre-processing filter and integrating it with the super-resolution processing steps will be outlined in this paper. These methods consist of a Region-of-Interest (ROI) extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared to the super-resolution iterations.

  8. The super-Turing computational power of plastic recurrent neural networks.

    Science.gov (United States)

    Cabessa, Jérémie; Siegelmann, Hava T

    2014-12-01

We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the super-Turing computational power of static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.

  9. ESSE: Engineering Super Simulation Emulation for Virtual Reality Systems Environment

    Energy Technology Data Exchange (ETDEWEB)

    Suh, Kune Y. [Seoul National Univ., Seoul (Korea, Republic of); Yeon, Choul W. [PHILOSOPHIA, Inc., Seoul (Korea, Republic of)

    2008-04-15

The trademark 4+D Technology™ based Engineering Super Simulation Emulation (ESSE) is introduced. ESSE, resorting to three-dimensional (3D) Virtual Reality (VR) technology, pledges to provide interactive real-time motion, sound, tactile and other forms of feedback in the man-machine systems environment. In particular, the 3D Virtual Engineering Neo-cybernetic Unit Soft Power (VENUS) adds a physics engine to the VR platform so as to materialize a physical atmosphere. A close cooperation system and prompt information sharing are crucial, thereby increasing the necessity of a centralized information system and an electronic cooperation system. VENUS is further deemed to contribute towards public acceptance of nuclear power in general, and safety in particular. For instance, visualization of nuclear systems can familiarize the public with nuclear power plants (NPPs), answering their questions and alleviating misunderstandings about NPPs in general, and their performance, security and safety in particular. An in-house flagship project, Systemic Three-dimensional Engine Platform Prototype Engineering (STEPPE), endeavors to develop the Systemic Three-dimensional Engine Platform (STEP) for a variety of VR applications. STEP is home to a level system providing the whole visible scene of virtual engineering of the man-machine system environment. The system is linked with video monitoring that provides a 3D Computer Graphics (CG) visualization of major events. The database-linked system provides easy access to relevant blueprints, and the character system enables the operators to access the virtual systems by using their virtual characters.

  10. Atmospheric evaporation in super-Earth exoplanet systems

    Science.gov (United States)

    Moller, Spencer; Miller, Brendan P.; Gallo, Elena; Wright, Jason; Poppenhaeger, Katja

    2017-01-01

We investigate the influence of stellar activity on atmospheric heating and evaporation in four super-Earth exoplanets: HD 97658 b, GJ 1214 b, 55 Cnc e, and CoRoT-7 b. We use X-ray observations of the host stars to estimate planetary mass loss. We extracted net count rates from a soft-band image, converted them to flux using PIMMS for a standard coronal model, calculated the intrinsic stellar luminosity, and estimated the current-epoch mass-loss rate and the integrated mass lost. Our aim is to determine under what circumstances current super-Earths will have experienced significant mass loss through atmospheric irradiation over the system lifetime. We hypothesize that closely-orbiting exoplanets receiving the greatest amount of high-energy stellar radiation will also tend to be sculpted into lower mass and more dense remnant cores.

  11. Super Efimov effect for mass-imbalanced systems

    Science.gov (United States)

    Moroz, Sergej; Nishida, Yusuke

    2014-12-01

We study two species of particles in two dimensions interacting by isotropic short-range potentials with the interspecies potential fine-tuned to a p-wave resonance. Their universal low-energy physics can be extracted by analyzing a properly constructed low-energy effective field theory with the renormalization group method. Consequently, a three-body system consisting of two particles of one species and one of the other is shown to exhibit the super Efimov effect, the emergence of an infinite tower of three-body bound states with orbital angular momentum ℓ = ±1 whose binding energies obey a doubly exponential scaling, when the two particles are heavier than the other by a mass ratio greater than 4.03404 for identical bosons and 2.41421 for identical fermions. As the mass ratio increases, the super Efimov spectrum becomes denser, which would make its experimental observation easier. We also point out that the Born-Oppenheimer approximation is incapable of reproducing the super Efimov effect, i.e., the universal low-energy asymptotic scaling of the spectrum.

  12. Passive safety system of a super fast reactor

    Energy Technology Data Exchange (ETDEWEB)

    Sutanto, E-mail: sutanto@fuji.waseda.jp [Cooperative Major in Nuclear Energy, Waseda University, Tokyo (Japan); Polytechnic Institute of Nuclear Technology—National Nuclear Energy Agency, Yogyakarta (Indonesia); Oka, Yoshiaki [The University of Tokyo, Tokyo (Japan)

    2015-08-15

    Highlights: • Passive safety system of a Super FR is proposed. • Total loss of feedwater flow and large LOCA are analyzed. • The criteria of MCST and core pressure are satisfied. - Abstract: Passive safety systems of a Super Fast Reactor are studied. The passive safety systems consist of isolation condenser (IC), automatic depressurization system (ADS), core make-up tank (CMT), gravity driven cooling system (GDCS), and passive containment cooling system (PCCS). Two accidents of total loss of feedwater flow and 100% cold-leg break large LOCA are analyzed by using the passive systems and the criteria of maximum cladding surface temperature (MCST) and maximum core pressure are satisfied. The isolation condenser can be used for mitigation of the accident of total loss of feedwater flow at both supercritical and subcritical pressures. The ADS is used for depressurization leading to a loss of coolant during line switching to operation of the isolation condenser at subcritical pressure. Use of CMT during line switching recovers the lost coolant. In case of large LOCA, GDCS can be used for core reflooding. Coolant vaporization in the core released to containment through the break is condensed by passive containment cooling system. The condensate flows to the GDCS pool by gravity force. The maximum cladding surface temperature (MCST) of the accident satisfies the criterion.

  13. Small-Animal Imaging Using Clinical Positron Emission Tomography/Computed Tomography and Super-Resolution

    Directory of Open Access Journals (Sweden)

    Frank P. DiFilippo

    2012-05-01

Considering the high cost of dedicated small-animal positron emission tomography/computed tomography (PET/CT), an acceptable alternative in many situations might be clinical PET/CT. However, spatial resolution and image quality are of concern. The utility of clinical PET/CT for small-animal research and the image quality improvements from super-resolution (spatial subsampling) were investigated. National Electrical Manufacturers Association (NEMA) NU 4 phantom and mouse data were acquired with a clinical PET/CT scanner, as both conventional static and stepped scans. Static scans were reconstructed with and without point spread function (PSF) modeling. Stepped images were postprocessed with iterative deconvolution to produce super-resolution images. Image quality was markedly improved using the super-resolution technique, avoiding certain artifacts produced by PSF modeling. The 2 mm rod of the NU 4 phantom was visualized with high contrast, and the major structures of the mouse were well resolved. Although not a perfect substitute for a state-of-the-art small-animal PET/CT scanner, a clinical PET/CT scanner with super-resolution produces acceptable small-animal image quality for many preclinical research studies.

  14. ATLAS FTK a – very complex – custom super computer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

In the ever increasing pile-up LHC environment, advanced techniques for analyzing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a hardware-level track finding implementation designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance a highly parallel system was designed, and it is now under installation in ATLAS. At the beginning of 2016 it will provide tracks to the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory, AM06). In a first stage coarse resolution hits are matched against the patterns and the accepted hits u...

  15. Study of Super-Twisting sliding mode control for U model based nonlinear system

    OpenAIRE

    Zhang, Jianhua; Li, Yang; Xueli WU; Jianan HUO; Shenyang ZHUANG

    2016-01-01

    The Super-Twisting control algorithm is adopted to analyze the U model based nonlinear control system in order to solve the controller design problems of non-affine nonlinear systems. The non-affine nonlinear systems are studied, the neural network approximation of the nonlinear function is performed, and the Super-Twisting control algorithm is used to control. The convergence of the Super-Twisting algorithm is proved by selecting an appropriate Lyapunov function. The Matlab simulation is car...

  16. Dynamical constraints on outer planets in super-Earth systems

    Science.gov (United States)

    Read, Matthew J.; Wyatt, Mark C.

    2016-03-01

    This paper considers secular interactions within multi-planet systems. In particular, we consider dynamical evolution of known planetary systems resulting from an additional hypothetical planet on an eccentric orbit. We start with an analytical study of a general two-planet system, showing that a planet on an elliptical orbit transfers all of its eccentricity to an initially circular planet if the two planets have comparable orbital angular momenta. Application to the single super-Earth system HD 38858 shows that an additional hypothetical planet below current radial velocity (RV) constraints with M sini = 3-10 M⊕, semi-major axis 1-10 au and eccentricity 0.2-0.8 is unlikely to be present from the eccentricity that would be excited in the known planet (albeit cyclically). However, additional planets in proximity to the known planet could stabilize the system against secular perturbations from outer planets. Moreover, these additional planets can have an M sini below RV sensitivity and still affect their neighbours. For example, application to the two super-Earth system 61 Vir shows that an additional hypothetical planet cannot excite high eccentricities in the known planets, unless its mass and orbit lie in a restricted area of parameter space. Inner planets in HD 38858 below RV sensitivity would also modify conclusions above about excluded parameter space. This suggests that it may be possible to infer the presence of additional stabilizing planets in systems with an eccentric outer planet and an inner planet on an otherwise suspiciously circular orbit. This reinforces the point that the full complement of planets in a system is needed to assess its dynamical state.

  17. Computer system identification

    OpenAIRE

    Lesjak, Borut

    2008-01-01

    The concept of computer system identity in computer science bears just as much importance as does the identity of an individual in a human society. Nevertheless, the identity of a computer system is incomparably harder to determine, because there is no standard system of identification we could use and, moreover, a computer system during its life-time is quite indefinite, since all of its regular and necessary hardware and software upgrades soon make it almost unrecognizable: after a number o...

  18. Towards the Use of Super-Resolution in Biomedical Systems-on-Chip

    Directory of Open Access Journals (Sweden)

    Gustavo M. Callico

    2013-08-01

Super-resolution is a smart process capable of generating images with a higher resolution than the resolution of the sensor used to acquire them. For this reason, it has acquired significant relevance within the medical community in recent years, especially for those specialties closely related to the medical imaging field. However, the super-resolution algorithms used in this field are normally extremely complex and thus tend to be slow and difficult to implement in hardware. This paper proposes a new super-resolution algorithm for video sequences that, while maintaining excellent levels in the objective and subjective visual quality of the processed images, presents a reduced computational cost due to its non-iterative nature and the use of fast motion estimation techniques. Additionally, the algorithm has been successfully implemented in a low-cost hardware platform, which guarantees the viability of the proposed solution for real-time biomedical systems-on-chip.

  19. Tensor computations in computer algebra systems

    CERN Document Server

    Korolkova, A V; Sevastyanov, L A

    2014-01-01

    This paper considers three types of tensor computations. On their basis, we attempt to formulate criteria that must be satisfied by a computer algebra system dealing with tensors. We briefly overview the current state of tensor computations in different computer algebra systems. The tensor computations are illustrated with appropriate examples implemented in specific systems: Cadabra and Maxima.

  20. Distributed computer control systems

    Energy Technology Data Exchange (ETDEWEB)

    Suski, G.J.

    1986-01-01

    This book focuses on recent advances in the theory, applications and techniques for distributed computer control systems. Contents (partial): Real-time distributed computer control in a flexible manufacturing system. Semantics and implementation problems of channels in a DCCS specification. Broadcast protocols in distributed computer control systems. Design considerations of distributed control architecture for a thermal power plant. The conic toolset for building distributed systems. Network management issues in distributed control systems. Interprocessor communication system architecture in a distributed control system environment. Uni-level homogenous distributed computer control system and optimal system design. A-nets for DCCS design. A methodology for the specification and design of fault tolerant real time systems. An integrated computer control system - architecture design, engineering methodology and practical experience.

  1. Dynamical Constraints on Outer Planets in Super-Earth Systems

    CERN Document Server

    Read, Matthew J

    2015-01-01

This paper considers secular interactions within multi-planet systems. In particular we consider dynamical evolution of known planetary systems resulting from an additional hypothetical planet on an eccentric orbit. We start with an analytical study of a general two-planet system, showing that a planet on an elliptical orbit transfers all of its eccentricity to an initially circular planet if the two planets have comparable orbital angular momenta. Application to the single Super-Earth system HD 38858 shows that an additional hypothetical planet below current radial velocity (RV) constraints with Msini = 3-10 M⊕, semi-major axis 1-10 au and eccentricity 0.2-0.8 is unlikely to be present from the eccentricity that would be excited in the known planet (albeit cyclically). However, additional planets in proximity to the known planet could stabilise the system against secular perturbations from outer planets. Moreover these additional planets can have an Msini below RV sensitivity and sti...

  2. Super-capacitors as an energy storage for fuel cell automotive hybrid electrical system

    Energy Technology Data Exchange (ETDEWEB)

    Thounthong, P.; Rael, St.; Davat, B. [Institut National Polytechnique, GREEN-INPL-CNRS (UMR 7037), 54 - Vandoeuvre les Nancy (France)

    2004-07-01

The design, implementation and testing of a purely super-capacitor energy storage system for an automotive system having a fuel cell as its main source are presented. The system employs a super-capacitive storage device composed of six components (3500 F, 2.5 V, 400 A) connected in series. This device is connected to the automotive 42 V DC bus by a 2-quadrant DC-DC converter. The control structure of the system is realised by means of analog and digital control. The experimental results show that super-capacitors are suitable as an energy storage device for a fuel cell automotive electrical system. (authors)
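A quick back-of-the-envelope check of the storage bank described above (six 3500 F, 2.5 V cells in series) follows; the assumption of discharging all the way to 0 V is an idealization, since a real converter only uses part of the voltage window.

```python
# Series bank energy estimate from the cell data quoted in the abstract.
n_cells, C_cell, V_cell = 6, 3500.0, 2.5
C_bank = C_cell / n_cells                # series capacitance ≈ 583 F
V_bank = n_cells * V_cell                # ≈ 15 V, interfaced to the 42 V bus via DC-DC
E = 0.5 * C_bank * V_bank ** 2           # stored energy ≈ 65.6 kJ
print(f"{C_bank:.0f} F at {V_bank:.1f} V stores about {E / 3600:.1f} Wh")
```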

  3. ALMA correlator computer systems

    Science.gov (United States)

    Pisano, Jim; Amestica, Rodrigo; Perez, Jesus

    2004-09-01

    We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack- mounted PC controls and monitors the correlator, and a cluster of 17 PCs process the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses and the data processing computer cluster interfaces to the correlator via sixteen dedicated high speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.

  4. Local rollback for fault-tolerance in parallel computing systems

    Science.gov (United States)

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel super computing system. The super computing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
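The claimed control flow can be paraphrased as a short loop. The sketch below is an illustrative, non-IBM rendering in which checkpoint, restore, had_error and had_unrecoverable_condition are hypothetical callbacks standing in for the cache and control-logic mechanisms of the patent.

```python
# Hedged sketch of the local-rollback control flow; all callbacks are hypothetical.
def run_with_local_rollback(instructions, interval_length, checkpoint, restore,
                            had_error, had_unrecoverable_condition):
    i = 0
    while i < len(instructions):
        snapshot = checkpoint()                      # capture state at interval start
        end = min(i + interval_length, len(instructions))
        for instr in instructions[i:end]:            # run the local rollback interval
            instr()
        if had_error():
            if had_unrecoverable_condition():
                raise RuntimeError("unrecoverable: escalate to global recovery")
            restore(snapshot)                        # local rollback: retry the interval
        else:
            i = end                                  # commit and advance
```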

  5. Fault tolerant computing systems

    CERN Document Server

    Randell, B

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (15 refs).

  6. Computer controlled antenna system

    Science.gov (United States)

    Raumann, N. A.

    1972-01-01

    The application of small computers using digital techniques for operating the servo and control system of large antennas is discussed. The advantages of the system are described. The techniques were evaluated with a forty foot antenna and the Sigma V computer. Programs have been completed which drive the antenna directly without the need for a servo amplifier, antenna position programmer or a scan generator.

  7. Estimating gravitational radiation from super-emitting compact binary systems

    CERN Document Server

    Hanna, Chad; Lehner, Luis

    2016-01-01

Binary black hole mergers are among the most violent events in the Universe, leading to extreme warping of spacetime and copious emission of gravitational radiation. Even though black holes are the most compact objects, they are not necessarily the most efficient emitters of gravitational radiation in binary systems. The final black hole resulting from a binary black hole merger retains a significant fraction of the pre-merger orbital energy and angular momentum. A non-vacuum system can in principle shed more of this energy than a black hole merger of equivalent mass. We study these super-emitters through a toy model that accounts for the possibility that the merger creates a compact object that retains a long-lived time-varying quadrupole moment. This toy model can capture the merger of neutron stars, but it can also be used to consider more exotic compact binaries. We hope that this toy model can serve as a guide to more rigorous numerical investigations into these systems.

  8. Sensor System for Super-Pressure Balloon Performance Modeling Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Long-duration balloon flights are an exciting new area of scientific ballooning, enabled by the development of large super-pressure balloons. As these balloons...

  9. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

Computer systems are a critical component of the human society in the 21st century. Economic sector, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of the human society, their vulnerability and exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  10. Study of Super-Twisting sliding mode control for U model based nonlinear system

    Directory of Open Access Journals (Sweden)

    Jianhua ZHANG

    2016-08-01

The Super-Twisting control algorithm is adopted to analyze the U model based nonlinear control system in order to solve the controller design problems of non-affine nonlinear systems. The non-affine nonlinear systems are studied, the neural network approximation of the nonlinear function is performed, and the Super-Twisting control algorithm is used for control. The convergence of the Super-Twisting algorithm is proved by selecting an appropriate Lyapunov function. A Matlab simulation is carried out to verify the feasibility and effectiveness of the described method. The result shows that the output of the controlled system can be tracked in a very short time by using the designed Super-Twisting controller, and the robustness of the controlled system is significantly improved as well.
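For reference, the standard super-twisting law these records build on is u = -k1·|s|^(1/2)·sign(s) + v with v' = -k2·sign(s). A minimal discrete-time sketch follows; the gains and the sliding variable s are placeholders supplied by the caller, not the paper's U-model design.

```python
# Minimal discrete-time super-twisting controller sketch; gains are illustrative.
import math

class SuperTwisting:
    def __init__(self, k1, k2, dt):
        self.k1, self.k2, self.dt = k1, k2, dt
        self.v = 0.0                                   # integral (second-order) term

    def update(self, s):
        sign_s = (s > 0) - (s < 0)
        self.v += -self.k2 * sign_s * self.dt          # Euler step of v' = -k2*sign(s)
        return -self.k1 * math.sqrt(abs(s)) * sign_s + self.v

# Example use: u = SuperTwisting(k1=1.5, k2=1.1, dt=1e-3).update(s) at each sample.
```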

  11. Digital Control of a power conditioner for fuel cell/super-capacitor hybrid system

    DEFF Research Database (Denmark)

    Caballero, Juan C Trujillo; Gomis-Bellmunt, Oriol; Montesinos-Miracle, Daniel

    2014-01-01

This article proposes a digital control scheme to operate a proton exchange membrane fuel cell module of 1.2 kW and a super-capacitor through a DC/DC hybrid converter. A fuel cell has been proposed as a primary source of energy, and a super-capacitor has been proposed as an auxiliary source of energy. Experimental validation of the system implemented in the laboratory is provided. Several tests have been performed to verify that the system achieves excellent output voltage (V0) regulation and super-capacitor voltage (VSC) control under disturbances from fuel cell power (PFC) and output power...

  12. Digital Control of a power conditioner for fuel cell/super-capacitor hybrid system

    DEFF Research Database (Denmark)

    Caballero, Juan C Trujillo; Gomis-Bellmunt, Oriol; Montesinos-Miracle, Daniel;

    2014-01-01

This article proposes a digital control scheme to operate a proton exchange membrane fuel cell module of 1.2 kW and a super-capacitor through a DC/DC hybrid converter. A fuel cell has been proposed as a primary source of energy, and a super-capacitor has been proposed as an auxiliary source of energy. Experimental validation of the system implemented in the laboratory is provided. Several tests have been performed to verify that the system achieves excellent output voltage (V0) regulation and super-capacitor voltage (VSC) control under disturbances from fuel cell power (PFC) and output power...

  13. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real-time, military, banking, and wearable health care systems. • Describes design solutions for a new computer system, the evolving reconfigurable architecture (ERA), that is free from drawbacks inherent in current ICT and related engineering models. • Pursues simplicity, reliability and scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  14. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    textabstractToday advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting

  15. NUMERICAL COMPUTATIONS OF CO-EXISTING SUPER-CRITICAL AND SUB-CRITICAL FLOWS BASED UPON CRD SCHEMES

    Science.gov (United States)

    Horie, Katsuya; Okamura, Seiji; Kobayashi, Yusuke; Hyodo, Makoto; Hida, Yoshihisa; Nishimoto, Naoshi; Mori, Akio

Stream flows over steep gradient beds form complicated flow configurations in which super-critical and sub-critical flows co-exist. Computing such flows numerically is the key to successful river management. This study applied CRD schemes to 1D and 2D stream flow computations and proposed genuine ways to eliminate expansion shock waves. Through the various cases of stream flow computation conducted, the CRD schemes showed that i) conservation of discharge and accuracy to four significant figures are ensured, ii) artificial viscosity is not explicitly needed for computational stabilization, and thus iii) 1D and 2D computations based upon CRD schemes are applicable to evaluating complicated stream flows for river management.
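The super-/sub-critical distinction underlying these computations is governed by the Froude number Fr = v/√(gh): Fr > 1 is super-critical, Fr < 1 sub-critical. A tiny helper with illustrative values only:

```python
# Froude-number classification of open-channel flow regimes; values are illustrative.
import math

def froude_number(velocity, depth, g=9.81):
    return velocity / math.sqrt(g * depth)

for v, h in [(4.0, 0.4), (1.0, 2.0)]:                   # (velocity m/s, depth m)
    fr = froude_number(v, h)
    regime = "super-critical" if fr > 1 else "sub-critical"
    print(f"v={v} m/s, h={h} m -> Fr={fr:.2f} ({regime})")
```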

  16. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

    textabstractToday advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for ne

  17. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    textabstractToday advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor fo

  18. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

    textabstractToday advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for

  19. Finite-Time Synchronization for Uncertain Master-Slave Chaotic System via Adaptive Super Twisting Algorithm

    Directory of Open Access Journals (Sweden)

    P. Siricharuanun

    2016-01-01

    Full Text Available A second-order sliding mode control for chaotic synchronization with bounded disturbance is studied. A robust finite-time controller is designed based on the super twisting algorithm, a popular second-order sliding mode control technique. The proposed controller is designed by combining an adaptive law with the super twisting algorithm. New results based on adaptive super twisting control for the synchronization of identical Qi three-dimensional four-wing chaotic systems are presented. The finite-time convergence of synchronization is ensured by using Lyapunov stability theory. The simulation results show the usefulness of the developed control method.
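
    As a hedged illustration of the control law named in this record, the sketch below implements one Euler step of the standard (non-adaptive) super twisting algorithm in Python; the sliding variable, gains, and discretization are assumptions made for illustration, not details taken from the paper.

        import numpy as np

        def super_twisting_step(s, w, k1, k2, dt):
            # One explicit-Euler step of the super twisting law:
            #   u = -k1*|s|^(1/2)*sign(s) + w,   dw/dt = -k2*sign(s)
            # s      : sliding variable (e.g., a synchronization error)
            # w      : internal integrator state
            # k1, k2 : positive gains (the adaptive variant updates these online)
            u = -k1 * np.sqrt(abs(s)) * np.sign(s) + w
            w = w - k2 * np.sign(s) * dt
            return u, w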

  20. COMPARATIVE ANALYSIS OF ENERGY ACCUMULATION SYSTEMS AND DETERMINATION OF OPTIMAL APPLICATION AREAS FOR MODERN SUPER FLYWHEELS

    Directory of Open Access Journals (Sweden)

    M. A. Sokolov

    2014-07-01

    Full Text Available The paper presents a review and comparative analysis of recent domestic and foreign literature on various energy storage devices: state-of-the-art designs and application experience in various technical fields. Comparative characteristics of energy storage devices are formulated: efficiency, quality and stability. Typical characteristics are shown for devices such as electrochemical batteries, super capacitors, pumped hydroelectric storage, power systems based on compressed air, and superconducting magnetic energy storage systems. The advantages and prospects of high-speed super flywheels as a means of accumulating energy in the form of rotational kinetic energy are shown. The high output power of a super flywheel energy storage system makes it possible to use it as a buffer source of peak power. It is shown that super flywheels have a long service life (over 20 years) and are environmentally friendly. A distinctive feature of these energy storage devices is their good scalability. It is demonstrated that super flywheels are especially effective in hybrid power systems that operate in a charge/discharge mode, as used particularly in electric vehicles. The most important factors for space applications of super flywheels are their modularity, high efficiency, absence of mechanical friction and long operating time without maintenance. Quick response to network disturbances and high power output can be used to maintain the desired power quality and overall network stability along with fulfilling energy accumulation needs.
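
    As a rough illustration of the energy a super flywheel stores as rotational kinetic energy, the Python sketch below evaluates E = I*w^2/2 for a solid-disc rotor; the mass, radius and speed are illustrative assumptions, not figures from the review.

        import math
        m, r = 100.0, 0.25            # rotor mass (kg) and radius (m), assumed
        rpm = 40000.0                 # rotational speed, assumed
        w = rpm * 2 * math.pi / 60    # angular speed in rad/s
        I = 0.5 * m * r**2            # moment of inertia of a solid disc
        E = 0.5 * I * w**2            # stored kinetic energy in joules
        print(E / 3.6e6, "kWh")       # roughly 7-8 kWh for these numbers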

  1. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  2. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

    The report describes the operation and troubleshooting of the main computer and KAERINet. The results of the project are as follows; 1. The operation and troubleshooting of the main computer system (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. The operation and troubleshooting of the KAERINet (PC-to-host connection, host-to-host connection, file transfer, electronic mail, X.25, CATV, etc.). 3. The development of applications - the Electronic Document Approval and Delivery System, and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  3. Computer Vision Systems

    Science.gov (United States)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps second only to food safety. By some definitions, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development both in academia and in industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - the external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved, and many practical systems are already in place in the food industry.

  4. Wide Operational Range Processor Power Delivery Design for Both Super-Threshold Voltage and Near-Threshold Voltage Computing

    Institute of Scientific and Technical Information of China (English)

    Xin He; Gui-Hai Yan; Yin-He Han; Xiao-Wei Li

    2016-01-01

    The load power range of modern processors is greatly enlarged because many advanced power management techniques are employed, such as dynamic voltage frequency scaling, Turbo Boosting, and near-threshold voltage (NTV) technologies. However, because the efficiency of power delivery varies greatly with load conditions, conventional power delivery designs cannot maintain high efficiency over the entire voltage spectrum, and the power saving gained may be offset by power loss in the delivery path. We propose SuperRange, a wide operational range power delivery unit. SuperRange complements the power delivery capabilities of the on-chip and off-chip voltage regulators. On top of SuperRange, we analyze its power conversion characteristics and propose a voltage regulator (VR) aware power management algorithm. Moreover, as more and more cores are integrated on a single chip, multiple SuperRange units can serve as basic building blocks to build, in a highly scalable way, a more powerful power delivery subsystem with larger power capacity. Experimental results show that a SuperRange unit offers 1x and 1.3x higher power conversion efficiency (PCE) than the two conventional power delivery schemes in the NTV region and exhibits an average 70% PCE over the entire operational range. It also exhibits superior resilience in power-constrained systems.

  5. Computational systems chemical biology.

    Science.gov (United States)

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  6. Stability analysis of closed-loop super-critical pressure systems

    Science.gov (United States)

    Smith, Walter Castro

    The current study investigates the mechanisms governing flow-induced stability of super-critical pressure fluid systems. Super-critical pressure fluid systems have been investigated as a mechanism for heat extraction from power systems for over a century. There are numerous benefits to these systems, but also potential pitfalls which must be examined. While super-critical pressure systems do not undergo phase change, they may be subject to the same flow-induced instabilities which affect and limit two-phase systems. The objective of the current study is to develop a modeling and analysis framework to evaluate and understand flow-induced instabilities in super-critical pressure systems. The developed framework is used to evaluate experimental systems which have been constructed and tested by other investigators. The developed model shows good agreement with both the steady-state and transient results published by other researchers. The model has been used to predict instabilities in experimental systems, as well as to show how some systems are more susceptible to instability than others. Stability maps have been constructed in a similar manner to those published for single heated flow path analysis.

  7. A POTENTIAL SUPER-VENUS IN THE KEPLER-69 SYSTEM

    Energy Technology Data Exchange (ETDEWEB)

    Kane, Stephen R.; Gelino, Dawn M. [NASA Exoplanet Science Institute, Caltech, MS 100-22, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); Barclay, Thomas, E-mail: skane@ipac.caltech.edu [NASA Ames Research Center, M/S 244-30, Moffett Field, CA 94035 (United States)

    2013-06-20

    Transiting planets have greatly expanded and diversified the exoplanet field. These planets provide greater access to characterization of exoplanet atmospheres and structure. The Kepler mission has been particularly successful in expanding the exoplanet inventory, even to planets smaller than the Earth. The orbital period sensitivity of the Kepler data is now extending into the habitable zones of their host stars, and several planets larger than the Earth have been found to lie therein. Here we examine one such proposed planet, Kepler-69c. We provide new orbital parameters for this planet and an in-depth analysis of the habitable zone. We find that, even under optimistic conditions, this 1.7 R⊕ planet is unlikely to be within the habitable zone of Kepler-69. Furthermore, the planet receives an incident flux of 1.91 times the solar constant, which is similar to that received by Venus. We thus suggest that this planet is likely a super-Venus rather than a super-Earth in terms of atmospheric properties and habitability, and we propose follow-up observations to disentangle the ambiguity.
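
    The quoted incident flux follows from the standard inverse-square relation S/S_earth = (L_star/L_sun)/(a/AU)^2; the stellar luminosity and semi-major axis used in the short Python check below are illustrative assumptions chosen to land near 1.91, not numbers taken from the paper.

        L_star = 0.80            # stellar luminosity in solar units (assumed)
        a = 0.64                 # orbital semi-major axis in AU (assumed)
        print(L_star / a**2)     # ~1.95, close to the quoted 1.91 solar constants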

  8. A Potential Super-Venus in the Kepler-69 System

    CERN Document Server

    Kane, Stephen R; Gelino, Dawn M

    2013-01-01

    Transiting planets have greatly expanded and diversified the exoplanet field. These planets provide greater access to characterization of exoplanet atmospheres and structure. The Kepler mission has been particularly successful in expanding the exoplanet inventory, even to planets smaller than the Earth. The orbital period sensitivity of the Kepler data is now extending into the Habitable Zones of their host stars, and several planets larger than the Earth have been found to lie therein. Here we examine one such proposed planet, Kepler-69c. We provide new orbital parameters for this planet and an in-depth analysis of the Habitable Zone. We find that, even under optimistic conditions, this 1.7 R$_\\oplus$ planet is unlikely to be within the Habitable Zone of Kepler-69. Furthermore, the planet receives an incident flux of 1.91 times the solar constant, which is similar to that received by Venus. We thus suggest that this planet is likely a super-Venus rather than a super-Earth in terms of atmospheric properties a...

  9. Super capacitors for embarked systems as a storage energy device solution

    Energy Technology Data Exchange (ETDEWEB)

    Ayad, M.Y.; Rael, S.; Pierfederici, S.; Davat, B. [Institut National Polytechnique, GREEN-INPL-CNRS (UMR 7037), 54 - Vandoeuvre les Nancy (France)

    2004-07-01

    The management of embarked electrical energy needs a storage system with high dynamic performance, in order to shave transient power peaks and to compensate for the intrinsic limitations of the main source. The use of super-capacitors for this storage system is quite suitable, because of their appropriate electrical characteristics (huge capacitance, weak series resistance, high specific energy, high specific power), direct storage (energy ready for use), and easy control by power electronic conversion. This paper deals with the design and realization of two hybrid power sources using super-capacitors as the auxiliary storage device. We present the structures, the control principles, and some experimental results. (authors)

  10. Design of Super-resolution Filters with a Gaussian Beam in Optical Data Storage Systems

    Institute of Scientific and Technical Information of China (English)

    WANG Sha-Sha; ZHAO Xiao-Feng; LI Cheng-Fang; RUAN Hao

    2008-01-01

    Super-resolution filters based on a Gaussian beam are proposed to reduce the focused spot in optical data storage systems. Both amplitude filters and pure-phase filters are designed to obtain the desired intensity distributions. Their performances are analysed and compared in detail with those based on plane waves. The energy utilization is presented. The simulation results show that the designed super-resolution filters are favourable for use in optical data storage systems in terms of performance and energy utilization.

  11. Development of Management System for Regional Pollution Source Based on SuperMap Objects

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Based on the integration of C#.net and SuperMap Objects (a component GIS toolkit), a management system for regional pollution sources is developed. It mainly includes the system requirements analysis, function design, database construction, program design and concrete realization for pollution source management.

  12. A super base station based centralized network architecture for 5G mobile communication systems

    Directory of Open Access Journals (Sweden)

    Manli Qian

    2015-04-01

    Full Text Available To meet the ever-increasing mobile data traffic demand, mobile operators are deploying heterogeneous networks with multiple access technologies and more and more base stations to increase network coverage and capacity. However, the base stations are isolated from each other, so different types of radio resources and hardware resources cannot be shared and allocated within the overall network in a cooperative way. The mobile operators are thus facing increasing network operational expenses and high system power consumption. In this paper, a centralized radio access network architecture, referred to as the super base station (super BS), is proposed as a possible solution for an energy-efficient fifth-generation (5G) mobile system. The super base station decouples the logical functions and physical entities of traditional base stations, so different types of system resources can be horizontally shared and statistically multiplexed among all the virtual base stations throughout the entire system. The system framework and main functionalities of the super BS are described. Some key technologies for system implementation, i.e., resource pooling, real-time virtualization, and adaptive hardware resource allocation, are also highlighted.

  13. Use of Super-Capacitor to Enhance Charging Performance of Stand-Alone Solar PV System

    KAUST Repository

    Huang, B. J.

    2011-01-01

    Introduction: The battery charging performance in a stand-alone solar PV system affects the PV system efficiency and the load operating time. The New Energy Center of National Taiwan University has been devoted to the development of a PWM charging technique to continue charging the lead-acid battery after the overcharge point, increasing the battery storage capacity by more than 10%. The present study uses a super-capacitor to further increase the charge capacity before the overcharge point of the battery. The super-capacitor is connected in parallel to the lead-acid battery. This reduces the overall charging impedance during charging and increases the charging current, especially in sunny weather. A system dynamics model of the lead-acid battery and super-capacitor was derived and a control system simulation was carried out to predict the charging performance for various weather conditions. It shows that the overall battery impedance decreases and the charging power increases with increasing solar radiation. An outdoor comparative test of two identical PV systems with and without the super-capacitor was carried out. The use of the super-capacitor is shown to increase the lead-acid charging capacity by more than 25% in sunny weather and 10% in cloudy weather. © Springer-Verlag Berlin Heidelberg 2011.

  14. Super-quantum states in SU(2) invariant 3 × N level systems

    Science.gov (United States)

    Adhikary, Soumik; Panda, Ipsit Kumar; Ravishankar, V.

    2017-02-01

    Nonclassicality of quantum states is expressed in many shades, one of the most stringent of them being a new standard introduced recently in Bharath and Ravishankar (2014), by expanding the notion of local hidden variables (LHV) to generalised local hidden variables (GLHV). Considering the family of SU(2) invariant 3 × N level systems, we identify those states that do not admit a GLHV description, which we designate as super-quantum (called exceptional in Bharath and Ravishankar (2014)). We show that all super-quantum states admit a universal geometrical description, and that they are most likely to lie on a line segment in the manifold, irrespective of the value of N. We also show that though a super-quantum state can be highly mixed, its relative rank with respect to the uniform state is always less than that of a state which admits a GLHV description.

  15. Super-ASTROD: Probing primordial gravitational waves and mapping the outer solar system

    CERN Document Server

    Ni, Wei-Tou

    2008-01-01

    Super-ASTROD (Super Astrodynamical Space Test of Relativity using Optical Devices or ASTROD III) is a mission concept with 3-5 spacecraft in 5 AU orbits, together with an Earth-Sun L1/L2 spacecraft, ranging optically with one another to probe primordial gravitational waves with frequencies 0.1 microHz - 1 mHz, to test fundamental laws of spacetime and to map the outer solar system. In this paper we address its scientific goals, orbit and payload selection, and sensitivity to gravitational waves.

  16. The Effect of Surfactant on Synthesis of ZSM-5 in a Super-Concentrated System

    Institute of Scientific and Technical Information of China (English)

    Li Haiyan; Qin Lihong; Gao Guangbo; Sun Famin

    2016-01-01

    ZSM-5 zeolite was synthesized in a super-concentrated system using different kinds of surfactants. The ZSM-5 samples were characterized by XRD, SEM, FT-IR and BET techniques. The surfactant could change the properties of ZSM-5 zeolite, including the crystallinity, the crystal grain size, the surface area, the pore volume and the Si/Al mole ratio.

  17. First commissioning of the SuperKEKB vacuum system

    Science.gov (United States)

    Suetsugu, Y.; Shibata, K.; Ishibashi, T.; Kanazawa, K.; Shirai, M.; Terui, S.; Hisamatsu, H.

    2016-12-01

    The first (Phase-1) commissioning of SuperKEKB, an asymmetric-energy electron-positron collider at KEK, began in February 2016, after more than five years of upgrade work on KEKB, and ended successfully in June 2016. A major task of the Phase-1 commissioning was the vacuum scrubbing of new beam pipes, in anticipation of a sufficiently long beam lifetime and low background noise in the next commissioning, prior to which a new particle detector will be installed. The pressure rise per unit beam current decreased steadily with increasing beam dose, as expected. Another important task was to check the stability of various new vacuum components at high beam currents of approximately 1 A. The temperature increases of the bellows chambers, gate valves, connection flanges, and so on were less than several degrees at 1 A, and no serious problems were found. The effectiveness of the antechambers and TiN coating in suppressing the electron-cloud effect (ECE) in the positron ring was also confirmed. However, the ECE was observed in the Al-alloy bellows chambers, where TiN had not been coated. The use of permanent magnets to create an axial magnetic field of approximately 100 G successfully suppressed this effect. Pressure bursts accompanying beam losses were also frequently observed in the positron ring. This phenomenon is still under investigation, but it is likely caused by collisions between the circulating beams and dust particles, especially in the dipole magnet beam pipes.

  18. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  19. Basic aspects and contributions to the optimization of energy systems exploitation of a super tanker ship

    Science.gov (United States)

    Faitar, C.; Novac, I.

    2017-08-01

    Today, the concept of energy efficiency or energy optimization in ships has become one of the main concerns of engineers all over the world. Increasing the reliability of a crude oil super tanker means, among other things, improving its energy performance and optimizing its fuel consumption through the development of engines and propulsion systems or the use of alternative energies. An effective and reliable Power Management System (PMS) in a vessel's operating system also reduces operational costs and keeps the machinery of the power system working under minimum stress in all operating conditions. Studying the Energy Efficiency Design Index and the Energy Efficiency Operational Indicator for a crude oil super tanker allows us to study the reconfiguration of the ship's power system by introducing new generation systems.

  20. On the Formation of Super-Earths with Implications for the Solar System

    CERN Document Server

    Martin, Rebecca G

    2016-01-01

    We first consider how the level of turbulence in a protoplanetary disk affects the formation locations for the observed close-in super-Earths in exosolar systems. We find that a protoplanetary disk that includes a dead zone (a region of low turbulence) has substantially more material in the inner parts of the disk, possibly allowing for in situ formation. For the dead zone to last the entire lifetime of the disk requires the active layer surface density to be sufficiently small, <100 g/cm^2. Migration through a dead zone may be very slow and thus super-Earth formation followed by migration towards the star through the dead zone is less likely. For fully turbulent disks, there is not enough material for in situ formation. However, in this case, super-Earths can form farther out in the disk and migrate inwards on a reasonable timescale. We suggest that both of these formation mechanisms operate in different planetary systems. This can help to explain the observed large range in densities of super-Earths beca...

  1. Short-lived radioactivity in the early Solar System: the Super-AGB star hypothesis

    CERN Document Server

    Lugaro, Maria; Karakas, Amanda I; Maddison, Sarah T; Liffman, Kurt; García-Hernández, D A; Siess, Lionel; Lattanzio, John C

    2012-01-01

    The composition of the most primitive Solar System condensates, such as calcium-aluminum-rich inclusions (CAI) and micron-sized corundum grains, show that short-lived radionuclides (SLR), e.g., 26Al, were present in the early Solar System. Their abundances require a local origin, which however is far from being understood. We present for the first time the abundances of several SLR up to 60Fe predicted from stars with initial mass in the range roughly 7-11 Msun. These stars evolve through core H, He, and C burning. After core C burning they go through a "Super"-asymptotic giant branch (Super-AGB) phase, with the H and He shells activated alternately, episodic thermal pulses in the He shell, a very hot temperature at the base of the convective envelope (~ 10^8 K), and strong stellar winds driving the H-rich envelope into the surrounding interstellar medium. The final remnants of the evolution of Super-AGB stars are mostly O-Ne white dwarfs. Our Super-AGB models produce 26Al/27Al yield ratios ~ 0.02 - 0.26. The...

  2. A scheme of optical interconnection for super high speed parallel computer

    Institute of Scientific and Technical Information of China (English)

    Youju Mao(毛幼菊); Yi L(u)(吕翊); Jiang Liu(刘江); Mingrui Dang(党明瑞)

    2004-01-01

    An optical cross connection network which adopts coarse wavelength division multiplexing (CWDM) and data packets is introduced. It can be used to realize communication between multiple CPUs and multiple memory modules (MEM) in a parallel computing system. It provides an effective way to upgrade the capability of parallel computers by combining optical wavelength division multiplexing (WDM) and data packet switching technology. The use of CWDM in network construction, optical cross connection (OXC) based on optical switch arrays, and the data packet format used in network construction are analyzed. The number of optical switches needed for networks of different scales is also optimized in this paper. The architecture of an optical interconnection with 8 wavelength channels and 128-bit parallel transmission has been studied. Finally, a parallel transmission system with 4 nodes and 8 channels per node has been designed.

  3. Secure computing on reconfigurable systems

    NARCIS (Netherlands)

    Fernandes Chaves, R.J.

    2007-01-01

    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. SC provides a protected and reliable computational environment, where data security and protection against malicious attacks to the system is assured. SC is strongly based on encryption algorithms and on the

  4. Secure computing on reconfigurable systems

    NARCIS (Netherlands)

    Fernandes Chaves, R.J.

    2007-01-01

    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. SC provides a protected and reliable computational environment, where data security and protection against malicious attacks to the system is assured. SC is strongly based on encryption algorithms and on the a

  5. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning across computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute an x86-64 machine code, and recommends th...

  6. Central nervous system and computation.

    Science.gov (United States)

    Guidolin, Diego; Albertin, Giovanna; Guescini, Michele; Fuxe, Kjell; Agnati, Luigi F

    2011-12-01

    Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

  7. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some - if not most - legacy codes far from the level of performance expected on the new and future massively-parallel super computers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules allowing it to deal with binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.

  8. Breaking the chains: hot super-Earth systems from migration and disruption of compact resonant chains

    Science.gov (United States)

    Izidoro, Andre; Ogihara, Masahiro; Raymond, Sean N.; Morbidelli, Alessandro; Pierens, Arnaud; Bitsch, Bertram; Cossou, Christophe; Hersant, Franck

    2017-09-01

    'Hot super-Earths' (or 'mini-Neptunes') between one and four times Earth's size with period shorter than 100 d orbit 30-50 per cent of Sun-like stars. Their orbital configuration - measured as the period ratio distribution of adjacent planets in multiplanet systems - is a strong constraint for formation models. Here, we use N-body simulations with synthetic forces from an underlying evolving gaseous disc to model the formation and long-term dynamical evolution of super-Earth systems. While the gas disc is present, planetary embryos grow and migrate inward to form a resonant chain anchored at the inner edge of the disc. These resonant chains are far more compact than the observed super-Earth systems. Once the gas dissipates, resonant chains may become dynamically unstable. They undergo a phase of giant impacts that spreads the systems out. Disc turbulence has no measurable effect on the outcome. Our simulations match observations if a small fraction of resonant chains remain stable, while most super-Earths undergo a late dynamical instability. Our statistical analysis restricts the contribution of stable systems to less than 25 per cent. Our results also suggest that the large fraction of observed single-planet systems does not necessarily imply any dichotomy in the architecture of planetary systems. Finally, we use the low abundance of resonances in Kepler data to argue that, in reality, the survival of resonant chains happens likely only in ∼5 per cent of the cases. This leads to a mystery: in our simulations only 50-60 per cent of resonant chains became unstable, whereas at least 75 per cent (and probably 90-95 per cent) must be unstable to match observations.

  9. Super-capacitor based energy storage system for improved load frequency control

    Energy Technology Data Exchange (ETDEWEB)

    Mufti, Mairaj ud din; Lone, Shameem Ahmad; Iqbal, Shiekh Javed; Ahmad, Muzzafar; Ismail, Mudasir [Electrical Engineering Department, National Institute of Technology, Hazratbal, Srinagar 190006, Jammu and Kashmir (India)

    2009-01-15

    A fuzzy-logic controlled super-capacitor bank (SCB) for improved load frequency control (LFC) of an interconnected power system is proposed in this paper. The super-capacitor bank in each control area is interfaced with the area control bus through a power conversion system (PCS) comprising a voltage source converter (VSC) and a buck-boost chopper. The fuzzy controller for the SCB is designed in such a way that the effects of load disturbances are rejected on a continuous basis. Necessary models are developed, and control and implementation aspects are presented in detail. Time-domain simulations are carried out to demonstrate the effectiveness of the proposed scheme. The performance of the resulting power system under realistic conditions is investigated by including the effects of generation rate constraint (GRC) and governor dead band (DB) in the simulation studies. (author)

  10. Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Friday, Adrian

    2009-01-01

    First introduced two decades ago, the term ubiquitous computing is now part of the common vernacular. Ubicomp, as it is commonly called, has grown not just quickly but broadly so as to encompass a wealth of concepts and technology that serves any number of purposes across all of human endeavor......, an original ubicomp pioneer, Ubiquitous Computing Fundamentals brings together eleven ubiquitous computing trailblazers who each report on his or her area of expertise. Starting with a historical introduction, the book moves on to summarize a number of self-contained topics. Taking a decidedly human...... perspective, the book includes discussion on how to observe people in their natural environments and evaluate the critical points where ubiquitous computing technologies can improve their lives. Among a range of topics this book examines: How to build an infrastructure that supports ubiquitous computing...

  11. Study on Super-Twisting synchronization control of chaotic system based on U model

    Directory of Open Access Journals (Sweden)

    Jianhua ZHANG

    2016-06-01

    Full Text Available A U-model based Super-Twisting synchronization control method for chaotic systems is proposed. The chaos control problem for chaotic systems is stated; then, based on the current research status of chaotic systems and some useful research results in nonlinear system design, new methods for chaos control and synchronization are provided, and the controller is designed to achieve finite-time chaos synchronization. Numerical simulations are carried out for the Lorenz system and the Chen system, and the results prove the effectiveness of the method.

  12. Video super-resolution using simultaneous motion and intensity calculations

    DEFF Research Database (Denmark)

    Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    for the joint estimation of a super-resolution sequence and its flow field. Via the calculus of variations, this leads to a coupled system of partial differential equations for image sequence and motion estimation. We solve a simplified form of this system and as a by-product we indeed provide a motion field for super-resolved sequences. Computing super-resolved flows has to our knowledge not been done before. Most advanced super-resolution (SR) methods found in literature cannot be applied to general video with arbitrary scene content and/or arbitrary optical flows, as it is possible with our simultaneous VSR...
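
    For orientation, a joint energy of the kind minimized in such simultaneous super-resolution/motion schemes can be written schematically as follows; this is a generic form with robust penalties psi, offered as an assumption-laden sketch rather than the authors' exact functional:

        E(u, v) = \int \big( D(h * u) - f \big)^2 \, dx\,dt
                  + \lambda_1 \int \psi(|\nabla u|) \, dx\,dt
                  + \lambda_2 \int \psi(|\partial_t u + v \cdot \nabla u|) \, dx\,dt
                  + \lambda_3 \int \psi(|\nabla v|) \, dx\,dt

    where u is the super-resolved sequence, v its flow field, f the observed low-resolution data, h a blur kernel and D a downsampling operator; the coupled Euler-Lagrange equations of such an energy form the system of PDEs mentioned in the abstract.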

  13. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  14. New computing systems and their impact on computational mechanics

    Science.gov (United States)

    Noor, Ahmed K.

    1989-01-01

    Recent advances in computer technology that are likely to impact computational mechanics are reviewed. The technical needs for computational mechanics technology are outlined. The major features of new and projected computing systems, including supersystems, parallel processing machines, special-purpose computing hardware, and small systems are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism on multiprocessor computers with a shared memory.

  15. Design and implementation of power supply for super computer

    Institute of Scientific and Technical Information of China (English)

    姚信安; 宋飞; 胡世平

    2012-01-01

    To meet the high efficiency, low cost and high reliability power requirements of a petaflops super computer, a distributed power system with a 12 V DC bus was developed. The block diagrams and operating principles of the power supplies for the cabinet, motherboard and processor are described in detail. The voltage regulator module and other DC-DC converters were simulated and verified with the SIMPLIS software. On this basis, system stability was analyzed and several measures were proposed to improve it. Finally, some of the power supply voltage waveforms were measured during system operation. The measurement and operating results show that the proposed power supply design fully meets the power requirements of the super computer.

  16. Super Bloch Oscillation in a PT symmetric system

    CERN Document Server

    Turker, Z

    2016-01-01

    The Wannier-Stark ladder in a PT-symmetric system is generally complex, which leads to amplified/damped Bloch oscillations. We show that a non-amplified wave packet oscillation with very large amplitude can be realized in a non-Hermitian tight-binding lattice if certain conditions are satisfied. We show that pseudo PT symmetry guarantees the reality of the quasi-energy spectrum in our system.

  17. Computer Security Systems Enable Access.

    Science.gov (United States)

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  18. MTF Measurement of EBCCD Imaging System by Using Super Resolution Technique

    Institute of Scientific and Technical Information of China (English)

    左昉; 高岳; 高稚允; 苏美开; 周立伟

    2003-01-01

    Existing methods of measuring the MTF of discrete imaging systems are analysed. A slit target is frequently used to measure the MTF of an imaging system, and there are usually four methods to do so for a discrete imaging system; each of them has shortcomings. For undersampled discrete imaging systems it is difficult to reproduce this type of target properly, since frequencies above Nyquist are folded into those below Nyquist, resulting in aliasing. To tackle the aliasing problem, a super resolution technique is introduced into our measurement, which gives MTF values both above and below Nyquist more accurately.
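
    A common way to obtain the MTF once an aliasing-free line spread function (LSF) has been reconstructed, for example by the kind of super resolution technique described here, is the normalized Fourier magnitude of the LSF; the Python sketch below shows only that last step, with names and spacing as illustrative assumptions.

        import numpy as np

        def mtf_from_lsf(lsf, sample_spacing):
            # MTF = normalized magnitude of the Fourier transform of the LSF.
            # lsf: 1-D array sampled at (sub-pixel) spacing 'sample_spacing'.
            lsf = lsf / lsf.sum()
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(lsf.size, d=sample_spacing)
            return freqs, mtf / mtf[0]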

  19. Super-Radiant Dynamics, Doorways, and Resonances in Nuclei and Other Open Mesoscopic Systems

    CERN Document Server

    Auerbach, Naftali

    2011-01-01

    The phenomenon of super-radiance (Dicke effect, coherent spontaneous radiation by a gas of atoms coupled through the common radiation field) is well known in quantum optics. The review discusses similar physics that emerges in open and marginally stable quantum many-body systems. In the presence of open decay channels, the intrinsic states are coupled through the continuum. At sufficiently strong continuum coupling, the spectrum of resonances undergoes the restructuring with segregation of very broad super-radiant states and trapping of remaining long-lived compound states. The appropriate formalism describing this phenomenon is based on the Feshbach projection method and effective non-Hermitian Hamiltonian. A broader generalization is related to the idea of doorway states connecting quantum states of different structure. The method is explained in detail and the examples of applications are given to nuclear, atomic and particle physics. The interrelation of the collective dynamics through continuum and possi...

  20. The Importance of "Super Sensible Substrate" in Kant's System of Philosophy

    Directory of Open Access Journals (Sweden)

    R Mahoozi

    2013-09-01

    Full Text Available In Kant's transcendental philosophy, the "sensible" is an object composed of multiple sense intuitions and a priori constituents of the mind. In this philosophy, sensible nature is empirical and mechanical, and becomes universal and necessary under the determinate concepts and principles of the Understanding. But there is another space, not determined by the concepts and principles of the Understanding. This space is the "super sensible". This super sensible is the space of noumenal objects and is very important in Kant's system of philosophy. This sphere is important for explaining the principle of the uniformity of nature as a support for induction, some ethical topics and the theory of religion, organisms and culture. But how can we reach this realm? And is this realm compatible with the realm of empirical knowledge? In this paper we want to explain these matters.

  1. Quantum Chaos in Physical Systems: from Super Conductors to Quarks

    OpenAIRE

    Bittner, Elmar; Markum, Harald; Pullirsch, Rainer

    2001-01-01

    This article is the written version of a talk delivered at the Bexbach Colloquium of Science 2000 and starts with an introduction into quantum chaos and its relationship to classical chaos. The Bohigas-Giannoni-Schmit conjecture is formulated and evaluated within random-matrix theory. Several examples of physical systems exhibiting quantum chaos ranging from nuclear to solid state physics are presented. The presentation concludes with recent research work on quantum chromodynamics and the quark-gluon plasma.

  2. Quantum Chaos in Physical Systems from Super Conductors to Quarks

    CERN Document Server

    Bittner, E; Pullirsch, R; Bittner, Elmar; Markum, Harald; Pullirsch, Rainer

    2001-01-01

    This article is the written version of a talk delivered at the Bexbach Colloquium of Science 2000 and starts with an introduction into quantum chaos and its relationship to classical chaos. The Bohigas-Giannoni-Schmit conjecture is formulated and evaluated within random-matrix theory. Several examples of physical systems exhibiting quantum chaos ranging from nuclear to solid state physics are presented. The presentation concludes with recent research work on quantum chromodynamics and the quark-gluon plasma. In the case of a chemical potential the eigenvalue spectrum becomes complex and one has to deal with non-Hermitian random-matrix theory.

  3. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  4. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
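
    The block-comparison idea can be sketched in a few lines of Python: each node hashes its checkpoint image block by block against the stored template and ships only the blocks that differ. The function name, hashing choice and block size below are illustrative assumptions, not the patented implementation.

        import hashlib

        def changed_blocks(checkpoint, template, block_size=4096):
            # Return (block index, data) for every block of the new checkpoint
            # whose checksum differs from the corresponding template block.
            diffs = []
            for i in range(0, len(checkpoint), block_size):
                new = checkpoint[i:i + block_size]
                old = template[i:i + block_size]
                if hashlib.md5(new).digest() != hashlib.md5(old).digest():
                    diffs.append((i // block_size, new))
            return diffs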

  5. Dynamical Systems Some Computational Problems

    CERN Document Server

    Guckenheimer, J; Guckenheimer, John; Worfolk, Patrick

    1993-01-01

    We present several topics involving the computation of dynamical systems. The emphasis is on work in progress and the presentation is informal -- there are many technical details which are not fully discussed. The topics are chosen to demonstrate the various interactions between numerical computation and mathematical theory in the area of dynamical systems. We present an algorithm for the computation of stable manifolds of equilibrium points, describe the computation of Hopf bifurcations for equilibria in parametrized families of vector fields, survey the results of studies of codimension two global bifurcations, discuss a numerical analysis of the Hodgkin and Huxley equations, and describe some of the effects of symmetry on local bifurcation.

  6. Train Headway Models and Carrying Capacity of Super-Speed Maglev System

    Science.gov (United States)

    He, Shiwei; Song, Rui; Eastham, Tony

    Train headway models are established by analyzing the operation of the Transrapid Super-speed Maglev System (TSMS). The variation in the minimum allowable headway for trains of different speeds and consists is studied under various operational constraints. A potential Beijing-Shanghai Maglev line is used as an illustration to undertake capacity analyses with the model and methods. The example shows that the headway models for analyzing the carrying capacity of Maglev systems are very useful for the configurational design of this new transport system.

  7. On the incidence of eclipsing Am binary systems in the SuperWASP survey

    CERN Document Server

    Smalley, B; Pintado, O I; Gillon, M; Holdsworth, D L; Anderson, D R; Barros, S C C; Cameron, A Collier; Delrez, L; Faedi, F; Haswell, C A; Hellier, C; Horne, K; Jehin, E; Maxted, P F L; Norton, A J; Pollacco, D; Skillen, I; Smith, A M S; West, R G; Wheatley, P J

    2014-01-01

    The results of a search for eclipsing Am star binaries using photometry from the SuperWASP survey are presented. The light curves of 1742 Am stars fainter than V = 8.0 were analysed for the presence of eclipses. A total of 70 stars were found to exhibit eclipses, with 66 having sufficient observations to enable orbital periods to be determined, 28 of which are newly identified eclipsing systems. Also presented are spectroscopic orbits for 5 of the systems. The number of systems and the period distribution are found to be consistent with those identified in previous radial velocity surveys of `classical' Am stars.

  8. Computational Systems Chemical Biology

    OpenAIRE

    Oprea, Tudor I.; Elebeoba E. May; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically-based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology, SCB (Oprea et al., 2007).

  9. Hybridity in Embedded Computing Systems

    Institute of Scientific and Technical Information of China (English)

    虞慧群; 孙永强

    1996-01-01

    An embedded system is a system in which a computer is used as a component of a larger device. In this paper, we study hybridity in embedded systems and present an interval-based temporal logic to express and reason about the hybrid properties of such systems.

  10. Thermal conductance modeling and characterization of the SuperCDMS-SNOLAB sub-Kelvin cryogenic system

    Energy Technology Data Exchange (ETDEWEB)

    Dhuley, R. C. [Fermilab]; Hollister, M. I. [Fermilab]; Ruschman, M. K. [Fermilab]; Martin, L. D. [Fermilab]; Schmitt, R. L. [Fermilab]; Tatkowski, G. L. [Fermilab]; Bauer, D. A. [Fermilab]; Lukens, P. T. [Fermilab]

    2017-09-13

    The detectors of the Super Cryogenic Dark Matter Search experiment at SNOLAB (SuperCDMS SNOLAB) will operate in a seven-layered cryostat with thermal stages between room temperature and the base temperature of 15 mK. The inner three layers of the cryostat, which are to be nominally maintained at 1 K, 250 mK, and 15 mK, will be cooled by a dilution refrigerator via conduction through long copper stems. Bolted and mechanically pressed contacts, flat and cylindrical, as well as flexible straps, are the essential stem components that will facilitate assembly/dismantling of the cryostat. These will also allow for thermal contractions/movements during cooldown of the sub-Kelvin system. To ensure that these components and their contacts meet their design thermal conductance, prototypes were fabricated and cryogenically tested. The present paper gives an overview of the SuperCDMS SNOLAB sub-Kelvin architecture and its conductance requirements. Results from the conductance measurement tests and from sub-Kelvin thermal modeling are discussed.
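
    For a uniform conduction stem, this kind of conductance model reduces to G = (A/L) times the mean thermal conductivity over the temperature span, as in the Python sketch below; the geometry and the conductivity function are placeholders, not the SuperCDMS design values.

        import numpy as np

        def stem_conductance(area, length, t_cold, t_hot, k_of_T):
            # G = Q / (Th - Tc) with Q = (A/L) * integral of k(T) dT,
            # for a bar of cross-section 'area' and length 'length'.
            T = np.linspace(t_cold, t_hot, 200)
            q = area / length * np.trapz(k_of_T(T), T)
            return q / (t_hot - t_cold)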

  11. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    textabstractTo meet the enormous computational needs of live-science research as well as clinical diagnostics and treatment the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – The Erasmus Computing Grid.

  12. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    textabstractTo meet the enormous computational needs of live-science research as well as clinical diagnostics and treatment the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – The Erasmus Computing Grid. Curren

  13. Computer algebra in systems biology

    CERN Document Server

    Laubenbacher, Reinhard

    2007-01-01

    Systems biology focuses on the study of entire biological systems rather than on their individual components. With the emergence of high-throughput data generation technologies for molecular biology and the development of advanced mathematical modeling techniques, this field promises to provide important new insights. At the same time, with the availability of increasingly powerful computers, computer algebra has developed into a useful tool for many applications. This article illustrates the use of computer algebra in systems biology by way of a well-known gene regulatory network, the Lac Operon in the bacterium E. coli.
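
    To give a flavor of the kind of finite dynamical system such computer-algebra treatments work with, the toy Boolean network below updates three Lac-operon-style variables synchronously over {0,1}; the variables, update rules and inputs are illustrative assumptions, not the article's actual model.

        def step(M, E, L, Le, G):
            # M: mRNA, E: enzymes/permease, L: internal lactose
            # Le: external lactose, G: external glucose (inputs)
            M_next = int(L and not G)        # transcription needs inducer, no glucose
            E_next = int(M)                  # enzyme level follows mRNA
            L_next = int(Le and not G)       # lactose enters when glucose is absent
            return M_next, E_next, L_next

        state = (0, 0, 0)
        for _ in range(4):
            state = step(*state, Le=1, G=0)
        print(state)  # reaches the induced state (1, 1, 1) under these rules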

  14. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  16. Shielding optimization studies for the detector systems of the Superconducting Super Collider

    Energy Technology Data Exchange (ETDEWEB)

    Slater, C.O.; Lillie, R.A.; Gabriel, T.A.

    1994-09-01

    Preliminary shielding optimization studies for the Superconducting Super Collider's Solenoidal Detector Collaboration detector system were performed at the Oak Ridge National Laboratory in 1993. The objective of the study was to reduce the neutron and gamma-ray fluxes leaving the shield to a level that resulted in insignificant effects on the functionality of the detector system. Steel and two types of concrete were considered as components of the shield, and the shield was optimized according to thickness, weight, and cost. Significant differences in the thicknesses, weights, and costs were noted for the three optimization parameters. Results from the study are presented.

  17. Robot computer problem solving system

    Science.gov (United States)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken, in relation to various studies of cognition and robotics, were formulated. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  18. Operating systems. [of computers

    Science.gov (United States)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
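
    Two of the coordination mechanisms named above, semaphores and pipes, can be illustrated with a short sketch. The producer/consumer roles and the message in the following Python fragment are invented for this example and are not drawn from the article:

    ```python
    import os
    import threading

    # A semaphore synchronizes two "primitive processes"; a pipe carries data between them.
    ready = threading.Semaphore(0)
    read_fd, write_fd = os.pipe()          # one-way byte channel

    def producer():
        os.write(write_fd, b"block 42 flushed to disk\n")   # hypothetical message
        ready.release()                     # signal: data is available

    def consumer():
        ready.acquire()                     # block until the producer signals
        print(os.read(read_fd, 64).decode().strip())

    threads = [threading.Thread(target=f) for f in (consumer, producer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    os.close(read_fd)
    os.close(write_fd)
    ```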

  19. Algebra & trigonometry super review

    CERN Document Server

    2012-01-01

    Get all you need to know with Super Reviews! Each Super Review is packed with in-depth, student-friendly topic reviews that fully explain everything about the subject. The Algebra and Trigonometry Super Review includes sets and set operations, number systems and fundamental algebraic laws and operations, exponents and radicals, polynomials and rational expressions, equations, linear equations and systems of linear equations, inequalities, relations and functions, quadratic equations, equations of higher order, ratios, proportions, and variations. Take the Super Review quizzes to see how much y

  20. Super-Twisting-Algorithm-Based Terminal Sliding Mode Control for a Bioreactor System

    Directory of Open Access Journals (Sweden)

    Sendren Sheng-Dong Xu

    2014-01-01

    This study investigates terminal sliding mode control (TSMC) for a bioreactor system with second-order dynamics. TSMC not only retains the advantages of conventional sliding mode control (CSMC), including easy implementation, robustness to disturbances, and fast response, but also makes the system states converge to the equilibrium point in a finite amount of time after they intersect the sliding surface. The chattering phenomena in TSMC originally exist on the sliding surface after the system states reach the sliding surface and before they reach the equilibrium point. However, by using the super-twisting algorithm (STA), the chattering phenomena can be markedly reduced. The proposed method is also compared with two other methods: (1) CSMC without STA and (2) TSMC without STA. Finally, the control schemes are applied to the control of a bioreactor system to illustrate their effectiveness and applicability. Simulation results show that better performance is achieved by using the proposed method.
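
    The super-twisting algorithm referred to above can be sketched in a few lines. The fragment below applies the standard STA control law to a generic scalar sliding variable with placeholder gains and a toy first-order plant; it is only an illustration of the algorithm, not the bioreactor model or the controller of the paper:

    ```python
    import numpy as np

    def super_twisting_step(s, v, k1, k2, dt):
        """One Euler step of the super-twisting algorithm.
        s: sliding variable, v: integral state of the controller."""
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v = v - k2 * np.sign(s) * dt
        return u, v

    # Toy plant ds/dt = u + d(t) with a bounded, slowly varying disturbance.
    dt, k1, k2 = 1e-3, 2.0, 1.5            # placeholder gains
    s, v = 1.0, 0.0
    for k in range(5000):
        u, v = super_twisting_step(s, v, k1, k2, dt)
        s += (u + 0.3 * np.sin(k * dt)) * dt
    print(f"|s| after 5 s: {abs(s):.4f}")   # converges close to zero; the switching acts inside the integral, so u stays continuous
    ```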

  1. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  2. Challenges and opportunities of power systems from smart homes to super-grids.

    Science.gov (United States)

    Kuhn, Philipp; Huber, Matthias; Dorfner, Johannes; Hamacher, Thomas

    2016-01-01

    The world's power systems are facing a structural change including liberalization of markets and integration of renewable energy sources. This paper describes the challenges that lie ahead in this process and points out avenues for overcoming different problems at different scopes, ranging from individual homes to international super-grids. We apply energy system models at those different scopes and find a trade-off between technical and social complexity. Small-scale systems would require technological breakthroughs, especially for storage, but individual agents can and do already start to build and operate such systems. In contrast, large-scale systems could potentially be more efficient from a techno-economic point of view. However, new political frameworks are required that enable long-term cooperation among sovereign entities through mutual trust. Which scope first achieves its breakthrough is not clear yet.

  3. On Dependability of Computing Systems

    Institute of Scientific and Technical Information of China (English)

    XU Shiyi

    1999-01-01

    With the rapid development and wide application of computing systems, on which ever more reliance is placed, dependable systems will be much more important than ever. This paper first gives informal but precise definitions characterizing the various attributes of dependability of computing systems, and then explains the importance of (and the relationships among) all the attributes. Dependability is first introduced as a global concept which subsumes the usual attributes of reliability, availability, maintainability, safety and security. The basic definitions given here are then commented on and supplemented by detailed material and additional explanations in the subsequent sections. The presentation has been structured so as to draw the reader's attention to the important attributes of dependability: the search for a small number of concise concepts enabling the dependability attributes to be expressed as clearly as possible, and the use of terms which are identical or as close as possible to those commonly used nowadays. This paper is also intended to provoke interest in designing dependable computing systems.

  4. Experimental demonstration of outdoor 2.2 Tbps super-channel FSO transmission system

    KAUST Repository

    Esmail, Maged Abdullah

    2016-07-26

    Free space optics (FSO) is a wireless technology that promises high data rates at low deployment cost. Next-generation wireless networks require more bandwidth than today's wireless techniques can support. FSO is a potential candidate for the last-mile bottleneck in wireless networks and for many other applications. In this paper, we experimentally demonstrate a high speed FSO system using a super-channel source and a multi-format transmitter. The FSO system was installed outdoors on the building roof over an 11.5 m distance and built using off-the-shelf components. We designed a comb source capable of generating multiple subcarriers with flexible spacing, as well as a multi-format transmitter capable of generating different complex modulation schemes. For single-carrier transmission, we were able to transmit a 23 Gbaud 16-QAM signal over the FSO link, achieving 320 Gbps with 6 b/s/Hz spectral efficiency. Then, using our super-channel system, 12 equal-gain subcarriers were generated and modulated by a DP-16QAM signal with different symbol rates. We achieved a maximum symbol rate of 23 Gbaud (i.e. 2.2 Tbps) and a spectral efficiency of 7.2 b/s/Hz. © 2016 IEEE.

  5. Advances in SVM-Based System Using GMM Super Vectors for Text-Independent Speaker Verification

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jian; DONG Yuan; ZHAO Xianyu; YANG Hao; LU Liang; WANG Haila

    2008-01-01

    For text-independent speaker verification, the Gaussian mixture model (GMM) using a universal background model strategy and the GMM using support vector machines are the two most commonly used methodologies. Recently, a new SVM-based speaker verification method using GMM super vectors has been proposed. This paper describes the construction of a new speaker verification system and investigates the use of nuisance attribute projection and test normalization to further enhance performance. Experiments were conducted on the core test of the 2006 NIST speaker recognition evaluation corpus. The experimental results indicate that an SVM-based speaker verification system using GMM super vectors can achieve appealing performance. With the use of nuisance attribute projection and test normalization, the system performance can be significantly improved, with improvements in the equal error rate from 7.78% to 4.92% and detection cost function from 0.0376 to 0.0251.
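
    The GMM super vector idea can be sketched compactly: the means of a background GMM are adapted to one utterance's features and stacked into a single vector that an SVM then classifies. The snippet below uses scikit-learn with synthetic features, an invented relevance factor, and no nuisance attribute projection or test normalization, so it is only a schematic illustration of the construction, not the system evaluated in the paper:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def supervector(ubm, feats, relevance=16.0):
        """MAP-adapt the UBM means to one utterance and stack them (GMM super vector)."""
        post = ubm.predict_proba(feats)                  # (frames, components)
        n = post.sum(axis=0)                             # soft counts per component
        fstat = post.T @ feats                           # first-order statistics
        alpha = (n / (n + relevance))[:, None]
        means = alpha * (fstat / np.maximum(n[:, None], 1e-8)) + (1 - alpha) * ubm.means_
        return means.ravel()

    rng = np.random.default_rng(0)
    ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
    ubm.fit(rng.normal(size=(2000, 12)))                 # placeholder "background" features

    # Placeholder utterances for two speakers, offset slightly in feature space.
    X = [supervector(ubm, rng.normal(loc=0.2 * spk, size=(300, 12)))
         for spk in (0, 1) for _ in range(10)]
    y = [0] * 10 + [1] * 10
    clf = SVC(kernel="linear").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```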

  6. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita

    2011-01-01

    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  7. Computers in Information Sciences: On-Line Systems.

    Science.gov (United States)

    COMPUTERS, *BIBLIOGRAPHIES, *ONLINE SYSTEMS, *INFORMATION SCIENCES, DATA PROCESSING, DATA MANAGEMENT, COMPUTER PROGRAMMING, INFORMATION RETRIEVAL, COMPUTER GRAPHICS, DIGITAL COMPUTERS, ANALOG COMPUTERS.

  8. The Super Separator Spectrometer S3 and the associated detection systems: SIRIUS & LEB-REGLIS3

    Science.gov (United States)

    Déchery, F.; Savajols, H.; Authier, M.; Drouart, A.; Nolen, J.; Ackermann, D.; Amthor, A. M.; Bastin, B.; Berryhill, A.; Boutin, D.; Caceres, L.; Coffey, M.; Delferrière, O.; Dorvaux, O.; Gall, B.; Hauschild, K.; Hue, A.; Jacquot, B.; Karkour, N.; Laune, B.; Le Blanc, F.; Lecesne, N.; Lopez-Martens, A.; Lutton, F.; Manikonda, S.; Meinke, R.; Olivier, G.; Payet, J.; Piot, J.; Pochon, O.; Prince, V.; Souli, M.; Stelzer, G.; Stodel, C.; Stodel, M.-H.; Sulignano, B.; Traykov, E.; Uriot, D.

    2016-06-01

    The Super Separator Spectrometer (S3) facility is developed in the framework of the SPIRAL2 project [1]. S3 has been designed to extend the capability of the facility to perform experiments with extremely low cross sections, taking advantage of the very high intensity stable beams of the superconducting linear accelerator of SPIRAL2. It will mainly use fusion-evaporation reactions to reach extreme regions of the nuclear chart: new opportunities will be opened for super-heavy element studies and spectroscopy at and beyond the driplines. In addition to our previous article (Déchery et al. [2]) introducing the optical layout of the spectrometer and the expected performances, this article will present the current status of the main elements of the facility: the target station, the superconducting multipole, and the magnetic and electric dipoles, with a special emphasis on the status of the detection system SIRIUS and on the low-energy branch which includes the REGLIS3 system. S3 will also be a source of low energy radioactive isotopes for delivery to the DESIR facility.

  9. Adaptive quadrature-polybinary detection in super-Nyquist WDM systems.

    Science.gov (United States)

    Chen, Sai; Xie, Chongjin; Zhang, Jie

    2015-03-23

    We propose an adaptive detection technique in super-Nyquist wavelength-division-multiplexed (WDM) polarization-division-multiplexed quadrature-phase-shift-keying (PDM-QPSK) systems, where a QPSK signal is digitally converted to a quadrature n-level polybinary signal followed by an MLSE detector at the receiver, and study the performance of quadrature-duobinary and quadrature four-level polybinary signals using this detection technique. We change the level of the quadrature-polybinary modulation at the coherent receiver according to the channel spacing of a super-Nyquist system. Numerical studies show that the best performance can be achieved by choosing different modulation levels at the receiver in adaptation to the channel spacing. In the experiment, we demonstrate the transmission of 3-channel 112-Gbit/s PDM-QPSK signals at a 20-GHz channel spacing, which are detected as a quadrature four-level polybinary signal, with performance comparable to PDM 16-ary quadrature-amplitude modulation (16QAM) at the same bit rate.

  10. Superstring 'ending' on super-D9-brane: a supersymmetric action functional for the coupled brane system

    Energy Technology Data Exchange (ETDEWEB)

    Bandos, Igor E-mail: bandos@kipt.kharkov.ua; Kummer, Wolfgang E-mail: wkummer@tph.tuwien.ac.at

    2000-01-17

    A supersymmetric action functional describing the interaction of the fundamental superstring with the D=10, type IIB Dirichlet super-9-brane is presented. A set of supersymmetric equations for the coupled system is obtained from the action principle. It is found that the interaction of the string endpoints with the super D9-brane gauge field requires some restrictions for the image of the gauge field strength. When those restrictions are not imposed, the equations imply the absence of the endpoints, and the equations coincide either with the ones of the free super-D9-brane or with the ones for the free closed type IIB superstring. Different phases of the coupled system are described. A generalization to an arbitrary system of intersecting branes is discussed.

  11. The high-energy environment in the super-earth system CoRoT-7

    CERN Document Server

    Poppenhaeger, K; Schröter, S; Lalitha, S; Kashyap, V; Schmitt, J H M M

    2012-01-01

    High-energy irradiation of exoplanets has been identified to be a key influence on the stability of these planets' atmospheres. So far, irradiation-driven mass-loss has been observed only in two Hot Jupiters, and the observational data remain even more sparse in the super-earth regime. We present an investigation of the high-energy emission in the CoRoT-7 system, which hosts the first known transiting super-earth. To characterize the high-energy XUV radiation field into which the rocky planets CoRoT-7b and CoRoT-7c are immersed, we analyzed a 25 ks XMM-Newton observation of the host star. Our analysis yields the first clear (3.5 sigma) X-ray detection of CoRoT-7. We determine a coronal temperature of ca. 3 MK and an X-ray luminosity of 3*10^28 erg/s. The level of XUV irradiation on CoRoT-7b amounts to ca. 37000 erg/cm^2/s. Current theories for planetary evaporation can only provide an order-of-magnitude estimate for the planetary mass loss; assuming that CoRoT-7b has formed as a rocky planet, we estimate that...

  12. Aging and computational systems biology.

    Science.gov (United States)

    Mooney, Kathleen M; Morgan, Amy E; Mc Auley, Mark T

    2016-01-01

    Aging research is undergoing a paradigm shift, which has led to new and innovative methods of exploring this complex phenomenon. The systems biology approach endeavors to understand biological systems in a holistic manner, by taking account of intrinsic interactions, while also attempting to account for the impact of external inputs, such as diet. A key technique employed in systems biology is computational modeling, which involves mathematically describing and simulating the dynamics of biological systems. Although a large number of computational models have been developed in recent years, these models have focused on various discrete components of the aging process, and to date no model has succeeded in completely representing the full scope of aging. Combining existing models or developing new models may help to address this need and in so doing could help achieve an improved understanding of the intrinsic mechanisms which underpin aging.

  13. Computational Systems for Multidisciplinary Applications

    Science.gov (United States)

    Soni, Bharat; Haupt, Tomasz; Koomullil, Roy; Luke, Edward; Thompson, David

    2002-01-01

    In this paper, we briefly describe our efforts to develop complex simulation systems. We focus first on four key infrastructure items: enterprise computational services, simulation synthesis, geometry modeling and mesh generation, and a fluid flow solver for arbitrary meshes. We conclude by presenting three diverse applications developed using these technologies.

  14. Adaptive sliding mode controller based on super-twist observer for tethered satellite system

    Science.gov (United States)

    Keshtkar, Sajjad; Poznyak, Alexander

    2016-09-01

    In this work, the sliding mode control based on the super-twist observer is presented. The parameters of the controller as well as the observer are admitted to be time-varying and depending on available current measurements. In view of that, the considered controller is referred to as an adaptive one. It is shown that the deviations of the generated state estimates from real state values together with a distance of the closed-loop system trajectories to a desired sliding surface reach a μ-zone around the origin in finite time. The application of the suggested controller is illustrated for the orientation of a tethered satellite system in a required position.

  15. Computational Aeroacoustic Analysis System Development

    Science.gov (United States)

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.

    2001-01-01

    Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance in characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values into the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed

  16. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  17. Redundant computing for exascale systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Brightwell, Ronald Brian

    2010-12-01

    Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of the cost, and compare it to other proposed methods for fault resilience.
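
    The scale of the checkpoint/restart problem described above can be illustrated with a rough, back-of-envelope model. The sketch below uses Young's approximation for the optimal checkpoint interval with invented per-node MTBF and checkpoint-time values; it is not the analysis method of the paper, only an illustration of why the fraction of useful work collapses as node counts grow:

    ```python
    import math

    def useful_work_fraction(nodes, node_mtbf_h=43800.0, ckpt_min=15.0):
        """Rough efficiency of checkpoint/restart (Young's approximation).
        node_mtbf_h: per-node MTBF in hours (placeholder: 5 years);
        ckpt_min:    time to write one checkpoint, in minutes."""
        system_mtbf = node_mtbf_h * 3600.0 / nodes          # seconds between system-level failures
        delta = ckpt_min * 60.0
        tau = math.sqrt(2.0 * delta * system_mtbf)          # near-optimal checkpoint interval
        # Overhead = time spent checkpointing plus expected rework after a failure.
        overhead = delta / tau + tau / (2.0 * system_mtbf)
        return max(0.0, 1.0 - overhead)

    for n in (1000, 10000, 50000, 100000):
        print(f"{n:>7} nodes -> useful work ~ {useful_work_fraction(n):.0%}")
    ```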

  18. Camera simulation engine enables efficient system optimization for super-resolution imaging

    Science.gov (United States)

    Fullerton, Stephanie; Bennett, Keith; Toda, Eiji; Takahashi, Teruo

    2012-02-01

    Quantitative fluorescent imaging requires optimization of the complete optical system, from the sample to the detector. Such considerations are especially true for precision localization microscopy such as PALM and (d)STORM where the precision of the result is limited by the noise in both the optical and detection systems. Here, we present a Camera Simulation Engine (CSE) that allows comparison of imaging results from CCD, CMOS and EM-CCD cameras under various sample conditions and can accurately validate the quality of precision localization algorithms and camera performance. To achieve these results, the CSE incorporates the following parameters: 1) Sample conditions including optical intensity, wavelength, optical signal shot noise, and optical background shot noise; 2) Camera specifications including QE, pixel size, dark current, read noise, EM-CCD excess noise; 3) Camera operating conditions such as exposure, binning and gain. A key feature of the CSE is that, from a single image (either real or simulated "ideal") we generate a stack of statistically realistic images. We have used the CSE to validate experimental data showing that certain current scientific CMOS technology outperforms EM-CCD in most super-resolution scenarios. Our results support using the CSE to efficiently and methodically select cameras for quantitative imaging applications. Furthermore, the CSE can be used to robustly compare and evaluate new algorithms for data analysis and image reconstruction. These uses of the CSE are particularly relevant to super-resolution precision localization microscopy and provide a faster, simpler and more cost effective means of system optimization, especially camera selection.
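
    The core of such a camera simulation can be sketched as follows: an "ideal" photon-flux image is converted into a statistically realistic frame by applying shot noise, dark current, optional EM gain, and read noise, and repeating the process yields a stack of frames from a single input. All parameter values below are illustrative placeholders, and the gamma approximation of the EM register is a common modeling choice, not necessarily the one used in the CSE described above:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_frame(ideal_photons, qe=0.7, dark_e_per_s=0.005, exposure_s=0.03,
                       read_noise_e=1.4, em_gain=None):
        """Turn an ideal photon image into one statistically realistic frame (in electrons)."""
        signal_e = rng.poisson(ideal_photons * qe)                          # photon shot noise
        dark_e = rng.poisson(dark_e_per_s * exposure_s, size=ideal_photons.shape)
        electrons = (signal_e + dark_e).astype(float)
        if em_gain:                                                         # EM-CCD excess noise (gamma model)
            electrons = rng.gamma(shape=np.maximum(electrons, 1e-12), scale=em_gain)
        return electrons + rng.normal(0.0, read_noise_e, size=ideal_photons.shape)

    ideal = np.full((64, 64), 50.0)                                         # flat 50-photon scene (placeholder)
    stack = np.stack([simulate_frame(ideal) for _ in range(100)])           # stack of realistic frames from one image
    print(f"mean = {stack.mean():.1f} e-, std = {stack.std():.2f} e-")
    ```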

  19. Computer-aided system design

    Science.gov (United States)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  20. Raspberry Pi super cluster

    CERN Document Server

    Dennis, Andrew K

    2013-01-01

    This book follows a step-by-step, tutorial-based approach which will teach you how to develop your own super cluster using Raspberry Pi computers quickly and efficiently. Raspberry Pi Super Cluster is an introductory guide for those interested in experimenting with parallel computing at home. Aimed at Raspberry Pi enthusiasts, this book is a primer for getting your first cluster up and running. Basic knowledge of C or Java would be helpful but no prior knowledge of parallel computing is necessary.

  1. Jefferson Laboratory Hall A SuperBigBite Spectrometer Data Acquisition System

    Science.gov (United States)

    Camsonne, Alexandre; Hall A Collaboration; Hall A SuperBigBite Collaboration

    2013-10-01

    The SuperBigBite detector is a large acceptance spectrometer which is being built for Hall A at Jefferson Laboratory and planned for completion in 2017. Several experiments are approved for this detector, ranging from form factors to nucleon structure. The detector consists mainly of a large dipole magnet and several planes of Gas Electron Multiplier trackers associated with calorimeters. In order to reduce the cost of the project, the electronics used will be a mix of older Fastbus and newly developed electronics. I will present the layout of the system and how we plan to handle the high background rates seen by the different detectors for the different experiments.

  2. Translation of Japanese Noun Compounds at Super-Function Based MT System

    Science.gov (United States)

    Zhao, Xin; Ren, Fuji; Kuroiwa, Shingo

    Noun compounds are a frequently encountered construction in natural language processing (NLP), consisting of a sequence of two or more nouns which functions syntactically as one noun. The translation of noun compounds has become a major issue in Machine Translation (MT) due to their frequency of occurrence and high productivity. In our previous studies on Super-Function Based Machine Translation (SFBMT), we found that noun compounds are very frequently used and difficult to translate correctly; the overgeneration of noun compounds can be dangerous as it may introduce ambiguity into the translation. In this paper, we discuss the challenges in handling Japanese noun compounds in an SFBMT system and present a shallow method for translating noun compounds by using a word-level translation dictionary and a target-language monolingual corpus.

  3. Control of discrete time systems based on recurrent Super-Twisting-like algorithm.

    Science.gov (United States)

    Salgado, I; Kamal, S; Bandyopadhyay, B; Chairez, I; Fridman, L

    2016-09-01

    Most of the research in sliding mode theory has been carried out in continuous time to solve estimation and control problems. In discrete time, however, results on higher-order sliding modes are less developed. In this paper, a discrete-time super-twisting-like algorithm (DSTA) is proposed to solve the problems of control and state estimation. The stability proof is developed in terms of the discrete-time Lyapunov approach and linear matrix inequality theory. The system trajectories are ultimately bounded inside a small region dependent on the sampling period. Simulation results validate the DSTA, which was applied as a controller for a Furuta pendulum and for a DC motor supplied with a DSTA signal differentiator.
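
    As an illustration of the discrete-time recurrence involved, the sketch below Euler-discretizes a super-twisting (robust exact) differentiator and uses it to estimate the derivative of a sampled sine wave. The gains and the test signal are placeholders chosen for this example; the paper's own recurrence and proofs are not reproduced here:

    ```python
    import numpy as np

    def st_differentiator(f, dt, lam0=10.0, lam1=50.0):
        """Discrete-time (Euler) super-twisting differentiator.
        Returns an estimate of df/dt for the uniformly sampled signal f."""
        z0, z1 = f[0], 0.0
        dest = np.zeros_like(f)
        for k, fk in enumerate(f):
            e = z0 - fk
            z0 += dt * (-lam0 * np.sqrt(abs(e)) * np.sign(e) + z1)
            z1 += dt * (-lam1 * np.sign(e))
            dest[k] = z1
        return dest

    dt = 1e-3
    t = np.arange(0.0, 2.0, dt)
    f = np.sin(2 * np.pi * t)                      # placeholder test signal
    err = st_differentiator(f, dt) - 2 * np.pi * np.cos(2 * np.pi * t)
    print("max |error| after the transient:", float(np.abs(err[500:]).max()))
    ```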

  4. Application of super-twisting observers to the estimation of state and unknown inputs in an anaerobic digestion system.

    Science.gov (United States)

    Sbarciog, M; Moreno, J A; Vande Wouwer, A

    2014-01-01

    This paper presents the estimation of the unknown states and inputs of an anaerobic digestion system characterized by a two-step reaction model. The estimation is based on the measurement of the two substrate concentrations and of the outflow rate of biogas and relies on the use of an observer, consisting of three parts. The first is a generalized super-twisting observer, which estimates a linear combination of the two input concentrations. The second is an asymptotic observer, which provides one of the two biomass concentrations, whereas the third is a super-twisting observer for one of the input concentrations and the second biomass concentration.

  5. Approximate Super- and Sub-harmonic Response of a Multi-DOFs System with Local Cubic Nonlinearities under Resonance

    Directory of Open Access Journals (Sweden)

    Yang CaiJin

    2012-01-01

    This paper studies the approximate super- and sub-harmonic response of a multi-DOF system with local cubic nonlinearities under resonance. In many situations, a single resonance mode is observed to dominate as the system enters super/sub-harmonic resonance. In this case, single-mode natural resonance theory can be applied to reduce the system model, and a simplified model with only a single DOF is obtained. An approximate solution and the analytical expression of the frequency response relation are then derived using classical perturbation analysis. When the system is governed by multiple modes, modal analysis of the linearized system is used to identify the dominant modes. The reduced model governed by these relevant modes then yields approximate numerical solutions. An illustrative example of a discrete mass-spring-damper nonlinear vibration system with ten DOFs is examined. The approximate results are validated by comparing them with calculations from direct numerical integration of the equation of motion of the original nonlinear system; comparably good agreement is obtained.

  6. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  7. SELF LEARNING COMPUTER TROUBLESHOOTING EXPERT SYSTEM

    OpenAIRE

    Amanuel Ayde Ergado

    2016-01-01

    In the computer domain, professionals are limited in number, while the number of institutions looking for computer professionals is high. The aim of this study is to develop a self-learning expert system that provides troubleshooting information about problems occurring in computer systems, so that information and communication technology technicians and computer users can solve problems effectively and efficiently and make good use of computer and computer-related resources. Domain know...

  8. Poisson structure and stability analysis of a coupled system arising from the supersymmetric breaking of Super KdV

    CERN Document Server

    Restuccia, A

    2014-01-01

    The Poisson structure of a coupled system arising from a supersymmetric breaking of N=1 Super KdV equations is obtained. The supersymmetric breaking is implemented by introducing a Clifford algebra instead of a Grassmann algebra. The Poisson structure follows from the Dirac brackets obtained by the constraint analysis of the hamiltonian of the system. The coupled system has multisolitonic solutions. We show that the one soliton solutions are Liapunov stable.

  9. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  10. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  11. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
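
    The abstract-machine idea mentioned above, characterizing the machine by per-operation costs and the program by operation counts and then merging the two to predict runtime, can be illustrated with a toy calculation. The operation classes, costs, and counts below are purely invented numbers, not measurements from the cited work:

    ```python
    # Machine characterization: seconds per abstract operation (illustrative values).
    machine_a = {"flop": 2e-9, "mem": 8e-9, "branch": 1e-9, "call": 5e-9}
    machine_b = {"flop": 1e-9, "mem": 12e-9, "branch": 2e-9, "call": 6e-9}

    # Program characterization: dynamic operation counts for a hypothetical benchmark.
    program = {"flop": 4.0e9, "mem": 1.5e9, "branch": 6.0e8, "call": 2.0e7}

    def predict_runtime(machine, program):
        """Estimated execution time = sum over operation classes of count * cost."""
        return sum(program[op] * machine[op] for op in program)

    for name, m in (("machine A", machine_a), ("machine B", machine_b)):
        print(f"{name}: predicted {predict_runtime(m, program):.2f} s")
    ```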

  12. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  13. Tidal Q of a Super Earth: Dynamical Constraints from the GJ 876 System

    Science.gov (United States)

    Krishna Puranam, Abhijit; Batygin, Konstantin

    2016-05-01

    GJ 876 is an M-dwarf star 15 light-years from Earth and is the closest known star to harbor a multi-planetary system. This system stands out as an extraordinary member of the extrasolar planetary aggregate, due to the rapid dynamical chaos exhibited by the Laplace resonance of the outer three planets, and the high eccentricity of the non-resonant inner planet. While the origins of chaotic motion within this system are well understood, the mechanism through which the innermost planet maintains its high eccentricity in face of tidal dissipation remains elusive. In this work, we used analytic methods and numerical simulations to show that angular momentum transfer between the resonant chain and the innermost planet stochastically pumps the eccentricity of latter. In light of such interactions, the innermost planet’s eccentricity constitutes an observable proxy for its tidal circularization timescale. Quantitatively, our analysis yields a tidal Q of order a few thousand for an extrasolar super-Earth, GJ 876d.

  14. When does a physical system compute?

    Science.gov (United States)

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  15. '95 computer system operation project

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-12-01

    This report describes overall project work related to the operation of mainframe computers, the management of nuclear computer codes, and the nuclear computer code conversion project. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  16. Computing abstractions of nonlinear systems

    CERN Document Server

    Reißig, Gunther

    2009-01-01

    We present an efficient algorithm for computing discrete abstractions of arbitrary memory span for nonlinear discrete-time and sampled systems, in which, apart from possibly numerically integrating ordinary differential equations, the only nontrivial operation to be performed repeatedly is to distinguish empty from non-empty convex polyhedra. We also provide sufficient conditions for the convexity of attainable sets, which is an important requirement for the correctness of the method we propose. It turns out that this requirement can be met under rather mild conditions, which essentially reduce to sufficient smoothness in the case of sampled systems. The practicability of our approach in the design of discrete controllers for continuous plants is demonstrated by an example.
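
    The one repeated primitive named above, deciding whether a convex polyhedron {x : Ax <= b} is empty, reduces to a linear-programming feasibility check. The sketch below poses that check with SciPy; the example inequalities are placeholders and the surrounding abstraction algorithm is not reproduced:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def polyhedron_is_nonempty(A, b):
        """Return True if {x : A x <= b} contains a point (LP feasibility check)."""
        n = A.shape[1]
        res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                      bounds=[(None, None)] * n, method="highs")
        return res.status == 0            # 0 = optimum found => polyhedron is non-empty

    # Placeholder examples: a unit box, and the same box intersected with x0 >= 2.
    A_box = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    b_box = np.array([1, 1, 1, 1], dtype=float)
    print(polyhedron_is_nonempty(A_box, b_box))                       # True
    print(polyhedron_is_nonempty(np.vstack([A_box, [-1, 0]]),
                                 np.append(b_box, -2.0)))             # False
    ```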

  17. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley. This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  18. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  19. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. ·         Enables readers to address a variety of security threats to embedded hardware and software; ·         Describes design of secure wireless sensor networks, to address secure authen...

  20. The Hamiltonian structure of a coupled system derived from a supersymmetric breaking of super Korteweg-de Vries equations

    Energy Technology Data Exchange (ETDEWEB)

    Restuccia, A. [Departamento de Física, Universidad de Antofagasta, Antofagasta, Chile and Departamento de Física, Universidad Simón Bolívar, Caracas (Venezuela, Bolivarian Republic of); Sotomayor, A. [Departamento de Matemáticas, Universidad de Antofagasta, Antofagasta (Chile)

    2013-11-15

    A supersymmetric breaking procedure for N = 1 super Korteweg-de Vries (KdV), using a Clifford algebra, is implemented. Dirac's method for the determination of constraints is used to obtain the Hamiltonian structure, via a Lagrangian, for the resulting solitonic system of coupled KdV type. It is shown that the Hamiltonian obtained by this procedure is bounded from below and in that sense represents a model which is physically admissible.

  1. The Hamiltonian structure of a coupled system derived from a supersymmetric breaking of Super KdV equations

    CERN Document Server

    Restuccia, A

    2013-01-01

    A supersymmetric breaking procedure for $N=1$ Super KdV, using a Clifford algebra, is implemented. Dirac's method for the determination of constraints is used to obtain the Hamiltonian structure, via a Lagrangian, for the resulting solitonic system of coupled Korteweg-de Vries type. It is shown that the Hamiltonian obtained by this procedure is bounded from below and in that sense represents a model which is physically admissible.

  2. A Novel Fuzzy Logic Based Adaptive Super-Twisting Sliding Mode Control Algorithm for Dynamic Uncertain Systems

    OpenAIRE

    Abdul Kareem; Mohammad Fazle Azeem

    2012-01-01

    This paper presents a novel fuzzy logic based Adaptive Super-twisting Sliding Mode Controller for the control of dynamic uncertain systems. The proposed controller combines the advantages of Second order Sliding Mode Control, Fuzzy Logic Control and Adaptive Control. The reaching conditions, stability and robustness of the system with the proposed controller are guaranteed. In addition, the proposed controller is well suited for simple design and implementation. The effectiveness ...

  3. Sub-pixel processing for super-resolution scanning imaging system with fiber bundle coupling

    Institute of Scientific and Technical Information of China (English)

    Bowen An; Bingbin Xue; Shengda Pan; Guilin Chen

    2011-01-01

    A multilayer fiber bundle is used to couple the image in a remote sensing imaging system. The object image passes through all layers of the fiber bundle in micro-scanning mode. The malposition of adjacent layers arranged in a hexagonal pattern is at sub-pixel scale; therefore, sub-pixel processing can be applied to improve the spatial resolution. The images coupled by the adjacent layer fibers are separated, and subsequently the intermediate image is obtained by histogram matching based on one of the separated images, called the base image. Finally, the intermediate and base images are processed in the frequency domain. The malposition of the adjacent layer fibers is converted to a phase difference in the Fourier transform. Considering the limited sensitivity of the experimental instruments and of human sight, the image is treated as a band-limited signal and the interpolation function for image fusion is found. The results indicate that a super-resolution image with ultra-high spatial resolution is obtained.
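
    The frequency-domain step described above, in which a sub-pixel malposition appears as a linear phase difference between two frames, can be sketched with a phase-correlation estimate. The synthetic band-limited frames, the known shift, and the parabolic peak refinement below are illustrative stand-ins, not the authors' interpolation function:

    ```python
    import numpy as np

    def parabolic_offset(c_minus, c_0, c_plus):
        """Sub-pixel offset of a peak from three samples around it."""
        denom = c_minus - 2.0 * c_0 + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    def subpixel_shift(img_a, img_b):
        """Estimate (dy, dx) such that img_b equals img_a translated by (dy, dx),
        from the phase of the cross-power spectrum (phase correlation)."""
        F = np.conj(np.fft.fft2(img_a)) * np.fft.fft2(img_b)
        corr = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)).real
        ny, nx = corr.shape
        py, px = np.unravel_index(np.argmax(corr), corr.shape)
        dy = py + parabolic_offset(corr[(py - 1) % ny, px], corr[py, px], corr[(py + 1) % ny, px])
        dx = px + parabolic_offset(corr[py, (px - 1) % nx], corr[py, px], corr[py, (px + 1) % nx])
        return ((dy + ny / 2) % ny) - ny / 2, ((dx + nx / 2) % nx) - nx / 2

    # Synthetic band-limited frame and a copy shifted by a known sub-pixel amount
    # through an exact Fourier-domain shift.
    rng = np.random.default_rng(1)
    fy, fx = np.meshgrid(np.fft.fftfreq(64), np.fft.fftfreq(64), indexing="ij")
    lowpass = np.hypot(fy, fx) < 0.2
    base = np.fft.ifft2(np.fft.fft2(rng.normal(size=(64, 64))) * lowpass).real
    shifted = np.fft.ifft2(np.fft.fft2(base) * np.exp(-2j * np.pi * (fy * 3.4 + fx * (-1.7)))).real
    print(subpixel_shift(base, shifted))   # expect approximately (3.4, -1.7)
    ```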

  4. Super Earths and Dynamical Stability of Planetary Systems: First Parallel GPU Simulations Using GENGA

    CERN Document Server

    Elser, S; Stadel, J G

    2013-01-01

    We report on the stability of hypothetical Super-Earths in the habitable zone of known multi-planetary systems. Most of them have not yet been studied in detail concerning the existence of additional low-mass planets. The new N-body code GENGA developed at the UZH allows us to perform numerous N-body simulations in parallel on GPUs. With this numerical tool, we can study the stability of orbits of hypothetical planets in the semi-major axis and eccentricity parameter space in high resolution. Massless test particle simulations give good predictions on the extension of the stable region and show that HIP 14180 and HD 37124 do not provide stable orbits in the habitable zone. Based on these simulations, we carry out simulations of 10 Earth mass planets in several systems (HD 11964, HD 47186, HD 147018, HD 163607, HD 168443, HD 187123, HD 190360, HD 217107 and HIP 57274). They provide more exact information about orbits at the location of mean motion resonances and at the edges of the stability zones. Beside the ...

  5. Design and implementation of a Cooke triplet based wave-front coded super-resolution imaging system

    Science.gov (United States)

    Zhao, Hui; Wei, Jingxuan

    2015-09-01

    Wave-front coding is a powerful technique that can be used to extend the depth of focus (DOF) of an incoherent imaging system. A suitably designed phase mask makes the system defocus-invariant, and a de-convolution algorithm generates the clear image with large DOF. Compared with a traditional imaging system, the point spread function (PSF) in a wave-front coded imaging system has quite a large support size, and this characteristic makes wave-front coding capable of realizing super-resolution imaging without replacing the current sensor with one of smaller pitch size. An amplification-based single-image super-resolution reconstruction procedure has been specifically designed for the wave-front coded imaging system, and its effectiveness has been demonstrated experimentally. A Cooke Triplet based wave-front coded imaging system is established. For a focal length of 50 mm and f-number 4.5, objects within the range [5 m, ∞] can be clearly imaged, which indicates a DOF extension ratio of approximately 20. At the same time, the proposed processing procedure produces at least 3× resolution improvement, with the quality of the reconstructed super-resolution image approaching the diffraction limit.
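
    The de-convolution step that recovers a sharp, extended-DOF image from the coded blur can be sketched with a Wiener filter. The Gaussian stand-in PSF, the synthetic scene, and the noise level below are placeholders; a real wave-front coded system would use its measured cubic-phase-mask PSF and the amplification-based procedure described in the paper:

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=1e-3):
        """Restore an image blurred by a known PSF with a Wiener filter.
        nsr is the assumed noise-to-signal power ratio (regularization)."""
        H = np.fft.fft2(np.fft.ifftshift(psf))
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.fft.ifft2(np.fft.fft2(blurred) * W).real

    # Placeholder scene (two smooth blobs) and a stand-in PSF: a broad Gaussian is
    # used here in place of the defocus-insensitive PSF of a coded system.
    yy, xx = np.mgrid[0:32, 0:32].astype(float)
    scene = (np.exp(-((yy - 10) ** 2 + (xx - 20) ** 2) / 20.0)
             + 0.5 * np.exp(-((yy - 22) ** 2 + (xx - 8) ** 2) / 8.0))
    py, px = np.mgrid[-16:16, -16:16].astype(float)
    psf = np.exp(-(py ** 2 + px ** 2) / (2 * 3.0 ** 2))
    psf /= psf.sum()

    rng = np.random.default_rng(0)
    blurred = np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))).real
    restored = wiener_deconvolve(blurred + rng.normal(0.0, 1e-3, scene.shape), psf)
    print("RMS restoration error:", float(np.sqrt(np.mean((restored - scene) ** 2))))
    ```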

  6. Using Expert Systems For Computational Tasks

    Science.gov (United States)

    Duke, Eugene L.; Regenie, Victoria A.; Brazee, Marylouise; Brumbaugh, Randal W.

    1990-01-01

    A transformation technique enables inefficient expert systems to run in real time. The paper suggests the use of a knowledge compiler to transform the knowledge base and inference mechanism of an expert-system computer program into a conventional computer program. The main benefits are faster execution and reduced processing demands. In avionic systems, the transformation reduces the need for special-purpose computers.

  7. Software For Monitoring VAX Computer Systems

    Science.gov (United States)

    Farkas, Les; Don, Ken; Lavery, David; Baron, Amy

    1994-01-01

    VAX Continuous Monitoring System (VAXCMS) computer program developed at NASA Headquarters to aid system managers in monitoring performances of VAX computer systems through generation of graphic images summarizing trends in performance metrics over time. VAXCMS written in DCL and VAX FORTRAN for use with DEC VAX-series computers running VMS 5.1 or later.

  8. Computer Aided Control System Design (CACSD)

    Science.gov (United States)

    Stoner, Frank T.

    1993-01-01

    The design of modern aerospace systems relies on the efficient utilization of computational resources and the availability of computational tools to provide accurate system modeling. This research focuses on the development of a computer aided control system design application which provides a full range of stability analysis and control design capabilities for aerospace vehicles.

  9. Rapid detection of Bacillus anthracis spores using a super-paramagnetic lateral-flow immunological detection system.

    Science.gov (United States)

    Wang, Dian-Bing; Tian, Bo; Zhang, Zhi-Ping; Deng, Jiao-Yu; Cui, Zong-Qiang; Yang, Rui-Fu; Wang, Xu-Ying; Wei, Hong-Ping; Zhang, Xian-En

    2013-04-15

    There is an urgent need for convenient, sensitive, and specific methods to detect the spores of Bacillus anthracis, the causative agent of anthrax, because of the bioterrorism threat posed by this bacterium. In this study, we firstly develop a super-paramagnetic lateral-flow immunological detection system for B. anthracis spores. This system involves the use of a portable magnetic assay reader, super-paramagnetic iron oxide particles, lateral-flow strips and two different monoclonal antibodies directed against B. anthracis spores. This detection system specifically recognises as few as 400 pure B. anthracis spores in 30 min. This system has a linear range of 4×10³-10⁶ CFU ml⁻¹ and reproducible detection limits of 200 spores mg⁻¹ milk powder and 130 spores mg⁻¹ soil for simulated samples. In addition, this approach shows no obvious cross-reaction with other related Bacillus spores, even at high concentrations, and has no significant dependence on the duration of the storage of the immunological strips. Therefore, this super-paramagnetic lateral-flow immunological detection system is a promising tool for the rapid and sensitive detection of Bacillus anthracis spores under field conditions.

  10. Discussion on the Function and Design of the Super Computer Center

    Institute of Scientific and Technical Information of China (English)

    焦建欣

    2013-01-01

    A super computer center is a particular type of data center. Taking the National Supercomputing Center in Shenzhen as an example, this paper discusses the functions of a super computer center and the related design.

  11. Impact of new computing systems on finite element computations

    Science.gov (United States)

    Noor, A. K.; Storassili, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  12. Transient Faults in Computer Systems

    Science.gov (United States)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.

  13. A laboratory exposure system to study the effects of aging on super-micron aerosol particles

    Energy Technology Data Exchange (ETDEWEB)

    Santarpia, Joshua; Sanchez, Andres L.; Lucero, Gabriel Anthony; Servantes, Brandon Lee; Hubbard, Joshua Allen

    2014-02-01

    A laboratory system was constructed that allows super-micron particles to be aged for long periods of time under conditions that can simulate a range of natural environments, including relative humidity, oxidizing chemicals, organics and simulated solar radiation. Two proof-of-concept experiments, using a non-biological simulant for biological particles and a biological simulant, demonstrate the utility of these types of aging experiments. Green Visolite®, which is often used as a tracer material for model validation experiments, does not degrade with exposure to simulated solar radiation, whereas the actual biological material does. This indicates that Visolite® should be a good tracer compound for mapping the extent of a biological release using fluorescence as an indicator, but that it should not be used to simulate the decay of a biological particle exposed to sunlight. The decay in the fluorescence measured for B. thuringiensis is similar to what has previously been observed in outdoor environments.

  14. Development of a rotor alloy for advanced ultra super critical turbine power generation system

    Energy Technology Data Exchange (ETDEWEB)

    Miyashita, Shigekazu; Yamada, Masayuki; Suga, Takeo; Imai, Kiyoshi; Nemoto, Kuniyoshi; Yoshioka, Youmei [Toshiba Corporation, Yokohama (Japan)

    2008-07-01

    A Ni-based superalloy, ''TOS1X'', was developed for the rotor material of the 700 class advanced ultra super critical (A-USC) turbine power generation system. TOS1X improves on the creep rupture strength of Inconel™ 617 while maintaining both forgeability and weldability. A 7 t model rotor made of TOS1X was manufactured by a double-melt process, vacuum induction melting followed by electro slag remelting, and forging. During the forging process, no forging cracks or other abnormalities were detected on the ingots. The metallurgical and mechanical properties of this rotor were investigated: macro- and micro-structure observation and several mechanical tests were conducted. According to the metallurgical structure investigation, there was no remarkable segregation in any area and the forging effect reached the center part of the rotor ingot. The results of tensile and creep rupture tests proved that the proof stress and tensile stress of TOS1X are higher than those of Inconel™ 617 and that the creep rupture strength of TOS1X is much superior to that of Inconel™ 617. (orig.)

  15. Computer system reliability safety and usability

    CERN Document Server

    Dhillon, BS

    2013-01-01

    Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems.After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,

  16. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  17. Conflict Resolution in Computer Systems

    Directory of Open Access Journals (Sweden)

    G. P. Mojarov

    2015-01-01

    Full Text Available A conflict situation in computer systems (CS) is the phenomenon arising when processes have concurrent access to shared resources and none of the involved processes can proceed because each is waiting for resources locked by other processes which, in turn, are in a similar position. The conflict situation is also called a deadlock, and it has a quite clear impact on the CS state. Finding practical algorithms to resolve such impasses is of significant applied importance for ensuring the information security of the computing process, and the present article is therefore aimed at solving this relevant problem. The gravity of the situation depends on the types of processes in the deadlock, the types of resources used, the number of processes, and many other factors. A disadvantage of the impasse-prevention method used in many modern operating systems, based on preliminary planning of the resources required by a process, is obvious: the waiting time can be overlong. The prevention method based on interrupting a process and deallocating its resources is very specific and not very effective when a set of heterogeneous resources is requested dynamically. The drawback of another method, preventing deadlocks by ordering resources, is the restriction it places on the possible sequences of resource requests. A different way of "struggling" against deadlocks is the avoidance of impasses, in which the appearance of future impasses is predicted. Methods are known [1,4,5] to define and prevent the conditions under which deadlocks may occur; they use preliminary information on what resources a running process can request. Before a free resource is allocated to a process, a test of a state "safety" condition is performed. The state is "safe" if impasses cannot occur in the future as a result of allocating the resource to the process. Otherwise the state is considered "hazardous", and the resource allocation is postponed. The obvious
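
    As an illustration of the state "safety" test mentioned above, the following minimal sketch (in the spirit of the classical Banker's algorithm, not the article's own method) checks whether some completion order exists for all processes before a request is granted; the example matrices are invented.

```python
def is_safe(available, allocation, maximum):
    """Return True if the state is 'safe', i.e. some ordering lets every process finish.

    available  -- free units per resource type
    allocation -- allocation[p][r]: units of resource r currently held by process p
    maximum    -- maximum[p][r]: maximum units of r that process p may ever request
    """
    n = len(allocation)
    m = len(available)
    work = list(available)
    finished = [False] * n
    need = [[maximum[p][r] - allocation[p][r] for r in range(m)] for p in range(n)]
    while True:
        progressed = False
        for p in range(n):
            if not finished[p] and all(need[p][r] <= work[r] for r in range(m)):
                # Process p can run to completion and release everything it holds.
                for r in range(m):
                    work[r] += allocation[p][r]
                finished[p] = True
                progressed = True
        if not progressed:
            return all(finished)

# Example: two processes, two resource types; this state is safe (prints True),
# so a request leading to it could be granted rather than postponed.
print(is_safe(available=[1, 1],
              allocation=[[1, 0], [1, 1]],
              maximum=[[3, 2], [2, 1]]))
```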

  18. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  19. The Remote Computer Control (RCC) system

    Science.gov (United States)

    Holmes, W.

    1980-01-01

    A system to remotely control job flow on a host computer from any touchtone telephone is briefly described. Using this system a computer programmer can submit jobs to a host computer from any touchtone telephone. In addition the system can be instructed by the user to call back when a job is finished. Because of this system every touchtone telephone becomes a conversant computer peripheral. This system known as the Remote Computer Control (RCC) system utilizes touchtone input, touchtone output, voice input, and voice output. The RCC system is microprocessor based and is currently using the INTEL 80/30 microcomputer. Using the RCC system a user can submit, cancel, and check the status of jobs on a host computer. The RCC system peripherals consist of a CRT for operator control, a printer for logging all activity, mass storage for the storage of user parameters, and a PROM card for program storage.

  20. Implementation of Computational Electromagnetic on Distributed Systems

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The new generation of technology could raise the bar for distributed computing, and it has become a trend to solve computational electromagnetics problems on distributed systems with parallel computing techniques. In this paper, we analyze the parallel characteristics of the distributed system and the possibility of setting up a tightly coupled distributed system using the LAN in our lab. An analysis of the performance of different computational methods, such as FEM, MOM, FDTD and the finite difference method, is given. Our work on setting up a distributed system and the performance of the test bed are also included. Finally, we mention the implementation of one of our computational electromagnetic codes.

  1. Sensitivity of summer ensembles of super-parameterized US mesoscale convective systems to cloud resolving model microphysics and resolution

    Science.gov (United States)

    Elliott, E.; Yu, S.; Kooperman, G. J.; Morrison, H.; Wang, M.; Pritchard, M. S.

    2014-12-01

    Microphysical and resolution sensitivities of explicitly resolved convection within mesoscale convective systems (MCSs) in the central United States are well documented in the context of single case studies simulated by cloud resolving models (CRMs) under tight boundary and initial condition constraints. While such an experimental design allows researchers to causatively isolate the effects of CRM microphysical and resolution parameterizations on modeled MCSs, it is still challenging to produce conclusions generalizable to multiple storms. The uncertainty associated with the results of such experiments comes both from the necessary physical constraints imposed by the limited CRM domain as well as the inability to evaluate or control model internal variability. A computationally practical method to minimize these uncertainties is the use of super-parameterized (SP) global climate models (GCMs), in which CRMs are embedded within GCMs to allow their free interaction with one another as orchestrated by large-scale global dynamics. This study uses NCAR's SP Community Atmosphere Model 5 (SP-CAM5) to evaluate microphysical and horizontal resolution sensitivities in summer ensembles of nocturnal MCSs in the central United States. Storm events within each run were identified using an objective empirical orthogonal function (EOF) algorithm, then further calibrated to harmonize individual storm signals and account for the temporal and spatial heterogeneity between them. Three summers of control data from a baseline simulation are used to assess model internal interannual variability to measure its magnitude relative to sensitivities in a number of distinct experimental runs with varying CRM parameters. Results comparing sensitivities of convective intensity to changes in fall speed assumptions about dense rimed species, one- vs. two-moment microphysics, and CRM horizontal resolution will be discussed.

  2. Development of real-time method to measure SIL-DISK spacing for super-low-flying system

    Science.gov (United States)

    Zhao, Dapeng; Zi, Yanyang; Li, Qingxiang; Bai, Lifen; Li, Yuhe

    2002-09-01

    Advanced data storage technology is important to the information era. Among the many approaches to high-density storage, near-field optical disc technology (NFOD) is a high-speed, mass storage technology with excellent prospects, and it is a focus of pioneering research in the data storage field. Today, research institutes all over the world are speeding up their research on NFOD. By using a solid immersion lens (SIL) together with a super-low-flying system, it can achieve recording densities much higher than those of traditional optical disks and even hard disks. In order to improve the near-field coupling efficiency between the SIL and the disk, the SIL must maintain a sub-micron flying height above the disk, so it is necessary to develop real-time methods to measure the SIL-to-disk spacing in a super-low-flying system. This paper analyses the technical foundations, characteristics and key problems of flying height measurement, and studies several practical schemes for measuring the clearance in real time even when the SIL-disk spacing is down to the nanometer level, for example the relative light intensity method, capacitance displacement sensors, and the effective refractive index method based on frustrated total reflection, and compares the characteristics and precision of these approaches.

  3. Cybersecurity of embedded computer systems

    OpenAIRE

    Carlioz, Jean

    2016-01-01

    Several articles have recently raised the issue of the computer security of commercial flights by evoking the "connected aircraft, hackers target", "Wi-Fi on planes, an open door for hackers?" or "Can you hack the computer of an Airbus or a Boeing?". The feared scenario consists in a takeover of operational aircraft software that intentionally causes an accident. Moreover, several computer security experts have lately announced that they had detected flaws in embedded syste...

  4. Applied computation and security systems

    CERN Document Server

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu

    2015-01-01

    This book contains the extended versions of the works presented and discussed at the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014), held during April 18-20, 2014 in Kolkata, India. The symposium was jointly organized by the AGH University of Science & Technology, Cracow, Poland and the University of Calcutta, India. Volume I of this double-volume book contains fourteen high-quality chapters in three parts. Part 1, on Pattern Recognition, presents four chapters. Part 2, on Imaging and Healthcare Applications, contains four more chapters. Part 3 of this volume, on Wireless Sensor Networking, includes as many as six chapters. Volume II of the book has three parts presenting a total of eleven chapters. Part 4 consists of five excellent chapters on Software Engineering, ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  5. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in the cloud, in a style similar to today's IBM Quantum Experience. Clients can remotely access the quantum servers using some simple devices. In such a situation, it is of prime importance to keep the client's information secure. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step to construct a framework of blind quantum computation for the hybrid system, which provides a more feasible way for scalable blind quantum computation.

  6. Data Exchange System SuperETL Based on XML

    Institute of Scientific and Technical Information of China (English)

    柴胜; 周云轩; 黄永平; 王洪媛; 王云霄

    2006-01-01

    To meet the data resource integration needs of government agencies, enterprises and public institutions, a data exchange system named SuperETL is proposed. Its design goals and architecture are described, and an XML-based definition standard for the tasks in the system is given. Test results show that SuperETL can complete data Extraction, Cleaning, Transformation and Loading, and other ETL tasks, efficiently and intelligently.
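
    The abstract refers to an XML definition standard for ETL tasks but does not publish the schema; the sketch below is therefore purely illustrative, with invented element and attribute names, and shows how such a task description could drive extract/clean/transform/load steps over in-memory rows.

```python
import xml.etree.ElementTree as ET

# Hypothetical task definition; SuperETL's real schema is not given in the abstract.
TASK_XML = """
<etl-task name="customer-sync">
  <extract source="customers.csv" format="csv"/>
  <clean   drop-empty="name"/>
  <transform rename="tel:phone" uppercase="name"/>
  <load    target="customers_clean.csv" format="csv"/>
</etl-task>
"""

def run_task(xml_text, rows):
    """Apply the clean/transform steps described in the XML task to `rows`."""
    task = ET.fromstring(xml_text)
    clean = task.find("clean")
    transform = task.find("transform")

    # Clean: drop rows whose named field is empty.
    key = clean.get("drop-empty")
    rows = [r for r in rows if r.get(key)]

    # Transform: rename a column ("old:new") and upper-case another.
    old, new = transform.get("rename").split(":")
    upper = transform.get("uppercase")
    out = []
    for r in rows:
        r = dict(r)
        r[new] = r.pop(old, "")
        r[upper] = r[upper].upper()
        out.append(r)
    return out

rows = [{"name": "alice", "tel": "123"}, {"name": "", "tel": "456"}]
print(run_task(TASK_XML, rows))   # -> [{'name': 'ALICE', 'phone': '123'}]
```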

  7. Super Special Codes using Super Matrices

    CERN Document Server

    Kandasamy, W B Vasantha; Ilanthenral, K

    2010-01-01

    The new classes of super special codes are constructed in this book using the specially constructed super special vector spaces. These codes mainly use the super matrices. These codes can be realized as a special type of concatenated codes. This book has four chapters. In chapter one basic properties of codes and super matrices are given. A new type of super special vector space is constructed in chapter two of this book. Three new classes of super special codes namely, super special row code, super special column code and super special codes are introduced in chapter three. Applications of these codes are given in the final chapter.

  8. Computer Simulation and Computability of Biological Systems

    CERN Document Server

    Baianu, I C

    2004-01-01

    The ability to simulate a biological organism by employing a computer is related to the ability of the computer to calculate the behavior of such a dynamical system, or the "computability" of the system. However, the two questions of computability and simulation are not equivalent. Since the question of computability can be given a precise answer in terms of recursive functions, automata theory and dynamical systems, it will be appropriate to consider it first. The more elusive question of adequate simulation of biological systems by a computer will then be addressed, and a possible connection between the two answers will be considered as follows. A symbolic, algebraic-topological "quantum computer" (as introduced in Baianu, 1971b) is here suggested to provide one such potential means for adequate biological simulations based on QMV Quantum Logic and meta-Categorical Modeling, as, for example, in a QMV-based Quantum Topos (Baianu and Glazebrook, 2004).

  9. The Computational Complexity of Evolving Systems

    NARCIS (Netherlands)

    Verbaan, P.R.A.

    2006-01-01

    Evolving systems are systems that change over time. Examples of evolving systems are computers with soft-and hardware upgrades and dynamic networks of computers that communicate with each other, but also colonies of cooperating organisms or cells within a single organism. In this research, several m

  10. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  11. ACSES, An Automated Computer Science Education System.

    Science.gov (United States)

    Nievergelt, Jurg; And Others

    A project to accommodate the large and increasing enrollment in introductory computer science courses by automating them with a subsystem for computer science instruction on the PLATO IV Computer-Based Education system at the University of Illinois was started. The subsystem was intended to be used for supplementary instruction at the University…

  12. AGS SUPER NEUTRINO BEAM FACILITY ACCELERATOR AND TARGET SYSTEM DESIGN (NEUTRINO WORKING GROUP REPORT-II).

    Energy Technology Data Exchange (ETDEWEB)

    DIWAN,M.; MARCIANO,W.; WENG,W.; RAPARIA,D.

    2003-04-21

    This document describes the design of the accelerator and target systems for the AGS Super Neutrino Beam Facility. Under the direction of the Associate Laboratory Director Tom Kirk, BNL has established a Neutrino Working Group to explore the scientific case and facility requirements for a very long baseline neutrino experiment. Results of a study of the physics merit and detector performance were published in BNL-69395 in October 2002, where it was shown that a wide-band neutrino beam generated by a 1 MW proton beam from the AGS, coupled with a half megaton water Cerenkov detector located deep underground in the former Homestake mine in South Dakota, would be able to measure the complete set of neutrino oscillation parameters: (1) precise determination of the oscillation parameters Δm₃₂² and sin²2θ₃₂; (2) detection of the oscillation νμ-νe and measurement of sin²2θ₁₃; (3) measurement of Δm₂₁² sin2θ₁₂ in a νμ → νe appearance mode, independent of the value of θ₁₃; (4) verification of matter enhancement and the sign of Δm₃₂²; and (5) determination of the CP-violation parameter δCP in the neutrino sector. This report details the performance requirements and conceptual design of the accelerator and the target systems for the production of a neutrino beam by a 1.0 MW proton beam from the AGS. The major components of this facility include a new 1.2 GeV superconducting linac, ramping the AGS at 2.5 Hz, and the new target station for the 1.0 MW beam. It also calls for a moderate increase, of about 30%, in the AGS intensity per pulse. Special care is taken to account for all sources of proton beam loss plus shielding and collimation of stray beam halo particles to ensure equipment reliability and personal safety. A preliminary cost estimate and schedule for the accelerator upgrade and target system are also

  13. Task allocation in a distributed computing system

    Science.gov (United States)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.

  14. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  15. Comparing the architecture of Grid Computing and Cloud Computing systems

    Directory of Open Access Journals (Sweden)

    Abdollah Doavi

    2015-09-01

    Full Text Available Grid Computing, or computationally connected networks, is a new network model that allows massive computational operations to be performed using connected resources; in fact, it is a new generation of distributed networks. Grid architecture is attractive because the widespread nature of the Internet creates an exciting environment, called the 'Grid', for building a scalable, high-performance, generalized and secure system. The central architecture proposed for this goal is a firmware named GridOS. The term 'cloud computing' means the development and deployment of Internet-based computing technology. It is a style of computing in an environment where IT-related capabilities are offered as a service, allowing users to access technology-based services on the Internet without needing specific knowledge of the technology or control of the IT infrastructure that supports it. In the paper, general explanations are given of the Grid and Cloud systems, and then the components and services provided by these systems, as well as their security, are examined.

  16. Holography based super resolution

    Science.gov (United States)

    Hussain, Anwar; Mudassar, Asloob A.

    2012-05-01

    This paper describes the simulation of a simple superresolution technique based on holographic imaging in the spectral domain. An input beam assembly containing 25 optical fibers with different orientations and positions is placed to illuminate the object in a 4f optical system. The position and orientation of each fiber are calculated with respect to the central fiber in the array, and they determine the shift of the object spectrum at the aperture plane. During the imaging process, each fiber is operated once to illuminate the input object transparency, which shifts the object spectrum in the spectral domain by an integral multiple of the pass-band aperture width. While a single fiber is operated (ON state), all other fibers are in the OFF state. The hologram recorded for each fiber at the CCD plane is stored in computer memory. At the end of the illumination process, a total of 25 holograms have been recorded with the whole fiber array, and by applying post-processing and a specific reconstruction algorithm a single super-resolved image is obtained. The super-resolved image is five times better than the band-limited image. The work is demonstrated using computer simulation only.
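
    A rough 1-D numerical sketch of the principle described above (not the authors' simulation): each off-axis illumination shifts the object spectrum so that a different band passes the fixed aperture, and stitching the recorded bands back to their true positions synthesises a wider spectrum and hence a sharper image. Three bands and arbitrary sizes are assumed for brevity.

```python
import numpy as np

N = 512
obj = np.zeros(N)
obj[240:280:8] = 1.0                           # fine structure the single band cannot resolve
spectrum = np.fft.fftshift(np.fft.fft(obj))

bw = 40                                        # half-width of the aperture pass band (bins)
shifts = [-2 * bw, 0, 2 * bw]                  # spectral shifts produced by the off-axis fibres

# "Record" one band per illumination: the shift moves a different part of the
# object spectrum into the fixed aperture around the centre.
bands = {}
for s in shifts:
    shifted = np.roll(spectrum, -s)
    band = np.zeros(N, dtype=complex)
    band[N // 2 - bw:N // 2 + bw] = shifted[N // 2 - bw:N // 2 + bw]
    bands[s] = band

# Stitch the bands back to their true positions to synthesise a wider spectrum.
synth = np.zeros(N, dtype=complex)
for s, band in bands.items():
    synth += np.roll(band, s)

low_res = np.abs(np.fft.ifft(np.fft.ifftshift(bands[0])))   # single central band only
high_res = np.abs(np.fft.ifft(np.fft.ifftshift(synth)))     # synthesised wide band
print(low_res.max(), high_res.max())   # the synthesised image recovers finer detail
```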

  17. A Super Computer For a Super City

    Institute of Scientific and Technical Information of China (English)

    FRANCISCO; LITTLE

    2011-01-01

    It’s a gigantic construction site that stretches beyond the horizon. Blue and red heavy-duty trucks in endless lines trundle about beneath a gaggle of cranes hoisting steel. And all the while enormous ultramodern buildings,bearing the names of the largest local and international companies,rise from the earth.

  18. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book at hand explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  19. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of simulators of computer systems are software-based running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  20. Formal Protection Architecture for Cloud Computing System

    Institute of Scientific and Technical Information of China (English)

    Yasha Chen; Jianpeng Zhao; Junmao Zhu; Fei Yan

    2014-01-01

    Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, a heavily formal method is avoided and the paper instead adopts the algebra of Communicating Sequential Processes.

  1. Computer Literacy in a Distance Education System

    Science.gov (United States)

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with seven skills of computer (ICDL) usage. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  2. Computer-Controlled, Motorized Positioning System

    Science.gov (United States)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1994-01-01

    Computer-controlled, motorized positioning system developed for use in robotic manipulation of samples in custom-built secondary-ion mass spectrometry (SIMS) system. Positions sample repeatably and accurately, even during analysis in three linear orthogonal coordinates and one angular coordinate under manual local control, or microprocessor-based local control or remote control by computer via general-purpose interface bus (GPIB).

  3. Advanced Hybrid Computer Systems. Software Technology.

    Science.gov (United States)

    This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what...automatic patching software is available as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer...compiler software. The problem of how software would interface with the hybrid system is also presented.

  4. Super-resolution phase reconstruction technique in electron holography with a stage-scanning system

    Science.gov (United States)

    Lei, Dan; Mitsuishi, Kazutaka; Harada, Ken; Shimojo, Masayuki; Ju, Dongying; Takeguchi, Masaki

    2014-02-01

    Super-resolution image reconstruction is a digital signal processing technique that allows creating a high-resolution image from multiple low-resolution images taken at slightly different positions. We introduce the super-resolution image reconstruction technique into electron holography for reconstructing phase images as follows: the studied specimen is shifted step-wise with a high-precision piezo holder, and a series of holograms is recorded. When the step size is not a multiple of the CCD pixel size, processing of the acquired series results in a higher pixel density and spatial resolution as compared to the phase image obtained with conventional holography. The final resolution exceeds the limit of the CCD pixel size divided by the magnification.
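
    A toy sketch of the underlying reconstruction idea (not the authors' processing chain): low-resolution frames acquired at known sub-pixel stage offsets are interleaved onto a finer grid. The 1-D example below uses two frames and a half-pixel shift, i.e. an upsampling factor of two.

```python
import numpy as np

def interleave(frames, factor):
    """Place `factor` low-resolution frames, offset by 1/factor pixel each,
    onto a grid `factor` times denser (simple shift-and-interleave)."""
    n = len(frames[0])
    hi = np.zeros(n * factor)
    for k, frame in enumerate(frames):
        hi[k::factor] = frame        # frame k samples positions k/factor, 1 + k/factor, ...
    return hi

# Simulate: a fine signal sampled by a detector with pixels twice as coarse,
# once at the nominal position and once after a half-pixel stage shift.
fine = np.sin(np.linspace(0, 8 * np.pi, 64))
frame0 = fine[0::2]                  # detector samples the even positions
frame1 = fine[1::2]                  # half-pixel stage shift -> odd positions
recovered = interleave([frame0, frame1], factor=2)
print(np.allclose(recovered, fine))  # True: the fine sampling is recovered
```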

  5. Biomolecular computing systems: principles, progress and potential.

    Science.gov (United States)

    Benenson, Yaakov

    2012-06-12

    The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.

  6. Automated Diversity in Computer Systems

    Science.gov (United States)

    2005-09-01

    P ( EBM I ) = Me2a ; P (ELMP ) = ps and P (EBMP ) = ps. We are interested in the probability of a successful branch (escape) out of a sequence of n...reference is still legal. Both can generate false positives, although CRED is less computationally expensive. The common theme in all these

  7. Super Factories

    Indian Academy of Sciences (India)

    D G Hitlin

    2006-11-01

    Heavy-flavor physics, in particular and physics results from the factories, currently provides strong constraints on models of physics beyond the Standard Model. A new generation of colliders, Super Factories, with 50 to 100 times the luminosity of existing colliders, can, in a dialog with LHC and ILC, provide unique clarification of new physics phenomena seen at those machines.

  8. Time-efficient computation of the electronic structure of the C60 super-atom molecular orbital (SAMO) states in TDDFT

    Science.gov (United States)

    Mignolet, B.; Remacle, F.

    2016-12-01

    Fullerenes have a dense manifold of excited states composed of valence excited states and Rydberg states. Among Rydberg states, one distinguishes Super Atom Molecular Orbitals (SAMO), excited states in which an electron is promoted to a diffuse nanometer size molecular orbital with a hydrogenic-like character. Unlike typical Rydberg states, the electronic density of the SAMO states is mainly localized inside and in the close vicinity of the fullerene cage. In this proceeding, we propose a time-saving way to compute the electronic structure of the SAMO and Rydberg states of fullerenes at the TDDFT level by limiting the number of excitations allowed to build the excited states. We investigate the effect of limiting the number of excitations in C60 and compare it to the experimental binding energies. We also investigate the effect of the functional and basis set on the binding energies of the SAMO states.

  9. Quantum dissipative dynamics of a bistable system in the sub-Ohmic to super-Ohmic regime

    Science.gov (United States)

    Magazzù, Luca; Carollo, Angelo; Spagnolo, Bernardo; Valenti, Davide

    2016-05-01

    We investigate the quantum dynamics of a multilevel bistable system coupled to a bosonic heat bath beyond the perturbative regime. We consider different spectral densities of the bath, in the transition from sub-Ohmic to super-Ohmic dissipation, and different cutoff frequencies. The study is carried out by using the real-time path integral approach of the Feynman-Vernon influence functional. We find that, in the crossover dynamical regime characterized by damped intrawell oscillations and incoherent tunneling, the short time behavior and the time scales of the relaxation starting from a nonequilibrium initial condition depend nontrivially on the spectral properties of the heat bath.
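
    For reference, a common parameterization of the bath spectral density that interpolates between the sub-Ohmic, Ohmic and super-Ohmic regimes mentioned above is J(ω) ∝ ω^s with an exponential cutoff at ω_c; the exact form and constants used in the paper may differ, so the snippet below is only a standard textbook choice.

```python
import numpy as np

def spectral_density(omega, s, omega_c, eta=1.0):
    """J(w) = eta * w**s * omega_c**(1 - s) * exp(-w / omega_c).

    s < 1: sub-Ohmic, s = 1: Ohmic, s > 1: super-Ohmic; omega_c is the cutoff frequency.
    """
    return eta * omega**s * omega_c**(1 - s) * np.exp(-omega / omega_c)

w = np.linspace(0.01, 10, 5)
for s in (0.5, 1.0, 2.0):            # sub-Ohmic, Ohmic, super-Ohmic examples
    print(s, spectral_density(w, s, omega_c=2.0))
```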

  10. The multicomponent (2+1)-dimensional Glachette-Johnson (GJ) equation hierarchy and its super-integrable coupling system

    Institute of Scientific and Technical Information of China (English)

    Yu Fa-Jun; Zhang Hong-Qing

    2008-01-01

    This paper presents a set of multicomponent matrix Lie algebras, which is used to construct a new loop algebra (A)M. By using the Tu scheme, a Liouville integrable multicomponent equation hierarchy is generated which possesses a Hamiltonian structure. As a reduction case, the multicomponent (2+1)-dimensional Glachette-Johnson (GJ) hierarchy is given. Finally, the super-integrable coupling system of the multicomponent (2+1)-dimensional GJ hierarchy is established through enlarging the spectral problem.

  11. Analysis and Design of a Bidirectional Isolated DC-DC Converter for Fuel Cell and Super-Capacitor Hybrid System

    DEFF Research Database (Denmark)

    Zhang, Zhe; Ouyang, Ziwei; Thomsen, Ole Cornelius

    2012-01-01

    The electrical power system in a future uninterruptible power supply (UPS) or electric vehicle (EV) may employ hybrid energy sources, such as fuel cells and super-capacitors. It will be necessary to efficiently draw the energy from these two sources as well as recharge the energy storage elements...... for zero voltage switching (ZVS). Moreover, a phase-shift and duty cycle modulation method is utilized to control the bidirectional power flow flexibly and it also makes the converter operate under a quasi-optimal condition over a wide input voltage range. This paper describes the operation principle

  12. Laser Imaging Systems For Computer Vision

    Science.gov (United States)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessor/converter, extending the access of computer "intelligence" to inspection, analysis and decision-making in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile to humans, etc. Considering that the heart of the problem is the matching of optical methods and computer software, some of the most promising interferometric, projection and diffraction systems are reviewed, with discussions of our present results and of their potential for precise 3D computer vision.

  13. Computer Bits: The Ideal Computer System for Your Center.

    Science.gov (United States)

    Brown, Dennis; Neugebauer, Roger

    1986-01-01

    Reviews five computer systems that can address the needs of a child care center: (1) Sperry PC IT with Bernoulli Box, (2) Compaq DeskPro 286, (3) Macintosh Plus, (4) Epson Equity II, and (5) Leading Edge Model "D." (HOD)

  14. An Optical Tri-valued Computing System

    Directory of Open Access Journals (Sweden)

    Junjie Peng

    2014-03-01

    Full Text Available A new optical computing experimental system is presented. Designed on the basis of tri-valued logic, the system is built as a photoelectric hybrid computer system which has clear advantages over its electronic counterparts. Specifically, the tri-valued logic makes it more powerful in information processing than systems with binary logic, and its optical character makes it far more capable of processing huge volumes of data than electronic computers. The optical computing system includes two parts, an electronic part and an optical part. The electronic part consists of a PC and two embedded systems which are used for data input/output, monitoring, synchronous control, user data combination and separation, and so on. The optical part includes three components: an optical encoder, a logic calculator and a decoder. It is mainly responsible for encoding the users' requests into tri-valued optical information, computing and processing the requests, and decoding the tri-valued optical information back into binary electronic information. Experimental results show that the system processes optical information correctly, which demonstrates the feasibility and correctness of the optical computing system.
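
    As a purely illustrative aside (not the experimental system's actual optical encoding), tri-valued logic of the kind described above can be modelled with the values {-1, 0, 1} and min/max/negation connectives:

```python
# Balanced ternary values: -1 (false), 0 (unknown), 1 (true) -- an illustrative choice.
def t_and(a, b):   # conjunction taken as the minimum
    return min(a, b)

def t_or(a, b):    # disjunction taken as the maximum
    return max(a, b)

def t_not(a):      # negation flips the sign
    return -a

values = (-1, 0, 1)
for a in values:
    for b in values:
        print(a, b, t_and(a, b), t_or(a, b), t_not(a))
```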

  15. Hybrid Systems: Computation and Control.

    Science.gov (United States)

    2007-11-02

    elbow) and a pinned first joint (shoulder) (see Figure 2); it is termed an underactuated system since it is a mechanical system with fewer...Montreal, PQ, Canada, 1998. [10] M. W. Spong. Partial feedback linearization of underactuated mechanical systems. In Proceedings, IROS, pages 314-321...control mechanism and search for optimal combinations of control variables. Besides the nonlinear and hybrid nature of powertrain systems, hardware

  16. MTA Computer Based Evaluation System.

    Science.gov (United States)

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  18. Computer Jet-Engine-Monitoring System

    Science.gov (United States)

    Disbrow, James D.; Duke, Eugene L.; Ray, Ronald J.

    1992-01-01

    "Intelligent Computer Assistant for Engine Monitoring" (ICAEM), computer-based monitoring system intended to distill and display data on conditions of operation of two turbofan engines of F-18, is in preliminary state of development. System reduces burden on propulsion engineer by providing single display of summary information on statuses of engines and alerting engineer to anomalous conditions. Effective use of prior engine-monitoring system requires continuous attention to multiple displays.

  19. A computational system for a Mars rover

    Science.gov (United States)

    Lambert, Kenneth E.

    1989-01-01

    This paper presents an overview of an onboard computing system that can be used for meeting the computational needs of a Mars rover. The paper begins by presenting an overview of some of the requirements which are key factors affecting the architecture. The rest of the paper describes the architecture. Particular emphasis is placed on the criteria used in defining the system and how the system qualitatively meets the criteria.

  1. Operation experiences of the super conducting magnet for a gyrotron of the JT-60U ECH system

    Energy Technology Data Exchange (ETDEWEB)

    Igarashi, Koichi; Seki, Masami; Shimono, Mitsugu; Terakado, Masayuki; Ishii, Kazuhiro; Takahashi, Masami [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2003-03-01

    The JT-60U electron cyclotron heating (ECH) system can heat plasmas locally and drive a plasma current with four 1 MW, 5 sec gyrotrons. Super conducting magnets (SCM) are required for oscillation of the gyrotron at a working frequency of 110 GHz. The SCM provides a high magnetic field of 4.5 T at the cavity inside the gyrotron. The SCM system is characterized by 1) operation without liquid helium, owing to a 4 K refrigerator applied to the magnetic coils, and 2) easy maintenance. Operational experience with the SCM system through long-term experiments with a high-power gyrotron is very valuable. From this operational experience it became clear that the 4 K refrigerator should be renewed in order to keep the SCM at low temperature. It was also found that up to 200 hours are required to reach the superconducting condition (<5 K) after the refrigerator has been stopped for as long as 150 hours. This is useful information for planning ECH experiments. (author)

  2. Intelligent computational systems for space applications

    Science.gov (United States)

    Lum, Henry, Jr.; Lau, Sonie

    1989-01-01

    The evolution of intelligent computation systems is discussed starting with the Spaceborne VHSIC Multiprocessor System (SVMS). The SVMS is a six-processor system designed to provide at least a 100-fold increase in both numeric and symbolic processing over the i386 uniprocessor. The significant system performance parameters necessary to achieve the performance increase are discussed.

  3. Performing the Super Instrument

    DEFF Research Database (Denmark)

    Kallionpaa, Maria

    2016-01-01

    The genre of contemporary classical music has seen significant innovation and research related to new super, hyper, and hybrid instruments, which opens up a vast palette of expressive potential. An increasing number of composers, performers, instrument designers, engineers, and computer programmers...... provides the performer extensive virtuoso capabilities in terms of instrumental range, harmony, timbre, or spatial, textural, acoustic, technical, or technological qualities. The discussion will be illustrated by a composition case study involving augmented musical instrument electromagnetic resonator...

  4. Performing the Super Instrument

    DEFF Research Database (Denmark)

    Kallionpaa, Maria

    2016-01-01

    provides the performer extensive virtuoso capabilities in terms of instrumental range, harmony, timbre, or spatial, textural, acoustic, technical, or technological qualities. The discussion will be illustrated by a composition case study involving augmented musical instrument electromagnetic resonator......The genre of contemporary classical music has seen significant innovation and research related to new super, hyper, and hybrid instruments, which opens up a vast palette of expressive potential. An increasing number of composers, performers, instrument designers, engineers, and computer programmers...

  5. Computation of Weapons Systems Effectiveness

    Science.gov (United States)

    2013-09-01

    Aircraft dive angle; initial weapon release velocity along the x-axis, VOx; initial weapon release velocity along the z-axis, VOz; x: x-axis; z: z-axis; ...altitude. Impact velocity (x-axis): Vix = VOx (3.4). Impact velocity (z-axis): Viz = VOz + (g ∗ TOF) (3.5). Impact velocity: Vi = √(Vix² + Viz²) (3.6). ...compute the ballistic partials to examine the effects that varying h, VOx and VOz have on RB using the following equations: ∂RB/∂h = New RB − Old RB
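
    Restated in runnable form, the impact-velocity relations quoted above (equations 3.4-3.6) combine the release-velocity components with the gravitational acceleration accumulated over the time of fall (TOF); the release conditions in the example call are made-up numbers.

```python
import math

def impact_velocity(v_ox, v_oz, tof, g=9.81):
    """Impact velocity components and magnitude for a released weapon.

    v_ox, v_oz -- release velocity components along x and z (m/s)
    tof        -- time of fall (s)
    """
    v_ix = v_ox                     # eq. 3.4: release velocity carried along x
    v_iz = v_oz + g * tof           # eq. 3.5: gravity accumulates along z
    v_i = math.hypot(v_ix, v_iz)    # eq. 3.6: magnitude of the impact velocity
    return v_ix, v_iz, v_i

print(impact_velocity(v_ox=200.0, v_oz=30.0, tof=8.0))   # illustrative release conditions
```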

  6. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  7. The university computer network security system

    Institute of Scientific and Technical Information of China (English)

    张丁欣

    2012-01-01

    With the development of the times and advances in technology, computer network technology has penetrated all aspects of people's lives; it plays an increasingly important role and is an important tool for information exchange. Colleges and universities are the cradle in which new technologies are cultivated and nurtured, and as institutions of higher learning they should pay attention to the construction of computer network security systems.

  8. QUBIT DATA STRUCTURES FOR ANALYZING COMPUTING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Vladimir Hahanov

    2014-11-01

    Full Text Available Qubit models and methods are proposed for improving the performance of software and hardware for analyzing digital devices by increasing the dimension of the data structures and memory. The basic concepts, terminology and definitions necessary for implementing quantum computing in the analysis of virtual computers are introduced. Investigation results concerning the design and modeling of computer systems in cyberspace based on the use of a two-component structure are presented.

  9. Computational Intelligence in Information Systems Conference

    CERN Document Server

    Au, Thien-Wan; Omar, Saiful

    2017-01-01

    This book constitutes the Proceedings of the Computational Intelligence in Information Systems conference (CIIS 2016), held in Brunei, November 18–20, 2016. The CIIS conference provides a platform for researchers to exchange the latest ideas and to present new research advances in general areas related to computational intelligence and its applications. The 26 revised full papers presented in this book have been carefully selected from 62 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.

  10. Anomalous meteors from the observations with super-isocon TV systems

    Science.gov (United States)

    Kozak, P.; Watanabe, J.; Sato, M.

    2014-07-01

    There is a range of both optical and radar observations of meteors whose behavior differs essentially from that of most meteors. In some cases such meteors cannot be explained within the framework of the classic physical theory of meteors; in other cases the meteors are simply of a rare type. First of all, there are meteors with true hyperbolic velocities. Although most hyperbolic orbits are the result of calculation errors, meteors with extremely high velocities, appreciably exceeding the hyperbolic limit of 73 km/s, do exist and can be of interstellar origin [1-3]. Another very rare phenomenon is the possible cluster structure of meteor streams, which could be connected with the ejection of material from the cometary nucleus shortly before the collision of the particles with the Earth [4]. Among anomalies connected with meteor motion in the atmosphere one can note, first of all, the ultra-high altitudes of meteor beginnings, exceeding 130-140 km [5-7]. Other observations point to beginning heights of bright meteors from the Leonid shower at altitudes near 200 km [8]. The classic physical theory of meteors cannot explain their radiation at such high altitudes because of the low air density [9]. Recently, results of TV observations of meteors with diffuse and cloudy structure have appeared [9,10], as well as observations in which, in the authors' opinion, the meteors have transverse jets a few kilometers long [9-11]. There are video frames of a bright meteor obtained with high temporal resolution in which the authors report radiation that could be an effect of the direct spreading of the shock wave [12]. For many years, double-station observations of meteors have been carried out at the Astronomical Observatory of Kyiv National Taras Shevchenko University using ultra-sensitive TV transmitting tubes of the super-isocon type [7]. This type of tube is one of the most sensitive in the

  11. Optimization of Operating Systems towards Green Computing

    Directory of Open Access Journals (Sweden)

    Appasami Govindasamy

    2011-01-01

    Full Text Available Green Computing is one of the emerging computing technologies in the field of computer science and engineering, intended to provide Green Information Technology (Green IT). It is mainly used to protect the environment, optimize energy consumption and keep the environment green. Green computing also refers to environmentally sustainable computing. In recent years, companies in the computer industry have come to realize that going green is in their best interest, both in terms of public relations and reduced costs. Information and communication technology (ICT) has now become an important department for the success of any organization. Making IT “Green” can not only save money but help save our world by making it a better place through reducing and/or eliminating wasteful practices. In this paper we focus on green computing by optimizing operating systems and the scheduling of hardware resources. The objectives of green computing are the reduction of human effort, electrical energy, time and cost, without polluting the environment while developing the software. Operating System (OS) optimization is very important for green computing, because the OS is the bridge between hardware components and application software. Important steps for the green computing user and for energy-efficient usage are also discussed in this paper.

  12. Resilience assessment and evaluation of computing systems

    CERN Document Server

    Wolter, Katinka; Vieira, Marco

    2012-01-01

    The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples,

  13. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  14. Dynamics of Super Quantum Correlations and Quantum Correlations for a System of Three Qubits

    Science.gov (United States)

    Siyouri, F.; El Baz, M.; Rfifi, S.; Hassouni, Y.

    2016-04-01

    The dynamics of quantum discord for two qubits independently interacting with dephasing reservoirs has been studied recently. The authors [Phys. Rev. A 88 (2013) 034304] found that for some Bell-diagonal states (BDS) interacting with their environments the quantum discord can undergo a sudden transition in its dynamics, a phenomenon known as sudden change. In the present paper, we analyze the dynamics of normal quantum discord and super quantum discord for tripartite Bell-diagonal states independently interacting with dephasing reservoirs. We find that a change of basis does not necessarily imply a sudden change of quantum correlations.

  15. Super-resolution image reconstruction methods applied to GFE-referenced navigation system

    Science.gov (United States)

    Yan, Lei; Lin, Yi; Tong, Qingxi

    2007-11-01

    The overly large spacing of reference grid data, which leads to biased estimates at un-surveyed points and poor accuracy of correlation positioning, has long hindered Geophysical Fields of the Earth (GFE) referenced navigation research. Super-resolution image reconstruction methods from the remote sensing field offer some inspiration, and one of them, Maximum A-Posteriori (MAP) estimation based on Bayesian theory, is transplanted to grid data. The proposed algorithm, named MAP-G, interpolates the reference data field while reflecting its overall distribution trend. Comparison with traditional interpolation algorithms and simulation experiments on an underwater terrain/gravity-aided navigation platform indicate that the MAP-G algorithm can effectively improve navigation performance.
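    The abstract does not spell out MAP-G itself, but the ingredients of a MAP estimate with a Gaussian smoothness prior can be illustrated on a coarse 1-D reference grid; the sampling pattern, the second-difference prior and the regularization weight below are illustrative assumptions only, not the paper's algorithm:

        # Generic MAP-style interpolation of a coarse 1-D reference grid: maximize
        # posterior = (Gaussian data term on sampled points) x (Gaussian smoothness
        # prior), i.e. minimize ||S x - y||^2 + lam * ||D2 x||^2.
        import numpy as np

        n_fine, step, lam = 65, 4, 10.0
        idx = np.arange(0, n_fine, step)                 # coarse sample locations
        truth = np.sin(np.linspace(0, 3 * np.pi, n_fine))
        y = truth[idx]                                   # coarse reference data

        S = np.zeros((len(idx), n_fine)); S[np.arange(len(idx)), idx] = 1.0
        D2 = (np.diag(np.full(n_fine, -2.0))
              + np.diag(np.ones(n_fine - 1), 1)
              + np.diag(np.ones(n_fine - 1), -1))[1:-1]  # interior 2nd differences

        A = S.T @ S + lam * D2.T @ D2                    # normal equations of the MAP cost
        x_map = np.linalg.solve(A, S.T @ y)

        print(float(np.max(np.abs(x_map - truth))))      # deviation from the underlying field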

  16. Rendezvous Facilities in a Distributed Computer System

    Institute of Scientific and Technical Information of China (English)

    廖先Zhi; 金兰

    1995-01-01

    The distributed computer system described in this paper is a set of computer nodes interconnected in an interconnection network via packet-switching interfaces.The nodes communicate with each other by means of message-passing protocols.This paper presents the implementation of rendezvous facilities as high-level primitives provided by a parallel programming language to support interprocess communication and synchronization.

  17. Exploring eclipsing binaries, triples and higher-order multiple star systems with the SuperWASP archive

    CERN Document Server

    Lohr, M E

    2015-01-01

    The Super Wide Angle Search for Planets (SuperWASP) is a whole-sky high-cadence optical survey which has searched for exoplanetary transit signatures since 2004. Its archive contains long-term light curves for ~30 million 8-15 V magnitude stars, making it a valuable serendipitous resource for variable star research. We have concentrated on the evidence it provides for eclipsing binaries, in particular those exhibiting orbital period variations, and have developed custom tools to measure periods precisely and detect period changes reliably. Amongst our results are: a collection of 143 candidate contact or semi-detached eclipsing binaries near the short-period limit in the main sequence binary period distribution; a probable hierarchical triple exhibiting dramatic sinusoidal period variations; a new doubly-eclipsing quintuple system; and new evidence for period change or stability in 12 post-common-envelope eclipsing binaries, which may support the existence of circumbinary planets in such systems. A large-scal...

  18. Performance Evaluations for Super-Resolution Mosaicing on UAS Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Aldo Camargo

    2013-05-01

    Full Text Available Abstract Unmanned Aircraft Systems (UAS have been widely applied for reconnaissance and surveillance by exploiting information collected from the digital imaging payload. The super-resolution (SR mosaicing of low-resolution (LR UAS surveillance video frames has become a critical requirement for UAS video processing and is important for further effective image understanding. In this paper we develop a novel super-resolution framework, which does not require the construction of sparse matrices. The proposed method implements image operations in the spatial domain and applies an iterated back-projection to construct super-resolution mosaics from the overlapping UAS surveillance video frames. The Steepest Descent method, the Conjugate Gradient method and the Levenberg-Marquardt algorithm are used to numerically solve the nonlinear optimization problem for estimating a super-resolution mosaic. A quantitative performance comparison in terms of computation time and visual quality of the super-resolution mosaics through the three numerical techniques is presented.
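    The iterated back-projection step mentioned above can be illustrated with a minimal, single-frame sketch (this is not the authors' mosaicing framework; the 2x block-average observation model, the nearest-neighbour back-projection, the step size and the frame size are all illustrative assumptions):

        # Python/NumPy sketch of iterated back-projection super-resolution.
        import numpy as np

        def downsample(img):
            # Simulate the LR observation: average each 2x2 block.
            return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

        def upsample(img):
            # Back-project an LR-sized residual onto the HR grid (nearest neighbour).
            return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

        def iterated_back_projection(lr, n_iter=25, step=1.0):
            hr = upsample(lr)                      # initial HR estimate
            for _ in range(n_iter):
                err = lr - downsample(hr)          # residual in LR space
                hr = hr + step * upsample(err)     # back-project the residual
            return hr

        lr_frame = np.random.rand(32, 32)          # stand-in for a UAS video frame
        hr_estimate = iterated_back_projection(lr_frame)
        print(hr_estimate.shape)                   # (64, 64)

    In the paper, the same residual-correction idea is embedded in a mosaicing pipeline and the resulting optimization problem is solved with Steepest Descent, Conjugate Gradient or Levenberg-Marquardt; the fixed-step loop above is only the simplest variant.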

  19. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  20. Sandia Laboratories technical capabilities: computation systems

    Energy Technology Data Exchange (ETDEWEB)

    1977-12-01

    This report characterizes the computation systems capabilities at Sandia Laboratories. Selected applications of these capabilities are presented to illustrate the extent to which they can be applied in research and development programs. 9 figures.

  1. Console Networks for Major Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D

    1966-07-22

    A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

  2. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty

    1996-03-01

    Full Text Available A model of a transputer-based multiprocessor computing system that makes it possible to assess structural robustness (viability, survivability) is described.

  3. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  4. Parallel experimental study of a novel super-thin thermal absorber based photovoltaic/thermal (PV/T) system against conventional photovoltaic (PV) system

    OpenAIRE

    2015-01-01

    Photovoltaic (PV) semiconductors degrade in performance as temperature rises. A super-thin conductive thermal absorber is therefore developed to regulate the PV working temperature by retrofitting an existing PV panel into a photovoltaic/thermal (PV/T) panel. This article presents a parallel comparative investigation of the two systems through both laboratory and field experiments. The laboratory evaluation consisted of one PV panel and one PV/T panel respectively, while the...

  5. A Management System for Computer Performance Evaluation.

    Science.gov (United States)

    1981-12-01

    large unused capacity indicates a potential cost performance improvement (i.e. the potential to perform more within current costs or reduce costs ...necessary to bring the performance of the computer system in line with operational goals (Ref. 18:7). The General Accounting Office estimates that the...tasks in attempting to improve the efficiency and effectiveness of their computer systems. Cost began to play an important role in the life of a

  6. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...... of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda....

  7. Computer support for mechatronic control system design

    NARCIS (Netherlands)

    van Amerongen, J.; Coelingh, H.J.; de Vries, Theodorus J.A.

    2000-01-01

    This paper discusses the demands for proper tools for computer aided control system design of mechatronic systems and identifies a number of tasks in this design process. Real mechatronic design, involving input from specialists from varying disciplines, requires that the system can be represented

  8. Computer Systems for Distributed and Distance Learning.

    Science.gov (United States)

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  9. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  10. Information systems and computing technology

    CERN Document Server

    Zhang, Lei

    2013-01-01

    Invited papers: Incorporating the multi-cross-sectional temporal effect in Geographically Weighted Logit Regression (K. Wu, B. Liu, B. Huang & Z. Lei); One shot learning human actions recognition using key poses (W.H. Zou, S.G. Li, Z. Lei & N. Dai); Band grouping pansharpening for WorldView-2 satellite images (X. Li); Research on GIS based haze trajectory data analysis system (Y. Wang, J. Chen, J. Shu & X. Wang). Regular papers: A warning model of systemic financial risks (W. Xu & Q. Wang); Research on smart mobile phone user experience with grounded theory (J.P. Wan & Y.H. Zhu); The software reliability analysis based on

  11. Computational approaches for systems metabolomics.

    Science.gov (United States)

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.

  12. Computational systems biology for aging research.

    Science.gov (United States)

    Mc Auley, Mark T; Mooney, Kathleen M

    2015-01-01

    Computational modelling is a key component of systems biology and integrates with the other techniques discussed thus far in this book by utilizing a myriad of data that are being generated to quantitatively represent and simulate biological systems. This chapter will describe what computational modelling involves; the rationale for using it, and the appropriateness of modelling for investigating the aging process. How a model is assembled and the different theoretical frameworks that can be used to build a model are also discussed. In addition, the chapter will describe several models which demonstrate the effectiveness of each computational approach for investigating the constituents of a healthy aging trajectory. Specifically, a number of models will be showcased which focus on the complex age-related disorders associated with unhealthy aging. To conclude, we discuss the future applications of computational systems modelling to aging research.

  13. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying

    2016-01-01

    This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially for immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, and Support Vector Machines (SVM), Danger theory, ...

  14. The Super Patalan Numbers

    OpenAIRE

    Richardson, Thomas M.

    2014-01-01

    We introduce the super Patalan numbers, a generalization of the super Catalan numbers in the sense of Gessel, and prove a number of properties analogous to those of the super Catalan numbers. The super Patalan numbers generalize the super Catalan numbers similarly to how the Patalan numbers generalize the Catalan numbers.

  15. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  16. A Novel Fuzzy Logic Based Adaptive Super-Twisting Sliding Mode Control Algorithm for Dynamic Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Abdul Kareem

    2012-07-01

    Full Text Available This paper presents a novel fuzzy logic based Adaptive Super-twisting Sliding Mode Controller for the control of dynamic uncertain systems. The proposed controller combines the advantages of Second order Sliding Mode Control, Fuzzy Logic Control and Adaptive Control. The reaching conditions, stability and robustness of the system with the proposed controller are guaranteed. In addition, the proposed controller is well suited for simple design and implementation. The effectiveness of the proposed controller over the first order Sliding Mode Fuzzy Logic controller is illustrated by Matlab based simulations performed on a DC-DC Buck converter. Based on this comparison, the proposed controller is shown to obtain the desired transient response without causing chattering and error under steady-state conditions. The proposed controller is able to give robust performance in terms of rejection to input voltage variations and load variations.
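    For readers unfamiliar with the underlying algorithm, the standard (non-adaptive, non-fuzzy) super-twisting law can be sketched in a few lines; the toy first-order plant, the disturbance and the gains below are illustrative assumptions, not values taken from the paper:

        # Minimal super-twisting sliding-mode sketch on a toy first-order plant
        # x' = u + d(t); the fuzzy/adaptive gain tuning of the paper is omitted.
        import math

        k1, k2 = 2.0, 1.1          # illustrative gains (assumed, not from the paper)
        dt, x, v = 1e-3, 1.0, 0.0  # time step, initial state, integral control term
        x_ref = 0.0                # regulate x to zero

        for n in range(int(5.0 / dt)):
            t = n * dt
            s = x - x_ref                                  # sliding variable
            u = -k1 * math.sqrt(abs(s)) * math.copysign(1.0, s) + v
            v += -k2 * math.copysign(1.0, s) * dt          # integral (discontinuous) term
            d = 0.2 * math.sin(2.0 * t)                    # bounded matched disturbance
            x += (u + d) * dt                              # Euler step of the plant

        print(f"final |s| = {abs(x - x_ref):.4f}")         # sliding variable close to zero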

  17. A Novel Fuzzy Logic Based Adaptive Super-Twisting Sliding Mode Control Algorithm for Dynamic Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Abdul Kareem

    2012-08-01

    Full Text Available This paper presents a novel fuzzy logic based Adaptive Super-twisting Sliding Mode Controller for the control of dynamic uncertain systems. The proposed controller combines the advantages of Second order Sliding Mode Control, Fuzzy Logic Control and Adaptive Control. The reaching conditions, stability and robustness of the system with the proposed controller are guaranteed. In addition, the proposed controller is well suited for simple design and implementation. The effectiveness of the proposed controller over the first order Sliding Mode Fuzzy Logic controller is illustrated by Matlab based simulations performed on a DC-DC Buck converter. Based on this comparison, the proposed controller is shown to obtain the desired transient response without causing chattering and error under steady-state conditions. The proposed controller is able to give robust performance in terms of rejection to input voltage variations and load variations

  18. SuperLU{_}DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU{_}DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.

  19. Telemetry Computer System at Wallops Flight Center

    Science.gov (United States)

    Bell, H.; Strock, J.

    1980-01-01

    This paper describes the Telemetry Computer System in operation at NASA's Wallops Flight Center for real-time or off-line processing, storage, and display of telemetry data from rockets and aircraft. The system accepts one or two PCM data streams and one FM multiplex, converting each type of data into computer format and merging time-of-day information. A data compressor merges the active streams, and removes redundant data if desired. Dual minicomputers process data for display, while storing information on computer tape for further processing. Real-time displays are located at the station, at the rocket launch control center, and in the aircraft control tower. The system is set up and run by standard telemetry software under control of engineers and technicians. Expansion capability is built into the system to take care of possible future requirements.

  20. Honeywell Modular Automation System Computer Software Documentation

    Energy Technology Data Exchange (ETDEWEB)

    CUNNINGHAM, L.T.

    1999-09-27

    This document provides a Computer Software Documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control new thermal stabilization furnaces in HA-211 and vertical denitration calciner in HC-230C-2.

  1. Computation and design of autonomous intelligent systems

    Science.gov (United States)

    Fry, Robert L.

    2008-04-01

    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.

  2. Remote computer monitors corrosion protection system

    Energy Technology Data Exchange (ETDEWEB)

    Kendrick, A.

    Effective corrosion protection with electrochemical methods requires some method of routine monitoring that provides reliable data that is free of human error. A test installation of a remote computer control monitoring system for electrochemical corrosion protection is described. The unit can handle up to six channel inputs. Each channel comprises 3 analog signals and 1 digital. The operation of the system is discussed.

  3. Terrace Layout Using a Computer Assisted System

    Science.gov (United States)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  4. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance...... for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...... of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda....

  5. Building Low Cost Cloud Computing Systems

    Directory of Open Access Journals (Sweden)

    Carlos Antunes

    2013-06-01

    Full Text Available Current models of cloud computing are based on massive hardware deployments whose implementation and maintenance are unaffordable for the majority of service providers. The use of jail services is an alternative to current virtualization-based models of cloud computing. Models built on jail environments instead of virtualization systems can provide large gains in the optimization of hardware resources at the computation level, as well as in storage and energy consumption. This paper addresses the practical implementation of jail environments in real scenarios, identifying areas where their application is relevant and where they will force a redefinition of the models currently used for cloud computing. It also opens new opportunities for the development of support features for jail environments in most operating systems.

  6. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  7. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John

    2010-01-01

    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  8. Design, construction and cooling system performance of a prototype cryogenic stopping cell for the Super-FRS at FAIR

    Energy Technology Data Exchange (ETDEWEB)

    Ranjan, M. [KVI-Center for Advanced Radiation Technology, University of Groningen - Zernikelaan 25, 9747 AA Groningen (Netherlands); Dendooven, P., E-mail: p.g.dendooven@rug.nl [KVI-Center for Advanced Radiation Technology, University of Groningen - Zernikelaan 25, 9747 AA Groningen (Netherlands); Purushothaman, S. [GSI Helmholtz Centre for Heavy Ion Research - Planckstraße 1, 64291 Darmstadt (Germany); Dickel, T. [GSI Helmholtz Centre for Heavy Ion Research - Planckstraße 1, 64291 Darmstadt (Germany); II. Physikalisches Institut, Justus-Liebig-Universität Gießen - Heinrich-Buff-Ring 16, 35392 Gießen (Germany); Reiter, M.P. [II. Physikalisches Institut, Justus-Liebig-Universität Gießen - Heinrich-Buff-Ring 16, 35392 Gießen (Germany); Ayet, S. [GSI Helmholtz Centre for Heavy Ion Research - Planckstraße 1, 64291 Darmstadt (Germany); Haettner, E. [GSI Helmholtz Centre for Heavy Ion Research - Planckstraße 1, 64291 Darmstadt (Germany); II. Physikalisches Institut, Justus-Liebig-Universität Gießen - Heinrich-Buff-Ring 16, 35392 Gießen (Germany); Moore, I.D. [University of Jyväskylä - FI-40014, Jyväskylä (Finland); Kalantar-Nayestanaki, N. [KVI-Center for Advanced Radiation Technology, University of Groningen - Zernikelaan 25, 9747 AA Groningen (Netherlands); and others

    2015-01-11

    A cryogenic stopping cell for stopping energetic radioactive ions and extracting them as a low energy beam was developed. This first ever cryogenically operated stopping cell serves as a prototype device for the Low-Energy Branch of the Super-FRS at FAIR. The cell has a stopping volume that is 1 m long and 25 cm in diameter. Ions are guided by a DC field along the length of the stopping cell and by combined RF and DC fields provided by an RF carpet at the exit-hole side. The ultra-high purity of the stopping gas required for optimum ion survival is reached by cryogenic operation. The design considerations and construction of the cryogenic stopping cell, as well as some performance characteristics, are described in detail. Special attention is given to the cryogenic aspects in the design and construction of the stopping cell and the cryocooler-based cooling system. The cooling system allows the operation of the stopping cell at any desired temperature between about 70 K and room temperature. The cooling system performance in realistic on-line conditions at the FRS Ion Catcher Facility at GSI is discussed. A temperature of 110 K, at which efficient ion survival was observed, is obtained after 10 h of cooling. A minimum temperature of the stopping gas of 72 K was reached. The expertise gained from the design, construction and performance of the prototype cryogenic stopping cell has allowed the development of a final version for the Low-Energy Branch of the Super-FRS to proceed.

  9. Computer surety: computer system inspection guidance. [Contains glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  10. Optical design and characterization of an advanced computational imaging system

    Science.gov (United States)

    Shepard, R. Hamilton; Fernandez-Cull, Christy; Raskar, Ramesh; Shi, Boxin; Barsi, Christopher; Zhao, Hang

    2014-09-01

    We describe an advanced computational imaging system with an optical architecture that enables simultaneous and dynamic pupil-plane and image-plane coding accommodating several task-specific applications. We assess the optical requirement trades associated with custom and commercial-off-the-shelf (COTS) optics and converge on the development of two low-cost and robust COTS testbeds. The first is a coded-aperture programmable pixel imager employing a digital micromirror device (DMD) for image plane per-pixel oversampling and spatial super-resolution experiments. The second is a simultaneous pupil-encoded and time-encoded imager employing a DMD for pupil apodization or a deformable mirror for wavefront coding experiments. These two testbeds are built to leverage two MIT Lincoln Laboratory focal plane arrays - an orthogonal transfer CCD with non-uniform pixel sampling and on-chip dithering and a digital readout integrated circuit (DROIC) with advanced on-chip per-pixel processing capabilities. This paper discusses the derivation of optical component requirements, optical design metrics, and performance analyses for the two testbeds built.

  11. Fault tolerant hypercube computer system architecture

    Science.gov (United States)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network while communications between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises, a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message connecting paths for connecting the first computing nodes to the first watch dog node independent from the first network, the second network provides an independent path for test message and reconfiguration affecting transfers between the first computing nodes and the first switch watch dog node. There is additionally, a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network. The fourth network provides an independent path for test message and reconfiguration affecting transfers between the second computing nodes and the first watch dog node; and a first multiplexer disposed between the first watch dog node and the second and fourth networks for allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as, a second watch dog node
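    As background to the hypercube interconnection used here, the adjacency and the usual dimension-order routing can be sketched in a few lines (purely illustrative; this is not the patented watch-dog or reconfiguration protocol, and the node labels are arbitrary):

        # Hypercube adjacency: in a d-dimensional hypercube the neighbours of node i
        # are i XOR 2^k for k = 0..d-1, and a message can be routed by correcting one
        # differing address bit per hop.
        d = 4                                        # 16-node hypercube

        def neighbours(i, d):
            return [i ^ (1 << k) for k in range(d)]

        def route(src, dst, d):
            path, node = [src], src
            for k in range(d):                       # fix differing bits low-to-high
                if (node ^ dst) & (1 << k):
                    node ^= (1 << k)
                    path.append(node)
            return path

        print(neighbours(5, d))                      # [4, 7, 1, 13]
        print(route(5, 10, d))                       # [5, 4, 6, 2, 10]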

  12. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  13. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible problems or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
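    A minimal sketch of such a script-driven approach might look as follows; the XML layout only mimics the typical shape of gmond output, the table schema is invented for illustration, and SQLite stands in for the MySQL database actually used, so every name here is an assumption rather than the paper's code:

        # Sketch of pushing Ganglia-style metrics into an SQL table (illustrative only).
        import sqlite3
        import xml.etree.ElementTree as ET

        sample_xml = """
        <GANGLIA_XML>
          <CLUSTER NAME="unix-farm">
            <HOST NAME="node01"><METRIC NAME="load_one" VAL="0.42"/></HOST>
            <HOST NAME="node02"><METRIC NAME="load_one" VAL="1.87"/></HOST>
          </CLUSTER>
        </GANGLIA_XML>
        """

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE metrics (host TEXT, name TEXT, value REAL)")

        root = ET.fromstring(sample_xml)
        for host in root.iter("HOST"):
            for metric in host.iter("METRIC"):
                db.execute("INSERT INTO metrics VALUES (?, ?, ?)",
                           (host.get("NAME"), metric.get("NAME"), float(metric.get("VAL"))))

        for row in db.execute("SELECT * FROM metrics ORDER BY value DESC"):
            print(row)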

  14. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. A typical goal of such systems is to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system will show the information obtained by different CI techniques in order to help operators take decisions in real time and guide them in fault diagnosis before the normal alarm limits are reached. (author)

  15. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    Within the last five to ten years we have experienced an incredible growth of ubiquitous technologies which has allowed for improvements in several areas, including energy distribution and management, health care services, border surveillance, secure monitoring and management of buildings......, localisation services and many others. These technologies can be classified under the name of ubiquitous systems. The term Ubiquitous System dates back to 1991 when Mark Weiser at Xerox PARC Lab first referred to it in writing. He envisioned a future where computing technologies would have been melted...... in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory...

  16. A New System Architecture for Pervasive Computing

    CERN Document Server

    Ismail, Anis; Ismail, Ziad

    2011-01-01

    We present a new system architecture, a distributed framework designed to support pervasive computing applications. We propose an architecture consisting of a search engine and peripheral clients that addresses issues in scalability, data sharing, data transformation and inherent platform heterogeneity. A key feature of our application is a type-aware data transport capable of extracting data and presenting it through handheld devices (PDAs (personal digital assistants), mobile phones, etc.). Pervasive computing uses web technology, portable devices, wireless communications and nomadic or ubiquitous computing systems. The web, and the simple standard HTTP protocol it is based on, facilitate this kind of ubiquitous access. This can be implemented on a variety of devices - PDAs, laptops, information appliances such as digital cameras and printers. Mobile users get transparent access to resources outside their current environment. We discuss our system's architecture and its implementation. Through experimental...

  17. Metasynthetic computing and engineering of complex systems

    CERN Document Server

    Cao, Longbing

    2015-01-01

    Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro

  18. Reliable computer systems design and evaluatuion

    CERN Document Server

    Siewiorek, Daniel

    2014-01-01

    Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  19. Model for personal computer system selection.

    Science.gov (United States)

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility and allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  20. Supersaturated self-nanoemulsifying drug delivery systems (super-SNEDDS) enhance the bioavailability of the poorly water-soluble drug Simvastatin in dogs

    DEFF Research Database (Denmark)

    Thomas, Nicky; Holm, René; Garmer, Mats

    2013-01-01

    This study investigates the potential of supersaturated self-nanoemulsifying drug delivery systems (super-SNEDDS) to improve the bioavailability of poorly water-soluble drugs compared to conventional SNEDDS. Conventional SNEDDS contained simvastatin (SIM) at 75% of the equilibrium solubility (S (eq...

  1. Evaluation of the aero-optical properties of the SOFIA cavity by means of computational fluid dynamics and a super fast diagnostic camera

    Science.gov (United States)

    Engfer, Christian; Pfüller, Enrico; Wiedemann, Manuel; Wolf, Jürgen; Lutz, Thorsten; Krämer, Ewald; Röser, Hans-Peter

    2012-09-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a 2.5 m reflecting telescope housed in an open cavity on board a Boeing 747SP. During observations, the cavity is exposed to transonic flow conditions. The oncoming boundary layer evolves into a free shear layer responsible for optical aberrations and for aerodynamic and aeroacoustic disturbances within the cavity. While the aero-acoustic excitation of an airborne telescope can be minimized by using passive flow control devices, the aero-optical properties of the flow are difficult to improve. Hence it is important to know how much the image seen through the SOFIA telescope is perturbed by so-called seeing effects. Prior to the SOFIA science flights, Computational Fluid Dynamics (CFD) simulations using URANS and DES methods were carried out to determine the flow field within and above the cavity, and hence in the optical path, in order to assess the aero-optical properties under baseline conditions. In addition, and for validation purposes, out-of-focus images were taken during flight with a Super Fast Diagnostic Camera (SFDC). Depending on the binning factor and the sub-array size, the SFDC is able to take and read out images at very high frame rates. The paper explains the CFD-based numerical approach used to evaluate the aero-optical properties of SOFIA. The CFD data are then compared to the high speed images taken by the SFDC during flight.

  2. The JASMIN super-data-cluster

    CERN Document Server

    Lawrence, B N; Churchill, J; Juckes, M; Kershaw, P; Oliver, P; Pritchard, M; Stephens, A

    2012-01-01

    The JASMIN super-data-cluster is being deployed to support the data analysis requirements of the UK and European climate and earth system modelling community. Physical colocation of the core JASMIN resource with significant components of the facility for Climate and Environmental Monitoring from Space (CEMS) provides additional support for the earth observation community, as well as facilitating further comparison and evaluation of models with data. JASMIN and CEMS together centrally deploy 9.3 PB of storage - 4.6 PB of Panasas fast disk storage alongside the STFC Atlas Tape Store. Over 370 computing cores provide local computation. Remote JASMIN resources at Bristol, Leeds and Reading provide additional distributed storage and compute configured to support local workflow as a stepping stone to using the central JASMIN system. Fast network links from JASMIN provide reliable communication between the UK supercomputers MONSooN (at the Met Office) and HECToR (at the University of Edinburgh). JASMIN also supports...

  3. Architecture, systems research and computational sciences

    CERN Document Server

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  4. NIF Integrated Computer Controls System Description

    Energy Technology Data Exchange (ETDEWEB)

    VanArsdall, P.

    1998-01-26

    This System Description introduces the NIF Integrated Computer Control System (ICCS). The architecture is sufficiently abstract to allow the construction of many similar applications from a common framework. As discussed below, over twenty software applications derived from the framework comprise the NIF control system. This document lays the essential foundation for understanding the ICCS architecture. The NIF design effort is motivated by the magnitude of the task. Figure 1 shows a cut-away rendition of the coliseum-sized facility. The NIF requires integration of about 40,000 atypical control points, must be highly automated and robust, and will operate continuously around the clock. The control system coordinates several experimental cycles concurrently, each at different stages of completion. Furthermore, facilities such as the NIF represent major capital investments that will be operated, maintained, and upgraded for decades. The computers, control subsystems, and functionality must be relatively easy to extend or replace periodically with newer technology.

  6. Some Unexpected Results Using Computer Algebra Systems.

    Science.gov (United States)

    Alonso, Felix; Garcia, Alfonsa; Garcia, Francisco; Hoya, Sara; Rodriguez, Gerardo; de la Villa, Agustin

    2001-01-01

    Shows how teachers can often use unexpected outputs from Computer Algebra Systems (CAS) to reinforce concepts and to show students the importance of thinking about how they use the software and reflecting on their results. Presents different examples where DERIVE, MAPLE, or Mathematica does not work as expected and suggests how to use them as a…

  7. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  8. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissem

  9. Computer Graphics for System Effectiveness Analysis.

    Science.gov (United States)

    1986-05-01

    02139, August 1982. Chapra, Steven C., and Raymond P. Canale (1985), Numerical Methods for Engineers with Personal Computer Applications, New York. ... 1.2 Outline of Thesis ... CHAPTER II. METHOD OF ANALYSIS ... Chapter VII summarizes the results and gives recommendations for future research. ... 2.1 Introduction: Systems effectiveness

  10. Characterizing Video Coding Computing in Conference Systems

    NARCIS (Netherlands)

    Tuquerres, G.

    2000-01-01

    In this paper, a number of coding operations is provided for computing continuous data streams, in particular, video streams. A coding capability of the operations is expressed by a pyramidal structure in which coding processes and requirements of a distributed information system are represented. Th

  11. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  12. Computer Algebra Systems, Pedagogy, and Epistemology

    Science.gov (United States)

    Bosse, Michael J.; Nandakumar, N. R.

    2004-01-01

    The advent of powerful Computer Algebra Systems (CAS) continues to dramatically affect curricula, pedagogy, and epistemology in secondary and college algebra classrooms. However, epistemological and pedagogical research regarding the role and effectiveness of CAS in the learning of algebra lags behind. This paper investigates concerns regarding…

  13. Computer system SANC: its development and applications

    Science.gov (United States)

    Arbuzov, A.; Bardin, D.; Bondarenko, S.; Christova, P.; Kalinovskaya, L.; Sadykov, R.; Sapronov, A.; Riemann, T.

    2016-10-01

    The SANC system is used for systematic calculations of various processes within the Standard Model in the one-loop approximation. QED, electroweak, and QCD corrections are computed to a number of processes being of interest for modern and future high-energy experiments. Several applications for the LHC physics program are presented. Development of the system and the general problems and perspectives for future improvement of the theoretical precision are discussed.

  14. Personal healthcare system using cloud computing.

    Science.gov (United States)

    Takeuchi, Hiroshi; Mayuzumi, Yuuki; Kodama, Naoki; Sato, Keiichi

    2013-01-01

    A personal healthcare system used with cloud computing has been developed. It enables a daily time-series of personal health and lifestyle data to be stored in the cloud through mobile devices. The cloud automatically extracts personally useful information, such as rules and patterns concerning lifestyle and health conditions embedded in the personal big data, by using a data mining technology. The system provides three editions (Diet, Lite, and Pro) corresponding to users' needs.

  15. The CMS Computing System: Successes and Challenges

    CERN Document Server

    Bloom, Kenneth

    2009-01-01

    Each LHC experiment will produce datasets with sizes of order one petabyte per year. All of this data must be stored, processed, transferred, simulated and analyzed, which requires a computing system of a larger scale than ever mounted for any particle physics experiment, and possibly for any enterprise in the world. I discuss how CMS has chosen to address these challenges, focusing on recent tests of the system that demonstrate the experiment's readiness for producing physics results with the first LHC data.

  16. Integrative Genomics and Computational Systems Medicine

    Energy Technology Data Exchange (ETDEWEB)

    McDermott, Jason E.; Huang, Yufei; Zhang, Bing; Xu, Hua; Zhao, Zhongming

    2014-01-01

    The exponential growth in generation of large amounts of genomic data from biological samples has driven the emerging field of systems medicine. This field is promising because it improves our understanding of disease processes at the systems level. However, the field is still at an early stage. There exists a great need for novel computational methods and approaches to effectively utilize and integrate various omics data.

  17. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
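    The flavour of such analytical models can be conveyed by the classic M/M/1 queue, whose equations fit in a few lines; the arrival and service rates below are made-up numbers, and this particular example is ours, not necessarily one from the book:

        # Flavour of analytical performance modelling: the classic M/M/1 queue.
        # lam = arrival rate, mu = service rate (made-up numbers); requires lam < mu.
        lam, mu = 80.0, 100.0            # requests/s

        rho = lam / mu                   # utilization
        L   = rho / (1.0 - rho)          # mean number of requests in the system
        W   = 1.0 / (mu - lam)           # mean response time (Little's law: L = lam * W)

        print(f"utilization = {rho:.2f}, mean jobs in system = {L:.2f}, "
              f"mean response time = {W*1000:.1f} ms")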

  18. Adaptive Fuzzy Systems in Computational Intelligence

    Science.gov (United States)

    Berenji, Hamid R.

    1996-01-01

    In recent years, interest in computational intelligence techniques, which currently include neural networks, fuzzy systems, and evolutionary programming, has grown significantly, and a number of their applications have been developed in government and industry. In the future, an essential element of these systems will be fuzzy systems that can learn from experience by using neural networks to refine their performance. The GARIC architecture, introduced earlier, is an example of a fuzzy reinforcement learning system which has been applied in several control domains such as cart-pole balancing, simulation of Space Shuttle orbital operations, and tether control. A number of examples from GARIC's applications in these domains will be demonstrated.

  19. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  20. SuperLU users' guide

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James W.; Gilbert, John R.; Li, Xiaoye S.

    1999-11-01

    This document describes a collection of three related ANSI C subroutine libraries for solving sparse linear systems of equations AX = B. Here A is a square, nonsingular, n x n sparse matrix, and X and B are dense n x nrhs matrices, where nrhs is the number of right-hand sides and solution vectors. Matrix A need not be symmetric or definite; indeed, SuperLU is particularly appropriate for matrices with very unsymmetric structure. All three libraries use variations of Gaussian elimination optimized to take advantage both of sparsity and the computer architecture, in particular memory hierarchies (caches) and parallelism.
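
    The record above describes the SuperLU C libraries themselves; for a quick, hedged illustration of the same kind of sparse solve, the snippet below uses SciPy's sparse linear algebra module, which wraps SuperLU internally. The matrix and right-hand sides are arbitrary examples, not taken from the users' guide.

        import numpy as np
        from scipy.sparse import csc_matrix
        from scipy.sparse.linalg import splu

        # A square, nonsingular, unsymmetric sparse matrix and two right-hand sides (nrhs = 2).
        A = csc_matrix(np.array([[4.0, 0.0, 1.0],
                                 [0.0, 3.0, 0.0],
                                 [2.0, 0.0, 5.0]]))
        B = np.array([[1.0, 2.0],
                      [6.0, 3.0],
                      [7.0, 4.0]])

        lu = splu(A)                  # sparse LU factorization (SuperLU under the hood)
        X = lu.solve(B)               # solve A X = B for all right-hand sides at once
        print(np.allclose(A @ X, B))  # True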

  1. Turing pattern dynamics and adaptive discretization for a super-diffusive Lotka-Volterra model.

    Science.gov (United States)

    Bendahmane, Mostafa; Ruiz-Baier, Ricardo; Tian, Canrong

    2016-05-01

    In this paper we analyze the effects of introducing the fractional-in-space operator into a Lotka-Volterra competitive model describing population super-diffusion. First, we study how cross super-diffusion influences the formation of spatial patterns: a linear stability analysis is carried out, showing that cross super-diffusion triggers Turing instabilities, whereas classical (self) super-diffusion does not. In addition we perform a weakly nonlinear analysis yielding a system of amplitude equations, whose study shows the stability of Turing steady states. A second goal of this contribution is to propose a fully adaptive multiresolution finite volume method that employs shifted Grünwald gradient approximations, and which is tailored for a larger class of systems involving fractional diffusion operators. The scheme is aimed at efficient dynamic mesh adaptation and substantial savings in computational burden. A numerical simulation of the model was performed near the instability boundaries, confirming the behavior predicted by our analysis.
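
    The adaptive scheme above relies on shifted Grünwald approximations of the fractional operator. As a generic, editor-added illustration of that ingredient only (not the authors' full finite volume discretization), the snippet below computes the standard Grünwald-Letnikov weights g_k = (-1)^k * binom(alpha, k) by recurrence.

        import numpy as np

        def grunwald_weights(alpha, n):
            """First n Grünwald-Letnikov weights for a fractional order alpha."""
            g = np.empty(n)
            g[0] = 1.0
            for k in range(1, n):
                g[k] = g[k - 1] * (k - 1.0 - alpha) / k   # recurrence for (-1)^k * C(alpha, k)
            return g

        # Weights used to combine shifted samples of the solution, e.g. for alpha = 1.8:
        print(grunwald_weights(alpha=1.8, n=5))  # [1.0, -1.8, 0.72, 0.048, 0.0144]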

  2. Landauer Bound for Analog Computing Systems

    CERN Document Server

    Diamantini, M Cristina; Trugenberger, Carlo A

    2016-01-01

    By establishing a relation between information erasure and continuous phase transitions we generalise the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.

  3. Landauer bound for analog computing systems

    Science.gov (United States)

    Diamantini, M. Cristina; Gammaitoni, Luca; Trugenberger, Carlo A.

    2016-07-01

    By establishing a relation between information erasure and continuous phase transitions we generalize the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence, every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.
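
    A literal numerical reading of the bound stated in this abstract, added for illustration: erasing an analog variable whose configurational volume spans N minimal quanta produces at least k_B ln N of entropy per degree of freedom, and hence dissipates at least k_B T ln N of heat. The numbers below are illustrative assumptions, not values from the paper.

        import math

        k_B = 1.380649e-23      # Boltzmann constant, J/K
        T = 300.0               # temperature, K
        N = 2**16               # configurational volume in units of its minimal quantum

        delta_S = k_B * math.log(N)   # entropy production per degree of freedom
        q_min = T * delta_S           # minimum heat dissipated by the erasure
        print(delta_S, q_min)         # ~1.5e-22 J/K and ~4.6e-20 J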

  4. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  5. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
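
    The sketch below is not the authors' implementation; it only illustrates the underlying mechanism the abstract describes, assuming a Linux board configured as a USB HID gadget whose keyboard function is exposed as /dev/hidg0. Writing 8-byte boot-protocol reports to that device makes the target computer see ordinary keyboard input, with no software installed on the target.

        import time

        PRESS_A = bytes([0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00])  # modifier, reserved, 6 keycodes ('a' = usage 0x04)
        RELEASE = bytes(8)                                                  # all zeros = no key pressed

        def type_a(device="/dev/hidg0"):
            """Send one key press/release pair to the host attached over USB."""
            with open(device, "wb", buffering=0) as hid:
                hid.write(PRESS_A)
                time.sleep(0.01)     # brief hold so the host registers the key
                hid.write(RELEASE)

        if __name__ == "__main__":
            type_a()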

  6. Degenerate Operators and the $1/c$ Expansion: Lorentzian Resummations, High Order Computations, and Super-Virasoro Blocks

    CERN Document Server

    Chen, Hongbin; Kaplan, Jared; Li, Daliang; Wang, Junpu

    2016-01-01

    One can obtain exact information about Virasoro conformal blocks by analytically continuing the correlators of degenerate operators. We argued in recent work that this technique can be used to explicitly resolve information loss problems in AdS$_3$/CFT$_2$. In this paper we use the technique to perform calculations in the small $1/c \propto G_N$ expansion: (1) we prove the all-orders resummation of logarithmic factors $\propto \frac{1}{c} \log z$ in the Lorentzian regime, demonstrating that $1/c$ corrections directly shift Lyapunov exponents associated with chaos, as claimed in prior work, (2) we perform another all-orders resummation in the limit of large $c$ with fixed $cz$, interpolating between the early onset of chaos and late time behavior, (3) we explicitly compute the Virasoro vacuum block to order $1/c^2$ and $1/c^3$, corresponding to $2$ and $3$ loop calculations in AdS$_3$, and (4) we derive the heavy-light vacuum blocks in theories with $\mathcal{N}=1,2$ superconformal symmetry.

  7. Music Genre Classification Systems - A Computational Approach

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought...... that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast...... to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...
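
    The following sketch is not the feature set or classifier studied in the dissertation; it is only a minimal, editor-added example of the general pipeline it concerns: raw audio in, short-time features out, then a statistical classifier. File names and labels are placeholders.

        import numpy as np
        import librosa                      # audio loading and MFCC features
        from sklearn.svm import SVC

        def mfcc_mean(path):
            """Summarize a clip by the mean of its MFCC frames (a common baseline feature)."""
            y, sr = librosa.load(path, sr=22050, mono=True)
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

        train_files = ["jazz_01.wav", "rock_01.wav"]   # placeholder training clips
        train_labels = ["jazz", "rock"]

        X = np.vstack([mfcc_mean(f) for f in train_files])
        clf = SVC(kernel="rbf").fit(X, train_labels)
        print(clf.predict([mfcc_mean("unknown_clip.wav")]))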

  8. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    Science.gov (United States)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  9. Nature-inspired computing for control systems

    CERN Document Server

    2016-01-01

    The book presents recent advances in nature-inspired computing, giving a special emphasis to control systems applications. It reviews different techniques used for simulating physical, chemical, biological or social phenomena at the purpose of designing robust, predictive and adaptive control strategies. The book is a collection of several contributions, covering either more general approaches in control systems, or methodologies for control tuning and adaptive controllers, as well as exciting applications of nature-inspired techniques in robotics. On one side, the book is expected to motivate readers with a background in conventional control systems to try out these powerful techniques inspired by nature. On the other side, the book provides advanced readers with a deeper understanding of the field and a broad spectrum of different methods and techniques. All in all, the book is an outstanding, practice-oriented reference guide to nature-inspired computing addressing graduate students, researchers and practi...

  10. Design and Application of Fundamental Geographic Information Database System Based on SuperMap in Bengbu

    Institute of Scientific and Technical Information of China (English)

    刘虎

    2012-01-01

    This article first describes the necessity of, and the conditions for, building the Bengbu fundamental geographic information database system. Then, following the data production and storage workflow, it analyses and discusses the overall design and functional modules of the database system, covering raw data preparation, conversion of CASS data to SuperMap format, quality checking of data before storage, database design, and the management and maintenance of spatial data. Particular attention is given to the two major functional modules: quality checking of incoming data and the spatial data management subsystem.

  11. Decomposability queueing and computer system applications

    CERN Document Server

    Courtois, P J

    1977-01-01

    Decomposability: Queueing and Computer System Applications presents a set of powerful methods for systems analysis. This 10-chapter text covers the theory of nearly completely decomposable systems upon which specific analytic methods are based. The first chapters deal with some of the basic elements of a theory of nearly completely decomposable stochastic matrices, including the Simon-Ando theorems and the perturbation theory. The succeeding chapters are devoted to the analysis of stochastic queuing networks that appear as a type of key model. These chapters also discuss congestion problems in

  12. Computer-aided Analysis of Physiological Systems

    Directory of Open Access Journals (Sweden)

    Balázs Benyó

    2007-12-01

    Full Text Available This paper presents the recent biomedical engineering research activity of the Medical Informatics Laboratory at the Budapest University of Technology and Economics. The research projects are carried out in the fields as follows: Computer aided identification of physiological systems; Diabetic management and blood glucose control; Remote patient monitoring and diagnostic system; Automated system for analyzing cardiac ultrasound images; Single-channel hybrid ECG segmentation; Event recognition and state classification to detect brain ischemia by means of EEG signal processing; Detection of breathing disorders like apnea and hypopnea; Molecular biology studies with DNA-chips; Evaluation of the cry of normal hearing and hard of hearing infants.

  13. Applicability of Computational Systems Biology in Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Hadrup, Niels; Audouze, Karine Marie Laure

    2014-01-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. ... This information can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method...

  14. Interactive Super Mario Bros Evolution

    DEFF Research Database (Denmark)

    Sørensen, Patrikk D.; Olsen, Jeppeh M.; Risi, Sebastian

    2016-01-01

    to encourage the evolution of desired behaviors. In this paper, we show how casual users can create controllers for Super Mario Bros through an interactive evolutionary computation (IEC) approach, without prior domain or programming knowledge. By iteratively selecting Super Mario behaviors from a set...... of candidates, users are able to guide evolution towards a variety of different behaviors, which would be difficult with an automated approach. Additionally, the user-evolved controllers perform similarly well as controllers evolved with a traditional fitness-based approach when comparing distance traveled...

  15. Low Power Dynamic Scheduling for Computing Systems

    CERN Document Server

    Neely, Michael J

    2011-01-01

    This paper considers energy-aware control for a computing system with two states: "active" and "idle." In the active state, the controller chooses to perform a single task using one of multiple task processing modes. The controller then saves energy by choosing an amount of time for the system to be idle. These decisions affect processing time, energy expenditure, and an abstract attribute vector that can be used to model other criteria of interest (such as processing quality or distortion). The goal is to optimize time average system performance. Applications of this model include a smart phone that makes energy-efficient computation and transmission decisions, a computer that processes tasks subject to rate, quality, and power constraints, and a smart grid energy manager that allocates resources in reaction to a time varying energy price. The solution methodology of this paper uses the theory of optimization for renewal systems developed in our previous work. This paper is written in tutorial form and devel...

  16. Applicability of computational systems biology in toxicology.

    Science.gov (United States)

    Kongsbak, Kristine; Hadrup, Niels; Audouze, Karine; Vinggaard, Anne Marie

    2014-07-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally. This is possible due to the existence of comprehensive databases containing information on networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of the specific chemical of interest can be fed into these networks to obtain additional information that can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research.

  17. Interactive computer-enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)

    1995-10-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  18. Cloud Computing Security in Business Information Systems

    CERN Document Server

    Ristov, Sasko; Kostoska, Magdalena

    2012-01-01

    Cloud computing providers' and customers' services are not only exposed to existing security risks, but, due to multi-tenancy, outsourcing of the application and data, and virtualization, they are exposed to emergent ones as well. Therefore, both cloud providers and customers must establish an information security system and mutual trustworthiness, including with end users. In this paper we analyze the main international and industrial standards targeting information security and their conformity with cloud computing security challenges. We find that almost all main cloud service providers (CSPs) are ISO 27001:2005 certified, at minimum. As a result, we propose an extension to the ISO 27001:2005 standard with a new control objective about virtualization, so that the standard remains generic, regardless of a company's type, size and nature, and is applicable to cloud systems as well, where virtualization is the baseline. We also define a quantitative metric and evaluate the importance factor of ISO 27001:2005 control objecti...

  19. Thermoelectric property measurements with computer controlled systems

    Science.gov (United States)

    Chmielewski, A. B.; Wood, C.

    1984-01-01

    A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.

  20. Checkpoint triggering in a computer system

    Science.gov (United States)

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
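
    A schematic rendering of the triggering logic described above, added for clarity; it is not code from the patent, and the task, monitor, and checkpoint callables are placeholders to be supplied by the caller.

        import time

        def run_with_checkpoints(task_step, read_monitor, save_checkpoint,
                                 read_interval=1.0, threshold_fraction=0.8):
            last_read = 0.0
            while True:
                state = task_step()                        # execute a slice of the task
                if state is None:                          # task finished
                    return
                now = time.monotonic()
                if now - last_read >= read_interval:       # time to read the monitor?
                    last_read = now
                    value, budget = read_monitor()         # current metric value and its budget
                    threshold = threshold_fraction * budget
                    if value >= threshold:                 # metric crossed the threshold:
                        save_checkpoint(state)             # persist task state for a later restart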

  1. A NEW SYSTEM ARCHITECTURE FOR PERVASIVE COMPUTING

    Directory of Open Access Journals (Sweden)

    Anis ISMAIL

    2011-08-01

    Full Text Available We present a new system architecture, a distributed framework designed to support pervasive computing applications. We propose a new architecture consisting of a search engine and peripheral clients that addresses issues in scalability, data sharing, data transformation and inherent platform heterogeneity. Key features of our application are a type-aware data transport that is capable of extracting data and presenting it through handheld devices (PDAs (personal digital assistants), mobiles, etc.). Pervasive computing uses web technology, portable devices, wireless communications and nomadic or ubiquitous computing systems. The web and the simple standard HTTP protocol that it is based on facilitate this kind of ubiquitous access. This can be implemented on a variety of devices - PDAs, laptops, information appliances such as digital cameras and printers. Mobile users get transparent access to resources outside their current environment. We discuss our system's architecture and its implementation. Through experimental study, we show reasonable performance and adaptation for our system's implementation for mobile devices.

  2. Compact Stellar Systems in the Fornax Cluster: Super-massive Star Clusters or Extremely Compact Dwarf Galaxies?

    CERN Document Server

    Drinkwater, M J; Gregg, M D; Phillipps, S

    2000-01-01

    We describe a population of compact objects in the centre of the Fornax Cluster which were discovered as part of our 2dF Fornax Spectroscopic Survey. These objects have spectra typical of old stellar systems, but are unresolved on photographic sky survey plates. They have absolute magnitudes -13 ... system of that galaxy. We suggest that these objects are either super-massive star clusters (intra-cluster globular clusters or tidally stripped nuclei of dwarf galaxies) or a new type of low-luminosity compact elliptical dwarf (M32-type) galaxy. The best way to test these hypotheses will be to obtain high resolution imaging and high-dispersion spectroscopy to determine their structures and mass-to-light ratios. This will allow us ...

  3. Music Genre Classification Systems - A Computational Approach

    OpenAIRE

    Ahrendt, Peter; Hansen, Lars Kai

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  4. Research on Dynamic Distributed Computing System for Small and Medium-Sized Computer Clusters

    Institute of Scientific and Technical Information of China (English)

    Le Kang; Jianliang Xu; Feng Liu

    2012-01-01

    Distributed computing is an approach by which a complex task requiring a large amount of computation is divided into small pieces that are processed by more than one computer, with the final result assembled from the partial results of each computer. This paper considers a distributed computing system running on small and medium-sized computer clusters to address the low efficiency of a single computer and to improve the efficiency of large-scale computing. The experiments show that the system can effectively improve efficiency and that it is a viable approach.
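
    The paper's own system is not reproduced here; the snippet below is only a minimal, editor-added illustration of the idea it builds on: split a large computation into pieces, compute the pieces on several workers, and assemble the final result from the partial results.

        from multiprocessing import Pool

        def partial_sum(chunk):
            """One worker's share of the computation."""
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
            with Pool(processes=4) as pool:
                partials = pool.map(partial_sum, chunks)                   # each piece computed separately
            print(sum(partials) == sum(x * x for x in range(1_000_000)))  # final result assembled: True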

  5. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by a minimum normalized signal-to-noise ratio (SNRN) and a maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of their wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)
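
    As a hedged illustration of how the two classification quantities named above are commonly combined, the snippet below normalizes a measured SNR by the basic spatial resolution, assuming the SNRN = SNR x 88.6 um / SRb convention used in common CR standards; the constant and the input numbers are assumptions to be checked against the actual standard, not values from this paper.

        def normalized_snr(snr_measured, srb_um):
            """Normalize a measured SNR to the reference basic spatial resolution (SRb in micrometres)."""
            return snr_measured * 88.6 / srb_um

        print(normalized_snr(snr_measured=150.0, srb_um=130.0))  # ~102, illustrative numbers only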

  6. TMX-U computer system in evolution

    Science.gov (United States)

    Casper, T. A.; Bell, H.; Brown, M.; Gorvad, M.; Jenkins, S.; Meyer, W.; Moller, J.; Perkins, D.

    1986-08-01

    Over the past three years, the total TMX-U diagnostic data base has grown to exceed 10 Mbytes from over 1300 channels; roughly triple the originally designed size. This acquisition and processing load has resulted in an experiment repetition rate exceeding 10 min per shot using the five original Hewlett-Packard HP-1000 computers with their shared disks. Our new diagnostics tend to be multichannel instruments, which, in our environment, can be more easily managed using local computers. For this purpose, we are using HP series 9000 computers for instrument control, data acquisition, and analysis. Fourteen such systems are operational with processed format output exchanged via a shared resource manager. We are presently implementing the necessary hardware and software changes to create a local area network allowing us to combine the data from these systems with our main data archive. The expansion of our diagnostic system using the parallel acquisition and processing concept allows us to increase our data base with a minimum of impact on the experimental repetition rate.

  7. The relationship between mean birth weight and poverty using the Townsend deprivation score and the Super Profile classification system.

    Science.gov (United States)

    Aveyard, P; Manaseki, S; Chambers, J

    2002-11-01

    Super Profiles have been used as alternative methods of characterising the deprivation of an area. Some reports suggest that Super Profiles are as accurate as established indices such as the Townsend score (TS). This was a test of this assertion. A total of 138 696 live born singleton births to Birmingham residents born between 1986 and 1996 (inclusive) were allocated to enumeration districts (EDs) by linkage from the postcode. We allocated the TS of the individual's ED. We allocated a Lifestyle and Target Market (TM) from Super Profiles by linkage to the ED. We examined the gradient between mean birth weight and the 10 Super Profile Lifestyles and compared this to the gradient between 10 Townsend groups and mean birth weight. We repeated this approach using the 40 TMs and 40 Townsend groups. We used both the median income and a census-derived deprivation measure to rank Lifestyles and TMs. The gradient between mean birth weight and area deprivation was linear for Townsend groups but not linear using either Lifestyles or TMs, whichever method of ranking Lifestyles or TMs was used. Where Lifestyles or TMs were out of line with their neighbours, the TS of that group mostly explained this. As Super Profiles are generated using nationally representative data, applying the affluence ranking to small areas can lead to inaccuracies, as shown in these data. We conclude that Super Profiles are probably unsuitable as measures of deprivation of small areas.

  8. Physical Optics Based Computational Imaging Systems

    Science.gov (United States)

    Olivas, Stephen Joseph

    There is an ongoing demand on behalf of the consumer, medical and military industries to make lighter weight, higher resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project, where the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then, form high-resolution wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The Extended Depth-of-Focus System goals were to characterize the angular and depth dependence of the PSF of a focal swept imager in order to increase the acceptably focused imaged scene depth. The goal of the Platform Motion Blur Image Restoration System was to build a system that can capture a high signal-to-noise ratio (SNR), long-exposure image which is inherently blurred while at the same time capturing motion data using additional optical sensors in order to deblur the degraded images. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera and use it to test new sampling methods for image generation and to characterize it against a traditional camera. These computational

  9. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  10. Computational modeling of shallow geothermal systems

    CERN Document Server

    Al-Khoury, Rafid

    2011-01-01

    A Step-by-step Guide to Developing Innovative Computational Tools for Shallow Geothermal Systems. Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than conventional fossil fuels. Shallow geothermal systems are increasingly utilized for heating and cooling of buildings and greenhouses. However, their utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. Projects of this nature are not getting the public support they deserve because of the uncertainties associated with

  11. Prestandardisation Activities for Computer Based Safety Systems

    DEFF Research Database (Denmark)

    Taylor, J. R.; Bologna, S.; Ehrenberger, W.

    1981-01-01

    Questions of technical safety are becoming more and more important. Due to the higher complexity of their functions, computer-based safety systems have special problems. Researchers, producers, licensing personnel and customers have met on a European basis to exchange knowledge and formulate positions..... The Commission of the European Community supports the work. Major topics comprise hardware configuration and self supervision, software design, verification and testing, documentation, system specification and concurrent processing. Preliminary results have been used for the draft of an IEC standard and for some...

  12. Tools for Embedded Computing Systems Software

    Science.gov (United States)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talk and the key figures of each workshop presentation, together with chairmen summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  13. Computer-Assisted Photo Interpretation System

    Science.gov (United States)

    Niedzwiadek, Harry A.

    1981-11-01

    A computer-assisted photo interpretation research (CAPIR) system has been developed at the U.S. Army Engineer Topographic Laboratories (ETL), Fort Belvoir, Virginia. The system is based around the APPS-IV analytical plotter, a photogrammetric restitution device that was designed and developed by Autometric specifically for interactive, computerized data collection activities involving high-resolution, stereo aerial photographs. The APPS-IV is ideally suited for feature analysis and feature extraction, the primary functions of a photo interpreter. The APPS-IV is interfaced with a minicomputer and a geographic information system called AUTOGIS. The AUTOGIS software provides the tools required to collect or update digital data using an APPS-IV, construct and maintain a geographic data base, and analyze or display the contents of the data base. Although the CAPIR system is fully functional at this time, considerable enhancements are planned for the future.

  14. Computational systems biology in cancer brain metastasis.

    Science.gov (United States)

    Peng, Huiming; Tan, Hua; Zhao, Weiling; Jin, Guangxu; Sharma, Sambad; Xing, Fei; Watabe, Kounosuke; Zhou, Xiaobo

    2016-01-01

    Brain metastases occur in 20-40% of patients with advanced malignancies. A better understanding of the mechanism of this disease will help us to identify novel therapeutic strategies. In this review, we will discuss the systems biology approaches used in this area, including bioinformatics and mathematical modeling. Bioinformatics has been used for identifying the molecular mechanisms driving brain metastasis and mathematical modeling methods for analyzing dynamics of a system and predicting optimal therapeutic strategies. We will illustrate the strategies, procedures, and computational techniques used for studying systems biology in cancer brain metastases. We will give examples on how to use a systems biology approach to analyze a complex disease. Some of the approaches used to identify relevant networks, pathways, and possibly biomarkers in metastasis will be reviewed into details. Finally, certain challenges and possible future directions in this area will also be discussed.

  15. A computer-aided continuous assessment system

    Directory of Open Access Journals (Sweden)

    B. C.H. Turton

    1996-12-01

    Full Text Available Universities within the United Kingdom have had to cope with a massive expansion in undergraduate student numbers over the last five years (Committee of Scottish University Principals, 1993; CVCP Briefing Note, 1994). In addition, there has been a move towards modularization and a closer monitoring of a student's progress throughout the year. Since the price/performance ratio of computer systems has continued to improve, Computer-Assisted Learning (CAL) has become an attractive option (Fry, 1990; Benford et al., 1994; Laurillard et al., 1994). To this end, the Universities Funding Council (UFC) has funded the Teaching and Learning Technology Programme (TLTP). However, universities also have a duty to assess as well as to teach. This paper describes a Computer-Aided Assessment (CAA) system capable of assisting in grading students and providing feedback. In this particular case, a continuously assessed course (Low-Level Languages) of over 100 students is considered. Typically, three man-days are required to mark one assessed piece of coursework from the students in this class. Any feedback on how the questions were dealt with by the student is of necessity brief. Most of the feedback is provided in a tutorial session that covers the pitfalls encountered by the majority of the students.

  16. OPTIMIZATION OF PARAMETERS OF ELEMENTS COMPUTER SYSTEM

    Directory of Open Access Journals (Sweden)

    Nesterov G. D.

    2016-03-01

    Full Text Available This work addresses the topical issue of increasing the productivity of computers and is experimental in character. It therefore describes a number of tests that were carried out and analyses their results. The article first gives the basic characteristics of the computer's modules in the regular operating mode, and then describes the procedure for adjusting their parameters during the experiment. Special attention is paid to maintaining the required thermal regime in order to avoid overheating the central processor, and the operability of the system under increased energy consumption is checked. The most critical step is tuning the central processor; as a result of the test, its optimum voltage, frequency, and memory read delays are found. The stability of the RAM characteristics, in particular the state of its buses during the experiment, is analysed. As the tests were performed within the standard range of the module characteristics, so that the safety margin built into the computer and the capacity of the system were not exhausted, further experiments were carried out with extreme overclocking under air cooling. The results obtained are also presented in this article.

  17. Looking for Super-Earths in the HD 189733 System: A Search for Transits in MOST Space-Based Photometry

    CERN Document Server

    Croll, Bryce; Rowe, Jason F; Gladman, Brett; Miller-Ricci, Eliza; Sasselov, Dimitar; Walker, Gordon A H; Kuschnig, Rainer; Lin, Douglas N C; Guenther, David B; Moffat, Anthony F J; Rucinski, Slavek M; Weiss, Werner W

    2007-01-01

    We have made a comprehensive transit search for exoplanets down to ~1.5 - 2 Earth radii in the HD 189733 system, based on 21-days of nearly uninterrupted broadband optical photometry obtained with the MOST (Microvariability & Oscillations of STars) satellite in 2006. We have searched these data for realistic limb-darkened transits from exoplanets other than the known hot Jupiter, HD 189733b, with periods ranging from about 0.4 days to one week. Monte Carlo statistical tests of the data with synthetic transits inserted into the data-set allow us to rule out additional close-in exoplanets with sizes ranging from about 0.15 - 0.31 RJ (Jupiter radii), or 1.7 - 3.5 RE (Earth radii) on orbits whose planes are near that of HD 189733b. These null results constrain theories that invoke lower-mass hot Super-Earth and hot Neptune planets in orbits similar to HD 189733b due to the inward migration of this hot Jupiter. This work also illustrates the feasibility of discovering smaller transiting planets around chromosp...
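
    The toy script below is not the MOST team's pipeline; it only illustrates the injection-and-recovery idea the abstract describes: insert a synthetic box-shaped transit into a flat light curve and check whether a standard transit search recovers its period. All parameter values are arbitrary.

        import numpy as np
        from astropy.timeseries import BoxLeastSquares

        rng = np.random.default_rng(1)
        t = np.arange(0.0, 21.0, 0.001)                    # ~21 days of near-continuous photometry
        flux = 1.0 + 1e-4 * rng.standard_normal(t.size)    # flat light curve with noise

        period, depth, duration = 2.7, 5e-4, 0.1           # injected transit (days, relative flux, days)
        flux[(t % period) < duration] -= depth             # inject the synthetic transit

        bls = BoxLeastSquares(t, flux)
        result = bls.power(np.linspace(0.5, 7.0, 5000), duration)
        print(result.period[np.argmax(result.power)])      # near the injected 2.7 days (or a harmonic)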

  18. Considerations upon energetic efficiency of a recirculating aquatic system (RAS) for super intensive fish culture

    Directory of Open Access Journals (Sweden)

    Petru David

    2009-04-01

    Full Text Available The efficiency of aquaculture using recirculating systems depends on many factors, among which the most important is the energy consumption of the system. To ensure a high level of energy conservation in a recirculating aquatic system, the intensity of water recirculation must be maximized, but this increases the energy consumed for water circulation. That is why a rigorous analysis of the energy consumption of a system of this type is required, together with optimum solutions to minimize that consumption. This paper presents a detailed analysis of the energy consumption of a recirculating aquatic system for fish breeding, as well as considerations and solutions for optimizing the energy consumption.

  19. Identification of Underspread Linear Systems with Application to Super-Resolution Radar

    CERN Document Server

    Bajwa, Waheed U; Eldar, Yonina C

    2010-01-01

    Identification of time-varying linear systems, which introduce both time-shifts (delays) and frequency-shifts (Doppler-shifts) to the input signal, is one of the central tasks in many engineering applications. This paper studies the problem of identification of "underspread linear systems," defined as time-varying linear systems whose responses lie within a unit-area region in the delay--Doppler space, by probing them with a single known input signal and analyzing the resulting system output. One of the main contributions of the paper is that it characterizes the conditions on the temporal support and the bandwidth of the input signal that ensure identification of underspread linear systems described by a set of discrete delays and Doppler-shifts---and referred to as parametric underspread linear systems---from single observations. In particular, the paper establishes that sufficiently underspread parametric linear systems are identifiable as long as the time--bandwidth product of the input signal is proporti...
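
    To make the system model concrete, the editor-added sketch below synthesizes the output of a parametric underspread channel as a superposition of a few delayed and Doppler-shifted copies of a known probing input, y(t) = sum_k a_k x(t - tau_k) exp(j 2 pi nu_k t); the path parameters are arbitrary illustrations, not values from the paper.

        import numpy as np

        fs = 1_000.0                                  # sample rate (Hz)
        t = np.arange(0.0, 1.0, 1.0 / fs)             # 1 second of observation
        x = np.exp(2j * np.pi * 50.0 * t)             # known probing input (a 50 Hz tone)

        paths = [(1.0, 0.010, 3.0),                   # (gain, delay in s, Doppler shift in Hz)
                 (0.5, 0.032, -7.0)]

        y = np.zeros_like(x)
        for gain, tau, nu in paths:
            shift = int(round(tau * fs))              # delay implemented as a sample shift
            delayed = np.concatenate([np.zeros(shift, dtype=complex), x[:x.size - shift]])
            y += gain * delayed * np.exp(2j * np.pi * nu * t)   # apply the Doppler shift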

  20. Adaptive super-twisting observer for estimation of random road excitation profile in automotive suspension systems.

    Science.gov (United States)

    Rath, J J; Veluvolu, K C; Defoort, M

    2014-01-01

    The estimation of road excitation profile is important for evaluation of vehicle stability and vehicle suspension performance for autonomous vehicle control systems. In this work, the nonlinear dynamics of the active automotive system that is excited by the unknown road excitation profile are considered for modeling. To address the issue of estimation of road profile, we develop an adaptive supertwisting observer for state and unknown road profile estimation. Under Lipschitz conditions for the nonlinear functions, the convergence of the estimation error is proven. Simulation results with Ford Fiesta MK2 demonstrate the effectiveness of the proposed observer for state and unknown input estimation for nonlinear active suspension system.

  1. Adaptive Super-Twisting Observer for Estimation of Random Road Excitation Profile in Automotive Suspension Systems

    Directory of Open Access Journals (Sweden)

    J. J. Rath

    2014-01-01

    Full Text Available The estimation of road excitation profile is important for evaluation of vehicle stability and vehicle suspension performance for autonomous vehicle control systems. In this work, the nonlinear dynamics of the active automotive system that is excited by the unknown road excitation profile are considered for modeling. To address the issue of estimation of road profile, we develop an adaptive supertwisting observer for state and unknown road profile estimation. Under Lipschitz conditions for the nonlinear functions, the convergence of the estimation error is proven. Simulation results with Ford Fiesta MK2 demonstrate the effectiveness of the proposed observer for state and unknown input estimation for nonlinear active suspension system.
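
    For readers unfamiliar with the observer structure the two records above build on, the sketch below shows a plain (non-adaptive) super-twisting differentiator: the continuous |e|^(1/2) sign(e) injection plus the integrated sign(e) term. The adaptive gains and the full suspension model of the paper are not reproduced; the gains and the test signal are illustrative assumptions.

        import numpy as np

        def super_twisting(y_meas, dt, k1=10.0, k2=50.0):
            """Estimate a measured signal and its derivative with super-twisting injection terms."""
            x1_hat, x2_hat = y_meas[0], 0.0
            out = []
            for y in y_meas:
                e = y - x1_hat                                       # output estimation error
                x1_dot = x2_hat + k1 * np.sqrt(abs(e)) * np.sign(e)  # continuous correction
                x2_dot = k2 * np.sign(e)                             # discontinuous correction
                x1_hat += dt * x1_dot                                # simple Euler integration
                x2_hat += dt * x2_dot
                out.append((x1_hat, x2_hat))
            return np.array(out)

        dt = 1e-3
        tt = np.arange(0.0, 2.0, dt)
        est = super_twisting(np.sin(2 * np.pi * tt), dt)   # est[:, 1] tracks the derivative after a transient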

  2. Visual computing model for immune system and medical system.

    Science.gov (United States)

    Gong, Tao; Cao, Xinxue; Xiong, Qin

    2015-01-01

    The natural immune system is an intelligent, self-organizing, and adaptive system containing a variety of immune cells with different types of immune mechanisms. The mutual cooperation between these immune cells reflects the intelligence of the system, and modeling it is of significant importance in medical science and engineering. In order to build a model of the immune system that is easier to understand through visualization than a traditional mathematical model, this paper proposes a visual computing model of the immune system and uses it to design a medical system based on the immune system. Some visual simulations of the immune system were performed to test the visual effect. The experimental results of the simulations show that the visual modeling approach provides a more effective way of analyzing the immune system than the traditional mathematical equations alone.

  3. Lipophilic super-absorbent swelling gels as cleaners for use on weapons systems and platforms

    Science.gov (United States)

    Increasingly stringent environmental regulations on volatile organic compounds (VOCs) and hazardous air pollutants (HAPs) demand the development of disruptive technologies for cleaning weapons systems and platforms. Currently employed techniques such as vapor degreasing, solvent, aqueous, or blast c...

  4. An Optical Wake Vortex Detection System for Super-Density Airport Operation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — OSI proposes to develop a wake vortex detection system including a group of double-ended and single-ended optical scintillometers properly deployed in the airfield...

  5. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented in the International Conference on Advanced Computational Engineering and Experimenting -ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  6. Epilepsy analytic system with cloud computing.

    Science.gov (United States)

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytics systems have played an important role in clinical diagnosis for several decades. Today, analyzing such big data to provide decision support for physicians is an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions, namely the wavelet transform, a genetic algorithm (GA), and a support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified on two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training process is accelerated by a factor of about 4.66, and the prediction time also meets real-time requirements.
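
    The compact, single-machine sketch below is not the paper's cloud implementation; it only strings together two of the analytic functions it names, a wavelet decomposition of EEG segments into sub-band energy features followed by an SVM classifier, with the GA-based parameter search omitted. The segments and labels are random placeholders.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def wavelet_features(segment, wavelet="db4", level=4):
            """Energy of each wavelet sub-band of one EEG segment."""
            coeffs = pywt.wavedec(segment, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs])

        rng = np.random.default_rng(0)
        segments = rng.standard_normal((20, 512))       # placeholder EEG segments
        labels = np.array([0] * 10 + [1] * 10)          # placeholder class labels

        X = np.vstack([wavelet_features(s) for s in segments])
        clf = SVC(kernel="rbf").fit(X, labels)
        print(clf.predict(X[:3]))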

  7. 10 CFR 35.457 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    Section 35.457 (10 CFR, Energy): Therapy-related computer systems. The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by...

  8. Knowledge and intelligent computing system in medicine.

    Science.gov (United States)

    Pandey, Babita; Mishra, R B

    2009-03-01

    Knowledge-based systems (KBS) and intelligent computing systems have been used in medical planning, diagnosis, and treatment. KBS consist of rule-based reasoning (RBR), case-based reasoning (CBR) and model-based reasoning (MBR), whereas intelligent computing methods (ICM) encompass genetic algorithms (GA), artificial neural networks (ANN), fuzzy logic (FL) and others. Combinations of methods within KBS include CBR-RBR, CBR-MBR and RBR-CBR-MBR, and combinations within ICM include ANN-GA, fuzzy-ANN, fuzzy-GA and fuzzy-ANN-GA. Combinations across KBS and ICM include RBR-ANN, CBR-ANN, RBR-CBR-ANN, fuzzy-RBR, fuzzy-CBR and fuzzy-CBR-ANN. In this paper, we have made a study of the different singular and combined methods (185 in number) applicable to the medical domain from the mid 1970s to 2008. The study is presented in tabular form, showing the methods and their salient features, processes and application areas in the medical domain (diagnosis, treatment and planning). It is observed that most of the methods are used in medical diagnosis, very few are used for planning, and a moderate number in treatment. The study and its presentation in this context would be helpful for novice researchers in the area of medical expert systems.

  9. Jupiter-like planets as dynamical barriers to inward-migrating super-Earths: a new understanding of the origin of Uranus and Neptune and predictions for extrasolar planetary systems

    Science.gov (United States)

    Morbidelli, Alessandro; Izidoro Da Costa, Andre'; Raymond, Sean

    2014-11-01

    Planets of 1-4 times Earth's size on orbits shorter than 100 days exist around 30-50% of all Sun-like stars. These ``hot super-Earths'' (or ``mini-Neptunes''), or their building blocks, might have formed on wider orbits and migrated inward due to interactions with the gaseous protoplanetary disk. The Solar System is statistically unusual in its lack of hot super-Earths. Here, we use a suite of dynamical simulations to show that gas-giant planets act as barriers to the inward migration of super-Earths initially placed on more distant orbits. Jupiter's early formation may have prevented Uranus and Neptune (and perhaps Saturn's core) from becoming hot super-Earths. It may actually have been crucial to the very formation of Uranus and Neptune. In fact, the large spin obliquities of these two planets argue that they experienced a stage of giant impacts from multi-Earth mass planetary embryos. We show that the dynamical barrier offered by Jupiter favors the mutual accretion of multiple migrating planetary embryos, favoring the formation of a few massive objects like Uranus and Neptune. Our model predicts that the populations of hot super-Earth systems and Jupiter-like planets should be anti-correlated: gas giants (especially if they form early) should be rare in systems with many hot super-Earths. Testing this prediction will constitute a crucial assessment of the validity of the migration hypothesis for the origin of close-in super-Earths.

  10. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  11. Final Report on the Automated Computer Science Education System.

    Science.gov (United States)

    Danielson, R. L.; And Others

    At the University of Illinois at Urbana, a computer based curriculum called Automated Computer Science Education System (ACSES) has been developed to supplement instruction in introductory computer science courses or to assist individuals interested in acquiring a foundation in computer science through independent study. The system, which uses…

  12. CleverFarm - A SuperSCADA system for wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Giebel, G. (ed.); Juhl, A.; Gram Hansen, K.; Biebhardt, J. (and others)

    2004-08-01

    The CleverFarm project started out to build an integrated monitoring system for wind farms, where all information would be available and could be used across the wind farm for maintenance and component health assessments. This would enable wind farm operators to prioritise their efforts, since they have a good view of the farm status from home. A large emphasis was placed on the integration of condition monitoring approaches in the central system, enabling estimates of the remaining lifetime of components, especially in the nacelle. During the 3.5 years of the project, software and hardware were developed and installed in two wind farms in Denmark and Germany. The connected hardware included two different condition monitoring systems based on vibration sensors from Gram&Juhl and ISET, plus a camera system developed by Overspeed. Additionally, short-term predictions of the wind farm output were delivered by DMI and Risø's Prediktor system throughout the period of the project. All these diverse information sources are integrated through a web interface based on Java Server Pages. The software was developed in Java, and is delivered as so-called CleverBeans. The main part of the software is open-sourced. The report contains the experiences and results of a one-year experimental period. This report is a slightly edited version of the final publishable report to the EU Commission as part of the requirements of the CleverFarm project.

  13. Techniques for High Contrast Imaging in Multi-Star Systems I: Super-Nyquist Wavefront Control

    CERN Document Server

    Thomas, Sandrine J; Bendek, Eduardo

    2015-01-01

    Direct imaging of extra-solar planets is now a reality with the deployment and commissioning of the first generation of specialized ground-based instruments (GPI, SPHERE, P1640 and SCExAO). These systems allow imaging of planets $10^7$ times fainter than their host star. For space-based missions (EXCEDE, EXO-C, EXO-S, WFIRST), various teams have demonstrated laboratory contrasts reaching $10^{-10}$ within a few diffraction limits from the star. However, all of these current and future systems are designed to detect faint planets around a single host star or unresolved multiples, while most non M-dwarf stars such as Alpha Centauri belong to multi-star systems. Direct imaging around binaries/multiple systems at a level of contrast allowing Earth-like planet detection is challenging because the region of interest is contaminated by the host star's companion as well as the host itself. Generally, the light leakage is caused by both diffraction and aberrations in the system. Moreover, the region of interest usually falls ou...

  14. Neural circuits as computational dynamical systems.

    Science.gov (United States)

    Sussillo, David

    2014-04-01

    Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to aid in this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.
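
    To make the "neural circuit as a dynamical system" framing concrete, the editor-added sketch below iterates a vanilla discrete-time RNN, x_{t+1} = tanh(W x_t + B u_t), with random (untrained) weights; it is a generic illustration, not a model from the review.

        import numpy as np

        rng = np.random.default_rng(0)
        n_units, n_inputs, n_steps = 100, 2, 200

        W = rng.standard_normal((n_units, n_units)) * (1.5 / np.sqrt(n_units))  # recurrent weights
        B = rng.standard_normal((n_units, n_inputs))                             # input weights
        u = np.zeros((n_steps, n_inputs))
        u[50:60, 0] = 1.0                       # a brief input pulse

        x = np.zeros(n_units)
        trajectory = []
        for step in range(n_steps):
            x = np.tanh(W @ x + B @ u[step])    # the network state evolves as a dynamical system
            trajectory.append(x.copy())
        trajectory = np.array(trajectory)       # (time, units): population dynamics to analyze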

  15. Controlling Energy Demand in Mobile Computing Systems

    CERN Document Server

    Ellis, Carla

    2007-01-01

    This lecture provides an introduction to the problem of managing the energy demand of mobile devices. Reducing energy consumption, primarily with the goal of extending the lifetime of battery-powered devices, has emerged as a fundamental challenge in mobile computing and wireless communication. The focus of this lecture is on a systems approach where software techniques exploit state-of-the-art architectural features rather than relying only upon advances in lower-power circuitry or the slow improvements in battery technology to solve the problem. Fortunately, there are many opportunities to i

  16. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  17. Super-High Temperature Alloys and Composites from NbW-Cr Systems

    Energy Technology Data Exchange (ETDEWEB)

    Shailendra Varma

    2008-12-31

    Nickel base superalloys must be replaced if the demand for the materials continues to rise for applications beyond 1000°C, which is the upper limit for such alloys at this time. There are non-metallic materials available for such high temperature applications, but they all present processing difficulties because of their lack of ductility. Metallic systems offer a chance to find materials with adequate room temperature ductility. Obviously the system must contain elements with high melting points. Nb has been chosen by many investigators and has the potential to be a candidate if alloyed properly. This research is exploring the Nb-W-Cr system for the possible choice of alloys to be used as a high temperature material.

  18. Rotating target wheel system for super-heavy element production at ATLAS

    CERN Document Server

    Greene, J P; Falout, J; Janssens, R V F

    2004-01-01

    A new scattering chamber housing a large diameter rotating target wheel has been designed and constructed in front of the Fragment Mass Analyzer (FMA) for the production of very heavy nuclei (Z greater than 100) using beams from the Argonne Tandem Linear Accelerator System (ATLAS). In addition to the target and drive system, the chamber is extensively instrumented in order to monitor target performance and deterioration. Capabilities also exist to install rotating entrance and exit windows for gas cooling of the target within the scattering chamber. The design and initial tests are described.

  19. Short-lived radioactivity in the early Solar System: the Super-AGB star hypothesis

    OpenAIRE

    Lugaro, Maria; Doherty, Carolyn; Karakas, A. I.; Maddison, S. T.; Liffman, K.; García-Hernández, D.A.; Siess, Lionel; Lattanzio, J. C.

    2012-01-01

    The composition of the most primitive solar system condensates, such as calcium-aluminum-rich inclusions (CAIs) and micron-sized corundum grains, show that short-lived radionuclides (SLR), e.g. 26Al, were present in the early solar system. Their abundances require a local or stellar origin, which, however, is far from being understood. We present for the first time the abundances of several SLR up to 60Fe predicted from stars with initial mass in the range approximately 7-11M⊙. These stars ev...

  20. Reliability of system identification technique in super high-rise building

    Directory of Open Access Journals (Sweden)

    Ayumi Ikeda

    2015-07-01

    Full Text Available A smart physical-parameter-based system identification method has been proposed in the previous paper. This method deals with time-variant nonparametric identification of natural frequencies and modal damping ratios using ARX (Auto-Regressive eXogenous) models and has been applied to high-rise buildings during the 2011 off the Pacific coast of Tohoku earthquake. In this perspective article, the current state of knowledge in this class of system identification methods is explained briefly, and the reliability of this smart method is discussed through comparison with the result obtained by a more reliable technique.

  1. Analysis of Forensic Super Timelines

    Science.gov (United States)

    2012-06-14

    (Abstract not available in this record; the extracted front matter lists figures of timeline events (a hacker disconnecting from the user's system, the user clicking off the screen saver, closing a Solitaire program, and logging off the system) and cites Guðjónsson, K. (2010), "Mastering the super timeline with log2timeline," SANS Gold Paper, accepted June 29, 2010.)

  2. CleverFarm - A superSCADA system for wind farms

    DEFF Research Database (Denmark)

    Juhl, A.; Hansen, K.G.; Giebhardt, J.;

    2004-01-01

    The CleverFarm project started out to build an integrated monitoring system for wind farms, where all information would be available and could be used across the wind farm for maintenance and component health assessments. This would enable wind farm operators to prioritise their efforts, since the...

  3. The Spartan attitude control system - Ground support computer

    Science.gov (United States)

    Schnurr, R. G., Jr.

    1986-01-01

    The Spartan Attitude Control System (ACS) contains a command and control computer. This computer is optimized for the activities of the flight and contains very little human interface hardware and software. The computer system provides technicians testing the Spartan ACS with a convenient command-oriented interface to the flight ACS computer. The system also decodes and time tags data automatically sent out by the flight computer as key events occur. The duration and magnitude of all system maneuvers are also derived and displayed by this system. The Ground Support Computer is also the primary Ground Support Equipment for the flight sequencer, which controls all payload maneuvers and long-term program timing.

  4. Bone metastasis versus bone marrow metastasis? Integration of diagnosis by 18F-fluorodeoxyglucose positron emission/computed tomography in advanced malignancy with super bone scan: Two case reports and literature review

    Directory of Open Access Journals (Sweden)

    Chia-Yang Lin

    2013-04-01

    Full Text Available Super scan pattern on technetium-99m methyldiphosphonate (Tc-99m MDP) bone scintigraphy is a special condition of extremely high bone uptake relative to soft tissue, with absent or faint renal radioactivity visualization, which is usually seen in diffuse bone metastases or discrete endocrine entities. Here, two cases with super bone scan are presented. One was a young man diagnosed with gastric cancer. The other was a middle-aged woman with a history of breast cancer with recent recurrence. Both cases had 18-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) diagnosis simultaneously. Based on the 18F-FDG PET/CT imaging, diffusely increased 18F-FDG avidity in the spine/pelvis on PET and subtle erosion of cortical bone on CT were seen. The cytological results of bone marrow biopsy showed evidence of malignant metastasis. However, there were several focal discrepant findings between the 18F-FDG PET/CT and the Tc-99m MDP bone scan. Integrating both imaging findings and the result of bone marrow biopsy, we believe that the disseminated malignant spread in bone marrow is the primary alteration underlying the super bone scan and that it also results from neoplasm-related endocrine factors.

  5. State exact reconstruction for switched linear systems via a super-twisting algorithm

    Science.gov (United States)

    Bejarano, Francisco J.; Fridman, Leonid

    2011-05-01

    This article discusses the problem of state reconstruction synthesis for switched linear systems. Based only on the continuous output information, an observer is proposed ensuring the reconstruction of the entire state (continuous and discrete) in finite time. For the observer design an exact sliding mode differentiator is used, which allows the finite time convergence of the observer trajectories to the actual trajectories. The design scheme includes both cases: zero control input and nonzero control input. Simulations illustrate the effectiveness of the proposed observer.

  6. Computer system for monitoring power boiler operation

    Energy Technology Data Exchange (ETDEWEB)

    Taler, J.; Weglowski, B.; Zima, W.; Duda, P.; Gradziel, S.; Sobota, T.; Cebula, A.; Taler, D. [Cracow University of Technology, Krakow (Poland). Inst. for Process & Power Engineering

    2008-02-15

    The computer-based boiler performance monitoring system was developed to perform thermal-hydraulic computations of the boiler working parameters in an on-line mode. Measurements of temperatures, heat flux, pressures, mass flowrates, and gas analysis data were used to perform the heat transfer analysis in the evaporator, furnace, and convection pass. A new construction technique of heat flux tubes for determining heat flux absorbed by membrane water-walls is also presented. The current paper presents the results of heat flux measurement in coal-fired steam boilers. During changes of the boiler load, the necessary natural water circulation cannot be exceeded. A rapid increase of pressure may cause fading of the boiling process in water-wall tubes, whereas a rapid decrease of pressure leads to water boiling in all elements of the boiler's evaporator - water-wall tubes and downcomers. Both cases can cause flow stagnation in the water circulation leading to pipe cracking. Two flowmeters were assembled on central downcomers, and an investigation of natural water circulation in an OP-210 boiler was carried out. On the basis of these measurements, the maximum rates of pressure change in the boiler evaporator were determined. The on-line computation of the conditions in the combustion chamber allows for real-time determination of the heat flowrate transferred to the power boiler evaporator. Furthermore, with a quantitative indication of surface cleanliness, selective sootblowing can be directed at specific problem areas. A boiler monitoring system is also incorporated to provide details of changes in boiler efficiency and operating conditions following sootblowing, so that the effects of a particular sootblowing sequence can be analysed and optimized at a later stage.

  7. Engineering Control Systems and Computing in the 1990s

    OpenAIRE

    Casti, J.L.

    1985-01-01

    The relationship between computing hardware/software and engineering control systems is projected into the next decade, and conjectures are made as to the areas of control and system theory that will most benefit from various types of computing advances.

  8. Computer Based Information Systems and the Middle Manager.

    Science.gov (United States)

    Why do some computer-based information systems succeed while others fail? It concludes with eleven recommended areas that middle management must understand in order to effectively use computer-based information systems. (Modified author abstract)

  9. Potential of Cognitive Computing and Cognitive Systems

    Science.gov (United States)

    Noor, Ahmed K.

    2014-11-01

    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work, and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments, incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces, and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized / collaborative, learning and effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity. http://www.aee.odu.edu/cognitivecomp

  10. COMPUTER-BASED REASONING SYSTEMS: AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    CIPRIAN CUCU

    2012-12-01

    Full Text Available Argumentation is nowadays seen both as a skill that people use in various aspects of their lives and as an educational technique that can support the transfer or creation of knowledge, thus aiding in the development of other skills (e.g., communication, critical thinking or attitudes). However, teaching argumentation and teaching with argumentation is still a rare practice, mostly due to the lack of available resources such as time or expert human tutors that are specialized in argumentation. Intelligent Computer Systems (i.e., systems that implement an inner representation of particular knowledge and try to emulate the behavior of humans) could allow more people to understand the purpose, techniques and benefits of argumentation. The proposed paper investigates the state-of-the-art concepts of computer-based argumentation used in education and tries to develop a conceptual map, showing benefits, limitations and relations between various concepts, focusing on the duality "learning to argue – arguing to learn".

  11. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  12. [Computer modeling of electrodynamic processes in SHF-based water disinfection and heating system as part of the spacecrew life support system].

    Science.gov (United States)

    Klimarev, S I; Zaĭtsev, K A

    2012-01-01

    To optimize the design of the SHF-based potable water disinfection and heating subsystem within the life support system (LSS), computer modeling of the super-high frequency electromagnetic field in SHF-based waveguide-coaxial and coaxial running water heaters was performed. Software package CST Microwave Studio 2010 was used as the main instrument in the investigation. Results of the investigation can contribute to the development and prototyping of an SHF-based water heater as an integral part of an advanced life support system for spacecrews.

  13. Spraying of Super Fine Powders With HVOF and Axial Plasma Thermal Spray Systems

    Institute of Scientific and Technical Information of China (English)

    Alan Burgess; Götz Matthäus

    2004-01-01

    The use of fine powders in thermal spray can lead to many advantages. These advantages include denser coatings, coatings with increased wear resistance, coatings with smoother surface finish, coatings that can be applied to internal surfaces, and less expensive coatings. The use of fine powders also has a disadvantage in that they can have poor flow characteristics. The paper will discuss a feeder that is able to feed fine powders to overcome this difficulty, and the coating equipment, both axial plasma and HVOF systems, used to deposit these materials to produce smooth dense coatings.

  14. Performance analysis of super-orthogonal space-frequency trellis coded OFDM system

    CSIR Research Space (South Africa)

    Sokoya, O

    2009-08-01

    Full Text Available ... and (8) and substituting it in (9) gives equation (10): P(S → Ŝ | G) = Pr{ ||G1j(S1 − Ŝ1)||² + ||G2j(S2 − Ŝ2)||² > 0 } = Pr{ ||Gj Δ||² > 0 }, (10) where Gj = [G1j G2j], Δ is the block codeword matrix that characterizes the SOSFTC-OFDM system, and ||·|| stands for the norm of the matrix element. The expression for Δ is given in equation (11)...

  15. New Extreme Trans-Neptunian Objects: Towards a Super-Earth in the Outer Solar System

    CERN Document Server

    Sheppard, Scott S

    2016-01-01

    We are conducting a wide and deep survey for extreme distant solar system objects. Our goal is to understand the high perihelion objects Sedna and 2012 VP113 and determine if an unknown massive planet exists in the outer solar system. The discovery of new extreme objects from our survey of some 1080 square degrees of sky to over 24th magnitude in the r-band is reported. Two of the new objects, 2014 SR349 and 2013 FT28, are extreme detached trans-Neptunian objects, which have semi-major axes greater than 150 AU and perihelia well beyond Neptune (q>40 AU). Both new objects have orbits with arguments of perihelia within the range of the clustering of this angle seen in the other known extreme objects. One of these objects, 2014 SR349, has a longitude of perihelion similar to the other extreme objects, but 2013 FT28, which may have more significant Neptune interactions, is about 180 degrees away or anti-aligned in its longitude of perihelion. We also discovered the first outer Oort cloud object with a perihelion...

  16. Multiaxis, Lightweight, Computer-Controlled Exercise System

    Science.gov (United States)

    Haynes, Leonard; Bachrach, Benjamin; Harvey, William

    2006-01-01

    The multipurpose, multiaxial, isokinetic dynamometer (MMID) is a computer-controlled system of exercise machinery that can serve as a means for quantitatively assessing a subject's muscle coordination, range of motion, strength, and overall physical condition with respect to a wide variety of forces, motions, and exercise regimens. The MMID is easily reconfigurable and compactly stowable and, in comparison with prior computer-controlled exercise systems, it weighs less, costs less, and offers more capabilities. Whereas a typical prior isokinetic exercise machine is limited to operation in only one plane, the MMID can operate along any path. In addition, the MMID is not limited to the isokinetic (constant-speed) mode of operation. The MMID provides for control and/or measurement of position, force, and/or speed of exertion in as many as six degrees of freedom simultaneously; hence, it can accommodate more complex, more nearly natural combinations of motions and, in so doing, offers greater capabilities for physical conditioning and evaluation. The MMID (see figure) includes as many as eight active modules, each of which can be anchored to a floor, wall, ceiling, or other fixed object. A cable is payed out from a reel in each module to a bar or other suitable object that is gripped and manipulated by the subject. The reel is driven by a DC brushless motor or other suitable electric motor via a gear reduction unit. The motor can be made to function as either a driver or an electromagnetic brake, depending on the required nature of the interaction with the subject. The module includes a force and a displacement sensor for real-time monitoring of the tension in and displacement of the cable, respectively. In response to commands from a control computer, the motor can be operated to generate a required tension in the cable, to displace the cable a required distance, or to reel the cable in or out at a required speed. The computer can be programmed, either locally or via

  17. 14 CFR 415.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 415.123... Launch Vehicle From a Non-Federal Launch Site § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  18. Super-twisting sliding mode differentiation for improving PD controllers performance of second order systems.

    Science.gov (United States)

    Salgado, Ivan; Chairez, Isaac; Camacho, Oscar; Yañez, Cornelio

    2014-07-01

    The main problem in designing a proportional derivative (PD) controller is obtaining the derivative of the output error signal when it is contaminated with high-frequency noise. To overcome this disadvantage, the super-twisting algorithm (STA) is applied in closed loop with a PD structure for multi-input multi-output (MIMO) second order nonlinear systems. The stability conditions were analyzed in terms of a strict non-smooth Lyapunov function and the solution of Riccati equations. A set of numerical tests was designed to show the advantages of implementing PD controllers that use the STA as a robust exact differentiator. The first numerical example showed the stabilization of an inverted pendulum. The second example was designed to solve the tracking problem of a two-link robot manipulator.
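
    As a concrete illustration, the following minimal Python sketch implements a super-twisting (Levant-type) robust exact differentiator and uses its output as the derivative term of a PD law. The gains lam1 and lam2, the sampling step, and the noisy error signal are illustrative assumptions, not the tuning used in the paper.

      # Sketch: super-twisting robust exact differentiator feeding a PD control law.
      # Gains and signals are illustrative; they are not taken from the paper.
      import numpy as np

      def sta_differentiator(f, dt, lam1=6.0, lam2=8.0):
          """Estimate the time derivative of the sampled signal f (1-D array)."""
          z0, z1 = f[0], 0.0            # z0 tracks f, z1 converges to df/dt
          df = np.zeros_like(f)
          for k, fk in enumerate(f):
              e = z0 - fk
              z0 += dt * (-lam1 * np.sqrt(abs(e)) * np.sign(e) + z1)
              z1 += dt * (-lam2 * np.sign(e))
              df[k] = z1
          return df

      # Usage: PD law u = -kp*e - kd*de, with de supplied by the differentiator
      dt = 1e-3
      t = np.arange(0.0, 2.0, dt)
      e = np.sin(2 * np.pi * t) + 0.01 * np.random.randn(t.size)   # noisy error signal
      de = sta_differentiator(e, dt)
      kp, kd = 10.0, 2.0
      u = -kp * e - kd * de

    Used this way, the internal state z1 replaces a noise-amplifying finite-difference estimate of the error derivative.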

  19. Intelligent Computer Vision System for Automated Classification

    Science.gov (United States)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
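
    For readers who want to experiment with a broadly similar pipeline, the short sketch below combines PCA-based dimensionality reduction with a neural-network classifier in scikit-learn. It uses placeholder random data and standard training, not the paper's cork-tile features or its GLPτS optimization method, so it only illustrates the structure of the approach.

      # Sketch of a PCA + neural-network classification pipeline (placeholder data).
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline

      # Placeholder features standing in for texture descriptors of tile images
      rng = np.random.default_rng(0)
      X = rng.random((200, 64))            # 200 samples, 64 image features
      y = rng.integers(0, 4, 200)          # 4 hypothetical tile classes

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = make_pipeline(PCA(n_components=10),
                          MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                        random_state=0))
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))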

  20. Computational dynamics of acoustically driven microsphere systems.

    Science.gov (United States)

    Glosser, Connor; Piermarocchi, Carlo; Li, Jie; Dault, Dan; Shanker, B

    2016-01-01

    We propose a computational framework for the self-consistent dynamics of a microsphere system driven by a pulsed acoustic field in an ideal fluid. Our framework combines a molecular dynamics integrator describing the dynamics of the microsphere system with a time-dependent integral equation solver for the acoustic field that makes use of fields represented as surface expansions in spherical harmonic basis functions. The presented approach allows us to describe the interparticle interaction induced by the field as well as the dynamics of trapping in counter-propagating acoustic pulses. The integral equation formulation leads to equations of motion for the microspheres describing the effect of nondissipative drag forces. We show (1) that the field-induced interactions between the microspheres give rise to effective dipolar interactions, with effective dipoles defined by their velocities and (2) that the dominant effect of an ultrasound pulse through a cloud of microspheres gives rise mainly to a translation of the system, though we also observe both expansion and contraction of the cloud determined by the initial system geometry.

  1. Monitoring super-volcanoes: Geophysical and geochemical signals at Yellowstone and other large caldera systems

    Science.gov (United States)

    Lowenstern, J. B.; Smith, R.B.; Hill, D.P.

    2006-01-01

    Earth's largest calderas form as the ground collapses during immense volcanic eruptions, when hundreds to thousands of cubic kilometres of magma are explosively withdrawn from the Earth's crust over a period of days to weeks. Continuing long after such great eruptions, the resulting calderas often exhibit pronounced unrest, with frequent earthquakes, alternating uplift and subsidence of the ground, and considerable heat and mass flux. Because many active and extinct calderas show evidence for repetition of large eruptions, such systems demand detailed scientific study and monitoring. Two calderas in North America, Yellowstone (Wyoming) and Long Valley (California), are in areas of youthful tectonic complexity. Scientists strive to understand the signals generated when tectonic, volcanic and hydrothermal (hot ground water) processes intersect. One obstacle to accurate forecasting of large volcanic events is humanity's lack of familiarity with the signals leading up to the largest class of volcanic eruptions. Accordingly, it may be difficult to recognize the difference between smaller and larger eruptions. To prepare ourselves and society, scientists must scrutinize a spectrum of volcanic signals and assess the many factors contributing to unrest and toward diverse modes of eruption. © 2006 The Royal Society.

  2. Monitoring super-volcanoes: geophysical and geochemical signals at Yellowstone and other large caldera systems.

    Science.gov (United States)

    Lowenstern, Jacob B; Smith, Robert B; Hill, David P

    2006-08-15

    Earth's largest calderas form as the ground collapses during immense volcanic eruptions, when hundreds to thousands of cubic kilometres of magma are explosively withdrawn from the Earth's crust over a period of days to weeks. Continuing long after such great eruptions, the resulting calderas often exhibit pronounced unrest, with frequent earthquakes, alternating uplift and subsidence of the ground, and considerable heat and mass flux. Because many active and extinct calderas show evidence for repetition of large eruptions, such systems demand detailed scientific study and monitoring. Two calderas in North America, Yellowstone (Wyoming) and Long Valley (California), are in areas of youthful tectonic complexity. Scientists strive to understand the signals generated when tectonic, volcanic and hydrothermal (hot ground water) processes intersect. One obstacle to accurate forecasting of large volcanic events is humanity's lack of familiarity with the signals leading up to the largest class of volcanic eruptions. Accordingly, it may be difficult to recognize the difference between smaller and larger eruptions. To prepare ourselves and society, scientists must scrutinize a spectrum of volcanic signals and assess the many factors contributing to unrest and toward diverse modes of eruption.

  3. Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System

    Science.gov (United States)

    Schmidt, Karsten

    2008-01-01

    In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. Making it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…

  4. Toward a Deterministic Model of Planetary Formation VI: Dynamical Interaction and Coagulation of Multiple Rocky Embryos and Super-Earth Systems around Solar Type Stars

    CERN Document Server

    Ida, S

    2010-01-01

    Radial velocity and transit surveys indicate that solar-type stars bearing super-Earths, with masses and periods up to ~ 20 M_E and a few months, are more common than those with Jupiter-mass gas giants. In many cases, these super-Earths are members of multiple-planet systems in which their mutual dynamical interaction has influenced their formation and evolution. In this paper, we modify an existing numerical population synthesis scheme to take into account protoplanetary embryos' interaction with their evolving natal gaseous disk, as well as their close scatterings and resonant interaction with each other. We show that it is possible for a group of compact embryos to emerge interior to the ice line, grow, migrate, and congregate into closely-packed convoys which stall in the proximity of their host stars. After the disk-gas depletion, they undergo orbit crossing, close scattering, and giant impacts to form multiple rocky Earths or super-Earths in non-resonant orbits around ~ 0.1AU with moderate eccentricities of ~...

  5. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

    However, high setup and design costs make ASICs economically viable only for high volume production. Therefore, FPGAs are increasingly being used in low and medium volume markets. The evolution of FPGAs has reached a point where multiple processor cores, dedicated accelerators, and a large number of interfaces can be integrated on a single device. This thesis consists of five parts that address performance aspects of synthesizable computing systems on FPGAs. First, it is evaluated how synthesizable processor cores can exploit current state-of-the-art FPGA architectures. This evaluation results in a processor architecture optimized for a high throughput on modern FPGA architectures. The current hardware implementation, the Tinuso I core, can be clocked as high as 376 MHz on a Xilinx Virtex 6 device and consumes fewer hardware resources than similar commercial processor configurations. The Tinuso

  6. The fundamentals of computational intelligence system approach

    CERN Document Server

    Zgurovsky, Mikhail Z

    2017-01-01

    This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to an important novel CI technology: fuzzy logic (FL) systems and fuzzy neural networks (FNN). Different FNNs, including a new class of FNN, cascade neo-fuzzy neural networks, are considered, and their training algorithms are described and analyzed. The applications of FNN to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty, a novel theory of fuzzy portfolio optimization free of the drawbacks of the classical Markowitz model, as well as an application to portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of forecasting corporate bankruptcy risk under incomplete and fuzzy information, as well as new methods based on fuzzy set theory and fuzzy neural networks, and results of their application for bankruptcy ris...

  7. Living with Computers. Young Danes' Uses of and Thoughts on the Uses of Computers

    DEFF Research Database (Denmark)

    Stald, Gitte Bang

    1998-01-01

    Young Danes, computers, users, super users, non-users, computer access.

  8. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  9. A computing system for LBB considerations

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, K.; Miettinen, J.; Raiko, H.; Keskinen, R.

    1997-04-01

    A computing system has been developed at VTT Energy for making efficient leak-before-break (LBB) evaluations of piping components. The system consists of fracture mechanics and leak rate analysis modules which are linked via an interactive user interface LBBCAL. The system enables quick tentative analysis of standard geometric and loading situations by means of fracture mechanics estimation schemes such as the R6, FAD, EPRI J, Battelle, plastic limit load and moments methods. Complex situations are handled with a separate in-house made finite-element code EPFM3D which uses 20-noded isoparametric solid elements, automatic mesh generators and advanced color graphics. Analytical formulas and numerical procedures are available for leak area evaluation. A novel contribution for leak rate analysis is the CRAFLO code which is based on a nonequilibrium two-phase flow model with phase slip. Its predictions are essentially comparable with those of the well known SQUIRT2 code; additionally it provides outputs for temperature, pressure and velocity distributions in the crack depth direction. An illustrative application to a circumferentially cracked elbow indicates expectedly that a small margin relative to the saturation temperature of the coolant reduces the leak rate and is likely to influence the LBB implementation to intermediate diameter (300 mm) primary circuit piping of BWR plants.

  10. Computer vision for driver assistance systems

    Science.gov (United States)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks and their importance is still increasing due to technological advances and an increase of social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of a sequential and a parallel sensor and information processing. Three main tasks namely the initial segmentation (object detection), the object tracking and the object classification are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.

  11. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. "Advances in Future Computer and Control Systems" presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  12. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. "Advances in Future Computer and Control Systems" presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  13. Powering a Home with Just 25 Watts of Solar PV. Super-Efficient Appliances Can Enable Expanded Off-Grid Energy Service Using Small Solar Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Phadke, Amol A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobson, Arne [Schatz Energy Research Center, Arcata, CA (United States); Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lee, Ga Rick [Schatz Energy Research Center, Arcata, CA (United States); Alstone, Peter [Univ. of California, Berkeley, CA (United States); Khare, Amit [Schatz Energy Research Center, Arcata, CA (United States)

    2015-04-01

    Highly efficient direct current (DC) appliances have the potential to dramatically increase the affordability of off-grid solar power systems used for rural electrification in developing countries by reducing the size of the systems required. For example, the combined power requirement of a highly efficient color TV, four DC light emitting diode (LED) lamps, a mobile phone charger, and a radio is approximately 18 watts and can be supported by a small solar power system (at 27 watts peak, Wp). Price declines and efficiency advances in LED technology are already enabling rapidly increased use of small off-grid lighting systems in Africa and Asia. Similar progress is also possible for larger household-scale solar home systems that power appliances such as lights, TVs, fans, radios, and mobile phones. When super-efficient appliances are used, the total cost of solar home systems and their associated appliances can be reduced by as much as 50%. The results vary according to the appliances used with the system. These findings have critical relevance for efforts to provide modern energy services to the 1.2 billion people worldwide without access to the electrical grid and one billion more with unreliable access. However, policy and market support are needed to realize rapid adoption of super-efficient appliances.
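
    A back-of-the-envelope sketch of the sizing logic follows. The appliance wattages echo the figures quoted above (a combined load of roughly 18 watts), while the daily usage hours, peak sun hours, and loss factor are illustrative assumptions rather than values from the report.

      # Rough off-grid PV sizing sketch; usage hours and derating are assumptions.
      PEAK_SUN_HOURS = 4.5          # assumed average full-sun hours per day
      SYSTEM_LOSSES = 0.7           # assumed derating for battery and wiring losses

      loads_w = {"LED TV": 8.0, "4 x LED lamps": 5.0, "phone charger": 3.0, "radio": 2.0}
      hours_per_day = {"LED TV": 5, "4 x LED lamps": 6, "phone charger": 2, "radio": 5}

      daily_wh = sum(loads_w[k] * hours_per_day[k] for k in loads_w)
      pv_wp_needed = daily_wh / (PEAK_SUN_HOURS * SYSTEM_LOSSES)

      print(f"combined load: {sum(loads_w.values()):.0f} W")      # about 18 W
      print(f"daily energy:  {daily_wh:.0f} Wh")
      print(f"PV array size: {pv_wp_needed:.0f} Wp")              # comes out near the ~27 Wp cited

    Under these assumed usage hours the required array lands close to the 27 Wp figure quoted in the text; different usage patterns or solar resources would shift the result accordingly.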

  14. Second-Order Super-Twisting Sliding Mode Control for Finite-Time Leader-Follower Consensus with Uncertain Nonlinear Multiagent Systems

    Directory of Open Access Journals (Sweden)

    Nan Liu

    2015-01-01

    Full Text Available The consensus tracking problem of leader-follower multiagent systems is resolved via a second-order super-twisting sliding mode control approach. The followers' states can remain consistent with the leader's states on the sliding surfaces. The proposed approach ensures finite-time consensus if the directed graph of the nonlinear system has a directed path, under the condition that the leader's control input is unavailable to any follower. This is proved using finite-time Lyapunov stability theory. Simulation results verify the effectiveness of the proposed approach.

  15. Reachability computation for hybrid systems with Ariadne

    NARCIS (Netherlands)

    L. Benvenuti; D. Bresolin; A. Casagrande; P.J. Collins (Pieter); A. Ferrari; E. Mazzi; T. Villa; A. Sangiovanni-Vincentelli

    2008-01-01

    Ariadne is an in-progress open environment to design algorithms for computing with hybrid automata, that relies on a rigorous computable analysis theory to represent geometric objects, in order to achieve provable approximation bounds along the computations. In this paper we discuss the

  16. N=2 Super - $W_{3}$ Algebra and N=2 Super Boussinesq Equations

    CERN Document Server

    Ivanov, E; Malik, R P

    1995-01-01

    We study classical $N=2$ super-$W_3$ algebra and its interplay with $N=2$ supersymmetric extensions of the Boussinesq equation in the framework of the nonlinear realization method and the inverse Higgs-covariant reduction approach. These techniques have been previously applied by us in the bosonic $W_3$ case to give a new geometric interpretation of the Boussinesq hierarchy. Here we deduce the most general $N=2$ super Boussinesq equation and two kinds of the modified $N=2$ super Boussinesq equations, as well as the super Miura maps relating these systems to each other, by applying the covariant reduction to certain coset manifolds of linear $N=2$ super-$W_3^\infty$ symmetry associated with $N=2$ super-$W_3$. We discuss the integrability properties of the equations obtained and their correspondence with the formulation based on the notion of the second hamiltonian structure.

  17. On the dynamics of multiple systems of hot super-Earths and Neptunes: Tidal circularization, resonance and the HD 40307 system

    CERN Document Server

    Papaloizou, John C B

    2010-01-01

    [Abridged] We consider the dynamics of a system of hot super-Earths or Neptunes such as HD 40307. We show that, as tidal interaction with the central star leads to small eccentricities, the planets in this system could be undergoing resonant coupling even though the period ratios depart significantly from very precise commensurability. In a three planet system, this is indicated by the fact that resonant angles librate or are associated with long term changes to the orbital elements. We propose that the planets in HD 40307 were in a strict Laplace resonance while they migrated through the disc. After entering the disc inner cavity, tidal interaction would cause the period ratios to increase from two but with the inner pair deviating less than the outer pair, counter to what occurs in HD 40307. However, the relationship between these pairs that occurs in HD 40307 might be produced if the resonance is impulsively modified by an event like a close encounter shortly after the planetary system decouples from the d...

  18. Assessment of the nanostructure of acid-base resistant zone by the application of all-in-one adhesive systems: Super dentin formation.

    Science.gov (United States)

    Nikaido, Toru; Weerasinghe, Dinesh D S; Waidyasekera, Kanchana; Inoue, Go; Foxton, Richard M; Tagami, Junji

    2009-01-01

    An acid-base resistant zone (ABRZ) has been shown to be created under a hybrid layer in a self-etching adhesive system at the adhesive/dentin interface. The purpose of this study was to assess the nanostructure of the ABRZ formed by applying all-in-one adhesive systems. Human premolar dentin was treated with one of two all-in-one adhesive systems, Clearfil Tri-S Bond or G-Bond, according to the manufacturers' instructions. After placement of a resin composite, the bonded interface was vertically sectioned and subjected to an acid-base challenge. Following this, the nanostructure of the ABRZ was examined by SEM and TEM. The SEM observations of the adhesive-dentin interface after the acid-base challenge indicated that a hybrid layer less than 1 μm thick was created, and an ABRZ was formed beneath the hybrid layer for each adhesive system. The TEM observations indicated that the ABRZ contained mineral components in both adhesive systems; however, the thickness of the ABRZ was material dependent. The application of the all-in-one adhesive systems created an ABRZ at the underlying dentin, which reinforced normal dentin against dental caries. Therefore, this zone was named 'Super Dentin'. Formation of 'Super Dentin' is a new approach in caries prevention.

  19. Genost: A System for Introductory Computer Science Education with a Focus on Computational Thinking

    Science.gov (United States)

    Walliman, Garret

    Computational thinking, the creative thought process behind algorithmic design and programming, is a crucial introductory skill for both computer scientists and the population in general. In this thesis I perform an investigation into introductory computer science education in the United States and find that computational thinking is not effectively taught at either the high school or the college level. To remedy this, I present a new educational system intended to teach computational thinking called Genost. Genost consists of a software tool and a curriculum based on teaching computational thinking through fundamental programming structures and algorithm design. Genost's software design is informed by a review of eight major computer science educational software systems. Genost's curriculum is informed by a review of major literature on computational thinking. In two educational tests of Genost utilizing both college and high school students, Genost was shown to significantly increase computational thinking ability with a large effect size.

  20. A computer control system using a virtual keyboard

    Science.gov (United States)

    Ejbali, Ridha; Zaied, Mourad; Ben Amar, Chokri

    2015-02-01

    This work is in the field of human-computer communication, namely in the field of gestural communication. The objective was to develop a system for gesture recognition. This system will be used to control a computer without a keyboard. The idea consists in using a visual panel printed on an ordinary paper to communicate with a computer.

  1. 10 CFR 35.657 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Therapy-related computer systems. 35.657 Section 35.657... Units, Teletherapy Units, and Gamma Stereotactic Radiosurgery Units § 35.657 Therapy-related computer... computer systems in accordance with published protocols accepted by nationally recognized bodies. At...

  2. Factory automation management computer system and its applications. FA kanri computer system no tekiyo jirei

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, M. (Meidensha Corp., Tokyo (Japan))

    1993-06-11

    A plurality of NC composite lathes used in a breaker manufacturing and processing line were integrated under a system mainly comprising the industrial computer μPORT, an exclusive LAN, and material handling robots. This paper describes this flexible manufacturing system (FMS), which operates on an unmanned basis from process control to material distribution and processing. This system has achieved the following results: efficiency improvement in lines producing a great variety of products in small quantities and in mixed flow production lines; enhancement in facility operating rates by means of group management of NC machine tools; orientation toward developing into integrated production systems; expansion of processing capacity; reduction in the number of processes; and reduction in management and indirect manpower. This system allocates the production control plans transmitted from the production control system operated by a host computer to the processes on a daily basis and by machines, using the μPORT. This FMS utilizes features of the multi-task processing function of the μPORT and the ultra high-speed real-time-based BASIC. The system simultaneously processes process management (such as machining programs and processing results), processing data management, and the operation control of a plurality of machines. The system achieved systematized machining processes. 6 figs., 2 tabs.

  3. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balance of the loads. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective ones of the computers is by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
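
    As a purely illustrative sketch of the split-token idea (an assumption-laden toy, not the patented implementation), the moving portion of a token can carry the name of the function to execute together with a pointer to the node and key where the resident portion is stored:

      # Toy sketch of a "split token": the moving portion travels between nodes,
      # the resident portion stays in one node's memory, and the moving portion
      # records where the resident portion lives.
      from dataclasses import dataclass

      @dataclass
      class MovingPortion:
          function: str          # function to be executed by the receiving computer
          home_node: int         # node whose memory holds the resident portion
          resident_key: str      # lookup key for the resident portion

      class Node:
          def __init__(self, node_id):
              self.node_id = node_id
              self.memory = {}   # resident portions of tokens, keyed by resident_key

          def store_resident(self, key, data):
              self.memory[key] = data

          def execute(self, token, nodes):
              # Fetch the resident data from whichever node the token points at
              data = nodes[token.home_node].memory[token.resident_key]
              print(f"node {self.node_id}: run {token.function} with {data}")

      nodes = {0: Node(0), 1: Node(1)}
      nodes[0].store_resident("attitude-update", {"gain": 0.4})
      token = MovingPortion("attitude-update", home_node=0, resident_key="attitude-update")
      nodes[1].execute(token, nodes)   # the token moved to node 1; its data stayed on node 0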

  4. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  5. Study of Super-Twisting sliding mode control for U model based nonlinear system

    Institute of Scientific and Technical Information of China (English)

    张建华; 李杨; 吴学礼; 霍佳楠; 庄沈阳

    2016-01-01

    To study nonlinear control systems based on the U-model, the Super-Twisting control algorithm is applied to the control problem of non-affine nonlinear systems. The nonlinear function is approximated by a neural network, and control is then carried out with the Super-Twisting algorithm. An appropriate Lyapunov function is chosen to prove the convergence of the Super-Twisting algorithm. To verify the feasibility and effectiveness of the method, simulations were performed in Matlab; the results show that, under the neural-network adaptive Super-Twisting controller, the controlled system achieves fast tracking performance and bounded output.

  6. The Solution Construction of Heterotic Super-Liouville Model

    Institute of Scientific and Technical Information of China (English)

    YANG Zhan-Ying; ZHEN Yi

    2001-01-01

    We investigate the heterotic super-Liouville model on the basis of the basic Lie superalgebra Osp(1|2). Using the super extension of the Leznov-Saveliev analysis and the Drinfeld-Sokolov linear system, we construct the explicit solution of the heterotic super-Liouville system in component form. We also show that the solutions are local and periodic by calculating the exchange relation of the solution. Finally, starting from the action of the heterotic super-Liouville model, we obtain the conserved current and conserved charge, which possess the BRST properties.

  7. Applications of membrane computing in systems and synthetic biology

    CERN Document Server

    Gheorghe, Marian; Pérez-Jiménez, Mario

    2014-01-01

    Membrane Computing was introduced as a computational paradigm in Natural Computing. The models introduced, called Membrane (or P) Systems, provide a coherent platform to describe and study living cells as computational systems. Membrane Systems have been investigated for their computational aspects and employed to model problems in other fields, like: Computer Science, Linguistics, Biology, Economy, Computer Graphics, Robotics, etc. Their inherent parallelism, heterogeneity and intrinsic versatility allow them to model a broad range of processes and phenomena, being also an efficient means to solve and analyze problems in a novel way. Membrane Computing has been used to model biological systems, becoming with time a thorough modeling paradigm comparable, in its modeling and predicting capabilities, to more established models in this area. This book is the result of the need to collect, in an organic way, different facets of this paradigm. The chapters of this book, together with the web pages accompanying th...

  8. Modeling and Characteristic Parameters Analysis of a Trough Concentrating Photovoltaic/Thermal System with GaAs and Super Cell Arrays

    Directory of Open Access Journals (Sweden)

    Xu Ji

    2012-01-01

    Full Text Available The paper established one-dimensional steady-state models of a trough concentrating photovoltaic/thermal (TCPV/T) system with a super cell array and a GaAs cell array, respectively, and verified the models by experiments. The gaps between calculation results and experimental results were less than 5%. Utilizing the models, the paper analyzed the influences of the characteristic parameters on the performance of the TCPV/T system with a super cell array and a GaAs cell array, respectively. The reflectivity of the parabolic mirror in the TCPV/T system is an important factor in determining the utilization efficiency of solar energy. The performance of the TCPV/T system can be optimized by improving the mirror reflectivity and the thermal solar radiation absorptivity of the lighting plate and by pursuing a suitable focal line with uniform light intensity distribution. All this work will benefit the utilization of the trough concentrating system and combined heat/power supply.

  9. COMPUTER APPLICATION SYSTEM FOR OPERATIONAL EFFICIENCY OF DIESEL RAILBUSES

    Directory of Open Access Journals (Sweden)

    Łukasz WOJCIECHOWSKI

    2016-09-01

    Full Text Available The article presents a computer algorithm for estimating the operating costs of a diesel rail bus. The computer application compares the cost of employing a locomotive and wagon, the cost of using locomotives, and the cost of using a rail bus. Intensive growth in passenger railway traffic has increased the demand for modern computer systems for managing means of transportation. The described computer application operates on the basis of selected operating parameters of rail buses.

  10. Computers as Components Principles of Embedded Computing System Design

    CERN Document Server

    Wolf, Wayne

    2008-01-01

    This book was the first to bring essential knowledge on embedded systems technology and techniques under a single cover. This second edition has been updated to the state-of-the-art by reworking and expanding performance analysis with more examples and exercises, and coverage of electronic systems now focuses on the latest applications. Researchers, students, and savvy professionals schooled in hardware or software design, will value Wayne Wolf's integrated engineering design approach.The second edition gives a more comprehensive view of multiprocessors including VLIW and superscalar archite

  11. An operating system for future aerospace vehicle computer systems

    Science.gov (United States)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node unique objects with node common objects in order to implement both the autonomy and the cooperation between nodes is developed. The requirements for time critical performance and reliability and recovery are discussed. Time critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time critical messages. The architecture also supports immediate recovery for the time critical message system after a communication failure.

  12. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  13. High-Order Adaptive Super-Twisting Sliding Mode Control for Uncertain Underactuated Systems

    Institute of Scientific and Technical Information of China (English)

    杨兴明; 高银平

    2014-01-01

    To achieve good robustness against unknown disturbances for a class of uncertain underactuated systems, a second-order adaptive sliding mode control method based on a quadratic Lyapunov function is proposed to reduce the inherent chattering of conventional sliding mode control (SMC). Firstly, a second-order super-twisting algorithm is used in the discontinuous part of the controller, so that the discontinuous control acts on the second-order derivative of the sliding mode variable. Secondly, to handle the effect of unknown disturbances on the sliding mode surface, an adaptive law is designed to adjust the parameters. This method removes the restriction of conventional second-order sliding mode control that the bound of the first derivative of the disturbance must be known, keeps the sliding surface convergent under disturbances, and reduces chattering of the control input. Finally, a two-wheeled self-balancing cart is used to test the proposed approach, and simulations compare it with conventional SMC and ordinary second-order SMC. The simulation results show that the proposed second-order adaptive sliding mode control method performs better in control effectiveness and chattering reduction.
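
    The super-twisting law referred to in this abstract has a standard generic form; the sketch below is a minimal Python illustration of that generic algorithm together with one possible gain-adaptation rule, not the paper's specific controller or adaptive law, and all function names and numeric values are assumptions.

    import numpy as np

    def super_twisting_step(s, v, k1, k2, dt):
        """One step of the generic super-twisting law.
        s: sliding variable, v: integral state, k1/k2: positive gains."""
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v   # continuous part
        v = v - k2 * np.sign(s) * dt                 # v_dot = -k2 * sign(s)
        return u, v

    def adapt_gain(k1, s, dt, eps=0.01, rate=5.0, k1_min=0.5):
        """Illustrative adaptation: grow k1 while |s| stays outside a small
        boundary layer, let it decay slowly (but stay positive) otherwise."""
        if abs(s) > eps:
            return k1 + rate * dt
        return max(k1_min, k1 - rate * dt)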

  14. Computational systems analysis of dopamine metabolism.

    Directory of Open Access Journals (Sweden)

    Zhen Qi

    Full Text Available A prominent feature of Parkinson's disease (PD is the loss of dopamine in the striatum, and many therapeutic interventions for the disease are aimed at restoring dopamine signaling. Dopamine signaling includes the synthesis, storage, release, and recycling of dopamine in the presynaptic terminal and activation of pre- and post-synaptic receptors and various downstream signaling cascades. As an aid that might facilitate our understanding of dopamine dynamics in the pathogenesis and treatment in PD, we have begun to merge currently available information and expert knowledge regarding presynaptic dopamine homeostasis into a computational model, following the guidelines of biochemical systems theory. After subjecting our model to mathematical diagnosis and analysis, we made direct comparisons between model predictions and experimental observations and found that the model exhibited a high degree of predictive capacity with respect to genetic and pharmacological changes in gene expression or function. Our results suggest potential approaches to restoring the dopamine imbalance and the associated generation of oxidative stress. While the proposed model of dopamine metabolism is preliminary, future extensions and refinements may eventually serve as an in silico platform for prescreening potential therapeutics, identifying immediate side effects, screening for biomarkers, and assessing the impact of risk factors of the disease.
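
    Biochemical systems theory, which the abstract follows, typically writes every flux as a product of power-law terms; the Python fragment below sketches a generic power-law (GMA-type) system of that kind. It is a toy illustration only: the species, exponents and rate constants are hypothetical and are not the published dopamine parameters.

    import numpy as np
    from scipy.integrate import solve_ivp

    def gma_rhs(t, x, alpha, g, beta, h):
        """Generic GMA system: dx_i/dt = alpha_i*prod_j x_j**g_ij - beta_i*prod_j x_j**h_ij."""
        x = np.maximum(x, 1e-12)                       # keep the power laws defined
        production = alpha * np.prod(x ** g, axis=1)
        degradation = beta * np.prod(x ** h, axis=1)
        return production - degradation

    # Hypothetical two-species chain (x1 feeds x2), first-order removal of both
    alpha = np.array([1.0, 0.8]); beta = np.array([0.5, 0.6])
    g = np.array([[0.0, 0.0], [1.0, 0.0]])
    h = np.array([[1.0, 0.0], [0.0, 1.0]])
    sol = solve_ivp(gma_rhs, (0.0, 50.0), [0.1, 0.1], args=(alpha, g, beta, h))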

  15. Lightness computation by the human visual system

    Science.gov (United States)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental, and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
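
    The abstract relates the model's equations to Land and McCann's Retinex, whose simplest one-dimensional form sums thresholded log-luminance steps between neighbouring points. The Python sketch below illustrates only that classical Retinex-style integration, with an assumed threshold value; it is not the author's extended model with contrast gain control, asymmetric gains or top-down edge classification.

    import numpy as np

    def retinex_lightness_1d(luminance, threshold=0.02):
        """Classical 1-D Retinex-style integration: accumulate log-luminance
        steps, discard steps below a threshold (treated as gradual illumination
        change), and anchor the result to the highest luminance."""
        log_l = np.log(np.asarray(luminance, dtype=float))
        steps = np.diff(log_l)
        steps[np.abs(steps) < threshold] = 0.0
        lightness = np.concatenate(([0.0], np.cumsum(steps)))
        return lightness - lightness.max()    # maximum maps to 0 (white anchor)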

  16. Quantum Computing in Fock Space Systems

    Science.gov (United States)

    Berezin, Alexander A.

    1997-04-01

    A Fock space system (FSS) has an unfixed number (N) of particles and/or degrees of freedom. In quantum computing (QC) the main requirement is the sustainability of coherent Q-superpositions, which is normally favoured by a low-noise environment. The high-excitation/high-temperature (T) limit is hence discarded as unfeasible for QC. Conversely, if N is itself a quantized variable, the dimensionality of the Hilbert basis for qubits may increase faster (say, N-exponentially) than the thermal noise (likely, in powers of N and T). Hence coherency may win over T-randomization. For this type of QC the speed (S) of factorization of long integers (with D digits) may increase with D (for 'ordinary' QC the speed polynomially decreases with D). This (apparent) paradox rests on non-monotonic bijectivity (cf. Georg Cantor's diagonal counting of rational numbers). This brings the entire aleph-null structurality (the "Babylonian Library" of the infinite informational content of the integer field) into the superposition determining the state of the quantum analogue of a Turing machine head. The structure of the integer infinitude (e.g. the distribution of primes) results in a direct "Platonic pressure" resembling a semi-virtual Casimir effect (pressure of cut-off vibrational modes). This "effect", the embodiment of the Pythagorean "Number is everything", renders the Gödelian barrier arbitrarily thin, and hence FSS-based QC can in principle be unlimitedly efficient (e.g. D/S may tend to zero when D tends to infinity).

  17. Context-aware computing and self-managing systems

    CERN Document Server

    Dargie, Waltenegus

    2009-01-01

    Bringing together an extensively researched area with an emerging research issue, Context-Aware Computing and Self-Managing Systems presents the core contributions of context-aware computing in the development of self-managing systems, including devices, applications, middleware, and networks. The expert contributors reveal the usefulness of context-aware computing in developing autonomous systems that have practical application in the real world.The first chapter of the book identifies features that are common to both context-aware computing and autonomous computing. It offers a basic definit

  18. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.

  19. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Calculus super review

    CERN Document Server

    2012-01-01

    Get all you need to know with Super Reviews! Each Super Review is packed with in-depth, student-friendly topic reviews that fully explain everything about the subject. The Calculus I Super Review includes a review of functions, limits, basic derivatives, the definite integral, combinations, and permutations. Take the Super Review quizzes to see how much you've learned - and where you need more study. Makes an excellent study aid and textbook companion. Great for self-study!DETAILS- From cover to cover, each in-depth topic review is easy-to-follow and easy-to-grasp - Perfect when preparing for

  1. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present the quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  2. Fiscal 1998 research report. R and D on super metal (Al system mesoscopic texture-controlled material); 1998 nendo seika hokokusho. Super metal no gijutsu kaihatsu (aluminium kei mesoscopic soshiki seigyo zairyo no gijutsu kaihatsu)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    For development of Al materials with superior industrial characteristics (strength, corrosion resistance), this research has promoted development of large-size Al system materials with mesoscopic crystalline texture by high-strain accumulation control technology, and recovery and recrystallization control technology. In this fiscal year, (1) basic study on high-strain accumulation control technology, (2) study on a formation mechanism of ultra-fine crystal grains, and (3) development of a machining process were made. In (1), basic study on low-temperature rolling and study on rolling by rollers having different peripheral speeds were made. In (2), study on refining of recrystallized grains of 5000-base and 7000-base alloys was made. In (3), low-temperature rolling equipment and an ultra-rapid heating device were introduced. For the whole R and D project on super metal, the main research facilities such as a low-temperature rolling body for high-strain accumulation and a high-strain accumulative structure formation equipment (melt rolling equipment) for uniform nucleus formation in recrystallization were introduced to gain a firm foothold for the future application research. (NEDO)

  3. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the... System for Computational and Computer Science. Report Title: This DoD HBC/MI Equipment/Instrumentation grant was awarded in October 2014 for the purchase... Computing (HPC) course taught in the department of computer science, so as to attract more graduate students from many disciplines where their research

  4. Parallel experimental study of a novel super-thin thermal absorber based photovoltaic/thermal (PV/T system against conventional photovoltaic (PV system

    Directory of Open Access Journals (Sweden)

    Peng Xu

    2015-11-01

    Full Text Available Photovoltaic (PV) semiconductors degrade in performance as temperature rises. A super thin-conductive thermal absorber has therefore been developed to regulate the PV working temperature by retrofitting an existing PV panel into a photovoltaic/thermal (PV/T) panel. This article presents a parallel comparative investigation of the two systems through both laboratory and field experiments. The laboratory evaluation consisted of one PV panel and one PV/T panel, while the overall field system involved 15 stand-alone PV panels and 15 retrofitted PV/T panels. The laboratory results demonstrated that the PV/T panel could achieve an electrical efficiency of about 16.8% (a relative improvement of roughly 5% over the stand-alone PV panel) and yield an extra amount of heat with a thermal efficiency of nearly 65%. The field results indicated that the hybrid PV/T panel could enhance the electrical return of the PV panels by nearly 3.5% and increase the overall energy output by nearly 324.3%. Further opportunities and challenges are then discussed from the perspectives of different PV/T stakeholders to accelerate development. Such technology is expected to become a significant way to yield more electricity, offset heating load for free, and reduce the carbon footprint in the contemporary energy environment.
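
    As a rough consistency check (treating the quoted 5% improvement as a relative figure, which is an assumption of this note, not a statement from the paper), the implied efficiency of the stand-alone PV panel is

    $$ \eta_{\mathrm{PV}} \approx \frac{\eta_{\mathrm{PV/T}}}{1.05} = \frac{16.8\%}{1.05} \approx 16.0\%. $$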

  5. Computer controlled vent and pressurization system

    Science.gov (United States)

    Cieslewicz, E. J.

    1975-01-01

    The Centaur space launch vehicle airborne computer, which was primarily used to perform guidance, navigation, and sequencing tasks, was further used to monitor and control inflight pressurization and venting of the cryogenic propellant tanks. Computer software flexibility also provided a failure detection and correction capability necessary to adopt and operate redundant hardware techniques and enhance the overall vehicle reliability.

  6. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. Also we...

  7. The hack attack - Increasing computer system awareness of vulnerability threats

    Science.gov (United States)

    Quann, John; Belford, Peter

    1987-01-01

    The paper discusses the issue of electronic vulnerability of computer based systems supporting NASA Goddard Space Flight Center (GSFC) by unauthorized users. To test the security of the system and increase security awareness, NYMA, Inc. employed computer 'hackers' to attempt to infiltrate the system(s) under controlled conditions. Penetration procedures, methods, and descriptions are detailed in the paper. The procedure increased the security consciousness of GSFC management to the electronic vulnerability of the system(s).

  8. PLAID- A COMPUTER AIDED DESIGN SYSTEM

    Science.gov (United States)

    Brown, J. W.

    1994-01-01

    PLAID is a three-dimensional Computer Aided Design (CAD) system which enables the user to interactively construct, manipulate, and display sets of highly complex geometric models. PLAID was initially developed by NASA to assist in the design of Space Shuttle crewstation panels, and the detection of payload object collisions. It has evolved into a more general program for convenient use in many engineering applications. Special effort was made to incorporate CAD techniques and features which minimize the users workload in designing and managing PLAID models. PLAID consists of three major modules: the Primitive Object Generator (BUILD), the Composite Object Generator (COG), and the DISPLAY Processor. The BUILD module provides a means of constructing simple geometric objects called primitives. The primitives are created from polygons which are defined either explicitly by vertex coordinates, or graphically by use of terminal crosshairs or a digitizer. Solid objects are constructed by combining, rotating, or translating the polygons. Corner rounding, hole punching, milling, and contouring are special features available in BUILD. The COG module hierarchically organizes and manipulates primitives and other previously defined COG objects to form complex assemblies. The composite object is constructed by applying transformations to simpler objects. The transformations which can be applied are scalings, rotations, and translations. These transformations may be defined explicitly or defined graphically using the interactive COG commands. The DISPLAY module enables the user to view COG assemblies from arbitrary viewpoints (inside or outside the object) both in wireframe and hidden line renderings. The PLAID projection of a three-dimensional object can be either orthographic or with perspective. A conflict analysis option enables detection of spatial conflicts or collisions. DISPLAY provides camera functions to simulate a view of the model through different lenses. Other

  9. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  10. High-Speed Computer-Controlled Switch-Matrix System

    Science.gov (United States)

    Spisz, E.; Cory, B.; Ho, P.; Hoffman, M.

    1985-01-01

    High-speed computer-controlled switch-matrix system developed for communication satellites. Satellite system controlled by onboard computer and all message-routing functions between uplink and downlink beams handled by newly developed switch-matrix system. Message requires only 2-microsecond interconnect period, repeated every millisecond.

  11. Granular computing analysis and design of intelligent systems

    CERN Document Server

    Pedrycz, Witold

    2013-01-01

    Information granules, as encountered in natural language, are implicit in nature. To make them fully operational so they can be effectively used to analyze and design intelligent systems, information granules need to be made explicit. An emerging discipline, granular computing focuses on formalizing information granules and unifying them to create a coherent methodological and developmental environment for intelligent system design and analysis. Granular Computing: Analysis and Design of Intelligent Systems presents the unified principles of granular computing along with its comprehensive algo

  12. Thermal-gravitational modeling and scaling of two-phase heat transport systems from micro-gravity to super-gravity levels

    Science.gov (United States)

    Delil, A. A. M.

    2001-02-01

    Earlier publications extensively describe NLR research on thermal-gravitational modeling and scaling of two-phase heat transport systems for spacecraft applications. These publications on mechanically and capillary pumped two-phase loops discuss pure geometric scaling, pure fluid-to-fluid scaling, and combined (hybrid) scaling of a prototype system by a model at the same gravity level, and of a prototype in a micro-gravity environment by a scale model on earth. More recent publications include the scaling aspects of prototype two-phase loops for Moon or Mars applications by scale models on earth. Recent work, discussed here, concerns the extension of thermal-gravitational scaling to super-g acceleration levels. This turned out to be necessary, since a very promising super-g application for (two-phase) heat transport systems will be the cooling of high-power electronics in spinning satellites and in military combat aircraft. In such aircraft, the electronics can be exposed during manoeuvres to transient accelerations up to 120 m/s2. The discussions focus on "conventional" (capillary) pumped two-phase loops. It can be considered as an introduction to the accompanying article, which focuses on pulsating and oscillating devices.

  13. Computational Modeling of Flow Control Systems for Aerospace Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop computational methods for designing active flow control systems on aerospace vehicles with the primary objective of...

  14. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over the network, and the widespread availability of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies alike to implement complex computer systems for efficiently solving production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key models of research, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating the input, intermediate and output databases are no less important. The main tasks of this balancing system are monitoring the load and condition of compute nodes and selecting a node to which a user's request is forwarded in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system whose infrastructure changes dynamically is an important task.
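
    The abstract describes a balancer that monitors node load and state and selects a node for each incoming request according to a predetermined algorithm. The Python fragment below sketches one common such rule (least-loaded selection) purely as an illustration; the paper's actual algorithm and data structures are not specified here, and all names are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        running_tasks: int
        capacity: int                      # maximum concurrent tasks

        @property
        def load(self) -> float:
            return self.running_tasks / self.capacity

    def pick_node(nodes):
        """Least-loaded selection: route the request to the node with the
        smallest relative load that still has free capacity."""
        candidates = [n for n in nodes if n.running_tasks < n.capacity]
        if not candidates:
            raise RuntimeError("all nodes are saturated")
        return min(candidates, key=lambda n: n.load)

    # Example: three heterogeneous nodes; the request goes to n2 (load 0.3125)
    cluster = [Node("n1", 3, 4), Node("n2", 5, 16), Node("n3", 2, 2)]
    target = pick_node(cluster)
    target.running_tasks += 1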

  15. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. This book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.

  16. Top 10 Threats to Computer Systems Include Professors and Students

    Science.gov (United States)

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  18. CAVASS: a computer-assisted visualization and analysis software system - image processing aspects

    Science.gov (United States)

    Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad

    2007-03-01

    The development of the concepts within 3DVIEWNIX and of the software system 3DVIEWNIX itself dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images came out from our group, 3DVIEWNIX released in 1993, being the most recent, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and their highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interface to other systems such as CAD/CAM software, ITK, and statistical packages; (6) easy to use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions based on assessing performance by utilizing a regular (6 MB), large (241 MB), and a super (873 MB) 3D image data set are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is easy and ready to use in applications since it provides an easy to use GUI. The users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less

  19. The HARPS search for southern extra-solar planets. XVII. Super-Earth and Neptune-mass planets in multiple planet systems HD47186 and HD181433

    CERN Document Server

    Bouchy, F; Lovis, C; Udry, S; Benz, W; Bertaux, J-L; Delfosse, X; Mordasini, C; Pepe, F; Queloz, D; Ségransan, D

    2008-01-01

    This paper reports on the detection of two new multiple-planet systems around the solar-like stars HD47186 and HD181433. The first system includes a hot Neptune of 22.78 M_Earth with a 4.08-day period and a Saturn of 0.35 M_Jup with a 3.7-year period. The second system includes a super-Earth of 7.5 M_Earth with a 9.4-day period, a 0.64 M_Jup planet with a 2.6-year period, as well as a third companion of 0.54 M_Jup with a period of about 6 years. These detections increase to 20 the number of close-in low-mass exoplanets (below 0.1 M_Jup) and strengthen the fact that 80% of these planets are in multiple planetary systems.

  20. Bringing the CMS distributed computing system into scalable operations

    CERN Document Server

    Belforte, S; Fisk, I; Flix, J; Hernández, J M; Kress, T; Letts, J; Magini, N; Miccio, V; Sciabà, A

    2010-01-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure an...

  1. A Survey of Civilian Dental Computer Systems.

    Science.gov (United States)

    1988-01-01

    ...marketplace, the orthodontic community continued to pioneer clinical automation through diagnosis, treatment planning, patient registration and identification... profession. Cited sources include the New York State Dental Journal 34:76, 1968; Ehrlich, A., The Role of Computers in Dental Practice Management, Champaign, IL: Colwell; the Council on Dental Practice, Report: Dental Computer Vendors, 1984; and the Medical Bulletin of the US Army Europe 39:14-16, 1982.

  2. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  3. A computational design system for rapid CFD analysis

    Science.gov (United States)

    Ascoli, E. P.; Barson, S. L.; Decroix, M. E.; Sindir, Munir M.

    1992-01-01

    A computation design system (CDS) is described in which these tools are integrated in a modular fashion. This CDS ties together four key areas of computational analysis: description of geometry; grid generation; computational codes; and postprocessing. Integration of improved computational fluid dynamics (CFD) analysis tools through integration with the CDS has made a significant positive impact in the use of CFD for engineering design problems. Complex geometries are now analyzed on a frequent basis and with greater ease.

  4. Super Tomboy Style

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Sparked by Super Girl, the androgynous look is in among Chinese youth On September 8, this year's top six contestants on the Super Girl television show, a singing contest for young women, stepped into the spotlight. Nearly none of them had long black hair or wore evening gowns, traditionally associated with beauty in China. Rather, they

  5. Modeling and Simulation of a Super-Mild Hybrid Transmission System

    Institute of Scientific and Technical Information of China (English)

    惠金芹; 郭家田; 张伯俊

    2012-01-01

    The paper analyzes the super-mild hybrid electric vehicle and its transmission system. Using bond graph theory, a bond graph model of the whole-vehicle transmission system under the high-speed, purely stepless speed-regulating condition is established, and the state equations of the high-speed transmission system are listed. A simulation model for whole-vehicle control is built, a control strategy for the high-speed gear is formulated, and a simulation analysis is carried out.

  6. Study of the pulse power supply unit for the four-horn system of the CERN to Fréjus neutrino super beam

    CERN Document Server

    Baussan, E; Dracos, M; Gaudiot, G; Osswald, F; Poussot, P; Vassilopoulos, N; Wurtz, J; Zeter, V

    2013-01-01

    The power supply studies for the four-horn system of the CERN to Fréjus neutrino Super Beam oscillation experiment are discussed here. The power supply is being studied to meet the physics potential and the megawatt (MW) power requirements of the proton driver of the Super Beam. A one-half-sinusoid current waveform with a 350 kA maximum current and a pulse length of 100 \mu s at a 50 Hz frequency is generated and distributed to the four horns. In order to provide the current needed to focus the charged mesons producing the neutrino beam, a bank of capacitors is charged at a 50 Hz rate to a +12 kV reference voltage and then discharged through a large switch to each horn via a set of strip-lines at the same rate. A current recovery stage rapidly inverts the negative voltage of the capacitor after the discharge in order to recuperate a large part of the injected energy and thus to limit the power consumption. The energy recovery efficiency of the system is very high, at 97%. For feasibilit...
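
    For orientation, the pulse described above can be written as a half-sinusoid repeated at the 50 Hz rate; the numbers below follow directly from the quoted 350 kA peak, 100 \mu s pulse length and 20 ms repetition period, and are back-of-the-envelope values rather than figures taken from the paper:

    $$ i(t) = I_{\max}\sin\!\left(\frac{\pi t}{\tau}\right),\quad 0 \le t \le \tau = 100~\mu\mathrm{s}, \qquad \langle i \rangle = \frac{2}{\pi}\, I_{\max}\, \frac{\tau}{T} \approx \frac{2}{\pi}\times 350~\mathrm{kA}\times\frac{0.1~\mathrm{ms}}{20~\mathrm{ms}} \approx 1.1~\mathrm{kA}. $$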

  7. Characterization of the Kepler-101 planetary system with HARPS-N. A hot super-Neptune with an Earth-sized low-mass companion

    CERN Document Server

    Bonomo, A S; Lovis, C; Malavolta, L; Rice, K; Buchhave, L A; Sasselov, D; Cameron, A C; Latham, D W; Molinari, E; Pepe, F; Udry, S; Affer, L; Charbonneau, D; Cosentino, R; Dressing, C D; Dumusque, X; Figueira, P; Fiorenzano, A F M; Gettel, S; Harutyunyan, A; Haywood, R D; Horne, K; Lopez-Morales, M; Mayor, M; Micela, G; Motalebi, F; Nascimbeni, V; Phillips, D F; Piotto, G; Pollacco, D; Queloz, D; Ségransan, D; Szentgyorgyi, A; Watson, C

    2014-01-01

    We report on the characterization of the Kepler-101 planetary system, thanks to a combined DE-MCMC analysis of Kepler data and forty radial velocities obtained with the HARPS-N spectrograph. This system was previously validated by Rowe et al. (2014) and is composed of a hot super-Neptune, Kepler-101b, and an Earth-sized planet, Kepler-101c. These two planets orbit the slightly evolved and metal-rich G-type star in 3.49 and 6.03 days, respectively. With mass $M_{\\rm p}=51.1_{-4.7}^{+5.1}~M_{\\oplus}$, radius $R_{\\rm p}=5.77_{-0.79}^{+0.85}~R_{\\oplus}$, and density $\\rho_{\\rm p}=1.45_{-0.48}^{+0.83} \\rm g\\;cm^{-3}$, Kepler-101b is the first fully-characterized super-Neptune, and its density suggests that heavy elements make up a significant fraction of its interior; more than $60\\%$ of its total mass. Kepler-101c has a radius of $1.25_{-0.17}^{+0.19}~R_{\\oplus}$, which implies the absence of any H/He envelope, but its mass could not be determined due to the relative faintness of the parent star for highly precis...
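
    As a quick consistency check on the quoted parameters (using nominal Earth values and $\rho_\oplus \simeq 5.51~\mathrm{g\,cm^{-3}}$, which are assumptions of this note rather than values from the paper):

    $$ \rho_{\rm p} = \rho_\oplus\,\frac{M_{\rm p}/M_\oplus}{(R_{\rm p}/R_\oplus)^{3}} \simeq 5.51 \times \frac{51.1}{5.77^{3}}~\mathrm{g\,cm^{-3}} \simeq 1.5~\mathrm{g\,cm^{-3}}, $$

    consistent with the quoted $1.45_{-0.48}^{+0.83}~\rm g\,cm^{-3}$.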

  8. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of the increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis proposes a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans control safety-critical functions. A new systems accident model is developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  9. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    Science.gov (United States)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
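
    Approximate Bayesian computation in its simplest rejection form compares simulated and observed outputs and keeps parameter draws whose discrepancy falls below a tolerance. The Python sketch below shows that generic rejection-ABC loop with a toy first-order simulator; it is an illustration under assumed names, tolerances and priors, not the paper's specific algorithm, model structure or distance measure.

    import numpy as np

    def abc_rejection(simulate, y_obs, prior_sampler, n_draws=10000, tol=0.5):
        """Generic rejection ABC: keep draws whose simulated output lies
        within `tol` (Euclidean distance) of the observed data."""
        accepted = []
        for _ in range(n_draws):
            theta = prior_sampler()
            if np.linalg.norm(simulate(theta) - y_obs) < tol:
                accepted.append(theta)
        return np.array(accepted)          # empirical posterior sample

    # Toy continuous-time system dx/dt = -a*x, x(0) = 1, simulated by Euler steps
    t = np.linspace(0.0, 2.0, 21)
    def simulate(a, x0=1.0):
        x = np.empty_like(t); x[0] = x0
        for k in range(1, t.size):
            x[k] = x[k-1] + (t[k] - t[k-1]) * (-a * x[k-1])
        return x

    rng = np.random.default_rng(0)
    y_obs = simulate(1.3) + 0.02 * rng.standard_normal(t.size)   # synthetic data
    posterior = abc_rejection(simulate, y_obs, lambda: rng.uniform(0.0, 3.0))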

  10. The Robotic Super-LOTIS Telescope: Results & Future Plans

    OpenAIRE

    Williams, G. G.; Milne, P. A.; Park, H.S.; Barthelmy, S. D.; Hartmann, D. H.; Updike, A.; Hurley, K.

    2008-01-01

    We provide an overview of the robotic Super-LOTIS (Livermore Optical Transient Imaging System) telescope and present results from gamma-ray burst (GRB) afterglow observations using Super-LOTIS and other Steward Observatory telescopes. The 0.6-m Super-LOTIS telescope is a fully robotic system dedicated to the measurement of prompt and early time optical emission from GRBs. The system began routine operations from its Steward Observatory site atop Kitt Peak in April 2000 and currently operates ...

  11. Design technologies for green and sustainable computing systems

    CERN Document Server

    Ganguly, Amlan; Chakrabarty, Krishnendu

    2013-01-01

    This book provides a comprehensive guide to the design of sustainable and green computing systems (GSC). Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high-performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking. ·         Offers readers a single-source reference for addressing the challenges of power efficiency and sustainability in embedded computing systems; ·         Provides in-depth coverage of the key underlying design technologies for green and sustainable computing; ·         Covers a wide range of topics, from chip-level design to architectures, computing systems, and networks.

  12. A comparison of queueing, cluster and distributed computing systems

    Science.gov (United States)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  13. Computer Generated Hologram System for Wavefront Measurement System Calibration

    Science.gov (United States)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  14. The Cc1 Project – System For Private Cloud Computing

    Directory of Open Access Journals (Sweden)

    J Chwastowski

    2012-01-01

    Full Text Available The main features of the Cloud Computing system developed at IFJ PAN are described. The project is financed from the structural resources provided by the European Commission and the Polish Ministry of Science and Higher Education (Innovative Economy, National Cohesion Strategy). The system delivers a solution for carrying out computer calculations on a Private Cloud computing infrastructure. It consists of an intuitive Web-based user interface, a module for user and resource administration, and an implementation of the standard EC2 interface. Thanks to the distributed character of the system, it allows the integration of a geographically distant federation of computer clusters within a uniform user environment.

  15. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is a newly emerging technology that has been used in other industries with great success. Despite its great features, cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed cloud system applies cloud computing technology to the EHR system to provide a comprehensive, integrated EHR environment.

  16. A Brief Talk on Teaching Reform Program of Computer Network Course System about Computer Related Professional

    Institute of Scientific and Technical Information of China (English)

    Wang Jian-Ping; Huang Yong

    2008-01-01

    The computer network course is a core required course for college computer-related majors. An analysis of the current teaching situation shows that the teaching of this course has not yet formed a complete system: new knowledge points are not added promptly, while outdated technology remains in the teaching. The article describes the current situation and problems that appear in teaching computer networks to computer-related majors in universities, and presents teaching systems and teaching reform schemes for the computer network course.

  17. On Price Strategy and Channel Strategy of the "Super Online Banking System"

    Institute of Scientific and Technical Information of China (English)

    张健; 胡乐炜; 赵应文

    2012-01-01

    On the basis of briefly introducing the "super online banking" system, the paper discusses in turn its price strategies at the three stages of introduction, growth and maturity. It then further elaborates the channel strategy of the "super online banking" system from four aspects: mobile banking, telephone banking, online promotion and offline promotion.

  18. Mechanisms of protection of information in computer networks and systems

    Directory of Open Access Journals (Sweden)

    Sergey Petrovich Evseev

    2011-10-01

    Full Text Available Protocols for information protection in computer networks and systems are investigated. The basic types of threats to protection arising from the use of computer networks are classified. The basic mechanisms, services and implementation variants of cryptosystems for maintaining the authentication, integrity and confidentiality of transmitted information are examined, and their advantages and drawbacks are described. Perspective directions for the development of cryptographic transformations for information protection in computer networks and systems are defined and analyzed.

  19. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat and a research focus of network information security. While new viruses keep emerging, the number of viruses keeps growing and virus classification grows increasingly complex. Virus naming cannot be unified because agencies capture samples at different times. Although each agency has its own virus database, communication between agencies is lacking, virus information is incomplete, or only a small number of samples is described. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and completely describe virus characteristics, and then gives a design scheme for a computer virus database with information integrity, storage security and manageability.

  20. Sensor fusion control system for computer integrated manufacturing

    CSIR Research Space (South Africa)

    Kumile, CM

    2007-08-01

    Full Text Available of products in unpredictable quantities. Computer Integrated Manufacturing (CIM) systems play an important role in integrating such flexible systems. This paper presents a methodology for increasing the flexibility and reusability of a generic CIM cell...

  1. Computer-Based Integrated Learning Systems: Research and Theory.

    Science.gov (United States)

    Hativa, Nira, Ed.; Becker, Henry Jay, Ed.

    1994-01-01

    The eight chapters of this theme issue discuss recent research and theory concerning computer-based integrated learning systems. Following an introduction about their theoretical background and current use in schools, the effects of using computer-based integrated learning systems in the elementary school classroom are considered. (SLD)

  2. Entrepreneurial Health Informatics for Computer Science and Information Systems Students

    Science.gov (United States)

    Lawler, James; Joseph, Anthony; Narula, Stuti

    2014-01-01

    Corporate entrepreneurship is a critical area of curricula for computer science and information systems students. Few institutions of computer science and information systems have entrepreneurship in the curricula however. This paper presents entrepreneurial health informatics as a course in a concentration of Technology Entrepreneurship at a…

  3. On the Computation of Lyapunov Functions for Interconnected Systems

    DEFF Research Database (Denmark)

    Sloth, Christoffer

    2016-01-01

    This paper addresses the computation of additively separable Lyapunov functions for interconnected systems. The presented results can be applied to reduce the complexity of the computations associated with stability analysis of large scale systems. We provide a necessary and sufficient condition...
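
    For context, and stated only as the standard textbook notion rather than the paper's exact condition: an additively separable Lyapunov function for an interconnection of $n$ subsystems with states $x_i$ and dynamics $\dot x_i = f_i(x)$ takes the form

    $$ V(x) = \sum_{i=1}^{n} V_i(x_i), \qquad V_i(x_i) > 0 \ \text{for } x_i \neq 0, \qquad \dot V(x) = \sum_{i=1}^{n} \nabla V_i(x_i)^{\top} f_i(x) < 0 \ \text{for } x \neq 0, $$

    so each term depends only on one subsystem's state, while the decrease condition is checked against the interconnected dynamics.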

  4. Software For Computer-Aided Design Of Control Systems

    Science.gov (United States)

    Wette, Matthew

    1994-01-01

    Computer Aided Engineering System (CAESY) software developed to provide means to evaluate methods for dealing with users' needs in computer-aided design of control systems. Interpreter program for performing engineering calculations. Incorporates features of both Ada and MATLAB. Designed to be flexible and powerful. Includes internally defined functions, procedures and provides for definition of functions and procedures by user. Written in C language.

  5. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to 3 dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  6. Experiments and simulation models of a basic computation element of an autonomous molecular computing system.

    Science.gov (United States)

    Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira

    2008-10-01

    Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We have already proposed an idea of an autonomous molecular computer with high computational ability, which is now named Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we first report an experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for RTRACS. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behaviors were analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance the research on autonomous DNA computers.
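
    The AND-gate behaviour described above (an output RNA produced only when both input RNAs are present) can be caricatured by a single mass-action rate equation; this is an illustrative toy model, not the paper's reaction scheme or fitted parameters:

    $$ \frac{d[\mathrm{Out}]}{dt} = k\,[\mathrm{In_1}]\,[\mathrm{In_2}] - \delta\,[\mathrm{Out}], $$

    where the production term vanishes whenever either input concentration is zero.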

  7. Mechatronic sensory system for computer integrated manufacturing

    CSIR Research Space (South Africa)

    Kumile, CM

    2007-05-01

    Full Text Available (CIM) systems play an important role in integrating such flexible systems. The requirement for fast and cheap design and redesign of manufacturing systems is therefore gaining in importance, considering not only the products and the physical...

  8. Research on Computer Forensics System Based on Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    武鲁; 王连海; 顾卫东

    2012-01-01

    Cloud computing is the most popular computing mode on the Internet, featuring elastic computing, resource virtualization, and on-demand service. In this environment, resources such as infrastructures, development platforms, and applications are provided directly by the cloud computing center. Users no longer own infrastructure, software, and data themselves, but share the entire cloud infrastructure. This directly affects the security and availability of the cloud computing environment and leaves it with tremendous risks. This paper analyzes the security flaws and threats of the cloud computing environment and presents a design for a computer forensics system based on cloud computing, which resolves security problems of cloud computing using computer forensics techniques, while using cloud computing and supercomputing techniques to meet the need for high-performance computing.

  9. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    Science.gov (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  10. Data systems and computer science programs: Overview

    Science.gov (United States)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  11. Central Computer IMS Processing System (CIMS).

    Science.gov (United States)

    Wolfe, Howard

    As part of the IMS Version 3 tryout in 1971-72, software was developed to enable data submitted by IMS users to be transmitted to the central computer, which acted on the data to create IMS reports and to update the Pupil Data Base with criterion exercise and class roster information. The program logic is described, and the subroutines and…

  12. Cloud Computing Based E-Learning System

    Science.gov (United States)

    Al-Zoube, Mohammed; El-Seoud, Samir Abou; Wyne, Mudasser F.

    2010-01-01

    Cloud computing technologies although in their early stages, have managed to change the way applications are going to be developed and accessed. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Microsoft office applications, such as word processing, excel spreadsheet, access database…

  14. Nonlinear Super Integrable Couplings of Super Classical-Boussinesq Hierarchy

    Directory of Open Access Journals (Sweden)

    Xiuzhi Xing

    2014-01-01

    Full Text Available Nonlinear integrable couplings of super classical-Boussinesq hierarchy based upon an enlarged matrix Lie super algebra were constructed. Then, its super Hamiltonian structures were established by using super trace identity. As its reduction, nonlinear integrable couplings of the classical integrable hierarchy were obtained.

  15. Evaluation of computer-based ultrasonic inservice inspection systems

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T. [Pacific Northwest Lab., Richland, WA (United States)

    1994-03-01

    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  16. Cloud Computing for Network Security Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Jin Yang

    2013-01-01

    Full Text Available In recent years, as a new distributed computing model, cloud computing has developed rapidly and become a focus of academia and industry. However, security is now the main critical problem faced by most enterprise customers. In the current network environment, relying on a single terminal to detect Trojans and viruses is considered increasingly unreliable. This paper analyzes the characteristics of current cloud computing and then proposes a comprehensive real-time network risk evaluation model for cloud computing based on the correspondence between artificial immune system antibodies and pathogen invasion intensity. The paper also combines an assets evaluation system and a network integration evaluation system, considering factors from the application layer, the host layer, and the network layer that may affect network risk. The experimental results show that this model improves the ability of intrusion detection and can support the security of current cloud computing.

  17. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary designs of aerospace vehicles: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by directly coupled, low-cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  18. Security for small computer systems a practical guide for users

    CERN Document Server

    Saddington, Tricia

    1988-01-01

    Security for Small Computer Systems: A Practical Guide for Users is a guidebook for security concerns for small computers. The book provides security advice for the end-users of small computers in different aspects of computing security. Chapter 1 discusses the security and threats, and Chapter 2 covers the physical aspect of computer security. The text also talks about the protection of data, and then deals with the defenses against fraud. Survival planning and risk assessment are also encompassed. The last chapter tackles security management from an organizational perspective. The bo

  19. Belle II grid computing: An overview of the distributed data management system.

    Science.gov (United States)

    Bansal, Vikas; Schram, Malachi; Belle II Collaboration

    2017-01-01

    The Belle II experiment at the SuperKEKB collider in Tsukuba, Japan, will start physics data taking in 2018 and will accumulate 50 ab⁻¹ of e+e− collision data, about 50 times larger than the data set of the Belle experiment. The computing requirements of Belle II are comparable to those of a Run I LHC experiment. Computing at this scale requires efficient use of the compute grids in North America, Asia and Europe and will take advantage of upgrades to the high-speed global network. We present the architecture of data flow and data handling as a part of the Belle II computing infrastructure.

  20. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
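
    As context for the GPU computation above, a commonly used point-source (Fresnel-approximation) CGH sum accumulates one cosine term per object point at every hologram pixel; the NumPy sketch below illustrates that sum with purely hypothetical pixel pitch and wavelength, and does not reproduce the paper's actual kernel or optimizations.

      # Minimal NumPy sketch of the point-source CGH sum; parameters are hypothetical.
      import numpy as np

      def point_cloud_cgh(points, amps, width, height, pitch=8e-6, wavelength=633e-9):
          """points: (N, 3) array of object points (x, y, z) in metres; amps: (N,) amplitudes."""
          ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
          xh, yh = xs * pitch, ys * pitch              # hologram-plane coordinates
          field = np.zeros((height, width))
          for (x, y, z), a in zip(points, amps):       # GPUs parallelise over pixels instead
              r2 = (xh - x) ** 2 + (yh - y) ** 2
              field += a * np.cos(np.pi * r2 / (wavelength * z))
          return field

      rng = np.random.default_rng(0)
      pts = rng.uniform([-1e-3, -1e-3, 0.1], [1e-3, 1e-3, 0.2], size=(64, 3))
      hologram = point_cloud_cgh(pts, np.ones(len(pts)), width=256, height=192)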

  1. Super-resolution

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2014-01-01

    Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real world problems in different fields, from satellite...... the contributions of different authors to the basic concepts of each group. Furthermore, common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super...

  2. Software fault tolerance in computer operating systems

    Science.gov (United States)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  3. SuperB Progress Report for Physics

    Energy Technology Data Exchange (ETDEWEB)

    O' Leary, B.; /Aachen, Tech. Hochsch.; Matias, J.; Ramon, M.; /Barcelona, IFAE; Pous, E.; /Barcelona U.; De Fazio, F.; Palano, A.; /INFN, Bari; Eigen, G.; /Bergen U.; Asgeirsson, D.; /British Columbia U.; Cheng, C.H.; Chivukula, A.; Echenard, B.; Hitlin, D.G.; Porter, F.; Rakitin, A.; /Caltech; Heinemeyer, S.; /Cantabria Inst. of Phys.; McElrath, B.; /CERN; Andreassen, R.; Meadows, B.; Sokoloff, M.; /Cincinnati U.; Blanke, M.; /Cornell U., Phys. Dept.; Lesiak, T.; /Cracow, INP /DESY /Zurich, ETH /INFN, Ferrara /Frascati /INFN, Genoa /Glasgow U. /Indiana U. /Mainz U., Inst. Phys. /Karlsruhe, Inst. Technol. /KEK, Tsukuba /LBL, Berkeley /UC, Berkeley /Lisbon, IST /Ljubljana U. /Madrid, Autonoma U. /Maryland U. /MIT /INFN, Milan /McGill U. /Munich, Tech. U. /Notre Dame U. /PNL, Richland /INFN, Padua /Paris U., VI-VII /Orsay, LAL /Orsay, LPT /INFN, Pavia /INFN, Perugia /INFN, Pisa /Queen Mary, U. of London /Regensburg U. /Republica U., Montevideo /Frascati /INFN, Rome /INFN, Rome /INFN, Rome /Rutherford /Sassari U. /Siegen U. /SLAC /Southern Methodist U. /Tel Aviv U. /Tohoku U. /INFN, Turin /INFN, Trieste /Uppsala U. /Valencia U., IFIC /Victoria U. /Wayne State U. /Wisconsin U., Madison

    2012-02-14

    SuperB is a high-luminosity e⁺e⁻ collider that will be able to indirectly probe new physics at energy scales far beyond the reach of any man-made accelerator planned or in existence. Just as the detailed understanding of the Standard Model of particle physics was developed from stringent constraints imposed by flavour-changing processes between quarks, the detailed structure of any new physics is severely constrained by flavour processes. In order to elucidate this structure it is necessary to perform a number of complementary studies of a set of golden channels. With these measurements in hand, the pattern of deviations from Standard Model behavior can be used as a test of the structure of new physics. If new physics is found at the LHC, then the many golden measurements from SuperB will help decode the subtle nature of the new physics. However, if no new particles are found at the LHC, SuperB will be able to search for new physics at energy scales up to 10-100 TeV. In either scenario, flavour physics measurements that can be made at SuperB play a pivotal role in understanding the nature of physics beyond the Standard Model. Examples of using the interplay between measurements to discriminate between New Physics models are discussed in this document. SuperB is a Super Flavour Factory: in addition to studying large samples of B(u,d,s), D and τ decays, SuperB has a broad physics programme that includes spectroscopy, both in terms of the Standard Model and exotica, and precision measurements of sin²θ_W. In addition to performing CP violation measurements at the Υ(4S) and ψ(3770), SuperB will test CPT in these systems, and lepton universality in a number of different processes. The multitude of rare decay measurements possible at SuperB can be used to constrain scenarios of physics beyond the Standard Model. In terms of other precision tests of the Standard Model, this experiment will be able to perform precision over

  4. Integrability in N=4 super Yang-Mills theory

    Energy Technology Data Exchange (ETDEWEB)

    Eden, B. [ITF and Spinoza Institute, University of Utrecht, Minnaertgebouw, Leuvenlaan 4, 3584 CE Utrecht (Netherlands)

    2008-10-15

    We use the Bethe ansatz to calculate the cusp anomalous dimension in planar N=4 super Yang-Mills theory as an exact function of the coupling constant. The calculation allows us to fix the remaining ambiguities in the integrable system describing the spectrum of operators/string energy levels in the AdS/CFT correspondence. The cusp anomalous dimension is not affected by finite size effects, which in general remain ill-understood. We suggest a method for computing the lowest example of an anomalous dimension modified by such corrections.

  5. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of natural sciences along with theory and experimentation. Particularly high performance computing is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015, the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1. Thereby, the compute capabilities were more than doubled. This book covers the time-frame June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects that are covered in this book, with each one of these projects using at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics simulations, chemistry and material sciences, astrophysics, and life sciences.

  6. TRL Computer System User’s Guide

    Energy Technology Data Exchange (ETDEWEB)

    Engel, David W.; Dalton, Angela C.

    2014-01-31

    We have developed a wiki-based graphical user-interface system that implements our technology readiness level (TRL) uncertainty models. This document contains the instructions for using this wiki-based system.

  7. Computer Sciences and Data Systems, volume 1

    Science.gov (United States)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  8. A super-element approach for structural identification in time domain

    Institute of Scientific and Technical Information of China (English)

    LI Jie; ZHAO Xin

    2006-01-01

    For most time-domain identification methods, a complete measurement of structural responses is required to obtain unique identification results. However, the number of transducers is commonly far smaller than the number of structural degrees of freedom (DOFs) in practical applications, which makes time-domain identification methods rarely feasible for practical systems. A super-element approach is proposed in this study to identify the structural parameters of a large-scale structure in the time domain. The most interesting feature of the proposed super-element approach is its divide-and-conquer ability, which allows large-scale structures to be identified using a relatively small number of transducers. The super-element model used for time-domain identification is first discussed. Then a parameterization procedure based on the sensitivities of response forces is introduced to establish the identification equations of the super-elements. Some principles are suggested for effectively decomposing the whole structure into super-elements for identification purposes. Numerical simulations are conducted at the end of this study. The numerical results show that all structural parameters can be identified using a relatively small number of transducers, and the computational time can also be greatly shortened.
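
    The sensitivity-based parameterization mentioned above can be illustrated with a generic Gauss-Newton update, shown below as a hedged sketch; the paper's actual super-element partitioning and identification equations are not reproduced, and the predict() function and parameter names are hypothetical.

      # Generic sensitivity-based (Gauss-Newton) parameter identification from
      # measured responses; a sketch of the idea only, not the paper's formulation.
      import numpy as np

      def identify(theta0, measured, predict, n_iter=20, eps=1e-6):
          """theta0: initial parameter vector; predict(theta) -> model response vector."""
          theta = np.asarray(theta0, dtype=float)
          for _ in range(n_iter):
              r = measured - predict(theta)                     # response residual
              # finite-difference sensitivity matrix d(response)/d(theta)
              S = np.column_stack([
                  (predict(theta + eps * e) - predict(theta)) / eps
                  for e in np.eye(len(theta))
              ])
              theta = theta + np.linalg.pinv(S) @ r             # Gauss-Newton step
          return theta

      truth = np.array([2.0, 0.5])
      predict = lambda th: np.array([th[0] + th[1], th[0] * th[1], th[0] - th[1]])
      print(identify([1.0, 1.0], predict(truth), predict))      # converges to ~[2.0, 0.5]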

  9. EVALUATION & TRENDS OF SURVEILLANCE SYSTEM NETWORK IN UBIQUITOUS COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-03-01

    Full Text Available With the emergence of ubiquitous computing, the whole scenario of computing has changed, affecting many interdisciplinary fields. This paper envisions the impact of ubiquitous computing on video surveillance systems. With increasing population and highly specific security areas, intelligent monitoring is a major requirement of the modern world. The paper describes the evolution of surveillance systems from analog to multi-sensor ubiquitous systems. It mentions the demand for context-based architectures. It draws out the benefit of merging cloud computing into surveillance to boost the system while reducing cost and maintenance. It analyzes some surveillance system architectures that are designed for ubiquitous deployment. It presents major challenges and opportunities for researchers to make surveillance systems highly efficient and seamlessly embedded in our environments.

  10. Information Hiding based Trusted Computing System Design

    Science.gov (United States)

    2014-07-18

    The project studies trusted computing system design based on information hiding, using intrinsic characteristics of the system (silicon PUFs) and of the environment where the system operates (electrical network frequency, ENF, signals), and how to improve trust in a wireless sensor network.

  11. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  12. Automated fermentation equipment. 2. Computer-fermentor system

    Energy Technology Data Exchange (ETDEWEB)

    Nyeste, L.; Szigeti, L.; Veres, A.; Pungor, E. Jr.; Kurucz, I.; Hollo, J.

    1981-02-01

    An inexpensive computer-operated system suitable for data collection and steady-state optimum control of fermentation processes is presented. With this system, minimum generation time has been determined as a function of temperature and pH in the turbidostat cultivation of a yeast strain. The applicability of the computer-fermentor system is also demonstrated by the determination of the dynamic kLa value.
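
    One common way a dynamic kLa value is obtained is the dynamic (gassing-out) method, sketched below; whether this matches the authors' exact procedure is an assumption, and all values are synthetic.

      # Dynamic gassing-out method: dC/dt = kLa*(C_sat - C), so kLa is the slope of
      # -ln(C_sat - C) versus time. Synthetic data; not the paper's measurements.
      import numpy as np

      def estimate_kla(t, c, c_sat):
          """Fit kLa [1/h] from times t [h] and dissolved-oxygen readings c [mg/L]."""
          y = -np.log(c_sat - c)                 # linear in t with slope kLa
          slope, _ = np.polyfit(t, y, 1)
          return slope

      t = np.linspace(0.0, 0.5, 30)              # hours
      kla_true = 12.0                            # 1/h, synthetic
      c = 8.0 * (1.0 - np.exp(-kla_true * t))    # mg/L, oxygen re-absorption curve
      print(estimate_kla(t, c, c_sat=8.0))       # ~12.0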

  13. Managing trust in information systems by using computer simulations

    OpenAIRE

    Zupančič, Eva

    2009-01-01

    Human factor is more and more important in new information systems and it should be also taken into consideration when developing new systems. Trust issues, which are tightly tied to human factor, are becoming an important topic in computer science. In this work we research trust in IT systems and present computer-based trust management solutions. After a review of qualitative and quantitative methods for trust management, a precise description of a simulation tool for trust management ana...

  14. Personal Computer System for Automatic Coronary Venous Flow Measurement

    OpenAIRE

    Dew, Robert B.

    1985-01-01

    We developed an automated system based on an IBM PC/XT Personal computer to measure coronary venous blood flow during cardiac catheterization. Flow is determined by a thermodilution technique in which a cold saline solution is infused through a catheter into the coronary venous system. Regional temperature fluctuations sensed by the catheter are used to determine great cardiac vein and coronary sinus blood flow. The computer system replaces manual methods of acquiring and analyzing temperatur...
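
    Coronary venous flow by continuous thermodilution is commonly computed from the infusion rate and the blood, injectate, and mixed temperatures; the sketch below uses one common form of that relation (the 1.08 density/specific-heat correction and all numbers are assumptions, not taken from the paper).

      # Continuous-thermodilution (Ganz-type) relation for venous flow; values are
      # illustrative only. Temperatures in deg C, flows in mL/min.
      def coronary_flow(infusion_rate, t_blood, t_injectate, t_mixed, k=1.08):
          """Flow = k * F_i * ((T_blood - T_injectate)/(T_blood - T_mixed) - 1)."""
          return k * infusion_rate * ((t_blood - t_injectate) / (t_blood - t_mixed) - 1.0)

      # Example: 40 mL/min of room-temperature saline cooling venous blood from
      # 37.0 to 33.0 deg C gives a flow of roughly 97 mL/min.
      print(coronary_flow(40.0, t_blood=37.0, t_injectate=24.0, t_mixed=33.0))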

  15. Improving the safety features of general practice computer systems

    OpenAIRE

    Anthony Avery; Boki Savelyich; Sheila Teasdale

    2003-01-01

    General practice computer systems already have a number of important safety features. However, there are problems in that general practitioners (GPs) have come to rely on hazard alerts even though these are not foolproof. Furthermore, GPs do not know how to make the best use of the safety features on their systems. There are a number of solutions that could help to improve the safety features of general practice computer systems and also help to improve the ability of healthcare professionals to use these ...

  16. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  17. Performance Models for Split-execution Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; McCaskey, Alex [ORNL; Schrock, Jonathan [ORNL; Seddiqi, Hadayat [ORNL; Britt, Keith A [ORNL; Imam, Neena [ORNL

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
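
    A hedged toy version of such a behavioural model is sketched below: total time is split into a classical translation (embedding) term and an on-QPU sampling term, which makes the interface cost visible as the problem grows. The coefficients are hypothetical and not taken from the report.

      # Toy additive latency model in the spirit of split-execution performance
      # analysis; coefficients are hypothetical.
      def split_execution_time(n_vars, t_embed_per_var=2e-3, t_anneal=20e-6,
                               n_reads=1000, t_read=200e-6):
          translate = n_vars * t_embed_per_var        # classical-to-quantum translation
          quantum = n_reads * (t_anneal + t_read)     # on-QPU sampling
          return translate, quantum

      # Translation grows with problem size while QPU time stays fixed, so the
      # quantum-classical interface becomes the bottleneck.
      for n in (100, 1000, 10000):
          tr, qu = split_execution_time(n)
          print(f"n={n:6d}  translate={tr:7.3f}s  quantum={qu:5.3f}s")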

  18. Intelligent decision support systems for sustainable computing paradigms and applications

    CERN Document Server

    Abraham, Ajith; Siarry, Patrick; Sheng, Michael

    2017-01-01

    This unique book discusses the latest research, innovative ideas, challenges and computational intelligence (CI) solutions in sustainable computing. It presents novel, in-depth fundamental research on achieving a sustainable lifestyle for society, either from a methodological or from an application perspective. Sustainable computing has expanded to become a significant research area covering the fields of computer science and engineering, electrical engineering and other engineering disciplines, and there has been an increase in the amount of literature on aspects of sustainable computing, such as energy efficiency and natural resource conservation, that emphasizes the role of ICT (information and communications technology) in achieving system design and operation objectives. The energy impact/design of more efficient IT infrastructures is a key challenge in realizing new computing paradigms. The book explores the uses of computational intelligence (CI) techniques for intelligent decision support that can be explo...

  19. Resource requirements for digital computations on electrooptical systems.

    Science.gov (United States)

    Eshaghian, M M; Panda, D K; Kumar, V K

    1991-03-10

    In this paper we study the resource requirements of electrooptical organizations in performing digital computing tasks. We define a generic model of parallel computation using optical interconnects, called the optical model of computation (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Using this model we derive relationships between information transfer and computational resources in solving a given problem. To illustrate our results, we concentrate on a computationally intensive operation, 2-D digital image convolution. Irrespective of the input/output scheme and the order of computation, we show a lower bound of Ω(nw) on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  20. Resource requirements for digital computations on electrooptical systems

    Science.gov (United States)

    Eshaghian, Mary M.; Panda, Dhabaleswar K.; Kumar, V. K. Prasanna

    1991-03-01

    The resource requirements of electrooptical organizations in performing digital computing tasks are studied via a generic model of parallel computation using optical interconnects, called the 'optical model of computation' (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Relationships between information transfer and computational resources in solving a given problem are derived. A computationally intensive operation, two-dimensional digital image convolution, is undertaken. Irrespective of the input/output scheme and the order of computation, a lower bound of Ω(nw) is obtained on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.