WorldWideScience

Sample records for superscalar-based computer system

  1. Computer systems

    Science.gov (United States)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  2. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  3. Computer system identification

    OpenAIRE

    Lesjak, Borut

    2008-01-01

    The concept of computer system identity in computer science bears just as much importance as does the identity of an individual in a human society. Nevertheless, the identity of a computer system is incomparably harder to determine, because there is no standard system of identification we could use and, moreover, a computer system during its life-time is quite indefinite, since all of its regular and necessary hardware and software upgrades soon make it almost unrecognizable: after a number o...

  4. Tensor computations in computer algebra systems

    CERN Document Server

    Korolkova, A V; Sevastyanov, L A

    2014-01-01

    This paper considers three types of tensor computations. On their basis, we attempt to formulate criteria that must be satisfied by a computer algebra system dealing with tensors. We briefly overview the current state of tensor computations in different computer algebra systems. The tensor computations are illustrated with appropriate examples implemented in specific systems: Cadabra and Maxima.

  5. Distributed computer control systems

    Energy Technology Data Exchange (ETDEWEB)

    Suski, G.J.

    1986-01-01

    This book focuses on recent advances in the theory, applications and techniques for distributed computer control systems. Contents (partial): Real-time distributed computer control in a flexible manufacturing system. Semantics and implementation problems of channels in a DCCS specification. Broadcast protocols in distributed computer control systems. Design considerations of distributed control architecture for a thermal power plant. The conic toolset for building distributed systems. Network management issues in distributed control systems. Interprocessor communication system architecture in a distributed control system environment. Uni-level homogenous distributed computer control system and optimal system design. A-nets for DCCS design. A methodology for the specification and design of fault tolerant real time systems. An integrated computer control system - architecture design, engineering methodology and practical experience.

  6. ALMA correlator computer systems

    Science.gov (United States)

    Pisano, Jim; Amestica, Rodrigo; Perez, Jesus

    2004-09-01

    We present a design for the computer systems which control, configure, and monitor the Atacama Large Millimeter Array (ALMA) correlator and process its output. Two distinct computer systems implement this functionality: a rack-mounted PC controls and monitors the correlator, and a cluster of 17 PCs processes the correlator output into raw spectral results. The correlator computer systems interface to other ALMA computers via gigabit Ethernet networks utilizing CORBA and raw socket connections. ALMA Common Software provides the software infrastructure for this distributed computer environment. The control computer interfaces to the correlator via multiple CAN busses and the data processing computer cluster interfaces to the correlator via sixteen dedicated high speed data ports. An independent array-wide hardware timing bus connects to the computer systems and the correlator hardware ensuring synchronous behavior and imposing hard deadlines on the control and data processor computers. An aggregate correlator output of 1 gigabyte per second with 16 millisecond periods and computational data rates of approximately 1 billion floating point operations per second define other hard deadlines for the data processing computer cluster.
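
    As a rough plausibility check of the figures quoted above (a back-of-the-envelope illustration, not a calculation from the ALMA design documents), the per-period and per-port data volumes implied by the abstract can be derived directly; the even split across the sixteen ports is an assumption.

        # Back-of-the-envelope check of the correlator output figures quoted above.
        # The aggregate rate, period and port count come from the abstract; the even
        # split across ports is an assumption made only for illustration.

        aggregate_rate_bytes_per_s = 1_000_000_000   # ~1 gigabyte per second of output
        period_s = 0.016                             # 16 millisecond output periods
        num_data_ports = 16                          # dedicated high-speed data ports

        bytes_per_period = aggregate_rate_bytes_per_s * period_s
        bytes_per_port_per_period = bytes_per_period / num_data_ports

        print(f"{bytes_per_period / 1e6:.0f} MB arrive every {period_s * 1000:.0f} ms")
        print(f"~{bytes_per_port_per_period / 1e6:.0f} MB per port per period")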

  7. Fault tolerant computing systems

    CERN Document Server

    Randell, B

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (15 refs).

  8. Computer controlled antenna system

    Science.gov (United States)

    Raumann, N. A.

    1972-01-01

    The application of small computers using digital techniques for operating the servo and control system of large antennas is discussed. The advantages of the system are described. The techniques were evaluated with a forty foot antenna and the Sigma V computer. Programs have been completed which drive the antenna directly without the need for a servo amplifier, antenna position programmer or a scan generator.

  9. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Computer systems are a critical component of the human society in the 21st century. Economic sector, defense, security, energy, telecommunications, industrial production, finance and other vital infrastructure depend on computer systems that operate at local, national or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of the human society, their vulnerability and exposure to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  10. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new-generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems. The book describes design solutions for a new computer system - an evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models - and pursues simplicity, reliability, and scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  11. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, thereby protecting the group of the virtual machines from actions performed by the adversary.
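
    The patent abstract describes a workflow rather than an API. The following is a minimal, hypothetical sketch of that workflow in Python; every class and method name is invented for illustration and does not correspond to any real product.

        # Hypothetical sketch of the deception-network workflow described above.
        # The classes below are invented; they only mirror the three steps in the abstract.

        class DeceptionController:
            def __init__(self, operating_network, deception_network):
                self.operating_network = operating_network
                self.deception_network = deception_network

            def on_adversary_access(self, vm_group, adversary_connections):
                # 1. Clone the accessed virtual machines into the deception network.
                clones = {vm: self.deception_network.clone(vm) for vm in vm_group}

                # 2. Emulate components of the operating network inside the deception
                #    network, so the clones see an environment that looks real.
                self.deception_network.emulate(self.operating_network.components())

                # 3. Move the adversary's connections from the real virtual machines
                #    to their clones, protecting the originals from further actions.
                for connection in adversary_connections:
                    connection.retarget(clones[connection.target_vm])
                return clones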

  12. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

    The report describes the operation and troubleshooting of the main computers and KAERINet. The results of the project are as follows: 1. The operation and troubleshooting of the main computer systems (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. The operation and troubleshooting of KAERINet (PC to host connection, host to host connection, file transfer, electronic mail, X.25, CATV, etc.). 3. The development of applications - the Electronic Document Approval and Delivery System and installation of the ORACLE utility program. 22 tabs., 12 figs. (Author)

  13. Computer Vision Systems

    Science.gov (United States)

    Gunasekaran, Sundaram

    Food quality is of paramount consideration for all consumers, and its importance is perhaps only second to food safety. By some definition, food safety is also incorporated into the broad categorization of food quality. Hence, the need for careful and accurate evaluation of food quality is at the forefront of research and development in both academia and industry. Among the many available methods for food quality evaluation, computer vision has proven to be the most powerful, especially for nondestructively extracting and quantifying many features that have direct relevance to food quality assessment and control. Furthermore, computer vision systems serve to rapidly evaluate the most readily observable food quality attributes - external characteristics such as color, shape, size, surface texture, etc. In addition, it is now possible, using advanced computer vision technologies, to “see” inside a food product and/or package to examine important quality attributes ordinarily unavailable to human evaluators. With rapid advances in electronic hardware and other associated imaging technologies, the cost-effectiveness and speed of computer vision systems have greatly improved and many practical systems are already in place in the food industry.

  14. Computational systems chemical biology.

    Science.gov (United States)

    Oprea, Tudor I; May, Elebeoba E; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology (SCB) (Nat Chem Biol 3: 447-450, 2007). The overarching goal of computational SCB is to develop tools for integrated chemical-biological data acquisition, filtering and processing, by taking into account relevant information related to interactions between proteins and small molecules, possible metabolic transformations of small molecules, as well as associated information related to genes, networks, small molecules, and, where applicable, mutants and variants of those proteins. There is yet an unmet need to develop an integrated in silico pharmacology/systems biology continuum that embeds drug-target-clinical outcome (DTCO) triplets, a capability that is vital to the future of chemical biology, pharmacology, and systems biology. Through the development of the SCB approach, scientists will be able to start addressing, in an integrated simulation environment, questions that make the best use of our ever-growing chemical and biological data repositories at the system-wide level. This chapter reviews some of the major research concepts and describes key components that constitute the emerging area of computational systems chemical biology.

  15. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  16. Secure computing on reconfigurable systems

    NARCIS (Netherlands)

    Fernandes Chaves, R.J.

    2007-01-01

    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. SC provides a protected and reliable computational environment, where data security and protection against malicious attacks to the system is assured. SC is strongly based on encryption algorithms and on the

  18. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute x86-64 machine code, and recommends th...

  19. Central nervous system and computation.

    Science.gov (United States)

    Guidolin, Diego; Albertin, Giovanna; Guescini, Michele; Fuxe, Kjell; Agnati, Luigi F

    2011-12-01

    Computational systems are useful in neuroscience in many ways. For instance, they may be used to construct maps of brain structure and activation, or to describe brain processes mathematically. Furthermore, they inspired a powerful theory of brain function, in which the brain is viewed as a system characterized by intrinsic computational activities or as a "computational information processor." Although many neuroscientists believe that neural systems really perform computations, some are more cautious about computationalism or reject it. Thus, does the brain really compute? Answering this question requires getting clear on a definition of computation that is able to draw a line between physical systems that compute and systems that do not, so that we can discern on which side of the line the brain (or parts of it) could fall. In order to shed some light on the role of computational processes in brain function, available neurobiological data will be summarized from the standpoint of a recently proposed taxonomy of notions of computation, with the aim of identifying which brain processes can be considered computational. The emerging picture shows the brain as a very peculiar system, in which genuine computational features act in concert with noncomputational dynamical processes, leading to continuous self-organization and remodeling under the action of external stimuli from the environment and from the rest of the organism.

  20. Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Friday, Adrian

    2009-01-01

    First introduced two decades ago, the term ubiquitous computing is now part of the common vernacular. Ubicomp, as it is commonly called, has grown not just quickly but broadly so as to encompass a wealth of concepts and technology that serves any number of purposes across all of human endeavor..., an original ubicomp pioneer, Ubiquitous Computing Fundamentals brings together eleven ubiquitous computing trailblazers who each report on his or her area of expertise. Starting with a historical introduction, the book moves on to summarize a number of self-contained topics. Taking a decidedly human... perspective, the book includes discussion on how to observe people in their natural environments and evaluate the critical points where ubiquitous computing technologies can improve their lives. Among a range of topics this book examines: How to build an infrastructure that supports ubiquitous computing...

  1. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  2. New computing systems and their impact on computational mechanics

    Science.gov (United States)

    Noor, Ahmed K.

    1989-01-01

    Recent advances in computer technology that are likely to impact computational mechanics are reviewed. The technical needs for computational mechanics technology are outlined. The major features of new and projected computing systems, including supersystems, parallel processing machines, special-purpose computing hardware, and small systems are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed, and a novel partitioning strategy is outlined for maximizing the degree of parallelism on multiprocessor computers with a shared memory.

  3. Computer Security Systems Enable Access.

    Science.gov (United States)

    Riggen, Gary

    1989-01-01

    A good security system enables access and protects information from damage or tampering, but the most important aspects of a security system aren't technical. A security procedures manual addresses the human element of computer security. (MLW)

  4. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  5. Dynamical Systems Some Computational Problems

    CERN Document Server

    Guckenheimer, J; Guckenheimer, John; Worfolk, Patrick

    1993-01-01

    We present several topics involving the computation of dynamical systems. The emphasis is on work in progress and the presentation is informal -- there are many technical details which are not fully discussed. The topics are chosen to demonstrate the various interactions between numerical computation and mathematical theory in the area of dynamical systems. We present an algorithm for the computation of stable manifolds of equilibrium points, describe the computation of Hopf bifurcations for equilibria in parametrized families of vector fields, survey the results of studies of codimension two global bifurcations, discuss a numerical analysis of the Hodgkin and Huxley equations, and describe some of the effects of symmetry on local bifurcation.
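
    As a small, self-contained illustration of one of the computations listed above (not the authors' code), a Hopf bifurcation can be located numerically by tracking when a complex-conjugate pair of Jacobian eigenvalues crosses the imaginary axis as a parameter varies; the normal-form example and the use of Python/NumPy below are assumptions made for brevity.

        import numpy as np

        # Hopf normal form (an assumed textbook example, not a system from the paper):
        #   x' = mu*x - y - x*(x^2 + y^2),   y' = x + mu*y - y*(x^2 + y^2)
        # The origin is an equilibrium for every mu; its Jacobian there is [[mu, -1], [1, mu]].

        def jacobian_at_origin(mu):
            return np.array([[mu, -1.0], [1.0, mu]])

        prev_max_real = None
        for mu in np.linspace(-0.5, 0.5, 101):
            eigs = np.linalg.eigvals(jacobian_at_origin(mu))
            max_real = eigs.real.max()
            # A Hopf bifurcation is signalled when a complex pair crosses the imaginary axis.
            if (prev_max_real is not None and prev_max_real < 0.0 <= max_real
                    and abs(eigs.imag).max() > 0.0):
                print(f"complex eigenvalue pair crosses the imaginary axis near mu = {mu:.2f}")
            prev_max_real = max_real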

  6. Computational Systems Chemical Biology

    OpenAIRE

    Oprea, Tudor I.; Elebeoba E. May; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

    There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole body physiologically-based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology systems chemical biology, SCB (Oprea et al., 2007).

  7. Hybridity in Embedded Computing Systems

    Institute of Scientific and Technical Information of China (English)

    虞慧群; 孙永强

    1996-01-01

    An embedded system is a system in which a computer is used as a component of a larger device. In this paper, we study hybridity in embedded systems and present an interval-based temporal logic to express and reason about hybrid properties of such systems.

  8. Computer algebra in systems biology

    CERN Document Server

    Laubenbacher, Reinhard

    2007-01-01

    Systems biology focuses on the study of entire biological systems rather than on their individual components. With the emergence of high-throughput data generation technologies for molecular biology and the development of advanced mathematical modeling techniques, this field promises to provide important new insights. At the same time, with the availability of increasingly powerful computers, computer algebra has developed into a useful tool for many applications. This article illustrates the use of computer algebra in systems biology by way of a well-known gene regulatory network, the Lac Operon in the bacterium E. coli.
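
    To make the modeling style concrete, below is a deliberately tiny Boolean caricature of lac operon regulation written in Python (an illustration invented here, not the model analyzed in the article): expression requires the inducer and is blocked by glucose, and the fixed points of the update map play the role of steady states.

        from itertools import product

        # A toy Boolean caricature of lac operon regulation (invented for illustration,
        # not the model from the cited article).  State = (M, E, L):
        #   M : lac mRNA present, E : permease/beta-galactosidase present, L : internal lactose
        # External lactose (Le) and glucose (Ge) are treated as fixed inputs.

        def step(state, Le, Ge):
            M, E, L = state
            M_next = (not Ge) and L                   # transcription needs inducer, blocked by glucose
            E_next = M                                # proteins follow mRNA
            L_next = (not Ge) and ((Le and E) or L)   # lactose enters via the permease
            return (M_next, E_next, L_next)

        def fixed_points(Le, Ge):
            return [s for s in product([False, True], repeat=3) if step(s, Le, Ge) == s]

        print("glucose present:", fixed_points(Le=True, Ge=True))    # operon stays off
        print("lactose only   :", fixed_points(Le=True, Ge=False))   # an induced steady state appears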

  9. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  11. Robot computer problem solving system

    Science.gov (United States)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken were formulated in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  12. Operating systems. [of computers

    Science.gov (United States)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism for controlling primitive processes that must be synchronized. At higher levels lie, in rising order, the access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
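
    Since the semaphore is singled out above as the mechanism for synchronizing primitive processes, the fragment below gives a minimal producer/consumer illustration; Python threads stand in for the processes, so this is an analogy to the kernel-level primitive rather than operating-system code.

        import threading

        items = []
        available = threading.Semaphore(0)       # counts produced-but-unconsumed items

        def producer():
            for i in range(3):
                items.append(i)
                available.release()               # signal: one more item can be consumed

        def consumer():
            for _ in range(3):
                available.acquire()               # block until the producer has signalled
                print("consumed", items.pop(0))

        t_prod = threading.Thread(target=producer)
        t_cons = threading.Thread(target=consumer)
        t_cons.start(); t_prod.start()
        t_prod.join(); t_cons.join()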

  13. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  14. On Dependability of Computing Systems

    Institute of Scientific and Technical Information of China (English)

    XU Shiyi

    1999-01-01

    With the rapid development and wide application of computing systems, on which more reliance has been put, a dependable system will be much more important than ever. This paper is first aimed at giving informal but precise definitions characterizing the various attributes of dependability of computing systems, and then the importance of (and the relationships among) all the attributes are explained. Dependability is first introduced as a global concept which subsumes the usual attributes of reliability, availability, maintainability, safety and security. The basic definitions given here are then commented on and supplemented by detailed material and additional explanations in the subsequent sections. The presentation has been structured so as to attract the reader's attention to the important attributes of dependability: a search for a small number of concise concepts enabling the dependability attributes to be expressed as clearly as possible, and the use of terms which are identical or as close as possible to those commonly used nowadays. This paper is also intended to provoke people's interest in designing a dependable computing system.

  15. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita

    2011-01-01

    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  16. Computers in Information Sciences: On-Line Systems.

    Science.gov (United States)

    COMPUTERS, *BIBLIOGRAPHIES, *ONLINE SYSTEMS, *INFORMATION SCIENCES, DATA PROCESSING, DATA MANAGEMENT, COMPUTER PROGRAMMING, INFORMATION RETRIEVAL, COMPUTER GRAPHICS, DIGITAL COMPUTERS, ANALOG COMPUTERS.

  17. Aging and computational systems biology.

    Science.gov (United States)

    Mooney, Kathleen M; Morgan, Amy E; Mc Auley, Mark T

    2016-01-01

    Aging research is undergoing a paradigm shift, which has led to new and innovative methods of exploring this complex phenomenon. The systems biology approach endeavors to understand biological systems in a holistic manner, by taking account of intrinsic interactions, while also attempting to account for the impact of external inputs, such as diet. A key technique employed in systems biology is computational modeling, which involves mathematically describing and simulating the dynamics of biological systems. Although a large number of computational models have been developed in recent years, these models have focused on various discrete components of the aging process, and to date no model has succeeded in completely representing the full scope of aging. Combining existing models or developing new models may help to address this need and in so doing could help achieve an improved understanding of the intrinsic mechanisms which underpin aging.

  18. Computational Systems for Multidisciplinary Applications

    Science.gov (United States)

    Soni, Bharat; Haupt, Tomasz; Koomullil, Roy; Luke, Edward; Thompson, David

    2002-01-01

    In this paper, we briefly describe our efforts to develop complex simulation systems. We focus first on four key infrastructure items: enterprise computational services, simulation synthesis, geometry modeling and mesh generation, and a fluid flow solver for arbitrary meshes. We conclude by presenting three diverse applications developed using these technologies.

  19. Computational Aeroacoustic Analysis System Development

    Science.gov (United States)

    Hadid, A.; Lin, W.; Ascoli, E.; Barson, S.; Sindir, M.

    2001-01-01

    Many industrial and commercial products operate in a dynamic flow environment and the aerodynamically generated noise has become a very important factor in the design of these products. In light of the importance in characterizing this dynamic environment, Rocketdyne has initiated a multiyear effort to develop an advanced general-purpose Computational Aeroacoustic Analysis System (CAAS) to address these issues. This system will provide a high fidelity predictive capability for aeroacoustic design and analysis. The numerical platform is able to provide high temporal and spatial accuracy that is required for aeroacoustic calculations through the development of a high order spectral element numerical algorithm. The analysis system is integrated with well-established CAE tools, such as a graphical user interface (GUI) through PATRAN, to provide cost-effective access to all of the necessary tools. These include preprocessing (geometry import, grid generation and boundary condition specification), code set up (problem specification, user parameter definition, etc.), and postprocessing. The purpose of the present paper is to assess the feasibility of such a system and to demonstrate the efficiency and accuracy of the numerical algorithm through numerical examples. Computations of vortex shedding noise were carried out in the context of a two-dimensional low Mach number turbulent flow past a square cylinder. The computational aeroacoustic approach that is used in CAAS relies on coupling a base flow solver to the acoustic solver throughout a computational cycle. The unsteady fluid motion, which is responsible for both the generation and propagation of acoustic waves, is calculated using a high order flow solver. The results of the flow field are then passed to the acoustic solver through an interpolator to map the field values into the acoustic grid. The acoustic field, which is governed by the linearized Euler equations, is then calculated using the flow results computed

  20. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  1. Redundant computing for exascale systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stearley, Jon R.; Riesen, Rolf E.; Laros, James H., III; Ferreira, Kurt Brian; Pedretti, Kevin Thomas Tauke; Oldfield, Ron A.; Brightwell, Ronald Brian

    2010-12-01

    Exascale systems will have hundreds of thousands of compute nodes and millions of components, which increases the likelihood of faults. Today, applications use checkpoint/restart to recover from these faults. Even under ideal conditions, applications running on more than 50,000 nodes will spend more than half of their total running time saving checkpoints, restarting, and redoing work that was lost. Redundant computing is a method that allows an application to continue working even when failures occur. Instead of each failure causing an application interrupt, multiple failures can be absorbed by the application until redundancy is exhausted. In this paper we present a method to analyze the benefits of redundant computing, present simulation results of the cost, and compare it to other proposed methods for fault resilience.
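
    The checkpoint/restart overhead that motivates redundant computing can be illustrated with the standard first-order (Young/Daly) model; the formula is textbook material rather than a result from this paper, and the parameter values below are invented.

        import math

        # First-order checkpoint/restart overhead model (Young/Daly approximation).
        # All parameter values are illustrative assumptions, not figures from the paper.

        nodes = 50_000
        node_mtbf_hours = 5 * 365 * 24                # assume 5 years MTBF per node
        system_mtbf_hours = node_mtbf_hours / nodes   # failures arrive 'nodes' times faster
        checkpoint_time_hours = 0.25                  # assume 15 minutes per checkpoint

        tau_opt = math.sqrt(2 * checkpoint_time_hours * system_mtbf_hours)
        wasted = checkpoint_time_hours / tau_opt + tau_opt / (2 * system_mtbf_hours)

        print(f"system MTBF ~ {system_mtbf_hours:.2f} h, optimal checkpoint interval ~ {tau_opt:.2f} h")
        print(f"roughly {100 * wasted:.0f}% of wall-clock time goes to checkpoints and rework")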

  2. Computer-aided system design

    Science.gov (United States)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  3. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big pictur

  4. SELF LEARNING COMPUTER TROUBLESHOOTING EXPERT SYSTEM

    OpenAIRE

    Amanuel Ayde Ergado

    2016-01-01

    In the computer domain, professionals are limited in number, but the number of institutions looking for computer professionals is high. The aim of this study is to develop a self-learning expert system that provides troubleshooting information about problems occurring in the computer system, so that information and communication technology technicians and computer users can solve problems effectively and efficiently and utilize computer and computer-related resources. Domain know...

  5. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  6. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  7. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
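
    At its simplest, the merged machine/program characterization described above reduces to a dot product of per-operation counts (a property of the program) with per-operation times (a property of the machine); the operation classes and numbers in this sketch are invented and serve only to show the shape of the estimate.

        # Estimating execution time from separate program and machine characterizations.
        # Operation classes and all numbers are invented for illustration.

        program_profile = {          # dynamic operation counts, measured once per program
            "flop":    4.0e9,
            "mem_ref": 6.0e9,
            "branch":  1.5e9,
        }

        machine_profile = {          # seconds per operation, measured once per machine
            "flop":    0.8e-9,
            "mem_ref": 2.5e-9,
            "branch":  1.2e-9,
        }

        estimate_s = sum(count * machine_profile[op] for op, count in program_profile.items())
        print(f"predicted run time ~ {estimate_s:.1f} s")   # no direct timing of this pairing needed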

  8. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: Morphological Image Analysis for Computer Vision Applications; Methods for Detecting of Structural Changes in Computer Vision Systems; Hierarchical Adaptive KL-based Transform: Algorithms and Applications; Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores; A Way of Energy Analysis for Image and Video Sequence Processing; Optimal Measurement of Visual Motion Across Spatial and Temporal Scales; Scene Analysis Using Morphological Mathematics and Fuzzy Logic; Digital Video Stabilization in Static and Dynamic Scenes; Implementation of Hadamard Matrices for Image Processing; A Generalized Criterion ...

  9. When does a physical system compute?

    Science.gov (United States)

    Horsman, Clare; Stepney, Susan; Wagner, Rob C; Kendon, Viv

    2014-09-08

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not; leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a 'computational entity', and its critical role in defining when computing is taking place in physical systems.

  10. '95 computer system operation project

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-12-01

    This report describes overall project works related to the operation of mainframe computers, the management of nuclear computer codes, and the nuclear computer code conversion project. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The completion of the computer code conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  11. Computing abstractions of nonlinear systems

    CERN Document Server

    Reißig, Gunther

    2009-01-01

    We present an efficient algorithm for computing discrete abstractions of arbitrary memory span for nonlinear discrete-time and sampled systems, in which, apart from possibly numerically integrating ordinary differential equations, the only nontrivial operation to be performed repeatedly is to distinguish empty from non-empty convex polyhedra. We also provide sufficient conditions for the convexity of attainable sets, which is an important requirement for the correctness of the method we propose. It turns out that this requirement can be met under rather mild conditions, which essentially reduce to sufficient smoothness in the case of sampled systems. Practicability of our approach in the design of discrete controllers for continuous plants is demonstrated by an example.
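
    The one nontrivial primitive mentioned in the abstract, distinguishing empty from non-empty convex polyhedra, can be phrased as a linear-programming feasibility test; the sketch below uses SciPy as an implementation convenience, which is an assumption rather than the paper's tooling.

        import numpy as np
        from scipy.optimize import linprog

        def polyhedron_is_empty(A, b):
            """Return True if the polyhedron {x : A x <= b} is empty (LP feasibility test)."""
            c = np.zeros(A.shape[1])        # minimize the constant 0; only feasibility matters
            res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * A.shape[1])
            return not res.success

        A = np.array([[1.0], [-1.0]])
        print(polyhedron_is_empty(A, np.array([1.0, -2.0])))   # x <= 1 and x >= 2: True (empty)
        print(polyhedron_is_empty(A, np.array([1.0,  0.0])))   # x <= 1 and x >= 0: False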

  12. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  13. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  14. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. The book enables readers to address a variety of security threats to embedded hardware and software, and describes the design of secure wireless sensor networks to address secure authen...

  15. Using Expert Systems For Computational Tasks

    Science.gov (United States)

    Duke, Eugene L.; Regenie, Victoria A.; Brazee, Marylouise; Brumbaugh, Randal W.

    1990-01-01

    Transformation technique enables inefficient expert systems to run in real time. Paper suggests use of knowledge compiler to transform knowledge base and inference mechanism of expert-system computer program into conventional computer program. Main benefit, faster execution and reduced processing demands. In avionic systems, transformation reduces need for special-purpose computers.
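
    A toy example of the knowledge-compiler idea (invented here; it is not the NASA tool the paper describes): a small rule base is translated once, offline, into ordinary procedural code, so that no inference engine has to run in real time.

        # Toy "knowledge compilation": turn a rule base into ordinary procedural code.
        # The rules and the generated function are invented purely for illustration.

        rules = [
            ("airspeed < 120",            "warnings.append('low airspeed')"),
            ("fuel_kg < 500",             "warnings.append('low fuel')"),
            ("hydraulic_pressure < 1800", "warnings.append('low hydraulic pressure')"),
        ]

        src = "def monitor(airspeed, fuel_kg, hydraulic_pressure):\n    warnings = []\n"
        for condition, action in rules:
            src += f"    if {condition}:\n        {action}\n"
        src += "    return warnings\n"

        namespace = {}
        exec(src, namespace)     # a real compiler would emit this source to a file instead
        print(namespace["monitor"](airspeed=110, fuel_kg=800, hydraulic_pressure=1700))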

  16. Software For Monitoring VAX Computer Systems

    Science.gov (United States)

    Farkas, Les; Don, Ken; Lavery, David; Baron, Amy

    1994-01-01

    VAX Continuous Monitoring System (VAXCMS) computer program developed at NASA Headquarters to aid system managers in monitoring performances of VAX computer systems through generation of graphic images summarizing trends in performance metrics over time. VAXCMS written in DCL and VAX FORTRAN for use with DEC VAX-series computers running VMS 5.1 or later.

  17. Computer Aided Control System Design (CACSD)

    Science.gov (United States)

    Stoner, Frank T.

    1993-01-01

    The design of modern aerospace systems relies on the efficient utilization of computational resources and the availability of computational tools to provide accurate system modeling. This research focuses on the development of a computer aided control system design application which provides a full range of stability analysis and control design capabilities for aerospace vehicles.

  18. Impact of new computing systems on finite element computations

    Science.gov (United States)

    Noor, A. K.; Storassili, O. O.; Fulton, R. E.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified.

  19. Transient Faults in Computer Systems

    Science.gov (United States)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.
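
    A minimal sketch of the certification-trail idea, using sorting as the example (a simplification for illustration, not code from the report): the primary computation leaves behind a trail that a much cheaper, independent second phase uses to confirm the output, so a transient fault that corrupts either one is caught.

        # Certification-trail style error detection, illustrated on sorting.
        # The primary run produces a result plus a trail (a permutation of indices);
        # the checker verifies the result against the trail in linear time.

        def primary_sort(xs):
            order = sorted(range(len(xs)), key=lambda i: xs[i])
            return [xs[i] for i in order], order

        def check(xs, result, trail):
            if len(trail) != len(xs) or len(result) != len(xs):
                return False
            seen = [False] * len(xs)
            for i in trail:                                   # the trail must be a permutation
                if not 0 <= i < len(xs) or seen[i]:
                    return False
                seen[i] = True
            if any(result[k] != xs[i] for k, i in enumerate(trail)):
                return False                                  # the result must match the trail
            return all(result[k] <= result[k + 1] for k in range(len(result) - 1))

        data = [5, 3, 8, 1]
        result, trail = primary_sort(data)
        print(check(data, result, trail))                     # True unless a fault corrupted something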

  20. Computer system reliability safety and usability

    CERN Document Server

    Dhillon, BS

    2013-01-01

    Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems.After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,

  1. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  2. Conflict Resolution in Computer Systems

    Directory of Open Access Journals (Sweden)

    G. P. Mojarov

    2015-01-01

    A conflict situation in computer systems (CS) is the phenomenon arising when processes have multi-access to shared resources and none of the involved processes can proceed, because each is waiting for certain resources locked by other processes which, in turn, are in a similar position. The conflict situation is also called a deadlock, and it has a quite clear impact on the CS state. Finding practical algorithms to resolve such impasses is of significant applied importance for ensuring the information security of the computing process, and the presented article is therefore aimed at solving a relevant problem. The gravity of the situation depends on the types of processes in a deadlock, the types of resources used, the number of processes, and many other factors. A disadvantage of the deadlock-prevention method used in many modern operating systems, based on preliminary planning of the resources required by a process, is obvious: waiting time can be overlong. The prevention method based on process interruption and deallocation of its resources is very specific and not very effective when a set of polytypic resources is requested dynamically. The drawback of another method, preventing deadlock by ordering resources, is the restriction it places on possible sequences of resource requests. A different way of combating deadlocks is deadlock avoidance, which relies on predicting impasses before they appear. There are known methods [1,4,5] to define and prevent conditions under which deadlocks may occur, using preliminary information on what resources a running process can request. Before allocating a free resource to a process, a test of a state "safety" condition is performed. The state is "safe" if impasses cannot occur in the future as a result of the resource allocation; otherwise the state is considered "hazardous" and the allocation is postponed. The obvious
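
    The state "safety" test described above is, in essence, the banker's-algorithm safety check: an allocation is granted only if some ordering of the processes can still run to completion with the resources that remain. The sketch below (with textbook example data, not data from the article) makes the test concrete.

        # Banker's-algorithm style safety test.  A state is "safe" if the processes can be
        # ordered so each one's remaining need can be met and its allocation then released.
        # The matrices below are a standard textbook example, not data from the article.

        def is_safe(available, allocation, need):
            work = list(available)
            finished = [False] * len(allocation)
            progressed = True
            while progressed:
                progressed = False
                for p in range(len(allocation)):
                    if not finished[p] and all(need[p][r] <= work[r] for r in range(len(work))):
                        for r in range(len(work)):
                            work[r] += allocation[p][r]   # p can finish and release its resources
                        finished[p] = True
                        progressed = True
            return all(finished)

        available  = [3, 3, 2]
        allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
        need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
        print(is_safe(available, allocation, need))           # True: a safe ordering exists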

  3. Digital optical computers at the optoelectronic computing systems center

    Science.gov (United States)

    Jordan, Harry F.

    1991-01-01

    The Digital Optical Computing Program within the National Science Foundation Engineering Research Center for Opto-electronic Computing Systems has as its specific goal research on optical computing architectures suitable for use at the highest possible speeds. The program can be targeted toward exploiting the time domain because other programs in the Center are pursuing research on parallel optical systems, exploiting optical interconnection and optical devices and materials. Using a general purpose computing architecture as the focus, we are developing design techniques, tools and architecture for operation at the speed of light limit. Experimental work is being done with the somewhat low speed components currently available but with architectures which will scale up in speed as faster devices are developed. The design algorithms and tools developed for a general purpose, stored program computer are being applied to other systems such as optimally controlled optical communication networks.

  4. The Remote Computer Control (RCC) system

    Science.gov (United States)

    Holmes, W.

    1980-01-01

    A system to remotely control job flow on a host computer from any touchtone telephone is briefly described. Using this system a computer programmer can submit jobs to a host computer from any touchtone telephone. In addition, the system can be instructed by the user to call back when a job is finished. Because of this system, every touchtone telephone becomes a conversant computer peripheral. This system, known as the Remote Computer Control (RCC) system, utilizes touchtone input, touchtone output, voice input, and voice output. The RCC system is microprocessor-based and currently uses the INTEL 80/30 microcomputer. Using the RCC system a user can submit, cancel, and check the status of jobs on a host computer. The RCC system peripherals consist of a CRT for operator control, a printer for logging all activity, mass storage for the storage of user parameters, and a PROM card for program storage.

  5. Implementation of Computational Electromagnetic on Distributed Systems

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The new generation of technology could raise the bar for distributed computing, and solving computational electromagnetics problems on a distributed system with parallel computing techniques appears to be a trend. In this paper, we analyze the parallel characteristics of the distributed system and the possibility of setting up a tightly coupled distributed system using the LAN in our lab. An analysis of the performance of different computational methods, such as FEM, MOM, FDTD and the finite difference method, is given. Our work on setting up a distributed system and the performance of the test bed are also included. Finally, we mention the implementation of one of our computational electromagnetic codes.

  6. Cybersecurity of embedded computers systems

    OpenAIRE

    Carlioz, Jean

    2016-01-01

    International audience; Several articles have recently raised the issue of the computer security of commercial flights by evoking the "connected aircraft, hackers target", "Wi-Fi on planes, an open door for hackers?" or "Can you hack the computer of an Airbus or a Boeing?". The feared scenario consists in a takeover of operational aircraft software that intentionally causes an accident. Moreover, several computer security experts have lately announced they had detected flaws in embedded syste...

  7. Applied computation and security systems

    CERN Document Server

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu

    2015-01-01

    This book contains the extended versions of the works that were presented and discussed at the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014), held during April 18-20, 2014 in Kolkata, India. The symposium was jointly organized by the AGH University of Science & Technology, Cracow, Poland and the University of Calcutta, India. Volume I of this double-volume book contains fourteen high-quality chapters in three parts. Part 1, on Pattern Recognition, presents four chapters. Part 2, on Imaging and Healthcare Applications, contains four more chapters. Part 3 of this volume, on Wireless Sensor Networking, includes as many as six chapters. Volume II of the book has three parts presenting a total of eleven chapters. Part 4 consists of five excellent chapters on Software Engineering, ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  8. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues to advance, first-generation practical quantum computers will become available to ordinary users in the cloud, much like IBM's Quantum Experience today. Clients can remotely access the quantum servers using some simple devices. In such a situation, it is of prime importance to protect the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step towards constructing a framework of blind quantum computation for hybrid systems, which provides a more feasible way towards scalable blind quantum computation.

  9. Computer Simulation and Computability of Biological Systems

    CERN Document Server

    Baianu, I C

    2004-01-01

    The ability to simulate a biological organism by employing a computer is related to the ability of the computer to calculate the behavior of such a dynamical system, or the "computability" of the system. However, the two questions of computability and simulation are not equivalent. Since the question of computability can be given a precise answer in terms of recursive functions, automata theory and dynamical systems, it will be appropriate to consider it first. The more elusive question of adequate simulation of biological systems by a computer will then be addressed, and a possible connection between the two answers will be considered. A symbolic, algebraic-topological "quantum computer" (as introduced in Baianu, 1971b) is suggested here to provide one such potential means for adequate biological simulations, based on QMV Quantum Logic and meta-Categorical Modeling, as for example in a QMV-based Quantum-Topos (Baianu and Glazebrook, 2004).

  10. The Computational Complexity of Evolving Systems

    NARCIS (Netherlands)

    Verbaan, P.R.A.

    2006-01-01

    Evolving systems are systems that change over time. Examples of evolving systems are computers with soft-and hardware upgrades and dynamic networks of computers that communicate with each other, but also colonies of cooperating organisms or cells within a single organism. In this research, several m

  11. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate new and efficient computational methods of modeling nonlinear aeroelastic systems. The...

  12. ACSES, An Automated Computer Science Education System.

    Science.gov (United States)

    Nievergelt, Jurg; And Others

    A project to accommodate the large and increasing enrollment in introductory computer science courses by automating them with a subsystem for computer science instruction on the PLATO IV Computer-Based Education system at the University of Illinois was started. The subsystem was intended to be used for supplementary instruction at the University…

  13. Task allocation in a distributed computing system

    Science.gov (United States)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
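
    As an illustration of the static, load-balancing style of allocation discussed here, the sketch below greedily assigns each task to the currently least-loaded processor; the task costs and processor count are hypothetical inputs, not parameters from the paper:

        import heapq

        def allocate_tasks(task_costs, num_processors):
            """Greedy static allocation: largest tasks first, each to the least-loaded processor."""
            heap = [(0.0, p) for p in range(num_processors)]  # (current load, processor id)
            heapq.heapify(heap)
            assignment = {}
            for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
                load, proc = heapq.heappop(heap)
                assignment[task] = proc
                heapq.heappush(heap, (load + cost, proc))
            return assignment

        print(allocate_tasks({"fft": 5.0, "io": 1.0, "solve": 8.0, "log": 0.5}, 2))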

  14. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems, such as the amoeba system, argus, Andrew, and grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming, synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  15. Comparing the architecture of Grid Computing and Cloud Computing systems

    Directory of Open Access Journals (Sweden)

    Abdollah Doavi

    2015-09-01

    Full Text Available Grid computing, or computationally connected networks, is a new network model that makes massive computational operations possible using connected resources; in fact, it is a new generation of distributed networks. Grid architecture is recommended because the widespread nature of the Internet creates an environment, called the 'Grid', in which a scalable, high-performance, generalized and secure system can be built. The central architecture directed at this goal is a firmware layer named GridOS. The term 'cloud computing' refers to the development and deployment of Internet-based computing technology. It is a style of computing in which IT-related capabilities are offered as a service, allowing users to access technology-based services on the Internet without specific knowledge of the underlying technology and without having to control the IT infrastructure that supports it. In this paper, general explanations are given of Grid and Cloud systems. Then the components and services provided by these systems are examined, together with their security.

  16. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real-world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  17. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common form of computer-system simulator is software running on standard computers. One promising approach to improving simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field-programmable gate arrays (FPGAs). This manuscript describes various approaches to using FPGAs to accelerate software-implemented simulation of computer systems, together with selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  18. Formal Protection Architecture for Cloud Computing System

    Institute of Scientific and Technical Information of China (English)

    Yasha Chen; Jianpeng Zhao; Junmao Zhu; Fei Yan

    2014-01-01

    Cloud computing systems play a vital role in national security. This paper describes a conceptual framework called dual-system architecture for protecting computing environments. While attempting to be logical and rigorous, a heavily formal method is avoided, and the paper adopts the process algebra Communicating Sequential Processes.

  19. Computer Literacy in a Distance Education System

    Science.gov (United States)

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with seven skills of computer (ICDL) usage. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  20. Computer-Controlled, Motorized Positioning System

    Science.gov (United States)

    Vargas-Aburto, Carlos; Liff, Dale R.

    1994-01-01

    Computer-controlled, motorized positioning system developed for use in robotic manipulation of samples in custom-built secondary-ion mass spectrometry (SIMS) system. Positions sample repeatably and accurately, even during analysis in three linear orthogonal coordinates and one angular coordinate under manual local control, or microprocessor-based local control or remote control by computer via general-purpose interface bus (GPIB).
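
    The record does not give the controller's command set, but remote control of such a positioner over GPIB is typically scripted along the following lines; the GPIB address and the ASCII commands here are purely illustrative assumptions:

        import pyvisa  # third-party VISA/GPIB binding

        rm = pyvisa.ResourceManager()
        stage = rm.open_resource("GPIB0::5::INSTR")  # hypothetical controller address

        # Hypothetical command strings; a real controller defines its own syntax.
        stage.write("MOVE X 12.5")   # linear axis, millimetres
        stage.write("MOVE R 90.0")   # angular axis, degrees
        print(stage.query("POS?"))   # read back the current position
        stage.close()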

  1. Advanced Hybrid Computer Systems. Software Technology.

    Science.gov (United States)

    This software technology final report evaluates advances made in Advanced Hybrid Computer System software technology. The report describes what...automatic patching software is available as well as which analog/hybrid programming languages would be most feasible for the Advanced Hybrid Computer...compiler software. The problem of how software would interface with the hybrid system is also presented.

  2. Biomolecular computing systems: principles, progress and potential.

    Science.gov (United States)

    Benenson, Yaakov

    2012-06-12

    The task of information processing, or computation, can be performed by natural and man-made 'devices'. Man-made computers are made from silicon chips, whereas natural 'computers', such as the brain, use cells and molecules. Computation also occurs on a much smaller scale in regulatory and signalling pathways in individual cells and even within single biomolecules. Indeed, much of what we recognize as life results from the remarkable capacity of biological building blocks to compute in highly sophisticated ways. Rational design and engineering of biological computing systems can greatly enhance our ability to study and to control biological systems. Potential applications include tissue engineering and regeneration and medical treatments. This Review introduces key concepts and discusses recent progress that has been made in biomolecular computing.

  3. Automated Diversity in Computer Systems

    Science.gov (United States)

    2005-09-01

    We are interested in the probability of a successful branch (escape) out of a sequence of n...reference is still legal. Both can generate false positives, although CRED is less computationally expensive. The common theme in all these

  4. Laser Imaging Systems For Computer Vision

    Science.gov (United States)

    Vlad, Ionel V.; Ionescu-Pallas, Nicholas; Popa, Dragos; Apostol, Ileana; Vlad, Adriana; Capatina, V.

    1989-05-01

    Computer vision is becoming an essential feature of high-level artificial intelligence. Laser imaging systems act as a special kind of image preprocessor/converter, enlarging the access of computer "intelligence" to inspection, analysis and decision in new "worlds": nanometric, three-dimensional (3D), ultrafast, hostile to humans, etc. Considering that the heart of the problem is the matching of optical methods and computer software, some of the most promising interferometric, projection and diffraction systems are reviewed, with discussion of our present results and of their potential in precise 3D computer vision.

  5. Computer Bits: The Ideal Computer System for Your Center.

    Science.gov (United States)

    Brown, Dennis; Neugebauer, Roger

    1986-01-01

    Reviews five computer systems that can address the needs of a child care center: (1) Sperry PC IT with Bernoulli Box, (2) Compaq DeskPro 286, (3) Macintosh Plus, (4) Epson Equity II, and (5) Leading Edge Model "D." (HOD)

  6. An Optical Tri-valued Computing System

    Directory of Open Access Journals (Sweden)

    Junjie Peng

    2014-03-01

    Full Text Available A new optical computing experimental system is presented. Designed on the basis of tri-valued logic, the system is built as a photoelectric hybrid computer system with clear advantages over its electronic counterparts: the tri-valued logic guarantees that it is more powerful in information processing than systems with binary logic, and its optical characteristics make it far more capable of handling huge volumes of data than electronic computers. The optical computing system consists of two parts, an electronic part and an optical part. The electronic part consists of a PC and two embedded systems used for data input/output, monitoring, synchronous control, user data combination and separation, and so on. The optical part includes three components: an optical encoder, a logic calculator and a decoder. It is mainly responsible for encoding the users' requests into tri-valued optical information, computing and processing the requests, and decoding the tri-valued optical information back to binary electronic information. Experimental results show that the system performs optical information processing correctly, which demonstrates the feasibility and correctness of the optical computing system.
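
    The abstract does not specify the tri-valued connectives implemented by the optical logic calculator; a common choice, assumed here purely for illustration, is Kleene-style min/max logic over the values 0, 1 and 2:

        # Tri-valued logic sketch: 0 = false, 1 = intermediate, 2 = true.
        def t_and(a, b):   # conjunction as minimum
            return min(a, b)

        def t_or(a, b):    # disjunction as maximum
            return max(a, b)

        def t_not(a):      # negation mirrors the value around the middle
            return 2 - a

        # Encode a request as trits, operate, and decode back, mirroring the
        # encoder -> logic calculator -> decoder pipeline described above.
        request = [2, 0, 1]
        print([t_not(x) for x in request])  # [0, 2, 1]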

  7. Hybrid Systems: Computation and Control.

    Science.gov (United States)

    2007-11-02

    elbow) and a pinned first joint (shoulder) (see Figure 2); it is termed an underactuated system since it is a mechanical system with fewer...Montreal, PQ, Canada, 1998. [10] M. W. Spong. Partial feedback linearization of underactuated mechanical systems. In Proceedings, IROS, pages 314-321...control mechanism and search for optimal combinations of control variables. Besides the nonlinear and hybrid nature of powertrain systems, hardware

  8. MTA Computer Based Evaluation System.

    Science.gov (United States)

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  10. Computer Jet-Engine-Monitoring System

    Science.gov (United States)

    Disbrow, James D.; Duke, Eugene L.; Ray, Ronald J.

    1992-01-01

    "Intelligent Computer Assistant for Engine Monitoring" (ICAEM), computer-based monitoring system intended to distill and display data on conditions of operation of two turbofan engines of F-18, is in preliminary state of development. System reduces burden on propulsion engineer by providing single display of summary information on statuses of engines and alerting engineer to anomalous conditions. Effective use of prior engine-monitoring system requires continuous attention to multiple displays.

  11. A computational system for a Mars rover

    Science.gov (United States)

    Lambert, Kenneth E.

    1989-01-01

    This paper presents an overview of an onboard computing system that can be used for meeting the computational needs of a Mars rover. The paper begins by presenting an overview of some of the requirements which are key factors affecting the architecture. The rest of the paper describes the architecture. Particular emphasis is placed on the criteria used in defining the system and how the system qualitatively meets the criteria.

  13. Intelligent computational systems for space applications

    Science.gov (United States)

    Lum, Henry, Jr.; Lau, Sonie

    1989-01-01

    The evolution of intelligent computation systems is discussed starting with the Spaceborne VHSIC Multiprocessor System (SVMS). The SVMS is a six-processor system designed to provide at least a 100-fold increase in both numeric and symbolic processing over the i386 uniprocessor. The significant system performance parameters necessary to achieve the performance increase are discussed.

  14. Computation of Weapons Systems Effectiveness

    Science.gov (United States)

    2013-09-01

    Nomenclature: aircraft dive angle; VOx, the initial weapon release velocity along the x-axis; VOz, the initial weapon release velocity along the z-axis; h, the release altitude. The impact velocity components follow from the release conditions: Impact Velocity (x-axis), Vix = VOx (3.4); Impact Velocity (z-axis), Viz = VOz + (g * TOF) (3.5); Impact Velocity, Vi = sqrt(Vix^2 + Viz^2) (3.6)...compute the ballistic partials to examine the effects that varying h, VOx and VOz have on RB using equations of the form ∂RB/∂h = New RB - Old RB
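
    Restated as code, relations (3.4)-(3.6) can be evaluated directly; the release conditions used below are made-up inputs, and g is taken as 9.81 m/s^2:

        import math

        def impact_velocity(v_ox, v_oz, tof, g=9.81):
            """Equations (3.4)-(3.6): impact velocity from release velocity and time of fall."""
            v_ix = v_ox                    # (3.4) horizontal component is unchanged
            v_iz = v_oz + g * tof          # (3.5) vertical component gains g * TOF
            return math.hypot(v_ix, v_iz)  # (3.6) magnitude of the impact velocity

        print(impact_velocity(v_ox=250.0, v_oz=20.0, tof=12.0))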

  15. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  16. The university computer network security system

    Institute of Scientific and Technical Information of China (English)

    张丁欣

    2012-01-01

    With the development of the times and advances in technology, computer network technology has penetrated deep into all aspects of people's lives; it plays an increasingly important role and is an important tool for information exchange. Colleges and universities are the cradle in which new technologies are cultivated and nurtured, and so, as institutions of higher learning, they should pay attention to the construction of computer network security systems.

  17. QUBIT DATA STRUCTURES FOR ANALYZING COMPUTING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Vladimir Hahanov

    2014-11-01

    Full Text Available Qubit models and methods for improving the performance of software and hardware for analyzing digital devices, by increasing the dimension of the data structures and memory, are proposed. The basic concepts, terminology and definitions necessary for implementing quantum computing in the analysis of virtual computers are introduced. Results concerning the design and modeling of computer systems in cyberspace, based on the use of a two-component structure, are presented.

  18. Computational Intelligence in Information Systems Conference

    CERN Document Server

    Au, Thien-Wan; Omar, Saiful

    2017-01-01

    This book constitutes the Proceedings of the Computational Intelligence in Information Systems conference (CIIS 2016), held in Brunei, November 18–20, 2016. The CIIS conference provides a platform for researchers to exchange the latest ideas and to present new research advances in general areas related to computational intelligence and its applications. The 26 revised full papers presented in this book have been carefully selected from 62 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.

  19. Optimization of Operating Systems towards Green Computing

    Directory of Open Access Journals (Sweden)

    Appasami Govindasamy

    2011-01-01

    Full Text Available Green computing is one of the emerging computing technologies in the field of computer science engineering and technology, aimed at providing Green Information Technology (Green IT). It is mainly used to protect the environment, optimize energy consumption and keep the environment green. Green computing also refers to environmentally sustainable computing. In recent years, companies in the computer industry have come to realize that going green is in their best interest, both in terms of public relations and reduced costs. Information and communication technology (ICT) has now become an important department for the success of any organization. Making IT "green" can not only save money but help save our world by making it a better place through reducing and/or eliminating wasteful practices. In this paper we focus on green computing by optimizing operating systems and the scheduling of hardware resources. The objectives of green computing are the reduction of human effort, electrical energy, time and cost, without polluting the environment, while developing the software. Operating system (OS) optimization is very important for green computing, because the OS is the bridge between hardware components and application software. The important steps for green computing users and energy-efficient usage are also discussed in this paper.

  20. Resilience assessment and evaluation of computing systems

    CERN Document Server

    Wolter, Katinka; Vieira, Marco

    2012-01-01

    The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples,

  1. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support the Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operation at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  2. Rendezvous Facilities in a Distributed Computer System

    Institute of Scientific and Technical Information of China (English)

    廖先Zhi; 金兰

    1995-01-01

    The distributed computer system described in this paper is a set of computer nodes interconnected in an interconnection network via packet-switching interfaces.The nodes communicate with each other by means of message-passing protocols.This paper presents the implementation of rendezvous facilities as high-level primitives provided by a parallel programming language to support interprocess communication and synchronization.

  3. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  4. Sandia Laboratories technical capabilities: computation systems

    Energy Technology Data Exchange (ETDEWEB)

    1977-12-01

    This report characterizes the computation systems capabilities at Sandia Laboratories. Selected applications of these capabilities are presented to illustrate the extent to which they can be applied in research and development programs. 9 figures.

  5. Console Networks for Major Computer Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ophir, D; Shepherd, B; Spinrad, R J; Stonehill, D

    1966-07-22

    A concept for interactive time-sharing of a major computer system is developed in which satellite computers mediate between the central computing complex and the various individual user terminals. These techniques allow the development of a satellite system substantially independent of the details of the central computer and its operating system. Although the user terminals' roles may be rich and varied, the demands on the central facility are merely those of a tape drive or similar batched information transfer device. The particular system under development provides service for eleven visual display and communication consoles, sixteen general purpose, low rate data sources, and up to thirty-one typewriters. Each visual display provides a flicker-free image of up to 4000 alphanumeric characters or tens of thousands of points by employing a swept raster picture generating technique directly compatible with that of commercial television. Users communicate either by typewriter or a manually positioned light pointer.

  6. The structural robustness of multiprocessor computing system

    Directory of Open Access Journals (Sweden)

    N. Andronaty

    1996-03-01

    Full Text Available A model of a multiprocessor computing system based on transputers is described which makes it possible to evaluate structural robustness (viability, survivability).

  7. Computational Models for Nonlinear Aeroelastic Systems Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. and Duke University propose to develop and demonstrate a new and efficient computational method of modeling nonlinear aeroelastic systems. The...

  8. A Management System for Computer Performance Evaluation.

    Science.gov (United States)

    1981-12-01

    large unused capacity indicates a potential cost-performance improvement (i.e., the potential to perform more within current costs or to reduce costs)...necessary to bring the performance of the computer system in line with operational goals (Ref. 18:7). The General Accounting Office estimates that the...tasks in attempting to improve the efficiency and effectiveness of their computer systems. Cost began to play an important role in the life of a

  9. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...... of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda....

  10. Computer support for mechatronic control system design

    NARCIS (Netherlands)

    van Amerongen, J.; Coelingh, H.J.; de Vries, Theodorus J.A.

    2000-01-01

    This paper discusses the demands for proper tools for computer aided control system design of mechatronic systems and identifies a number of tasks in this design process. Real mechatronic design, involving input from specialists from varying disciplines, requires that the system can be represented

  11. Computer Systems for Distributed and Distance Learning.

    Science.gov (United States)

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  12. Information systems and computing technology

    CERN Document Server

    Zhang, Lei

    2013-01-01

    Invited papers: Incorporating the multi-cross-sectional temporal effect in Geographically Weighted Logit Regression (K. Wu, B. Liu, B. Huang & Z. Lei); One shot learning human actions recognition using key poses (W.H. Zou, S.G. Li, Z. Lei & N. Dai); Band grouping pansharpening for WorldView-2 satellite images (X. Li); Research on GIS based haze trajectory data analysis system (Y. Wang, J. Chen, J. Shu & X. Wang). Regular papers: A warning model of systemic financial risks (W. Xu & Q. Wang); Research on smart mobile phone user experience with grounded theory (J.P. Wan & Y.H. Zhu); The software reliability analysis based on

  13. Computational approaches for systems metabolomics.

    Science.gov (United States)

    Krumsiek, Jan; Bartel, Jörg; Theis, Fabian J

    2016-06-01

    Systems genetics is defined as the simultaneous assessment and analysis of multi-omics datasets. In the past few years, metabolomics has been established as a robust tool describing an important functional layer in this approach. The metabolome of a biological system represents an integrated state of genetic and environmental factors and has been referred to as a 'link between genotype and phenotype'. In this review, we summarize recent progress in statistical analysis methods for metabolomics data in combination with other omics layers. We put a special focus on complex, multivariate statistical approaches as well as pathway-based and network-based analysis methods. Moreover, we outline current challenges and pitfalls of metabolomics-focused multi-omics analyses and discuss future steps for the field.

  14. Computational systems biology for aging research.

    Science.gov (United States)

    Mc Auley, Mark T; Mooney, Kathleen M

    2015-01-01

    Computational modelling is a key component of systems biology and integrates with the other techniques discussed thus far in this book by utilizing a myriad of data that are being generated to quantitatively represent and simulate biological systems. This chapter will describe what computational modelling involves; the rationale for using it, and the appropriateness of modelling for investigating the aging process. How a model is assembled and the different theoretical frameworks that can be used to build a model are also discussed. In addition, the chapter will describe several models which demonstrate the effectiveness of each computational approach for investigating the constituents of a healthy aging trajectory. Specifically, a number of models will be showcased which focus on the complex age-related disorders associated with unhealthy aging. To conclude, we discuss the future applications of computational systems modelling to aging research.

  15. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying

    2016-01-01

    This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially for immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, and Support Vector Machines (SVM), Danger theory, ...

  16. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  17. Telemetry Computer System at Wallops Flight Center

    Science.gov (United States)

    Bell, H.; Strock, J.

    1980-01-01

    This paper describes the Telemetry Computer System in operation at NASA's Wallops Flight Center for real-time or off-line processing, storage, and display of telemetry data from rockets and aircraft. The system accepts one or two PCM data streams and one FM multiplex, converting each type of data into computer format and merging time-of-day information. A data compressor merges the active streams, and removes redundant data if desired. Dual minicomputers process data for display, while storing information on computer tape for further processing. Real-time displays are located at the station, at the rocket launch control center, and in the aircraft control tower. The system is set up and run by standard telemetry software under control of engineers and technicians. Expansion capability is built into the system to take care of possible future requirements.

  18. Honeywell Modular Automation System Computer Software Documentation

    Energy Technology Data Exchange (ETDEWEB)

    CUNNINGHAM, L.T.

    1999-09-27

    This document provides a Computer Software Documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control new thermal stabilization furnaces in HA-211 and vertical denitration calciner in HC-230C-2.

  19. Computation and design of autonomous intelligent systems

    Science.gov (United States)

    Fry, Robert L.

    2008-04-01

    This paper describes a theory of intelligent systems and its reduction to engineering practice. The theory is based on a broader theory of computation wherein information and control are defined within the subjective frame of a system. At its most primitive level, the theory describes what it computationally means to both ask and answer questions which, like traditional logic, are also Boolean. The logic of questions describes the subjective rules of computation that are objective in the sense that all the described systems operate according to its principles. Therefore, all systems are autonomous by construct. These systems include thermodynamic, communication, and intelligent systems. Although interesting, the important practical consequence is that the engineering framework for intelligent systems can borrow efficient constructs and methodologies from both thermodynamics and information theory. Thermodynamics provides the Carnot cycle which describes intelligence dynamics when operating in the refrigeration mode. It also provides the principle of maximum entropy. Information theory has recently provided the important concept of dual-matching useful for the design of efficient intelligent systems. The reverse engineered model of computation by pyramidal neurons agrees well with biology and offers a simple and powerful exemplar of basic engineering concepts.

  20. Remote computer monitors corrosion protection system

    Energy Technology Data Exchange (ETDEWEB)

    Kendrick, A.

    Effective corrosion protection with electrochemical methods requires some method of routine monitoring that provides reliable data that is free of human error. A test installation of a remote computer control monitoring system for electrochemical corrosion protection is described. The unit can handle up to six channel inputs. Each channel comprises 3 analog signals and 1 digital. The operation of the system is discussed.

  1. Terrace Layout Using a Computer Assisted System

    Science.gov (United States)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  2. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies is evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance...... for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels...... of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda....

  3. Building Low Cost Cloud Computing Systems

    Directory of Open Access Journals (Sweden)

    Carlos Antunes

    2013-06-01

    Full Text Available Current models of cloud computing are based on massive hardware solutions, making their implementation and maintenance unaffordable for the majority of service providers. The use of jail services is an alternative to current models of cloud computing based on virtualization. Models based on the use of jail environments instead of the usual virtualization systems can provide huge gains in the optimization of hardware resources at the computation level and in terms of storage and energy consumption. This paper addresses the practical implementation of jail environments in real scenarios, which makes it possible to identify the areas where their application will be relevant and will make a redefinition of the models currently defined for cloud computing inevitable. In addition, it will bring new opportunities for the development of support features for jail environments in the majority of operating systems.

  4. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  5. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John

    2010-01-01

    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  6. Computer surety: computer system inspection guidance. [Contains glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  7. Fault tolerant hypercube computer system architecture

    Science.gov (United States)

    Madan, Herb S. (Inventor); Chow, Edward (Inventor)

    1989-01-01

    A fault-tolerant multiprocessor computer system of the hypercube type comprising a hierarchy of computers of like kind which can be functionally substituted for one another as necessary is disclosed. Communication between the working nodes is via one communications network while communications between the working nodes and watch dog nodes and load balancing nodes higher in the structure is via another communications network separate from the first. A typical branch of the hierarchy reporting to a master node or host computer comprises, a plurality of first computing nodes; a first network of message conducting paths for interconnecting the first computing nodes as a hypercube. The first network provides a path for message transfer between the first computing nodes; a first watch dog node; and a second network of message connecting paths for connecting the first computing nodes to the first watch dog node independent from the first network, the second network provides an independent path for test message and reconfiguration affecting transfers between the first computing nodes and the first switch watch dog node. There is additionally, a plurality of second computing nodes; a third network of message conducting paths for interconnecting the second computing nodes as a hypercube. The third network provides a path for message transfer between the second computing nodes; a fourth network of message conducting paths for connecting the second computing nodes to the first watch dog node independent from the third network. The fourth network provides an independent path for test message and reconfiguration affecting transfers between the second computing nodes and the first watch dog node; and a first multiplexer disposed between the first watch dog node and the second and fourth networks for allowing the first watch dog node to selectively communicate with individual ones of the computing nodes through the second and fourth networks; as well as, a second watch dog node
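
    In a hypercube of dimension d, each of the 2^d computing nodes is linked to the d nodes whose labels differ from its own in exactly one bit; the small helper below, written only to illustrate the topology and not taken from the patent, makes that explicit:

        def hypercube_neighbors(node, dimension):
            """Labels of the nodes adjacent to `node` in a `dimension`-cube."""
            return [node ^ (1 << bit) for bit in range(dimension)]

        # A 3-cube has 8 nodes; node 5 (binary 101) is wired to 4 (100), 7 (111) and 1 (001).
        print(hypercube_neighbors(5, 3))  # [4, 7, 1]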

  8. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high-performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high-performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
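
    The paper's actual schema is not reproduced in the record; a minimal sketch of the script-driven approach, assuming a single table of metric samples and the mysql-connector-python driver, might look like this:

        import mysql.connector  # assumes the mysql-connector-python package

        conn = mysql.connector.connect(host="localhost", user="ganglia",
                                       password="secret", database="metrics")
        cur = conn.cursor()
        cur.execute("""CREATE TABLE IF NOT EXISTS samples (
                           host VARCHAR(64), metric VARCHAR(64),
                           value DOUBLE, ts TIMESTAMP)""")
        # In the real setup a script would parse the data Ganglia collects;
        # a single hand-written sample stands in for that step here.
        cur.execute("INSERT INTO samples VALUES (%s, %s, %s, NOW())",
                    ("node01", "cpu_user", 42.5))
        conn.commit()
        conn.close()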

  9. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules, and it is typical of such systems to ensure early detection and diagnosis of faults. Monitoring and fault detection techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system presents the information obtained by different CI techniques in order to help operators make decisions in real time and guide them in fault diagnosis before the normal alarm limits are reached. (author)

  10. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    Within the last five to ten years we have experienced an incredible growth of ubiquitous technologies which has allowed for improvements in several areas, including energy distribution and management, health care services, border surveillance, secure monitoring and management of buildings......, localisation services and many others. These technologies can be classified under the name of ubiquitous systems. The term Ubiquitous System dates back to 1991 when Mark Weiser at Xerox PARC Lab first referred to it in writing. He envisioned a future where computing technologies would have been melted...... in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory...

  11. A New System Architecture for Pervasive Computing

    CERN Document Server

    Ismail, Anis; Ismail, Ziad

    2011-01-01

    We present a new system architecture, a distributed framework designed to support pervasive computing applications. We propose an architecture consisting of a search engine and peripheral clients that addresses issues in scalability, data sharing, data transformation and inherent platform heterogeneity. A key feature of our application is a type-aware data transport that is capable of extracting data and presenting it through handheld devices (PDAs (personal digital assistants), mobile phones, etc.). Pervasive computing uses web technology, portable devices, wireless communications and nomadic or ubiquitous computing systems. The web, and the simple standard HTTP protocol that it is based on, facilitate this kind of ubiquitous access, which can be implemented on a variety of devices: PDAs, laptops, and information appliances such as digital cameras and printers. Mobile users get transparent access to resources outside their current environment. We discuss our system's architecture and its implementation. Through experimental...

  12. Metasynthetic computing and engineering of complex systems

    CERN Document Server

    Cao, Longbing

    2015-01-01

    Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro

  13. Reliable computer systems design and evaluatuion

    CERN Document Server

    Siewiorek, Daniel

    2014-01-01

    Enhance your hardware/software reliabilityEnhancement of system reliability has been a major concern of computer users and designers ¦ and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliablesystems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  14. Model for personal computer system selection.

    Science.gov (United States)

    Blide, L

    1987-12-01

    Successful computer software and hardware selection is best accomplished by following an organized approach such as the one described in this article. The first step is to decide what you want to be able to do with the computer. Secondly, select software that is user friendly, well documented, bug free, and that does what you want done. Next, you select the computer, printer and other needed equipment from the group of machines on which the software will run. Key factors here are reliability and compatibility with other microcomputers in your facility. Lastly, you select a reliable vendor who will provide good, dependable service in a reasonable time. The ability to correctly select computer software and hardware is a key skill needed by medical record professionals today and in the future. Professionals can make quality computer decisions by selecting software and systems that are compatible with other computers in their facility and that allow for future networking, ease of use, and adaptability for expansion as new applications are identified. The key to success is to not only provide for your present needs, but to be prepared for future rapid expansion and change in your computer usage as technology and your skills grow.

  15. Architecture, systems research and computational sciences

    CERN Document Server

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  16. NIF Integrated Computer Controls System Description

    Energy Technology Data Exchange (ETDEWEB)

    VanArsdall, P.

    1998-01-26

    This System Description introduces the NIF Integrated Computer Control System (ICCS). The architecture is sufficiently abstract to allow the construction of many similar applications from a common framework. As discussed below, over twenty software applications derived from the framework comprise the NIF control system. This document lays the essential foundation for understanding the ICCS architecture. The NIF design effort is motivated by the magnitude of the task. Figure 1 shows a cut-away rendition of the coliseum-sized facility. The NIF requires integration of about 40,000 atypical control points, must be highly automated and robust, and will operate continuously around the clock. The control system coordinates several experimental cycles concurrently, each at different stages of completion. Furthermore, facilities such as the NIF represent major capital investments that will be operated, maintained, and upgraded for decades. The computers, control subsystems, and functionality must be relatively easy to extend or replace periodically with newer technology.

  18. Some Unexpected Results Using Computer Algebra Systems.

    Science.gov (United States)

    Alonso, Felix; Garcia, Alfonsa; Garcia, Francisco; Hoya, Sara; Rodriguez, Gerardo; de la Villa, Agustin

    2001-01-01

    Shows how teachers can often use unexpected outputs from Computer Algebra Systems (CAS) to reinforce concepts and to show students the importance of thinking about how they use the software and reflecting on their results. Presents different examples where DERIVE, MAPLE, or Mathematica does not work as expected and suggests how to use them as a…

  19. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  20. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data dissemination…

  1. Computer Graphics for System Effectiveness Analysis.

    Science.gov (United States)

    1986-05-01

    Chapra, Steven C., and Raymond P. Canale (1985), Numerical Methods for Engineers with Personal Computer Applications, New York … 1.2 Outline of Thesis … Chapter II, Method of Analysis … Chapter VII summarizes the results and gives recommendations for future research … 2.1 Introduction: Systems effectiveness …

  2. Characterizing Video Coding Computing in Conference Systems

    NARCIS (Netherlands)

    Tuquerres, G.

    2000-01-01

    In this paper, a number of coding operations is provided for computing continuous data streams, in particular, video streams. A coding capability of the operations is expressed by a pyramidal structure in which coding processes and requirements of a distributed information system are represented. Th

  3. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  4. Computer Algebra Systems, Pedagogy, and Epistemology

    Science.gov (United States)

    Bosse, Michael J.; Nandakumar, N. R.

    2004-01-01

    The advent of powerful Computer Algebra Systems (CAS) continues to dramatically affect curricula, pedagogy, and epistemology in secondary and college algebra classrooms. However, epistemological and pedagogical research regarding the role and effectiveness of CAS in the learning of algebra lags behind. This paper investigates concerns regarding…

  5. Computer system SANC: its development and applications

    Science.gov (United States)

    Arbuzov, A.; Bardin, D.; Bondarenko, S.; Christova, P.; Kalinovskaya, L.; Sadykov, R.; Sapronov, A.; Riemann, T.

    2016-10-01

    The SANC system is used for systematic calculations of various processes within the Standard Model in the one-loop approximation. QED, electroweak, and QCD corrections are computed to a number of processes being of interest for modern and future high-energy experiments. Several applications for the LHC physics program are presented. Development of the system and the general problems and perspectives for future improvement of the theoretical precision are discussed.

  6. Personal healthcare system using cloud computing.

    Science.gov (United States)

    Takeuchi, Hiroshi; Mayuzumi, Yuuki; Kodama, Naoki; Sato, Keiichi

    2013-01-01

    A personal healthcare system used with cloud computing has been developed. It enables a daily time-series of personal health and lifestyle data to be stored in the cloud through mobile devices. The cloud automatically extracts personally useful information, such as rules and patterns concerning lifestyle and health conditions embedded in the personal big data, by using a data mining technology. The system provides three editions (Diet, Lite, and Pro) corresponding to users' needs.

  7. The CMS Computing System: Successes and Challenges

    CERN Document Server

    Bloom, Kenneth

    2009-01-01

    Each LHC experiment will produce datasets with sizes of order one petabyte per year. All of this data must be stored, processed, transferred, simulated and analyzed, which requires a computing system of a larger scale than ever mounted for any particle physics experiment, and possibly for any enterprise in the world. I discuss how CMS has chosen to address these challenges, focusing on recent tests of the system that demonstrate the experiment's readiness for producing physics results with the first LHC data.

  8. Integrative Genomics and Computational Systems Medicine

    Energy Technology Data Exchange (ETDEWEB)

    McDermott, Jason E.; Huang, Yufei; Zhang, Bing; Xu, Hua; Zhao, Zhongming

    2014-01-01

    The exponential growth in generation of large amounts of genomic data from biological samples has driven the emerging field of systems medicine. This field is promising because it improves our understanding of disease processes at the systems level. However, the field is still at an early stage. There exists a great need for novel computational methods and approaches to effectively utilize and integrate various omics data.

  9. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions…
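
    As a hedged illustration of the kind of equation such models produce (a standard textbook example, not one taken from this book): for the M/M/1 queue, often the first analytical model of a single computing resource, with Poisson arrivals at rate \lambda and exponential service at rate \mu > \lambda, the mean response time R and mean number in system N are

        R = \frac{1}{\mu - \lambda}, \qquad
        N = \lambda R = \frac{\rho}{1 - \rho}, \quad \rho = \frac{\lambda}{\mu}.

    The second relation is Little's law; closed forms of this kind are what "writing equations to describe performance behavior" typically yields.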

  10. Adaptive Fuzzy Systems in Computational Intelligence

    Science.gov (United States)

    Berenji, Hamid R.

    1996-01-01

    In recent years, the interest in computational intelligence techniques, which currently include neural networks, fuzzy systems, and evolutionary programming, has grown significantly and a number of their applications have been developed in government and industry. In the future, an essential element in these systems will be fuzzy systems that can learn from experience by using neural networks to refine their performance. The GARIC architecture, introduced earlier, is an example of a fuzzy reinforcement learning system which has been applied in several control domains such as cart-pole balancing, simulation of Space Shuttle orbital operations, and tether control. A number of examples from GARIC's applications in these domains will be demonstrated.

  11. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  12. Landauer Bound for Analog Computing Systems

    CERN Document Server

    Diamantini, M Cristina; Trugenberger, Carlo A

    2016-01-01

    By establishing a relation between information erasure and continuous phase transitions we generalise the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.

  13. Landauer bound for analog computing systems

    Science.gov (United States)

    Diamantini, M. Cristina; Gammaitoni, Luca; Trugenberger, Carlo A.

    2016-07-01

    By establishing a relation between information erasure and continuous phase transitions we generalize the Landauer bound to analog computing systems. The entropy production per degree of freedom during erasure of an analog variable (reset to standard value) is given by the logarithm of the configurational volume measured in units of its minimal quantum. As a consequence, every computation has to be carried on with a finite number of bits and infinite precision is forbidden by the fundamental laws of physics, since it would require an infinite amount of energy.
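
    A minimal sketch of the stated relation, in notation assumed here rather than quoted from the paper: if an analog degree of freedom occupies a configurational volume \Gamma whose minimal quantum (finest physically resolvable cell) is \Gamma_{\min}, the erasure cost per degree of freedom described in the abstract reads

        \Delta S = k_B \ln\frac{\Gamma}{\Gamma_{\min}}, \qquad
        \Delta Q \ge k_B T \ln\frac{\Gamma}{\Gamma_{\min}}.

    In the two-state limit \Gamma/\Gamma_{\min} = 2 this reduces to the familiar digital Landauer bound of k_B T \ln 2 per erased bit.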

  14. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  15. Embedded systems for supporting computer accessibility.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Fazio, Maria; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Nowadays, customized AT software solutions allow their users to interact with various kinds of computer systems. Such tools are generally available on personal devices (e.g., smartphones, laptops and so on) commonly used by a person with a disability. In this paper, we investigate a way of using the aforementioned AT equipment in order to access many different devices without assistive preferences. The solution takes advantage of open source hardware and its core component consists of an affordable Linux embedded system: it grabs data coming from the assistive software, which runs on the user's personal device, then, after processing, it generates native keyboard and mouse HID commands for the target computing device controlled by the end user. This process supports any operating system available on the target machine and it requires no specialized software installation; therefore the user with a disability can rely on a single assistive tool to control a wide range of computing platforms, including conventional computers and many kinds of mobile devices, which receive input commands through the USB HID protocol.
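
    As a hedged sketch of the last step described above (generating native keyboard HID commands for the target machine), the Python fragment below assumes a Linux USB gadget already configured to expose a boot-keyboard HID function at /dev/hidg0; the device path, key code and report layout are illustrative assumptions, not details taken from the paper.

        # Send one key press and release through the USB HID gadget interface.
        # Boot keyboard report: [modifier, reserved, key1..key6] = 8 bytes.
        # Assumes the embedded board exposes the gadget device /dev/hidg0.

        KEY_A = 0x04  # USB HID usage ID for the letter 'a'

        def send_key(keycode: int, device: str = "/dev/hidg0") -> None:
            press = bytes([0x00, 0x00, keycode, 0x00, 0x00, 0x00, 0x00, 0x00])
            release = bytes(8)  # all zeros: no key pressed
            with open(device, "wb", buffering=0) as hid:
                hid.write(press)    # key down, as seen by the target computer
                hid.write(release)  # key up

        if __name__ == "__main__":
            send_key(KEY_A)

    Because the report is delivered by the gadget hardware as an ordinary USB keyboard, the target machine needs no driver or software beyond its stock HID support, which is the property the paper relies on.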

  16. Music Genre Classification Systems - A Computational Approach

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should…

  17. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    Science.gov (United States)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  18. Nature-inspired computing for control systems

    CERN Document Server

    2016-01-01

    The book presents recent advances in nature-inspired computing, giving a special emphasis to control systems applications. It reviews different techniques used for simulating physical, chemical, biological or social phenomena at the purpose of designing robust, predictive and adaptive control strategies. The book is a collection of several contributions, covering either more general approaches in control systems, or methodologies for control tuning and adaptive controllers, as well as exciting applications of nature-inspired techniques in robotics. On one side, the book is expected to motivate readers with a background in conventional control systems to try out these powerful techniques inspired by nature. On the other side, the book provides advanced readers with a deeper understanding of the field and a broad spectrum of different methods and techniques. All in all, the book is an outstanding, practice-oriented reference guide to nature-inspired computing addressing graduate students, researchers and practi...

  19. Decomposability queueing and computer system applications

    CERN Document Server

    Courtois, P J

    1977-01-01

    Decomposability: Queueing and Computer System Applications presents a set of powerful methods for systems analysis. This 10-chapter text covers the theory of nearly completely decomposable systems upon which specific analytic methods are based. The first chapters deal with some of the basic elements of a theory of nearly completely decomposable stochastic matrices, including the Simon-Ando theorems and the perturbation theory. The succeeding chapters are devoted to the analysis of stochastic queuing networks that appear as a type of key model. These chapters also discuss congestion problems in

  20. Computer-aided Analysis of Physiological Systems

    Directory of Open Access Journals (Sweden)

    Balázs Benyó

    2007-12-01

    Full Text Available This paper presents the recent biomedical engineering research activity of the Medical Informatics Laboratory at the Budapest University of Technology and Economics. The research projects are carried out in the fields as follows: Computer aided identification of physiological systems; Diabetic management and blood glucose control; Remote patient monitoring and diagnostic system; Automated system for analyzing cardiac ultrasound images; Single-channel hybrid ECG segmentation; Event recognition and state classification to detect brain ischemia by means of EEG signal processing; Detection of breathing disorders like apnea and hypopnea; Molecular biology studies with DNA-chips; Evaluation of the cry of normal hearing and hard of hearing infants.

  1. Applicability of Computational Systems Biology in Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Hadrup, Niels; Audouze, Karine Marie Laure

    2014-01-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search … be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method…

  2. Low Power Dynamic Scheduling for Computing Systems

    CERN Document Server

    Neely, Michael J

    2011-01-01

    This paper considers energy-aware control for a computing system with two states: "active" and "idle." In the active state, the controller chooses to perform a single task using one of multiple task processing modes. The controller then saves energy by choosing an amount of time for the system to be idle. These decisions affect processing time, energy expenditure, and an abstract attribute vector that can be used to model other criteria of interest (such as processing quality or distortion). The goal is to optimize time average system performance. Applications of this model include a smart phone that makes energy-efficient computation and transmission decisions, a computer that processes tasks subject to rate, quality, and power constraints, and a smart grid energy manager that allocates resources in reaction to a time varying energy price. The solution methodology of this paper uses the theory of optimization for renewal systems developed in our previous work. This paper is written in tutorial form and devel...

  3. Applicability of computational systems biology in toxicology.

    Science.gov (United States)

    Kongsbak, Kristine; Hadrup, Niels; Audouze, Karine; Vinggaard, Anne Marie

    2014-07-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. However, computational systems biology offers more advantages than providing a high-throughput literature search; it may form the basis for establishment of hypotheses on potential links between environmental chemicals and human diseases, which would be very difficult to establish experimentally. This is possible due to the existence of comprehensive databases containing information on networks of human protein-protein interactions and protein-disease associations. Experimentally determined targets of the specific chemical of interest can be fed into these networks to obtain additional information that can be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method in the hypothesis-generating phase of toxicological research.

  4. Interactive computer-enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)

    1995-10-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  5. Cloud Computing Security in Business Information Systems

    CERN Document Server

    Ristov, Sasko; Kostoska, Magdalena

    2012-01-01

    Cloud computing providers' and customers' services are not only exposed to existing security risks but, due to multi-tenancy, outsourcing of the application and data, and virtualization, they are exposed to emergent risks as well. Therefore, both the cloud providers and the customers must establish an information security system and mutual trustworthiness, as well as trust with end users. In this paper we analyze the main international and industrial standards targeting information security and their conformity with cloud computing security challenges. We find that almost all main cloud service providers (CSPs) are ISO 27001:2005 certified, at minimum. As a result, we propose an extension to the ISO 27001:2005 standard with a new control objective about virtualization, so that the standard remains generic, regardless of a company's type, size and nature, and is applicable to cloud systems as well, where virtualization is the baseline. We also define a quantitative metric and evaluate the importance factor of ISO 27001:2005 control objecti...

  6. Thermoelectric property measurements with computer controlled systems

    Science.gov (United States)

    Chmielewski, A. B.; Wood, C.

    1984-01-01

    A joint JPL-NASA program to develop an automated system to measure the thermoelectric properties of newly developed materials is described. Consideration is given to the difficulties created by signal drift in measurements of Hall voltage and the Large Delta T Seebeck coefficient. The benefits of a computerized system were examined with respect to error reduction and time savings for human operators. It is shown that the time required to measure Hall voltage can be reduced by a factor of 10 when a computer is used to fit a curve to the ratio of the measured signal and its standard deviation. The accuracy of measurements of the Large Delta T Seebeck coefficient and thermal diffusivity was also enhanced by the use of computers.
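
    A hedged sketch of the signal-averaging idea mentioned above (keep sampling until the ratio of the measured signal to its uncertainty reaches a target); the sampling callable, threshold and limits are assumptions for illustration, not the instrument code used in the program.

        import statistics

        def average_until_snr(read_sample, target_snr=10.0,
                              min_samples=5, max_samples=10_000):
            """Average readings until |mean| / (standard error) >= target_snr.

            read_sample is any zero-argument callable returning one reading,
            e.g. a single Hall-voltage measurement (assumption of this sketch).
            """
            samples = []
            while len(samples) < max_samples:
                samples.append(read_sample())
                if len(samples) >= min_samples:
                    mean = statistics.fmean(samples)
                    sem = statistics.stdev(samples) / len(samples) ** 0.5
                    if sem > 0 and abs(mean) / sem >= target_snr:
                        break
            return statistics.fmean(samples), len(samples)

    Stopping as soon as a signal-to-noise target is met, rather than after a fixed long averaging window, is one plausible source of the order-of-magnitude time saving the abstract reports.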

  7. Checkpoint triggering in a computer system

    Science.gov (United States)

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
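
    A minimal sketch of the triggering logic described in the abstract; the metric, the threshold rule (a multiple of the first reading) and the callables are placeholders invented for illustration, not the patented method itself.

        import time

        def run_with_checkpoints(task_step, read_monitor, make_checkpoint,
                                 poll_interval=5.0, threshold_factor=1.5):
            """Run task_step() until it returns False; checkpoint when the
            monitored metric crosses a threshold derived from its own value."""
            baseline = None
            next_poll = time.monotonic() + poll_interval
            while task_step():                     # keep executing the task
                if time.monotonic() < next_poll:   # not yet time to read monitor
                    continue
                next_poll = time.monotonic() + poll_interval
                value = read_monitor()             # current value of the metric
                if baseline is None:
                    baseline = value               # first reading sets baseline
                    continue
                if value >= baseline * threshold_factor:
                    make_checkpoint()              # save task state for restart
                    baseline = value               # re-derive threshold afterwards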

  8. A NEW SYSTEM ARCHITECTURE FOR PERVASIVE COMPUTING

    Directory of Open Access Journals (Sweden)

    Anis ISMAIL

    2011-08-01

    Full Text Available We present a new system architecture, a distributed framework designed to support pervasive computing applications. We propose a new architecture consisting of a search engine and peripheral clients that addresses issues in scalability, data sharing, data transformation and inherent platform heterogeneity. Key features of our application are a type-aware data transport that is capable of extracting data and presenting it through handheld devices (PDAs (personal digital assistants), mobiles, etc.). Pervasive computing uses web technology, portable devices, wireless communications and nomadic or ubiquitous computing systems. The web and the simple standard HTTP protocol that it is based on facilitate this kind of ubiquitous access. This can be implemented on a variety of devices - PDAs, laptops, information appliances such as digital cameras and printers. Mobile users get transparent access to resources outside their current environment. We discuss our system’s architecture and its implementation. Through experimental study, we show reasonable performance and adaptation of our system’s implementation for mobile devices.

  9. Music Genre Classification Systems - A Computational Approach

    OpenAIRE

    Ahrendt, Peter; Hansen, Lars Kai

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular...

  10. Research on Dynamic Distributed Computing System for Small and Medium-Sized Computer Clusters

    Institute of Scientific and Technical Information of China (English)

    Le Kang; Jianliang Xu; Feng Liu

    2012-01-01

    Distributed computing is a technique by which a complex task that needs a large amount of computation can be divided into small pieces and calculated by more than one computer, with the final result obtained by combining the results from each computer. This paper considers a distributed computing system running on small and medium-sized computer clusters to address the low efficiency of a single computer and to improve the efficiency of large-scale computing. The experiments show that the system can effectively improve efficiency and that it is a viable approach.
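
    A hedged, single-machine sketch of the divide-compute-combine idea; Python's multiprocessing pool stands in for the cluster nodes, and the sum-of-squares task is an invented placeholder, since the paper does not describe its system at this level of detail.

        from multiprocessing import Pool

        def compute_piece(piece):
            # Placeholder per-node computation: sum of squares of one chunk.
            return sum(x * x for x in piece)

        def distributed_sum_of_squares(data, n_workers=4):
            # Divide the task into small pieces, one chunk per worker.
            chunk = max(1, len(data) // n_workers)
            pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
            with Pool(n_workers) as pool:
                partial_results = pool.map(compute_piece, pieces)
            # Combine the partial results into the final answer.
            return sum(partial_results)

        if __name__ == "__main__":
            print(distributed_sum_of_squares(list(range(1_000_000))))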

  11. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by the minimum normalized signal-to-noise ratio (SNRN) and the maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of their wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)

  12. TMX-U computer system in evolution

    Science.gov (United States)

    Casper, T. A.; Bell, H.; Brown, M.; Gorvad, M.; Jenkins, S.; Meyer, W.; Moller, J.; Perkins, D.

    1986-08-01

    Over the past three years, the total TMX-U diagnostic data base has grown to exceed 10 Mbytes from over 1300 channels; roughly triple the originally designed size. This acquisition and processing load has resulted in an experiment repetition rate exceeding 10 min per shot using the five original Hewlett-Packard HP-1000 computers with their shared disks. Our new diagnostics tend to be multichannel instruments, which, in our environment, can be more easily managed using local computers. For this purpose, we are using HP series 9000 computers for instrument control, data acquisition, and analysis. Fourteen such systems are operational with processed format output exchanged via a shared resource manager. We are presently implementing the necessary hardware and software changes to create a local area network allowing us to combine the data from these systems with our main data archive. The expansion of our diagnostic system using the parallel acquisition and processing concept allows us to increase our data base with a minimum of impact on the experimental repetition rate.

  13. Physical Optics Based Computational Imaging Systems

    Science.gov (United States)

    Olivas, Stephen Joseph

    There is an ongoing demand on behalf of the consumer, medical and military industries to make lighter weight, higher resolution, wider field-of-view and extended depth-of-focus cameras. This leads to design trade-offs between performance and cost, be it size, weight, power, or expense. This has brought attention to finding new ways to extend the design space while adhering to cost constraints. Extending the functionality of an imager in order to achieve extraordinary performance is a common theme of computational imaging, a field of study which uses additional hardware along with tailored algorithms to formulate and solve inverse problems in imaging. This dissertation details four specific systems within this emerging field: a Fiber Bundle Relayed Imaging System, an Extended Depth-of-Focus Imaging System, a Platform Motion Blur Image Restoration System, and a Compressive Imaging System. The Fiber Bundle Relayed Imaging System is part of a larger project, where the work presented in this thesis was to use image processing techniques to mitigate problems inherent to fiber bundle image relay and then, form high-resolution wide field-of-view panoramas captured from multiple sensors within a custom state-of-the-art imager. The Extended Depth-of-Focus System goals were to characterize the angular and depth dependence of the PSF of a focal swept imager in order to increase the acceptably focused imaged scene depth. The goal of the Platform Motion Blur Image Restoration System was to build a system that can capture a high signal-to-noise ratio (SNR), long-exposure image which is inherently blurred while at the same time capturing motion data using additional optical sensors in order to deblur the degraded images. Lastly, the objective of the Compressive Imager was to design and build a system functionally similar to the Single Pixel Camera and use it to test new sampling methods for image generation and to characterize it against a traditional camera. These computational

  14. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  15. Computational modeling of shallow geothermal systems

    CERN Document Server

    Al-Khoury, Rafid

    2011-01-01

    A Step-by-step Guide to Developing Innovative Computational Tools for Shallow Geothermal Systems Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than conventional fossil fuels. Shallow geothermal systems are increasingly utilized for heating and cooling of buildings and greenhouses. However, their utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. Projects of this nature are not getting the public support they deserve because of the uncertainties associated with

  16. Prestandardisation Activities for Computer Based Safety Systems

    DEFF Research Database (Denmark)

    Taylor, J. R.; Bologna, S.; Ehrenberger, W.

    1981-01-01

    Questions of technical safety become more and more important. Due to the higher complexity of their functions, computer based safety systems have special problems. Researchers, producers, licensing personnel and customers have met on a European basis to exchange knowledge and formulate positions. The Commission of the European Community supports the work. Major topics comprise hardware configuration and self supervision, software design, verification and testing, documentation, system specification and concurrent processing. Preliminary results have been used for the draft of an IEC standard and for some...

  17. Tools for Embedded Computing Systems Software

    Science.gov (United States)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talk and the key figures of each workshop presentation, together with chairmen summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  18. Computer-Assisted Photo Interpretation System

    Science.gov (United States)

    Niedzwiadek, Harry A.

    1981-11-01

    A computer-assisted photo interpretation research (CAPIR) system has been developed at the U.S. Army Engineer Topographic Laboratories (ETL), Fort Belvoir, Virginia. The system is based around the APPS-IV analytical plotter, a photogrammetric restitution device that was designed and developed by Autometric specifically for interactive, computerized data collection activities involving high-resolution, stereo aerial photographs. The APPS-IV is ideally suited for feature analysis and feature extraction, the primary functions of a photo interpreter. The APPS-IV is interfaced with a minicomputer and a geographic information system called AUTOGIS. The AUTOGIS software provides the tools required to collect or update digital data using an APPS-IV, construct and maintain a geographic data base, and analyze or display the contents of the data base. Although the CAPIR system is fully functional at this time, considerable enhancements are planned for the future.

  19. Computational systems biology in cancer brain metastasis.

    Science.gov (United States)

    Peng, Huiming; Tan, Hua; Zhao, Weiling; Jin, Guangxu; Sharma, Sambad; Xing, Fei; Watabe, Kounosuke; Zhou, Xiaobo

    2016-01-01

    Brain metastases occur in 20-40% of patients with advanced malignancies. A better understanding of the mechanism of this disease will help us to identify novel therapeutic strategies. In this review, we will discuss the systems biology approaches used in this area, including bioinformatics and mathematical modeling. Bioinformatics has been used for identifying the molecular mechanisms driving brain metastasis and mathematical modeling methods for analyzing dynamics of a system and predicting optimal therapeutic strategies. We will illustrate the strategies, procedures, and computational techniques used for studying systems biology in cancer brain metastases. We will give examples of how to use a systems biology approach to analyze a complex disease. Some of the approaches used to identify relevant networks, pathways, and possibly biomarkers in metastasis will be reviewed in detail. Finally, certain challenges and possible future directions in this area will also be discussed.

  20. A computer-aided continuous assessment system

    Directory of Open Access Journals (Sweden)

    B. C.H. Turton

    1996-12-01

    Full Text Available Universities within the United Kingdom have had to cope with a massive expansion in undergraduate student numbers over the last five years (Committee of Scottish University Principals, 1993; CVCP Briefing Note, 1994). In addition, there has been a move towards modularization and a closer monitoring of a student's progress throughout the year. Since the price/performance ratio of computer systems has continued to improve, Computer-Assisted Learning (CAL) has become an attractive option (Fry, 1990; Benford et al, 1994; Laurillard et al, 1994). To this end, the Universities Funding Council (UFC) has funded the Teaching and Learning Technology Programme (TLTP). However, universities also have a duty to assess as well as to teach. This paper describes a Computer-Aided Assessment (CAA) system capable of assisting in grading students and providing feedback. In this particular case, a continuously assessed course (Low-Level Languages) of over 100 students is considered. Typically, three man-days are required to mark one assessed piece of coursework from the students in this class. Any feedback on how the questions were dealt with by the student is of necessity brief. Most of the feedback is provided in a tutorial session that covers the pitfalls encountered by the majority of the students.

  1. OPTIMIZATION OF PARAMETERS OF ELEMENTS COMPUTER SYSTEM

    Directory of Open Access Journals (Sweden)

    Nesterov G. D.

    2016-03-01

    Full Text Available The work is devoted to the topical issue of increasing computer performance and is experimental in character, so descriptions of a number of the tests carried out and an analysis of their results are offered. The article first gives the basic characteristics of the computer's modules in the regular operating mode, and then describes the technique for regulating their parameters during the experiment. Special attention is paid to maintaining the required thermal regime in order to avoid an undesirable overheat of the central processor. The operability of the system under increased energy consumption is also checked. The most critical step is tuning the central processor; as a result of the test, its optimum voltage, frequency and memory read latencies are found. The stability of the RAM characteristics, in particular the state of its buses during the experiment, is analysed. As the tests performed stayed within the standard range of the modules' characteristics, and therefore the safety margin built into the computer and the capacity of the system were not exploited, further experiments were made at extreme overclocking under air cooling. The results obtained are also given in the article.

  2. Visual computing model for immune system and medical system.

    Science.gov (United States)

    Gong, Tao; Cao, Xinxue; Xiong, Qin

    2015-01-01

    The natural immune system is an intelligent self-organizing and adaptive system, which has a variety of immune cells with different types of immune mechanisms. The mutual cooperation between the immune cells shows the intelligence of this immune system, and modeling this immune system is of significance in medical science and engineering. In this paper, in order to build a model of this immune system that is more comprehensible through visualization than the traditional mathematical model, a visual computing model of the immune system is proposed and also used to design a medical system based on the immune system. Some visual simulations of the immune system were made to test the visual effect. The experimental results of the simulations show that the visual modeling approach provides a more effective way of analyzing this immune system than the traditional mathematical equations alone.

  3. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an understanding approach for analysis and visualization of phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented in the International Conference on Advanced Computational Engineering and Experimenting -ACE-X conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  4. Epilepsy analytic system with cloud computing.

    Science.gov (United States)

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades, and analyzing such big data to support physicians' decisions is an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions, namely wavelet transform, genetic algorithm (GA), and support vector machine (SVM), are cascaded in the system. To demonstrate the effectiveness of the system, it has been verified on two kinds of electroencephalography (EEG) data, short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training is accelerated by a factor of about 4.66, and the prediction time also meets real-time requirements.
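
    A hedged sketch of the wavelet-plus-SVM portion of the cascade (the genetic-algorithm feature selection is omitted); the wavelet family, decomposition level and energy features are assumptions for illustration, not the authors' published settings.

        import numpy as np
        import pywt
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def wavelet_features(eeg_segment, wavelet="db4", level=4):
            """Energy of each wavelet sub-band of one 1-D EEG segment."""
            coeffs = pywt.wavedec(eeg_segment, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs])

        def seizure_classification_accuracy(segments, labels):
            """segments: list of 1-D arrays; labels: 0 = normal, 1 = seizure."""
            X = np.vstack([wavelet_features(s) for s in segments])
            clf = SVC(kernel="rbf", C=1.0, gamma="scale")
            return cross_val_score(clf, X, np.asarray(labels), cv=5).mean()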

  5. 10 CFR 35.457 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    10 CFR 35.457 (2010), Therapy-related computer systems: The licensee shall perform acceptance testing on the treatment planning system of therapy-related computer systems in accordance with published protocols accepted by...

  6. Knowledge and intelligent computing system in medicine.

    Science.gov (United States)

    Pandey, Babita; Mishra, R B

    2009-03-01

    Knowledge-based systems (KBS) and intelligent computing systems have been used in medical planning, diagnosis and treatment. KBS consist of rule-based reasoning (RBR), case-based reasoning (CBR) and model-based reasoning (MBR), whereas intelligent computing methods (ICM) encompass genetic algorithms (GA), artificial neural networks (ANN), fuzzy logic (FL) and others. The combinations of methods within KBS are CBR-RBR, CBR-MBR and RBR-CBR-MBR, and the combinations within ICM are ANN-GA, fuzzy-ANN, fuzzy-GA and fuzzy-ANN-GA. The combinations of methods across KBS and ICM are RBR-ANN, CBR-ANN, RBR-CBR-ANN, fuzzy-RBR, fuzzy-CBR and fuzzy-CBR-ANN. In this paper, we have made a study of different singular and combined methods (185 in number) applicable to the medical domain from the mid-1970s to 2008. The study is presented in tabular form, showing the methods and their salient features, processes and application areas in the medical domain (diagnosis, treatment and planning). It is observed that most of the methods are used in medical diagnosis, very few are used for planning, and a moderate number in treatment. The study and its presentation in this context would be helpful for novice researchers in the area of medical expert systems.

  7. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  8. Final Report on the Automated Computer Science Education System.

    Science.gov (United States)

    Danielson, R. L.; And Others

    At the University of Illinois at Urbana, a computer based curriculum called Automated Computer Science Education System (ACSES) has been developed to supplement instruction in introductory computer science courses or to assist individuals interested in acquiring a foundation in computer science through independent study. The system, which uses…

  9. Neural circuits as computational dynamical systems.

    Science.gov (United States)

    Sussillo, David

    2014-04-01

    Many recent studies of neurons recorded from cortex reveal complex temporal dynamics. How such dynamics embody the computations that ultimately lead to behavior remains a mystery. Approaching this issue requires developing plausible hypotheses couched in terms of neural dynamics. A tool ideally suited to aid in this question is the recurrent neural network (RNN). RNNs straddle the fields of nonlinear dynamical systems and machine learning and have recently seen great advances in both theory and application. I summarize recent theoretical and technological advances and highlight an example of how RNNs helped to explain perplexing high-dimensional neurophysiological data in the prefrontal cortex.
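
    For readers unfamiliar with the tool mentioned above, the standard (vanilla) RNN state update used in such studies can be written, in generic textbook form rather than as the specific models of this review, as

        \mathbf{h}_t = \tanh\bigl(W\,\mathbf{h}_{t-1} + U\,\mathbf{x}_t + \mathbf{b}\bigr),
        \qquad \mathbf{y}_t = W_{\mathrm{out}}\,\mathbf{h}_t ,

    so that once trained the network is itself a nonlinear dynamical system whose state trajectories h_t can be compared with, and used to interpret, recorded population activity.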

  10. Controlling Energy Demand in Mobile Computing Systems

    CERN Document Server

    Ellis, Carla

    2007-01-01

    This lecture provides an introduction to the problem of managing the energy demand of mobile devices. Reducing energy consumption, primarily with the goal of extending the lifetime of battery-powered devices, has emerged as a fundamental challenge in mobile computing and wireless communication. The focus of this lecture is on a systems approach where software techniques exploit state-of-the-art architectural features rather than relying only upon advances in lower-power circuitry or the slow improvements in battery technology to solve the problem. Fortunately, there are many opportunities to i

  11. Large-scale neuromorphic computing systems

    Science.gov (United States)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  12. The Spartan attitude control system - Ground support computer

    Science.gov (United States)

    Schnurr, R. G., Jr.

    1986-01-01

    The Spartan Attitude Control System (ACS) contains a command and control computer. This computer is optimized for the activities of the flight and contains very little human interface hardware and software. The computer system provides the technicians testing the Spartan ACS with a convenient command-oriented interface to the flight ACS computer. The system also decodes and time-tags data automatically sent out by the flight computer as key events occur. The duration and magnitude of all system maneuvers are also derived and displayed by this system. The Ground Support Computer is also the primary Ground Support Equipment for the flight sequencer, which controls all payload maneuvers and long-term program timing.

  13. Computer system for monitoring power boiler operation

    Energy Technology Data Exchange (ETDEWEB)

    Taler, J.; Weglowski, B.; Zima, W.; Duda, P.; Gradziel, S.; Sobota, T.; Cebula, A.; Taler, D. [Cracow University of Technology, Krakow (Poland). Inst. for Process & Power Engineering

    2008-02-15

    The computer-based boiler performance monitoring system was developed to perform thermal-hydraulic computations of the boiler working parameters in an on-line mode. Measurements of temperatures, heat flux, pressures, mass flowrates, and gas analysis data were used to perform the heat transfer analysis in the evaporator, furnace, and convection pass. A new construction technique of heat flux tubes for determining heat flux absorbed by membrane water-walls is also presented. The current paper presents the results of heat flux measurement in coal-fired steam boilers. During changes of the boiler load, the necessary natural water circulation cannot be exceeded. A rapid increase of pressure may cause fading of the boiling process in water-wall tubes, whereas a rapid decrease of pressure leads to water boiling in all elements of the boiler's evaporator - water-wall tubes and downcomers. Both cases can cause flow stagnation in the water circulation leading to pipe cracking. Two flowmeters were assembled on central downcomers, and an investigation of natural water circulation in an OP-210 boiler was carried out. On the basis of these measurements, the maximum rates of pressure change in the boiler evaporator were determined. The on-line computation of the conditions in the combustion chamber allows for real-time determination of the heat flowrate transferred to the power boiler evaporator. Furthermore, with a quantitative indication of surface cleanliness, selective sootblowing can be directed at specific problem areas. A boiler monitoring system is also incorporated to provide details of changes in boiler efficiency and operating conditions following sootblowing, so that the effects of a particular sootblowing sequence can be analysed and optimized at a later stage.
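
    As a hedged illustration of the kind of on-line energy balance such a monitoring system evaluates (a generic textbook relation, not the authors' specific model), the heat flow rate absorbed by a natural-circulation evaporator producing saturated steam can be estimated from measured flows and enthalpies as

        \dot{Q}_{\mathrm{evap}} \approx \dot{m}\,\bigl(h'' - h_{\mathrm{fw}}\bigr),

    where \dot{m} is the steam mass flow rate, h'' the saturated-steam enthalpy at drum pressure and h_{\mathrm{fw}} the feedwater enthalpy; with pressure, flow and temperature measurements available on-line, this is one plausible route to the real-time heat flow rate mentioned in the abstract.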

  14. Engineering Control Systems and Computing in the 1990s

    OpenAIRE

    Casti, J.L.

    1985-01-01

    The relationship between computing hardware/software and engineering control systems is projected into the next decade, and conjectures are made as to the areas of control and system theory that will most benefit from various types of computing advances.

  15. Computer Based Information Systems and the Middle Manager.

    Science.gov (United States)

    Why do some computer based information systems succeed while others fail? It concludes with eleven recommended areas that middle management must ... understand in order to effectively use computer based information systems. (Modified author abstract)

  16. Potential of Cognitive Computing and Cognitive Systems

    Science.gov (United States)

    Noor, Ahmed K.

    2014-11-01

    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work, and the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments, incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces, and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized / collaborative, learning and effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity. http://www.aee.odu.edu/cognitivecomp

  17. COMPUTER-BASED REASONING SYSTEMS: AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    CIPRIAN CUCU

    2012-12-01

    Full Text Available Argumentation is nowadays seen both as a skill that people use in various aspects of their lives, and as an educational technique that can support the transfer or creation of knowledge, thus aiding in the development of other skills (e.g. communication, critical thinking or attitudes). However, teaching argumentation and teaching with argumentation is still a rare practice, mostly due to the lack of available resources such as time or expert human tutors that are specialized in argumentation. Intelligent computer systems (i.e. systems that implement an inner representation of particular knowledge and try to emulate the behavior of humans) could allow more people to understand the purpose, techniques and benefits of argumentation. The proposed paper investigates the state of the art concepts of computer-based argumentation used in education and tries to develop a conceptual map, showing benefits, limitations and relations between various concepts, focusing on the duality “learning to argue – arguing to learn”.

  18. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  19. Multiaxis, Lightweight, Computer-Controlled Exercise System

    Science.gov (United States)

    Haynes, Leonard; Bachrach, Benjamin; Harvey, William

    2006-01-01

    The multipurpose, multiaxial, isokinetic dynamometer (MMID) is a computer-controlled system of exercise machinery that can serve as a means for quantitatively assessing a subject's muscle coordination, range of motion, strength, and overall physical condition with respect to a wide variety of forces, motions, and exercise regimens. The MMID is easily reconfigurable and compactly stowable and, in comparison with prior computer-controlled exercise systems, it weighs less, costs less, and offers more capabilities. Whereas a typical prior isokinetic exercise machine is limited to operation in only one plane, the MMID can operate along any path. In addition, the MMID is not limited to the isokinetic (constant-speed) mode of operation. The MMID provides for control and/or measurement of position, force, and/or speed of exertion in as many as six degrees of freedom simultaneously; hence, it can accommodate more complex, more nearly natural combinations of motions and, in so doing, offers greater capabilities for physical conditioning and evaluation. The MMID (see figure) includes as many as eight active modules, each of which can be anchored to a floor, wall, ceiling, or other fixed object. A cable is payed out from a reel in each module to a bar or other suitable object that is gripped and manipulated by the subject. The reel is driven by a DC brushless motor or other suitable electric motor via a gear reduction unit. The motor can be made to function as either a driver or an electromagnetic brake, depending on the required nature of the interaction with the subject. The module includes a force and a displacement sensor for real-time monitoring of the tension in and displacement of the cable, respectively. In response to commands from a control computer, the motor can be operated to generate a required tension in the cable, to displace the cable a required distance, or to reel the cable in or out at a required speed. The computer can be programmed, either locally or via
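
    The record above describes a closed-loop cable drive: a force sensor, a motor that can act as driver or brake, and a control computer commanding tension, displacement or speed. A minimal sketch of such a tension loop is given below; the proportional controller, the function names and the toy plant response are all assumptions made for illustration and are not taken from the MMID itself.

      # Hypothetical proportional tension controller for one MMID-style cable module.
      # read_tension() and set_motor_torque() stand in for the real force sensor and
      # motor drive, which are not described at code level in the source record.

      def control_step(target_tension_n, read_tension, set_motor_torque, gain=0.5):
          """One control cycle: drive the motor torque toward the commanded cable tension."""
          error = target_tension_n - read_tension()      # tension error in newtons
          set_motor_torque(gain * error)                 # motor drives (+) or brakes (-)
          return error

      if __name__ == "__main__":
          # Toy plant: cable tension responds proportionally to the torque command.
          state = {"tension": 0.0}
          read_tension = lambda: state["tension"]
          def set_motor_torque(cmd):
              state["tension"] += 2.0 * cmd              # crude first-order response
          for _ in range(20):
              control_step(100.0, read_tension, set_motor_torque)
          print(f"tension after 20 cycles: {state['tension']:.1f} N")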

  20. 14 CFR 415.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    14 CFR 415.123 (2010): Computing systems and software ... Launch Vehicle From a Non-Federal Launch Site, § 415.123 Computing systems and software. (a) An applicant's safety review document must describe all computing systems and software that perform a safety...

  1. Intelligent Computer Vision System for Automated Classification

    Science.gov (United States)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-05-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
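
    As a rough illustration of the feature-reduction-plus-neural-network pipeline described above, the sketch below chains PCA with a small neural network on synthetic data; the scikit-learn components and parameters stand in for the authors' cork-tile features and GLPτS training method, neither of which is reproduced here.

      # Illustrative PCA + neural-network classifier pipeline on synthetic "texture" features.
      # Not the authors' system: the GLPtauS metaheuristic trainer is replaced by
      # scikit-learn's default MLP optimizer, and the data are random stand-ins.
      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = make_classification(n_samples=400, n_features=60, n_informative=12,
                                 n_classes=4, random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      clf = make_pipeline(StandardScaler(),
                          PCA(n_components=10),          # dimensionality reduction
                          MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                        random_state=0))
      clf.fit(X_train, y_train)
      print(f"test accuracy: {clf.score(X_test, y_test):.2f}")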

  2. Computational dynamics of acoustically driven microsphere systems.

    Science.gov (United States)

    Glosser, Connor; Piermarocchi, Carlo; Li, Jie; Dault, Dan; Shanker, B

    2016-01-01

    We propose a computational framework for the self-consistent dynamics of a microsphere system driven by a pulsed acoustic field in an ideal fluid. Our framework combines a molecular dynamics integrator describing the dynamics of the microsphere system with a time-dependent integral equation solver for the acoustic field that makes use of fields represented as surface expansions in spherical harmonic basis functions. The presented approach allows us to describe the interparticle interaction induced by the field as well as the dynamics of trapping in counter-propagating acoustic pulses. The integral equation formulation leads to equations of motion for the microspheres describing the effect of nondissipative drag forces. We show (1) that the field-induced interactions between the microspheres give rise to effective dipolar interactions, with effective dipoles defined by their velocities and (2) that the dominant effect of an ultrasound pulse through a cloud of microspheres gives rise mainly to a translation of the system, though we also observe both expansion and contraction of the cloud determined by the initial system geometry.
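
    The molecular-dynamics half of such a framework can be illustrated with a generic velocity Verlet stepper; the acoustic integral-equation solver is well beyond a snippet, so the pairwise force below is a placeholder and nothing in this sketch is taken from the paper.

      # Generic velocity Verlet time stepper for a small particle system.
      # The force model is a placeholder (soft repulsion on overlap), not the
      # field-induced interaction computed by the paper's integral-equation solver.
      import numpy as np

      def forces(pos, strength=1e-3, radius=1.0):
          """Soft pairwise repulsion when spheres overlap; stands in for acoustic forces."""
          f = np.zeros_like(pos)
          n = len(pos)
          for i in range(n):
              for j in range(i + 1, n):
                  d = pos[i] - pos[j]
                  r = np.linalg.norm(d)
                  if 0.0 < r < 2.0 * radius:
                      push = strength * (2.0 * radius - r) * d / r
                      f[i] += push
                      f[j] -= push
          return f

      def velocity_verlet(pos, vel, mass, dt, steps):
          f = forces(pos)
          for _ in range(steps):
              vel += 0.5 * dt * f / mass
              pos += dt * vel
              f = forces(pos)
              vel += 0.5 * dt * f / mass
          return pos, vel

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          pos = rng.uniform(-3.0, 3.0, size=(5, 3))
          vel = np.zeros((5, 3))
          pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=1e-2, steps=1000)
          print("final centre of mass:", pos.mean(axis=0))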

  3. Computing the Moore-Penrose Inverse of a Matrix with a Computer Algebra System

    Science.gov (United States)

    Schmidt, Karsten

    2008-01-01

    In this paper "Derive" functions are provided for the computation of the Moore-Penrose inverse of a matrix, as well as for solving systems of linear equations by means of the Moore-Penrose inverse. Making it possible to compute the Moore-Penrose inverse easily with one of the most commonly used Computer Algebra Systems--and to have the blueprint…

  4. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

    ... However, high setup and design costs make ASICs economically viable only for high volume production. Therefore, FPGAs are increasingly being used in low and medium volume markets. The evolution of FPGAs has reached a point where multiple processor cores, dedicated accelerators, and a large number ... of interfaces can be integrated on a single device. This thesis consists of five parts that address performance aspects of synthesizable computing systems on FPGAs. First, it is evaluated how synthesizable processor cores can exploit current state-of-the-art FPGA architectures. This evaluation results ... in a processor architecture optimized for a high throughput on modern FPGA architectures. The current hardware implementation, the Tinuso I core, can be clocked as high as 376MHz on a Xilinx Virtex 6 device and consumes fewer hardware resources than similar commercial processor configurations. The Tinuso

  5. The fundamentals of computational intelligence system approach

    CERN Document Server

    Zgurovsky, Mikhail Z

    2017-01-01

    This monograph is dedicated to the systematic presentation of the main trends, technologies and methods of computational intelligence (CI). The book pays particular attention to the novel and important CI technologies of fuzzy logic (FL) systems and fuzzy neural networks (FNN). Different FNNs, including a new class of FNN, cascade neo-fuzzy neural networks, are considered, and their training algorithms are described and analyzed. The applications of FNN to forecasting in macroeconomics and at stock markets are examined. The book presents the problem of portfolio optimization under uncertainty and the novel theory of fuzzy portfolio optimization, free of the drawbacks of the classical Markowitz model, as well as an application to portfolio optimization at Ukrainian, Russian and American stock exchanges. The book also presents the problem of corporate bankruptcy risk forecasting under incomplete and fuzzy information, as well as new methods based on fuzzy sets theory and fuzzy neural networks and results of their application for bankruptcy ris...

  6. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  7. A computing system for LBB considerations

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, K.; Miettinen, J.; Raiko, H.; Keskinen, R.

    1997-04-01

    A computing system has been developed at VTT Energy for making efficient leak-before-break (LBB) evaluations of piping components. The system consists of fracture mechanics and leak rate analysis modules which are linked via an interactive user interface, LBBCAL. The system enables quick tentative analysis of standard geometric and loading situations by means of fracture mechanics estimation schemes such as the R6, FAD, EPRI J, Battelle, plastic limit load and moments methods. Complex situations are handled with a separate in-house finite-element code, EPFM3D, which uses 20-noded isoparametric solid elements, automatic mesh generators and advanced color graphics. Analytical formulas and numerical procedures are available for leak area evaluation. A novel contribution for leak rate analysis is the CRAFLO code, which is based on a nonequilibrium two-phase flow model with phase slip. Its predictions are essentially comparable with those of the well-known SQUIRT2 code; additionally it provides outputs for temperature, pressure and velocity distributions in the crack depth direction. An illustrative application to a circumferentially cracked elbow indicates, as expected, that a small margin relative to the saturation temperature of the coolant reduces the leak rate and is likely to influence the LBB implementation for intermediate-diameter (300 mm) primary circuit piping of BWR plants.

  8. Computer vision for driver assistance systems

    Science.gov (United States)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

    Systems for automated image analysis are useful for a variety of tasks and their importance is still increasing due to technological advances and an increase of social acceptance. Especially in the field of driver assistance systems the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut fur Neuroinformatik, methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach consists of a sequential and a parallel sensor and information processing. Three main tasks namely the initial segmentation (object detection), the object tracking and the object classification are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.
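
    A bare skeleton of the sequential branch described above (initial segmentation, object tracking, object classification) is sketched below; all three stages are trivial placeholders, and the actual detectors, trackers and classifiers of the described system are not reproduced.

      # Skeleton of a detect -> track -> classify pipeline for frames from a single camera.
      # Every stage is a placeholder standing in for the real algorithms.
      from dataclasses import dataclass
      from typing import List, Tuple

      BBox = Tuple[int, int, int, int]        # x, y, width, height

      @dataclass
      class Track:
          box: BBox
          label: str = "unknown"
          age: int = 0

      def detect_objects(frame) -> List[BBox]:
          """Initial segmentation: return candidate object boxes (placeholder)."""
          return [(100, 120, 40, 30)]

      def update_tracks(tracks: List[Track], detections: List[BBox]) -> List[Track]:
          """Naive tracking: new track per detection, age and expire old tracks (placeholder)."""
          fresh = [Track(box=d) for d in detections]
          kept = [Track(t.box, t.label, t.age + 1) for t in tracks if t.age < 5]
          return fresh + kept

      def classify(track: Track) -> str:
          """Object classification placeholder."""
          return "vehicle" if track.box[2] > 20 else "other"

      def process_frame(frame, tracks: List[Track]) -> List[Track]:
          tracks = update_tracks(tracks, detect_objects(frame))
          for t in tracks:
              t.label = classify(t)
          return tracks

      if __name__ == "__main__":
          tracks: List[Track] = []
          for frame in range(3):              # frames would come from the CCD camera
              tracks = process_frame(frame, tracks)
          print(tracks)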

  9. Advances in Future Computer and Control Systems v.2

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  10. Advances in Future Computer and Control Systems v.1

    CERN Document Server

    Lin, Sally; 2012 International Conference on Future Computer and Control Systems(FCCS2012)

    2012-01-01

    FCCS2012 is an integrated conference concentrating its focus on Future Computer and Control Systems. “Advances in Future Computer and Control Systems” presents the proceedings of the 2012 International Conference on Future Computer and Control Systems (FCCS2012), held April 21-22, 2012, in Changsha, China, including recent research results on Future Computer and Control Systems from researchers all around the world.

  11. Reachability computation for hybrid systems with Ariadne

    NARCIS (Netherlands)

    L. Benvenuti; D. Bresolin; A. Casagrande; P.J. Collins (Pieter); A. Ferrari; E. Mazzi; T. Villa; A. Sangiovanni-Vincentelli

    2008-01-01

    Ariadne is an in-progress open environment to design algorithms for computing with hybrid automata, that relies on a rigorous computable analysis theory to represent geometric objects, in order to achieve provable approximation bounds along the computations. In this paper we discuss the

  12. Genost: A System for Introductory Computer Science Education with a Focus on Computational Thinking

    Science.gov (United States)

    Walliman, Garret

    Computational thinking, the creative thought process behind algorithmic design and programming, is a crucial introductory skill for both computer scientists and the population in general. In this thesis I perform an investigation into introductory computer science education in the United States and find that computational thinking is not effectively taught at either the high school or the college level. To remedy this, I present a new educational system intended to teach computational thinking called Genost. Genost consists of a software tool and a curriculum based on teaching computational thinking through fundamental programming structures and algorithm design. Genost's software design is informed by a review of eight major computer science educational software systems. Genost's curriculum is informed by a review of major literature on computational thinking. In two educational tests of Genost utilizing both college and high school students, Genost was shown to significantly increase computational thinking ability with a large effect size.

  13. A computer control system using a virtual keyboard

    Science.gov (United States)

    Ejbali, Ridha; Zaied, Mourad; Ben Amar, Chokri

    2015-02-01

    This work is in the field of human-computer communication, namely in the field of gestural communication. The objective was to develop a system for gesture recognition. This system will be used to control a computer without a keyboard. The idea consists of using a visual panel printed on ordinary paper to communicate with a computer.

  14. 10 CFR 35.657 - Therapy-related computer systems.

    Science.gov (United States)

    2010-01-01

    10 CFR 35.657 (2010): Therapy-related computer systems ... Units, Teletherapy Units, and Gamma Stereotactic Radiosurgery Units, § 35.657 Therapy-related computer ... computer systems in accordance with published protocols accepted by nationally recognized bodies. At...

  15. Factory automation management computer system and its applications. FA kanri computer system no tekiyo jirei

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, M. (Meidensha Corp., Tokyo (Japan))

    1993-06-11

    A plurality of NC composite lathes used in a breaker manufacturing and processing line were integrated under a system mainly comprising the industrial computer [mu] PORT, an exclusive LAN, and material handling robots. This paper describes this flexible manufacturing system (FMS) that operates on an unmanned basis from process control to material distribution and processing. This system has achieved the following results: efficiency improvement in lines producing a great variety of products in small quantities and in mixed flow production lines; enhancement in facility operating rates by means of group management of NC machine tools; orientation to developing into integrated production systems; expansion of processing capacity; reduction in the number of processes; and reduction in management and indirect manpower. This system allocates the production control plans transmitted from the production control system operated by a host computer to the processes on a daily basis and by machine, using the [mu] PORT. This FMS utilizes features of the multi-task processing function of the [mu] PORT and the ultra high-speed real-time-based BASIC. The system simultaneously processes the process management (such as machining programs and processing results), the processing data management, and the operation control of a plurality of machines. The system achieved systematized machining processes. 6 figs., 2 tabs.

  16. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic balance of the loads. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective ones of the computers are by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers; the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
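
    A toy rendering of the split-token idea in these claims, assuming simple Python classes: the moving first portion travels between computers and carries the location of the resident second portion held in one computer's memory. Everything below is illustrative rather than the patented implementation.

      # Toy model of a "split token": a moving part that travels between computers and
      # a resident part that stays in one computer's memory, located via the moving part.
      from dataclasses import dataclass, field
      from typing import Any, Dict

      @dataclass
      class ResidentPortion:
          data: Dict[str, Any] = field(default_factory=dict)   # data used by the function

      @dataclass
      class MovingPortion:
          function: str        # function to be executed by the receiving computer
          home_node: int       # which computer holds the resident portion
          resident_key: str    # where in that computer's memory it is stored

      class Computer:
          def __init__(self, node_id: int):
              self.node_id = node_id
              self.memory: Dict[str, ResidentPortion] = {}

          def execute(self, token: MovingPortion, cluster: Dict[int, "Computer"]):
              resident = cluster[token.home_node].memory[token.resident_key]
              print(f"node {self.node_id}: run {token.function} with {resident.data}")

      if __name__ == "__main__":
          cluster = {i: Computer(i) for i in range(3)}
          cluster[0].memory["job-42"] = ResidentPortion({"samples": 128})
          token = MovingPortion("filter_telemetry", home_node=0, resident_key="job-42")
          cluster[2].execute(token, cluster)   # token moved to node 2; data stays on node 0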

  17. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  18. Applications of membrane computing in systems and synthetic biology

    CERN Document Server

    Gheorghe, Marian; Pérez-Jiménez, Mario

    2014-01-01

    Membrane Computing was introduced as a computational paradigm in Natural Computing. The models introduced, called Membrane (or P) Systems, provide a coherent platform to describe and study living cells as computational systems. Membrane Systems have been investigated for their computational aspects and employed to model problems in other fields, like: Computer Science, Linguistics, Biology, Economy, Computer Graphics, Robotics, etc. Their inherent parallelism, heterogeneity and intrinsic versatility allow them to model a broad range of processes and phenomena, being also an efficient means to solve and analyze problems in a novel way. Membrane Computing has been used to model biological systems, becoming with time a thorough modeling paradigm comparable, in its modeling and predicting capabilities, to more established models in this area. This book is the result of the need to collect, in an organic way, different facets of this paradigm. The chapters of this book, together with the web pages accompanying th...

  19. COMPUTER APPLICATION SYSTEM FOR OPERATIONAL EFFICIENCY OF DIESEL RAILBUSES

    Directory of Open Access Journals (Sweden)

    Łukasz WOJCIECHOWSKI

    2016-09-01

    The article presents a computer algorithm to calculate an estimated operating cost analysis for a rail bus. The computer application compares the cost of employing a locomotive and wagons, the cost of using locomotives, and the cost of using a rail bus. Intensive growth in passenger railway traffic has increased the demand for modern computer systems for managing means of transportation. The described computer application operates on the basis of selected operating parameters of rail buses.
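
    The kind of comparison such an application performs can be sketched as below; all cost categories and figures are invented for the example and are not the article's cost model or input data.

      # Illustrative operating-cost comparison per train-kilometre.
      # Parameter names and figures are made up for the example only.

      def cost_per_km(energy_cost, maintenance_cost, crew_cost):
          return energy_cost + maintenance_cost + crew_cost

      options = {
          "locomotive + wagons": cost_per_km(energy_cost=9.0, maintenance_cost=4.5, crew_cost=3.0),
          "diesel rail bus":     cost_per_km(energy_cost=4.0, maintenance_cost=2.0, crew_cost=2.5),
      }

      annual_km = 120_000
      for name, per_km in sorted(options.items(), key=lambda kv: kv[1]):
          print(f"{name:20s} {per_km:5.2f} per km, {per_km * annual_km:10,.0f} per year")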

  20. Computers as Components Principles of Embedded Computing System Design

    CERN Document Server

    Wolf, Wayne

    2008-01-01

    This book was the first to bring essential knowledge on embedded systems technology and techniques under a single cover. This second edition has been updated to the state-of-the-art by reworking and expanding performance analysis with more examples and exercises, and coverage of electronic systems now focuses on the latest applications. Researchers, students, and savvy professionals schooled in hardware or software design, will value Wayne Wolf's integrated engineering design approach.The second edition gives a more comprehensive view of multiprocessors including VLIW and superscalar archite

  1. An operating system for future aerospace vehicle computer systems

    Science.gov (United States)

    Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.

    1984-01-01

    The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects in order to implement both the autonomy of and the cooperation between nodes are developed. The requirements for time-critical performance and reliability and recovery are discussed. Time-critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum-performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time-critical messages. The architecture also supports immediate recovery for the time-critical message system after a communication failure.

  2. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  3. Computational systems analysis of dopamine metabolism.

    Directory of Open Access Journals (Sweden)

    Zhen Qi

    A prominent feature of Parkinson's disease (PD) is the loss of dopamine in the striatum, and many therapeutic interventions for the disease are aimed at restoring dopamine signaling. Dopamine signaling includes the synthesis, storage, release, and recycling of dopamine in the presynaptic terminal and activation of pre- and post-synaptic receptors and various downstream signaling cascades. As an aid that might facilitate our understanding of dopamine dynamics in the pathogenesis and treatment of PD, we have begun to merge currently available information and expert knowledge regarding presynaptic dopamine homeostasis into a computational model, following the guidelines of biochemical systems theory. After subjecting our model to mathematical diagnosis and analysis, we made direct comparisons between model predictions and experimental observations and found that the model exhibited a high degree of predictive capacity with respect to genetic and pharmacological changes in gene expression or function. Our results suggest potential approaches to restoring the dopamine imbalance and the associated generation of oxidative stress. While the proposed model of dopamine metabolism is preliminary, future extensions and refinements may eventually serve as an in silico platform for prescreening potential therapeutics, identifying immediate side effects, screening for biomarkers, and assessing the impact of risk factors of the disease.
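
    To convey the flavour of a biochemical-systems-theory model without reproducing the published one, the sketch below integrates a deliberately tiny ODE system for dopamine synthesis, vesicular storage, release and reuptake; the rate laws and constants are placeholders chosen only for illustration.

      # Toy model of presynaptic dopamine turnover (synthesis -> cytosolic pool ->
      # vesicular pool -> release -> reuptake). Rate constants are arbitrary
      # placeholders, not the parameters of the published model.
      from scipy.integrate import solve_ivp

      def dopamine_rhs(t, y, k_syn=1.0, k_pack=0.8, k_rel=0.3, k_reup=0.5, k_deg=0.1):
          cyt, ves, ext = y            # cytosolic, vesicular, extracellular dopamine
          d_cyt = k_syn - k_pack * cyt + k_reup * ext - k_deg * cyt
          d_ves = k_pack * cyt - k_rel * ves
          d_ext = k_rel * ves - k_reup * ext
          return [d_cyt, d_ves, d_ext]

      sol = solve_ivp(dopamine_rhs, t_span=(0.0, 50.0), y0=[1.0, 5.0, 0.1], max_step=0.1)
      print("levels at t = 50 (cyt, ves, ext):", sol.y[:, -1].round(3))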

  4. Lightness computation by the human visual system

    Science.gov (United States)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental, and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration attentional windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
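
    A toy one-dimensional illustration of the edge-integration idea behind such models is given below, with separate gains for incremental and decremental luminance steps; the gains and the stimulus are invented, and the model's contrast gain control, edge classification and attentional windowing are omitted entirely.

      # 1-D edge integration with asymmetric gains for luminance increments and decrements.
      # Only a cartoon of Retinex-style lightness computation; the gains and the stimulus
      # are made up, and top-down edge classification is not modelled.
      import numpy as np

      luminance = np.array([10.0, 10.0, 40.0, 40.0, 20.0, 20.0, 80.0, 80.0])  # cd/m^2
      log_steps = np.diff(np.log10(luminance))      # log-luminance ratios at edges

      GAIN_INC, GAIN_DEC = 1.0, 1.5                 # decremental steps weighted more heavily
      weights = np.where(log_steps >= 0, GAIN_INC, GAIN_DEC)

      # Integrate weighted edge signals from left to right to obtain relative lightness.
      lightness = np.concatenate([[0.0], np.cumsum(weights * log_steps)])
      print("relative lightness per sample:", lightness.round(2))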

  5. Quantum Computing in Fock Space Systems

    Science.gov (United States)

    Berezin, Alexander A.

    1997-04-01

    Fock space system (FSS) has unfixed number (N) of particles and/or degrees of freedom. In quantum computing (QC) the main requirement is sustainability of coherent Q-superpositions. This is normally favoured by a low-noise environment. The high excitation/high temperature (T) limit is hence discarded as unfeasible for QC. Conversely, if N is itself a quantized variable, the dimensionality of the Hilbert basis for qubits may increase faster (say, N-exponentially) than thermal noise (likely, in powers of N and T). Hence coherency may win over T-randomization. For this type of QC the speed (S) of factorization of long integers (with D digits) may increase with D (for 'ordinary' QC speed polynomially decreases with D). This (apparent) paradox rests on non-monotonic bijectivity (cf. Georg Cantor's diagonal counting of rational numbers). This brings entire aleph-null structurality ("Babylonian Library" of infinite informational content of the integer field) to the superposition determining the state of a quantum analogue of the Turing machine head. The structure of integer infinitude (e.g. distribution of primes) results in direct "Platonic pressure" resembling a semi-virtual Casimir effect (pressure of cut-off vibrational modes). This "effect", the embodiment of the Pythagorean "Number is everything", renders the Gödelian barrier arbitrarily thin and hence FSS-based QC can in principle be unlimitedly efficient (e.g. D/S may tend to zero when D tends to infinity).

  6. Context-aware computing and self-managing systems

    CERN Document Server

    Dargie, Waltenegus

    2009-01-01

    Bringing together an extensively researched area with an emerging research issue, Context-Aware Computing and Self-Managing Systems presents the core contributions of context-aware computing in the development of self-managing systems, including devices, applications, middleware, and networks. The expert contributors reveal the usefulness of context-aware computing in developing autonomous systems that have practical application in the real world.The first chapter of the book identifies features that are common to both context-aware computing and autonomous computing. It offers a basic definit

  7. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.

  8. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  10. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    The views, opinions and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the ... Report title: A Heterogeneous High-Performance System for Computational and Computer Science. This DoD HBC/MI Equipment/Instrumentation grant was awarded in October 2014 for the purchase ... Computing (HPC) course taught in the department of computer science so as to attract more graduate students from many disciplines where their research

  11. Computer controlled vent and pressurization system

    Science.gov (United States)

    Cieslewicz, E. J.

    1975-01-01

    The Centaur space launch vehicle airborne computer, which was primarily used to perform guidance, navigation, and sequencing tasks, was further used to monitor and control inflight pressurization and venting of the cryogenic propellant tanks. Computer software flexibility also provided a failure detection and correction capability necessary to adopt and operate redundant hardware techniques and enhance the overall vehicle reliability.

  12. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. Also we...

  13. The hack attack - Increasing computer system awareness of vulnerability threats

    Science.gov (United States)

    Quann, John; Belford, Peter

    1987-01-01

    The paper discusses the issue of electronic vulnerability of computer based systems supporting NASA Goddard Space Flight Center (GSFC) by unauthorized users. To test the security of the system and increase security awareness, NYMA, Inc. employed computer 'hackers' to attempt to infiltrate the system(s) under controlled conditions. Penetration procedures, methods, and descriptions are detailed in the paper. The procedure increased the security consciousness of GSFC management to the electronic vulnerability of the system(s).

  14. PLAID- A COMPUTER AIDED DESIGN SYSTEM

    Science.gov (United States)

    Brown, J. W.

    1994-01-01

    PLAID is a three-dimensional Computer Aided Design (CAD) system which enables the user to interactively construct, manipulate, and display sets of highly complex geometric models. PLAID was initially developed by NASA to assist in the design of Space Shuttle crewstation panels, and the detection of payload object collisions. It has evolved into a more general program for convenient use in many engineering applications. Special effort was made to incorporate CAD techniques and features which minimize the user's workload in designing and managing PLAID models. PLAID consists of three major modules: the Primitive Object Generator (BUILD), the Composite Object Generator (COG), and the DISPLAY Processor. The BUILD module provides a means of constructing simple geometric objects called primitives. The primitives are created from polygons which are defined either explicitly by vertex coordinates, or graphically by use of terminal crosshairs or a digitizer. Solid objects are constructed by combining, rotating, or translating the polygons. Corner rounding, hole punching, milling, and contouring are special features available in BUILD. The COG module hierarchically organizes and manipulates primitives and other previously defined COG objects to form complex assemblies. The composite object is constructed by applying transformations to simpler objects. The transformations which can be applied are scalings, rotations, and translations. These transformations may be defined explicitly or defined graphically using the interactive COG commands. The DISPLAY module enables the user to view COG assemblies from arbitrary viewpoints (inside or outside the object) both in wireframe and hidden-line renderings. The PLAID projection of a three-dimensional object can be either orthographic or perspective. A conflict analysis option enables detection of spatial conflicts or collisions. DISPLAY provides camera functions to simulate a view of the model through different lenses. Other
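
    The COG-style composition of scalings, rotations and translations is the standard homogeneous-transform technique, sketched generically below; this is an illustration of that technique only, not PLAID source code.

      # Composing scale, rotation and translation as 4x4 homogeneous transforms, the usual
      # technique behind hierarchical assembly of CAD primitives (generic sketch).
      import numpy as np

      def scaling(sx, sy, sz):
          return np.diag([sx, sy, sz, 1.0])

      def rotation_z(angle_rad):
          c, s = np.cos(angle_rad), np.sin(angle_rad)
          m = np.eye(4)
          m[:2, :2] = [[c, -s], [s, c]]
          return m

      def translation(tx, ty, tz):
          m = np.eye(4)
          m[:3, 3] = [tx, ty, tz]
          return m

      # Place a unit-cube corner: scale it, rotate 90 degrees about z, then move it.
      composite = translation(5.0, 0.0, 1.0) @ rotation_z(np.pi / 2) @ scaling(2.0, 1.0, 1.0)
      corner = np.array([1.0, 1.0, 0.0, 1.0])      # homogeneous point
      print("transformed corner:", (composite @ corner)[:3])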

  15. Overview of ASC Capability Computing System Governance Model

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott W. [Los Alamos National Laboratory

    2012-07-11

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  16. High-Speed Computer-Controlled Switch-Matrix System

    Science.gov (United States)

    Spisz, E.; Cory, B.; Ho, P.; Hoffman, M.

    1985-01-01

    High-speed computer-controlled switch-matrix system developed for communication satellites. Satellite system controlled by onboard computer and all message-routing functions between uplink and downlink beams handled by newly developed switch-matrix system. Message requires only 2-microsecond interconnect period, repeated every millisecond.

  17. Granular computing analysis and design of intelligent systems

    CERN Document Server

    Pedrycz, Witold

    2013-01-01

    Information granules, as encountered in natural language, are implicit in nature. To make them fully operational so they can be effectively used to analyze and design intelligent systems, information granules need to be made explicit. An emerging discipline, granular computing focuses on formalizing information granules and unifying them to create a coherent methodological and developmental environment for intelligent system design and analysis. Granular Computing: Analysis and Design of Intelligent Systems presents the unified principles of granular computing along with its comprehensive algo

  18. Computational Modeling of Flow Control Systems for Aerospace Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop computational methods for designing active flow control systems on aerospace vehicles with the primary objective of...

  19. Simulation model of load balancing in distributed computing systems

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Frolov, S. G.

    2017-02-01

    The availability of high-performance computing, high-speed data transfer over networks, and the widespread use of software for design and pre-production in mechanical engineering have led large industrial enterprises and small engineering companies to implement complex computer systems for the efficient solution of production and management tasks. Such computer systems are generally built on the basis of distributed heterogeneous computer systems. The analytical problems solved by such systems are the key objects of research, but the system-wide problems of efficiently distributing (balancing) the computational load and of accommodating input, intermediate and output databases are no less important. The main tasks of this balancing system are load and condition monitoring of compute nodes, and the selection of a node to which a user's request is forwarded in accordance with a predetermined algorithm. Load balancing is one of the most widely used methods of increasing the productivity of distributed computing systems through the optimal allocation of tasks between the computer system nodes. Therefore, the development of methods and algorithms for computing an optimal schedule in a distributed system whose infrastructure changes dynamically is an important task.
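
    The node-selection step of such a balancing system can be sketched with a simple least-loaded policy, as below; the node model, the monitoring and the policy itself are assumptions made for illustration, not the algorithm studied in the paper.

      # Least-loaded node selection for incoming user requests (illustrative policy only).
      import random
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Node:
          name: str
          running: List[float] = field(default_factory=list)   # remaining work per task

          @property
          def load(self) -> float:
              return sum(self.running)

      def dispatch(nodes: List[Node], task_cost: float) -> Node:
          """Condition monitoring reduced to a load sum; pick the least-loaded node."""
          target = min(nodes, key=lambda n: n.load)
          target.running.append(task_cost)
          return target

      if __name__ == "__main__":
          random.seed(1)
          cluster = [Node(f"node-{i}") for i in range(4)]
          for _ in range(20):
              dispatch(cluster, task_cost=random.uniform(0.5, 2.0))
          for node in cluster:
              print(f"{node.name}: {len(node.running)} tasks, load {node.load:.1f}")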

  20. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. This book begins with an overview of the optimization the...... theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems....

  1. Top 10 Threats to Computer Systems Include Professors and Students

    Science.gov (United States)

    Young, Jeffrey R.

    2008-01-01

    User awareness is growing in importance when it comes to computer security. Not long ago, keeping college networks safe from cyberattackers mainly involved making sure computers around campus had the latest software patches. New computer worms or viruses would pop up, taking advantage of some digital hole in the Windows operating system or in…

  3. Bringing the CMS distributed computing system into scalable operations

    CERN Document Server

    Belforte, S; Fisk, I; Flix, J; Hernández, J M; Kress, T; Letts, J; Magini, N; Miccio, V; Sciabà, A

    2010-01-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure an...

  4. A Survey of Civilian Dental Computer Systems.

    Science.gov (United States)

    1988-01-01

    ...marketplace, the orthodontic community continued to pioneer clinical automation through diagnosis, treat... (1) patient registration, identification... profession." New York State Dental Journal 34:76, 1968. 17. Ehrlich, A., The Role of Computers in Dental Practice Management. Champaign, IL: Colwell... Council on Dental Practice. Report: Dental Computer Vendors, 1984. ...military dental clinic. Medical Bulletin of the US Army, Europe 39:14-16, 1982.

  5. Distributed computing environments for future space control systems

    Science.gov (United States)

    Viallefont, Pierre

    1993-01-01

    The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.

  6. A computational design system for rapid CFD analysis

    Science.gov (United States)

    Ascoli, E. P.; Barson, S. L.; Decroix, M. E.; Sindir, Munir M.

    1992-01-01

    A computational design system (CDS) is described in which these tools are integrated in a modular fashion. This CDS ties together four key areas of computational analysis: description of geometry; grid generation; computational codes; and postprocessing. Integration of improved computational fluid dynamics (CFD) analysis tools with the CDS has made a significant positive impact on the use of CFD for engineering design problems. Complex geometries are now analyzed on a frequent basis and with greater ease.

  7. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  8. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    Science.gov (United States)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
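
    The core of an ABC rejection loop can be sketched in a few lines: draw candidate parameters from a prior, simulate the model, and keep the candidates whose simulated output lies closest to the data. The toy first-order model, uniform prior, Euler simulator and acceptance rule below are illustrative choices and are not the paper's published configuration.

      # ABC rejection sampling for one parameter of a toy continuous-time model
      # dx/dt = -a * x + u(t). Prior, tolerance and simulator are illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, n_steps, a_true = 0.01, 500, 1.7
      t = np.arange(n_steps) * dt
      u = np.sin(2.0 * np.pi * 0.5 * t)              # slowly varying input signal

      def simulate(a):
          x = np.zeros(n_steps)
          for k in range(n_steps - 1):
              x[k + 1] = x[k] + dt * (-a * x[k] + u[k])   # forward Euler integration
          return x

      y_meas = simulate(a_true) + 0.01 * rng.standard_normal(n_steps)  # noisy "data"

      # ABC rejection: draw from the prior, accept candidates with small output distance.
      prior_draws = rng.uniform(0.1, 5.0, size=5000)
      distances = np.array([np.sqrt(np.mean((simulate(a) - y_meas) ** 2))
                            for a in prior_draws])
      accepted = prior_draws[distances < np.quantile(distances, 0.01)]  # keep best 1%

      print(f"posterior mean {accepted.mean():.2f} +/- {accepted.std():.2f} (true {a_true})")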

  9. Design technologies for green and sustainable computing systems

    CERN Document Server

    Ganguly, Amlan; Chakrabarty, Krishnendu

    2013-01-01

    This book provides a comprehensive guide to the design of sustainable and green computing systems (GSC). Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high-performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking. The book offers readers a single-source reference for addressing the challenges of power efficiency and sustainability in embedded computing systems; provides in-depth coverage of the key underlying design technologies for green and sustainable computing; and covers a wide range of topics, from chip-level design to architectures, computing systems, and networks.

  10. A comparison of queueing, cluster and distributed computing systems

    Science.gov (United States)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  11. Computer Generated Hologram System for Wavefront Measurement System Calibration

    Science.gov (United States)

    Olczak, Gene

    2011-01-01

    Computer Generated Holograms (CGHs) have been used for some time to calibrate interferometers that require nulling optics. A typical scenario is the testing of aspheric surfaces with an interferometer placed near the paraxial center of curvature. Existing CGH technology suffers from a reduced capacity to calibrate middle and high spatial frequencies. The root cause of this shortcoming is as follows: the CGH is not placed at an image conjugate of the asphere due to limitations imposed by the geometry of the test and the allowable size of the CGH. This innovation provides a calibration system where the imaging properties in calibration can be made comparable to the test configuration. Thus, if the test is designed to have good imaging properties, then middle and high spatial frequency errors in the test system can be well calibrated. The improved imaging properties are provided by a rudimentary auxiliary optic as part of the calibration system. The auxiliary optic is simple to characterize and align to the CGH. Use of the auxiliary optic also reduces the size of the CGH required for calibration and the density of the lines required for the CGH. The resulting CGH is less expensive than the existing technology and has reduced write error and alignment error sensitivities. This CGH system is suitable for any kind of calibration using an interferometer when high spatial resolution is required. It is especially well suited for tests that include segmented optical components or large apertures.

  12. The Cc1 Project – System For Private Cloud Computing

    Directory of Open Access Journals (Sweden)

    J Chwastowski

    2012-01-01

    The main features of the Cloud Computing system developed at IFJ PAN are described. The project is financed from the structural resources provided by the European Commission and the Polish Ministry of Science and Higher Education (Innovative Economy, National Cohesion Strategy). The system delivers a solution for carrying out computer calculations on a Private Cloud computing infrastructure. It consists of an intuitive Web-based user interface, a module for the administration of users and resources, and an implementation of the standard EC2 interface. Thanks to the distributed character of the system, it allows for the integration of a geographically distant federation of computer clusters within a uniform user environment.

  13. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption. Others have a low level, and most have no EMR at all. Cloud computing is a new emerging technology that has been used in other industries and has shown great success. Despite the great features of cloud computing, it has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed cloud system applies cloud computing technology to the EHR system to present a comprehensive EHR integrated environment.

  14. A Brief Talk on Teaching Reform Program of Computer Network Course System about Computer Related Professional

    Institute of Scientific and Technical Information of China (English)

    Wang Jian-Ping; Huang Yong

    2008-01-01

    The computer network course is a core required course in college computer-related programs. An analysis of the current teaching situation shows that the teaching of this course has not yet formed a complete system: new knowledge points need to be added promptly, while outdated technology still lingers in the teaching. The article describes the current situation and the shortcomings that appear in university teaching for computer network related majors, and presents teaching systems and teaching reform schemes for the computer network course.

  15. Mechanisms of protection of information in computer networks and systems

    Directory of Open Access Journals (Sweden)

    Sergey Petrovich Evseev

    2011-10-01

    Full Text Available Protocols of information protection in computer networks and systems are investigated. The basic types of security threats arising from the use of computer networks are classified. The basic mechanisms, services and implementation variants of cryptosystems for maintaining authentication, integrity and confidentiality of transmitted information are examined, and their advantages and drawbacks are described. Promising directions in the development of cryptographic transformations for information protection in computer networks and systems are defined and analyzed.

  16. Research on computer virus database management system

    Science.gov (United States)

    Qi, Guoquan

    2011-12-01

    The growing proliferation of computer viruses has become a lethal threat and a research focus of network information security. New viruses keep emerging, the total number of viruses is growing, and virus classification is becoming increasingly complex. Virus naming cannot be unified because different agencies capture samples at different times. Although each agency has its own virus database, communication between agencies is lacking, virus information is often incomplete, and sample information may be scarce. This paper introduces the current state of virus database construction at home and abroad, analyzes how to standardize and completely describe virus characteristics, and then gives a computer virus database design scheme that addresses information integrity, storage security and manageability.

  17. Sensor fusion control system for computer integrated manufacturing

    CSIR Research Space (South Africa)

    Kumile, CM

    2007-08-01

    Full Text Available of products in unpredictable quantities. Computer Integrated Manufacturing (CIM) systems play an important role in integrating such flexible systems. This paper presents a methodology for increasing the flexibility and reusability of a generic CIM cell...

  18. Computer-Based Integrated Learning Systems: Research and Theory.

    Science.gov (United States)

    Hativa, Nira, Ed.; Becker, Henry Jay, Ed.

    1994-01-01

    The eight chapters of this theme issue discuss recent research and theory concerning computer-based integrated learning systems. Following an introduction about their theoretical background and current use in schools, the effects of using computer-based integrated learning systems in the elementary school classroom are considered. (SLD)

  19. Entrepreneurial Health Informatics for Computer Science and Information Systems Students

    Science.gov (United States)

    Lawler, James; Joseph, Anthony; Narula, Stuti

    2014-01-01

    Corporate entrepreneurship is a critical area of curricula for computer science and information systems students. Few institutions of computer science and information systems have entrepreneurship in the curricula however. This paper presents entrepreneurial health informatics as a course in a concentration of Technology Entrepreneurship at a…

  20. On the Computation of Lyapunov Functions for Interconnected Systems

    DEFF Research Database (Denmark)

    Sloth, Christoffer

    2016-01-01

    This paper addresses the computation of additively separable Lyapunov functions for interconnected systems. The presented results can be applied to reduce the complexity of the computations associated with stability analysis of large scale systems. We provide a necessary and sufficient condition...
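
    For reference, an additively separable Lyapunov function for an interconnection of n subsystems takes the form sketched below; the notation is assumed for illustration and is not taken from the paper. Each subsystem state x_i contributes its own term, which is what allows stability of the large scale system to be certified from lower-dimensional pieces.

        V(x) = \sum_{i=1}^{n} V_i(x_i), \qquad V_i(x_i) > 0 \ \text{for } x_i \neq 0,
        \dot{V}(x) = \sum_{i=1}^{n} \nabla V_i(x_i)^{\top} \dot{x}_i < 0 \ \text{along trajectories of the interconnected system.}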

  1. Software For Computer-Aided Design Of Control Systems

    Science.gov (United States)

    Wette, Matthew

    1994-01-01

    Computer Aided Engineering System (CAESY) software developed to provide means to evaluate methods for dealing with users' needs in computer-aided design of control systems. Interpreter program for performing engineering calculations. Incorporates features of both Ada and MATLAB. Designed to be flexible and powerful. Includes internally defined functions, procedures and provides for definition of functions and procedures by user. Written in C language.

  2. 3-D Signal Processing in a Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Philip A. Araman

    1991-01-01

    This paper discusses the problem of 3-dimensional image filtering in a computer vision system that would locate and identify internal structural failure. In particular, a 2-dimensional adaptive filter proposed by Unser has been extended to three dimensions. In conjunction with segmentation and labeling, the new filter has been used in the computer vision system to...

  3. Experiments and simulation models of a basic computation element of an autonomous molecular computing system.

    Science.gov (United States)

    Takinoue, Masahiro; Kiga, Daisuke; Shohda, Koh-Ichiroh; Suyama, Akira

    2008-10-01

    Autonomous DNA computers have been attracting much attention because of their ability to integrate into living cells. Autonomous DNA computers can process information through DNA molecules and their molecular reactions. We have already proposed an idea of an autonomous molecular computer with high computational ability, which is now named Reverse-transcription-and-TRanscription-based Autonomous Computing System (RTRACS). In this study, we first report an experimental demonstration of a basic computation element of RTRACS and a mathematical modeling method for RTRACS. We focus on an AND gate, which produces an output RNA molecule only when two input RNA molecules exist, because it is one of the most basic computation elements in RTRACS. Experimental results demonstrated that the basic computation element worked as designed. In addition, its behaviors were analyzed using a mathematical model describing the molecular reactions of the RTRACS computation elements. A comparison between experiments and simulations confirmed the validity of the mathematical modeling method. This study will accelerate construction of various kinds of computation elements and computational circuits of RTRACS, and thus advance the research on autonomous DNA computers.
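
    As a purely illustrative aid (the species names, kinetics, and rate constants below are assumptions for the sketch, not the published RTRACS model), a molecular AND gate of this kind can be modeled with mass-action kinetics in which the output RNA is produced only when both input RNAs are present to form an active complex.

        from scipy.integrate import solve_ivp

        k_bind, k_cat, k_deg = 1.0, 0.5, 0.05      # assumed rate constants (arbitrary units)

        def and_gate(t, y):
            """Mass-action model: output RNA is produced only via the Input1-Input2 complex."""
            in1, in2, cplx, out = y
            bind = k_bind * in1 * in2              # both inputs must be present to form the complex
            return [-bind, -bind, bind, k_cat * cplx - k_deg * out]

        sol_11 = solve_ivp(and_gate, (0.0, 50.0), [1.0, 1.0, 0.0, 0.0])   # both inputs present
        sol_10 = solve_ivp(and_gate, (0.0, 50.0), [1.0, 0.0, 0.0, 0.0])   # only one input present
        print("output (1,1):", sol_11.y[3, -1], " output (1,0):", sol_10.y[3, -1])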

  4. Mechatronic sensory system for computer integrated manufacturing

    CSIR Research Space (South Africa)

    Kumile, CM

    2007-05-01

    Full Text Available (CIM) systems play an important role in integrating such flexible systems. The requirement for fast and cheap design and redesign of manufacturing systems is therefore gaining in importance, considering not only the products and the physical...

  5. Impact of new computing systems on computational mechanics and flight-vehicle structures technology

    Science.gov (United States)

    Noor, A. K.; Storaasli, O. O.; Fulton, R. E.

    1984-01-01

    Advances in computer technology which may have an impact on computational mechanics and flight vehicle structures technology were reviewed. The characteristics of supersystems, highly parallel systems, and small systems are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario for future hardware/software environment and engineering analysis systems is presented. Research areas with potential for improving the effectiveness of analysis methods in the new environment are identified.

  6. Data systems and computer science programs: Overview

    Science.gov (United States)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  7. Central Computer IMS Processing System (CIMS).

    Science.gov (United States)

    Wolfe, Howard

    As part of the IMS Version 3 tryout in 1971-72, software was developed to enable data submitted by IMS users to be transmitted to the central computer, which acted on the data to create IMS reports and to update the Pupil Data Base with criterion exercise and class roster information. The program logic is described, and the subroutines and…

  8. Cloud Computing Based E-Learning System

    Science.gov (United States)

    Al-Zoube, Mohammed; El-Seoud, Samir Abou; Wyne, Mudasser F.

    2010-01-01

    Cloud computing technologies although in their early stages, have managed to change the way applications are going to be developed and accessed. These technologies are aimed at running applications as services over the internet on a flexible infrastructure. Microsoft office applications, such as word processing, excel spreadsheet, access database…

  10. Evaluation of computer-based ultrasonic inservice inspection systems

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.V. Jr.; Angel, L.J.; Doctor, S.R.; Park, W.R.; Schuster, G.J.; Taylor, T.T. [Pacific Northwest Lab., Richland, WA (United States)

    1994-03-01

    This report presents the principles, practices, terminology, and technology of computer-based ultrasonic testing for inservice inspection (UT/ISI) of nuclear power plants, with extensive use of drawings, diagrams, and UT images. The presentation is technical but assumes limited specific knowledge of ultrasonics or computers. The report is divided into 9 sections covering conventional UT, computer-based UT, and evaluation methodology. Conventional UT topics include coordinate axes, scanning, instrument operation, RF and video signals, and A-, B-, and C-scans. Computer-based topics include sampling, digitization, signal analysis, image presentation, SAFT, ultrasonic holography, transducer arrays, and data interpretation. An evaluation methodology for computer-based UT/ISI systems is presented, including questions, detailed procedures, and test block designs. Brief evaluations of several computer-based UT/ISI systems are given; supplementary volumes will provide detailed evaluations of selected systems.

  11. Cloud Computing for Network Security Intrusion Detection System

    Directory of Open Access Journals (Sweden)

    Jin Yang

    2013-01-01

    Full Text Available In recent years, as a new distributed computing model, cloud computing has developed rapidly and become a focus of academia and industry. However, security is now one of the main problems faced by most enterprise customers of cloud computing. In the current network environment, relying on a single terminal to check for Trojans and viruses is increasingly unreliable. This paper analyzes the characteristics of current cloud computing and then proposes a comprehensive real-time network risk evaluation model for cloud computing based on the correspondence between artificial immune system antibodies and pathogen invasion intensity. The model also combines an asset evaluation system and a network integration evaluation system, considering factors from the application layer, the host layer and the network layer that may affect network risk. The experimental results show that this model improves the ability of intrusion detection and can support the security of current cloud computing.

  12. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes; online graphics systems characterized by directly coupled, low-cost storage tube terminals with limited interactive capabilities; and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.

  13. Security for small computer systems a practical guide for users

    CERN Document Server

    Saddington, Tricia

    1988-01-01

    Security for Small Computer Systems: A Practical Guide for Users is a guidebook for security concerns for small computers. The book provides security advice for the end-users of small computers in different aspects of computing security. Chapter 1 discusses the security and threats, and Chapter 2 covers the physical aspect of computer security. The text also talks about the protection of data, and then deals with the defenses against fraud. Survival planning and risk assessment are also encompassed. The last chapter tackles security management from an organizational perspective. The bo

  14. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
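
    For orientation only, the sketch below shows a generic point-source CGH kernel in NumPy; it is not the authors' GPU implementation, and the wavelength, pixel pitch, and array sizes are made-up values. It illustrates why the computation is so expensive: every hologram pixel accumulates a phase contribution from every object point, so the cost grows as (number of points) x (number of pixels), which is what motivates the multi-GPU decomposition reported above.

        import numpy as np

        wavelength = 532e-9                        # assumed laser wavelength [m]
        pitch = 8e-6                               # assumed pixel pitch [m]
        width, height = 1024, 512                  # small hologram for the sketch
        points = np.random.rand(256, 4)            # columns: x, y, z, amplitude (synthetic object)
        points[:, 0:2] -= 0.5                      # center the object laterally
        points[:, 2] = points[:, 2] * 0.05 + 0.1   # object placed 0.10-0.15 m from the hologram

        ys, xs = np.mgrid[0:height, 0:width]
        xh = (xs - width / 2) * pitch              # hologram-plane coordinates [m]
        yh = (ys - height / 2) * pitch

        field = np.zeros((height, width))
        for x, y, z, a in points:
            r = np.sqrt((xh - x * 1e-3) ** 2 + (yh - y * 1e-3) ** 2 + z ** 2)
            field += a * np.cos(2 * np.pi * r / wavelength)   # accumulate point-source phase

        hologram = (field > 0).astype(np.uint8)               # simple binary CGH
        print("fraction of bright pixels:", hologram.mean())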

  15. Software fault tolerance in computer operating systems

    Science.gov (United States)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  16. TRL Computer System User’s Guide

    Energy Technology Data Exchange (ETDEWEB)

    Engel, David W.; Dalton, Angela C.

    2014-01-31

    We have developed a wiki-based graphical user-interface system that implements our technology readiness level (TRL) uncertainty models. This document contains the instructions for using this wiki-based system.

  17. Computer Sciences and Data Systems, volume 1

    Science.gov (United States)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  18. EVALUATION & TRENDS OF SURVEILLANCE SYSTEM NETWORK IN UBIQUITOUS COMPUTING ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-03-01

    Full Text Available With the emergence of ubiquitous computing, the whole scenario of computing has changed, affecting many interdisciplinary fields. This paper examines the impact of ubiquitous computing on video surveillance systems. With growing populations and highly specific security areas, intelligent monitoring is a major requirement of the modern world. The paper describes the evolution of surveillance systems from analog to multi-sensor ubiquitous systems and discusses the demand for context-based architectures. It outlines the benefit of merging cloud computing into surveillance systems to boost capability while reducing cost and maintenance, analyzes some surveillance system architectures designed for ubiquitous deployment, and identifies major challenges and opportunities for researchers to make surveillance systems highly efficient and seamlessly embedded in our environments.

  19. Information Hiding based Trusted Computing System Design

    Science.gov (United States)

    2014-07-18

    The project studies how trust in a computing system can be rooted in intrinsic properties of the system itself (silicon physical unclonable functions, PUFs) and of the environment in which the system operates (electrical network frequency, ENF, signals), and how to improve trust in a wireless sensor network.

  20. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  1. Automated fermentation equipment. 2. Computer-fermentor system

    Energy Technology Data Exchange (ETDEWEB)

    Nyeste, L.; Szigeti, L.; Veres, A.; Pungor, E. Jr.; Kurucz, I.; Hollo, J.

    1981-02-01

    An inexpensive computer-operated system suitable for data collection and steady-state optimum control of fermentation processes is presented. With this system, the minimum generation time has been determined as a function of temperature and pH in turbidostat cultivation of a yeast strain. The applicability of the computer-fermentor system is also demonstrated by determination of the dynamic kLa value.

  2. Managing trust in information systems by using computer simulations

    OpenAIRE

    Zupančič, Eva

    2009-01-01

    The human factor is increasingly important in new information systems and should be taken into consideration when developing new systems. Trust issues, which are tightly tied to the human factor, are becoming an important topic in computer science. In this work we research trust in IT systems and present computer-based trust management solutions. After a review of qualitative and quantitative methods for trust management, a precise description of a simulation tool for trust management ana...

  3. Personal Computer System for Automatic Coronary Venous Flow Measurement

    OpenAIRE

    Dew, Robert B.

    1985-01-01

    We developed an automated system based on an IBM PC/XT Personal computer to measure coronary venous blood flow during cardiac catheterization. Flow is determined by a thermodilution technique in which a cold saline solution is infused through a catheter into the coronary venous system. Regional temperature fluctuations sensed by the catheter are used to determine great cardiac vein and coronary sinus blood flow. The computer system replaces manual methods of acquiring and analyzing temperatur...
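
    As a hedged illustration of the kind of calculation such a system automates (the correction constant and variable names below are assumptions based on the standard continuous-infusion thermodilution relation, not details taken from this paper), coronary venous flow can be estimated from the infusate and blood temperatures as follows.

        def thermodilution_flow(infusion_rate_ml_min, t_blood, t_injectate, t_mixed, k=1.08):
            """Estimate coronary venous flow (ml/min) by continuous-infusion thermodilution.

            k is an assumed correction factor for the density and specific heat of the
            saline injectate relative to blood.
            """
            return infusion_rate_ml_min * ((t_blood - t_injectate) / (t_blood - t_mixed) - 1.0) * k

        # Example: 40 ml/min of room-temperature saline infused into the coronary sinus
        print(thermodilution_flow(40.0, t_blood=37.0, t_injectate=22.0, t_mixed=32.0))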

  4. Improving the safety features of general practice computer systems

    OpenAIRE

    Anthony Avery; Boki Savelyich; Sheila Teasdale

    2003-01-01

    General practice computer systems already have a number of important safety features. However, there are problems in that general practitioners (GPs) have come to rely on hazard alerts even though these are not foolproof. Furthermore, GPs do not know how to make best use of the safety features on their systems. There are a number of solutions that could help to improve the safety features of general practice computer systems and also help to improve the abilities of healthcare professionals to use these ...

  5. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  6. Performance Models for Split-execution Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; McCaskey, Alex [ORNL; Schrock, Jonathan [ORNL; Seddiqi, Hadayat [ORNL; Britt, Keith A [ORNL; Imam, Neena [ORNL

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
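
    A behavioral model of this kind can be as simple as an additive time budget. The sketch below is a hypothetical illustration (the phase names and timings are invented, not the model from this record) of why a fixed translation cost at the quantum-classical interface can dominate end-to-end run time.

        def split_execution_time(n_vars, n_samples,
                                 t_translate_per_var=2e-3,   # classical embedding/translation cost (s per variable)
                                 t_network=5e-2,             # round trip to the QPU service (s)
                                 t_anneal=2e-5):             # per-sample quantum annealing time (s)
            """Hypothetical end-to-end time model for one split-execution call."""
            translate = t_translate_per_var * n_vars
            quantum = t_anneal * n_samples
            return {"translate": translate, "network": t_network,
                    "quantum": quantum, "total": translate + t_network + quantum}

        print(split_execution_time(n_vars=1000, n_samples=10000))
        # Translation (2.0 s) and network overhead dwarf the 0.2 s of quantum annealing,
        # consistent with the interface being the bottleneck.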

  7. Intelligent decision support systems for sustainable computing paradigms and applications

    CERN Document Server

    Abraham, Ajith; Siarry, Patrick; Sheng, Michael

    2017-01-01

    This unique book discusses the latest research, innovative ideas, challenges and computational intelligence (CI) solutions in sustainable computing. It presents novel, in-depth fundamental research on achieving a sustainable lifestyle for society, either from a methodological or from an application perspective. Sustainable computing has expanded to become a significant research area covering the fields of computer science and engineering, electrical engineering and other engineering disciplines, and there has been an increase in the amount of literature on aspects of sustainable computing, such as energy efficiency and natural resource conservation, that emphasizes the role of ICT (information and communications technology) in achieving system design and operation objectives. The energy impact/design of more efficient IT infrastructures is a key challenge in realizing new computing paradigms. The book explores the uses of computational intelligence (CI) techniques for intelligent decision support that can be explo...

  8. Resource requirements for digital computations on electrooptical systems.

    Science.gov (United States)

    Eshaghian, M M; Panda, D K; Kumar, V K

    1991-03-10

    In this paper we study the resource requirements of electrooptical organizations in performing digital computing tasks. We define a generic model of parallel computation using optical interconnects, called the optical model of computation (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Using this model we derive relationships between information transfer and computational resources in solving a given problem. To illustrate our results, we concentrate on a computationally intensive operation, 2-D digital image convolution. Irrespective of the input/output scheme and the order of computation, we show a lower bound of Omega(nw) on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  9. Resource requirements for digital computations on electrooptical systems

    Science.gov (United States)

    Eshaghian, Mary M.; Panda, Dhabaleswar K.; Kumar, V. K. Prasanna

    1991-03-01

    The resource requirements of electrooptical organizations in performing digital computing tasks are studied via a generic model of parallel computation using optical interconnects, called the 'optical model of computation' (OMC). In this model, computation is performed in digital electronics and communication is performed using free space optics. Relationships between information transfer and computational resources in solving a given problem are derived. A computationally intensive operation, two-dimensional digital image convolution is undertaken. Irrespective of the input/output scheme and the order of computation, a lower bound of Omega(nw) is obtained on the optical volume required for convolving a w x w kernel with an n x n image, if the input bits are given to the system only once.

  10. 14 CFR 417.123 - Computing systems and software.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Computing systems and software. 417.123... systems and software. (a) A launch operator must document a system safety process that identifies the... systems and software. (b) A launch operator must identify all safety-critical functions associated with...

  11. Design of Computer Fault Diagnosis and Troubleshooting System ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-12-01

    Dec 1, 2013 ... We model our system using Object-Oriented Analysis and Design (OOAD) and UML ... who use it to share information more rapidly and increase their productivity ... high-level concept of a system ... on the design of an expert system for computer ... open distributed application, has rich type system ...

  12. Establishing performance requirements of computer based systems subject to uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer based systems. Computer based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensively dependent on computer processing of external and internal information, the design process quickly borders on chaos. This situation is exacerbated by the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer based decision processing systems subject to parameter and environmental variation. The approach will be demonstrated with application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer based systems.
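
    One common way to turn such an analysis into a probabilistically based requirement is a Monte Carlo sweep over the uncertain parameters. The sketch below is a generic illustration; the sensor-noise model, threshold, and sample counts are assumptions, not the OBD application described in the paper.

        import random

        def diagnostic_decision(true_fault, sensor_bias, noise_sd, threshold=0.5):
            """Toy computer-based decision: flag a fault if the noisy reading crosses a threshold."""
            reading = (1.0 if true_fault else 0.0) + sensor_bias + random.gauss(0.0, noise_sd)
            return reading > threshold

        def detection_probability(n_trials=100_000):
            hits = 0
            for _ in range(n_trials):
                bias = random.uniform(-0.1, 0.1)     # assumed parameter (manufacturing) variation
                noise = random.uniform(0.05, 0.2)    # assumed environmental variation
                hits += diagnostic_decision(True, bias, noise)
            return hits / n_trials

        # A performance requirement might then be stated probabilistically,
        # e.g. "detection probability >= 0.95 over the specified parameter ranges".
        print(f"estimated detection probability: {detection_probability():.3f}")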

  13. Computer Aided Facial Prosthetics Manufacturing System

    Directory of Open Access Journals (Sweden)

    Peng H.K.

    2016-01-01

    Full Text Available Facial deformities can impose a burden on the patient. There are many solutions for facial deformities, such as plastic surgery and facial prosthetics. However, the current fabrication method for facial prosthetics is costly and time consuming. This study aimed to identify a new method to construct a customized facial prosthesis. A 3D scanner, computer software and a 3D printer were used in this study. Results showed that the newly developed method can be used to produce a customized facial prosthesis. The advantages of the developed method over the conventional process are low cost and reduced waste of material and pollution, in keeping with the green concept.

  14. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  16. THE USE OF COMPUTER ALGEBRA SYSTEMS IN THE TEACHING PROCESS

    Directory of Open Access Journals (Sweden)

    Mychaylo Paszeczko

    2014-11-01

    Full Text Available This work discusses the computational capabilities of the programs belonging to the CAS (Computer Algebra Systems) group. A review of commercial and non-commercial software is given as well. In addition, one of the programs belonging to this group, Mathcad, has been selected and its application to a chosen example is presented. Computational capabilities and ease of handling were the decisive factors for the selection.

  17. SOME PARADIGMS OF ARTIFICIAL INTELLIGENCE IN FINANCIAL COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2015-12-01

    Full Text Available The article discusses some paradigms of artificial intelligence in the context of their applications in computer financial systems. The proposed approach has a significant potential to increase the competitiveness of enterprises, including financial institutions. However, it requires the effective use of supercomputers, grids and cloud computing. A reference is made to the computing environment for Bitcoin. In addition, we characterized genetic programming and artificial neural networks to prepare investment strategies on the stock exchange market.

  18. Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering

    CERN Document Server

    Elleithy, Khaled

    2013-01-01

    Emerging Trends in Computing, Informatics, Systems Sciences, and Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology & Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning. This book includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2010). The proceedings are a set of rigorously reviewed world-class manuscripts presenting the state of international practice in Innovative Algorithms and Techniques in Automation, Industrial Electronics and Telecommunications.

  19. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  20. Computer system organization the B5700/B6700 series

    CERN Document Server

    Organick, Elliott I

    1973-01-01

    Computer System Organization: The B5700/B6700 Series focuses on the organization of the B5700/B6700 Series developed by Burroughs Corp. More specifically, it examines how computer systems can (or should) be organized to support, and hence make more efficient, the running of computer programs that evolve with characteristically similar information structures.Comprised of nine chapters, this book begins with a background on the development of the B5700/B6700 operating systems, paying particular attention to their hardware/software architecture. The discussion then turns to the block-structured p

  1. Innovations and Advances in Computer, Information, Systems Sciences, and Engineering

    CERN Document Server

    Sobh, Tarek

    2013-01-01

    Innovations and Advances in Computer, Information, Systems Sciences, and Engineering includes the proceedings of the International Joint Conferences on Computer, Information, and Systems Sciences, and Engineering (CISSE 2011). The contents of this book are a set of rigorously reviewed, world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of  Industrial Electronics, Technology and Automation, Telecommunications and Networking, Systems, Computing Sciences and Software Engineering, Engineering Education, Instructional Technology, Assessment, and E-learning.

  2. Computational simulation of concurrent engineering for aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  3. Computational simulation for concurrent engineering of aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  4. Data entry system for INIS input using a personal computer

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Masashi (Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment)

    1990-01-01

    Input preparation for the INIS (International Nuclear Information System) has been performed by the Japan Atomic Energy Research Institute since 1970. Instead of preparing input data on worksheets typed with typewriters, a new method is introduced with which data can be entered directly onto a diskette using personal computers. Given the popularity of personal computers and word processors, this system can easily be applied to other systems, so its outline and future development are described. A shortcoming of this system is that spell-checking and data entry using authority files are hard to perform because of the limitation of hardware resources, and that data code conversion is needed because the code systems used by the personal computer and the mainframe computer are quite different from each other. On the other hand, improved timeliness of data entry is expected without duplication of keying. (author).

  5. Computational intelligence for decision support in cyber-physical systems

    CERN Document Server

    Ali, A; Riaz, Zahid

    2014-01-01

    This book is dedicated to applied computational intelligence and soft computing techniques with special reference to decision support in Cyber Physical Systems (CPS), where the physical as well as the communication segment of the networked entities interact with each other. The joint dynamics of such systems result in a complex combination of computers, software, networks and physical processes all combined to establish a process flow at system level. This volume provides the audience with an in-depth vision about how to ensure dependability, safety, security and efficiency in real time by making use of computational intelligence in various CPS applications ranging from the nano-world to large scale wide area systems of systems. Key application areas include healthcare, transportation, energy, process control and robotics where intelligent decision support has key significance in establishing dynamic, ever-changing and high confidence future technologies. A recommended text for graduate students and researche...

  6. Evolutionary Computational Methods for Identifying Emergent Behavior in Autonomous Systems

    Science.gov (United States)

    Terrile, Richard J.; Guillaume, Alexandre

    2011-01-01

    A technique based on Evolutionary Computational Methods (ECMs) was developed that allows for the automated optimization of complex computationally modeled systems, such as autonomous systems. The primary technology, which enables the ECM to find optimal solutions in complex search spaces, derives from evolutionary algorithms such as the genetic algorithm and differential evolution. These methods are based on biological processes, particularly genetics, and define an iterative process that evolves parameter sets into an optimum. Evolutionary computation is a method that operates on a population of existing computational-based engineering models (or simulators) and competes them using biologically inspired genetic operators on large parallel cluster computers. The result is the ability to automatically find design optimizations and trades, and thereby greatly amplify the role of the system engineer.
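
    As a reference point for the evolutionary machinery mentioned above (population initialization, mutation, crossover, and selection), here is a minimal differential evolution loop for a generic objective; it is a textbook sketch, not the ECM framework described in this record.

        import random

        def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9, generations=100):
            """Minimize `objective` over box `bounds` with a basic DE/rand/1/bin scheme."""
            dim = len(bounds)
            pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
            scores = [objective(x) for x in pop]
            for _ in range(generations):
                for i in range(pop_size):
                    a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
                    trial = [min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                             if random.random() < CR else pop[i][d] for d in range(dim)]
                    s = objective(trial)
                    if s < scores[i]:                    # greedy selection
                        pop[i], scores[i] = trial, s
            best = min(range(pop_size), key=lambda i: scores[i])
            return pop[best], scores[best]

        # Example: minimize the sphere function in 3-D
        print(differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3))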

  7. Software design for resilient computer systems

    CERN Document Server

    Schagaev, Igor

    2016-01-01

    This book addresses the question of how system software should be designed to account for faults, and which fault tolerance features it should provide for highest reliability. The authors first show how the system software interacts with the hardware to tolerate faults. They analyze and further develop the theory of fault tolerance to understand the different ways to increase the reliability of a system, with special attention on the role of system software in this process. They further develop the general algorithm of fault tolerance (GAFT) with its three main processes: hardware checking, preparation for recovery, and the recovery procedure. For each of the three processes, they analyze the requirements and properties theoretically and give possible implementation scenarios and system software support required. Based on the theoretical results, the authors derive an Oberon-based programming language with direct support of the three processes of GAFT. In the last part of this book, they introduce a simulator...

  8. PROGTEST: A Computer System for the Analysis of Computational Computer Programs.

    Science.gov (United States)

    1980-04-01

  9. Information Fusion Methods in Computer Pan-vision System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Aiming at the concrete tasks of information fusion in the computer pan-vision (CPV) system, information fusion methods are studied thoroughly and some research progress is presented. Recognition of vision test objects is realized by fusing vision information with non-vision auxiliary information; applications include recognition of material defects, autonomous recognition of parts by intelligent robots, and automatic computer understanding and recognition of defect images.

  10. Python for Scientific Computing Education: Modeling of Queueing Systems

    Directory of Open Access Journals (Sweden)

    Vladimiras Dolgopolovas

    2014-01-01

    Full Text Available In this paper, we present the methodology for the introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming for implementing the models and present the computer code and results of stochastic simulations.
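
    In the spirit of the model-centered approach described above, a first learning object might be a single-server queue simulated customer by customer. The sketch below is a generic M/M/1 simulation with arbitrary arrival and service rates, not the multiphase models or code from the paper.

        import random

        def mm1_mean_wait(arrival_rate=0.8, service_rate=1.0, n_customers=100_000):
            """Estimate the mean waiting time in queue for an M/M/1 system by simulation."""
            t_arrival = 0.0          # arrival time of the current customer
            t_free = 0.0             # time at which the server becomes free
            total_wait = 0.0
            for _ in range(n_customers):
                t_arrival += random.expovariate(arrival_rate)
                wait = max(0.0, t_free - t_arrival)
                total_wait += wait
                t_free = t_arrival + wait + random.expovariate(service_rate)
            return total_wait / n_customers

        # Theory gives Wq = rho / (mu - lambda) = 0.8 / (1.0 - 0.8) = 4.0,
        # so the simulated estimate should be close to 4.
        print(mm1_mean_wait())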

  11. Robust Security System for Critical Computers

    Directory of Open Access Journals (Sweden)

    Preet Inder Singh

    2012-06-01

    Full Text Available Among the various means of resource protection available, including biometrics, password-based systems are the simplest, most user friendly, cost effective and commonly used, but they are highly sensitive to attacks. Most advanced password-based authentication methods encrypt the contents of the password before storing or transmitting it. But all conventional cryptographic encryption methods have their own limitations, generally in terms of complexity, efficiency or security. In this paper a simple method is developed that provides a more secure and efficient means of authentication while remaining simple in design for critical systems. Apart from protection, a step toward perfect security is taken by adding intruder detection to the protection system. This is made possible by merging various security mechanisms with each other, i.e. password-based security with keystroke dynamics, and thumb impression with retina scan, associated with the users. The new method is centrally based on user behavior and user-related security, which provides robust security for critical systems with intruder detection facilities.

  12. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a current or former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  13. Computer Resources Handbook for Flight Critical Systems.

    Science.gov (United States)

    1985-01-01

  14. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that no physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task

  15. Computing handbook information systems and information technology

    CERN Document Server

    Topi, Heikki

    2014-01-01

    Disciplinary Foundations and Global Impact: Evolving Discipline of Information Systems (Heikki Topi); Discipline of Information Technology (Barry M. Lunt and Han Reichgelt); Information Systems as a Practical Discipline (Juhani Iivari); Information Technology (Han Reichgelt, Joseph J. Ekstrom, Art Gowan, and Barry M. Lunt); Sociotechnical Approaches to the Study of Information Systems (Steve Sawyer and Mohammad Hossein Jarrahi); IT and Global Development (Erkki Sutinen); Using ICT for Development, Societal Transformation, and Beyond (Sherif Kamel). Technical Foundations of Data and Database Management: Data Models (Avi Silber

  16. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  17. Criteria of Human-computer Interface Design for Computer Assisted Surgery Systems

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian-guo; LIN Yan-ping; WANG Cheng-tao; LIU Zhi-hong; YANG Qing-ming

    2008-01-01

    In recent years, computer assisted surgery (CAS) systems have become more and more common in clinical practice, but few specific design criteria have been proposed for human-computer interfaces (HCI) in CAS systems. This paper tries to give universal criteria for HCI design in CAS systems through a demonstration application: total knee replacement (TKR) with a nonimage-based navigation system. A typical computer assisted process can be divided into four phases: the preoperative planning phase, the intraoperative registration phase, the intraoperative navigation phase and finally the postoperative assessment phase. The interface design for the four phases is described respectively in the demonstration application. The criteria summarized in this paper can help software developers achieve reliable and effective interfaces for new CAS systems more easily.

  18. Computational Fluid and Particle Dynamics in the Human Respiratory System

    CERN Document Server

    Tu, Jiyuan; Ahmadi, Goodarz

    2013-01-01

    Traditional research methodologies in the human respiratory system have always been challenging due to their invasive nature. Recent advances in medical imaging and computational fluid dynamics (CFD) have accelerated this research. This book compiles and details recent advances in the modelling of the respiratory system for researchers, engineers, scientists, and health practitioners. It breaks down the complexities of this field and provides both students and scientists with an introduction and starting point to the physiology of the respiratory system, fluid dynamics and advanced CFD modeling tools. In addition to a brief introduction to the physics of the respiratory system and an overview of computational methods, the book contains best-practice guidelines for establishing high-quality computational models and simulations. Inspiration for new simulations can be gained through innovative case studies as well as hands-on practice using pre-made computational code. Last but not least, students and researcher...

  19. Proceedings: Computer Science and Data Systems Technical Symposium, volume 1

    Science.gov (United States)

    Larsen, Ronald L.; Wallgren, Kenneth

    1985-01-01

    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form are included for topics in three categories: computer science, data systems and space station applications.

  20. Proceedings: Computer Science and Data Systems Technical Symposium, volume 2

    Science.gov (United States)

    Larsen, Ronald L.; Wallgren, Kenneth

    1985-01-01

    Progress reports and technical updates of programs being performed by NASA centers are covered. Presentations in viewgraph form, along with abstracts, are included for topics in three categories: computer science, data systems, and space station applications.

  1. The evolution of the PVM concurrent computing system

    Energy Technology Data Exchange (ETDEWEB)

    Giest, G.A. [Oak Ridge National Lab., TN (United States); Sunderam, V.S. [Emory Univ., Atlanta, GA (United States). Dept. of Mathematics and Computer Science

    1993-07-01

    Concurrent and distributed computing, using portable software systems or environments on general purpose networked computing platforms, has recently gained widespread attention. Many such systems have been developed, and several are in production use. This paper describes the evolution of the PVM system, a software infrastructure for concurrent computing in networked environments. PVM has evolved over the past years; it is currently in use at several hundred institutions worldwide for applications ranging from scientific supercomputing to high performance computations in medicine, discrete mathematics, and databases, and for learning parallel programming. We describe the historical evolution of the PVM system, outline the programming model and supported features, present results gained from its use, list representative applications from a variety of disciplines that PVM has been used for, and comment on future trends and ongoing research projects.

  2. Modeling Workflow Management in a Distributed Computing System ...

    African Journals Online (AJOL)

    Modeling Workflow Management in a Distributed Computing System Using Petri Nets. ... who use it to share information more rapidly and increases their productivity. ... Petri nets are an established tool for modelling and analyzing processes.

  3. Service Level Agreement (SLA) in Utility Computing Systems

    CERN Document Server

    Wu, Linlin

    2010-01-01

    In recent years, extensive research has been conducted in the area of Service Level Agreements (SLAs) for utility computing systems. An SLA is a formal contract used to guarantee that consumers' service quality expectations are met. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. A fundamental issue is the management of SLAs, including SLA autonomy management and trade-offs among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification of these extensive works. The aim of this chapter is therefore to present a comprehensive survey of how SLAs are created, managed and used in utility computing environments. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and emerging challenges for future research.

  4. Computational unit for non-contact photonic system

    Science.gov (United States)

    Kochetov, Alexander V.; Skrylev, Pavel A.

    2005-06-01

    Requirements for the unified computational unit of a non-contact photonic system have been formulated. The central processing unit performance and the required memory size are estimated. A specialized microcontroller best suited for use as the central processing unit has been selected, and the memory chip types for the system have been determined. The computational unit consists of a central processing unit based on the selected microcontroller, NVRAM memory, a receiving circuit, SDRAM memory, and control and power circuits. It functions as a performing unit that calculates the required parameters of the rail track.

  5. Towards accurate quantum simulations of large systems with small computers.

    Science.gov (United States)

    Yang, Yonggang

    2017-01-24

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems.

  6. Cluster-based localization and tracking in ubiquitous computing systems

    CERN Document Server

    Martínez-de Dios, José Ramiro; Torres-González, Arturo; Ollero, Anibal

    2017-01-01

    Localization and tracking are key functionalities in ubiquitous computing systems and techniques. In recent years a wide variety of approaches, sensors and techniques for indoor and GPS-denied environments have been developed. This book briefly summarizes the current state of the art in localization and tracking in ubiquitous computing systems, focusing on cluster-based schemes. Existing techniques for measurement integration, node inclusion/exclusion and cluster head selection are also described.

  7. Towards accurate quantum simulations of large systems with small computers

    Science.gov (United States)

    Yang, Yonggang

    2017-01-01

    Numerical simulations are important for many systems. In particular, various standard computer programs have been developed for solving the quantum Schrödinger equations. However, the accuracy of these calculations is limited by computer capabilities. In this work, an iterative method is introduced to enhance the accuracy of these numerical calculations, which is otherwise prohibitive by conventional methods. The method is easily implementable and general for many systems. PMID:28117366

  8. CANONICAL COMPUTATIONAL FORMS FOR AR 2-D SYSTEMS

    NARCIS (Netherlands)

    ROCHA, P; WILLEMS, JC

    1990-01-01

    A canonical form for AR 2-D systems representations is introduced. This yields a method for computing the system trajectories by means of a line-by-line recursion, and displays some relevant information about the system structure such as the choice of inputs and initial conditions.

  9. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput sens

  10. Computer Directed Training System (CDTS), User’s Manual

    Science.gov (United States)

    1983-07-01

    Distribution limited to DoD; refer other requests to the ADPS manager. ... SYSTEM SUMMARY. 2.1 System Application. The Computer Directed Training System is used to prepare and present lessons that supplement local on-the-job training.

  11. A New Approach: Computer-Assisted Problem-Solving Systems

    Science.gov (United States)

    Gok, Tolga

    2010-01-01

    Computer-assisted problem solving systems are rapidly growing in educational use with the advent of the Internet. These systems allow students to do their homework and solve problems online with the help of programs like Blackboard, WebAssign and LON-CAPA. There are benefits and drawbacks to these systems. In this study, the…

  12. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput sens

  13. A modular system for computational fluid dynamics

    Science.gov (United States)

    McCarthy, D. R.; Foutch, D. W.; Shurtleff, G. E.

    This paper describes the Modular System for Computational Fluid Dynamics (MOSYS), a software facility for the construction and execution of arbitrary solution procedures on multizone, structured, body-fitted grids. It focuses on the structure and capabilities of MOSYS and the philosophy underlying its design. The system offers different levels of capability depending on the objectives of the user. It enables the applications engineer to quickly apply a variety of methods to geometrically complex problems. The methods developer can implement new algorithms in a simple form and immediately apply them to problems of both theoretical and practical interest. For the code builder it constitutes a toolkit for fast construction of CFD codes tailored to various purposes. These capabilities are illustrated through applications to a particularly complex problem encountered in aircraft propulsion systems, namely, the analysis of a landing aircraft in reverse thrust.

  14. Integrated computer-aided retinal photocoagulation system

    Science.gov (United States)

    Barrett, Steven F.; Wright, Cameron H. G.; Oberg, Erik D.; Rockwell, Benjamin A.; Cain, Clarence P.; Jerath, Maya R.; Rylander, Henry G., III; Welch, Ashley J.

    1996-05-01

    Successful in vivo testing results of the retinal tracking subsystem on rhesus monkeys, using an argon continuous wave laser and an ultra-short pulse laser, are presented. Progress on developing an integrated robotic retinal laser surgery system is also presented. Several interesting areas of study have developed: (1) 'doughnut' shaped lesions that occur under certain combinations of laser power, spot size, and irradiation time, complicating measurements of central lesion reflectance; (2) the optimal retinal field of view to achieve simultaneous tracking and lesion parameter control; and (3) a fully digital versus a hybrid analog/digital tracker using confocal reflectometry for the integrated system implementation. These areas are investigated in detail in this paper. The hybrid system warrants a separate presentation and appears in another paper at this conference.

  15. Secure system design and trustable computing

    CERN Document Server

    Potkonjak, Miodrag

    2016-01-01

    This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade.  Coverage includes issues related to security and trust in a variety of electronic devices and systems related to the security of hardware, firmware and software, spanning system applications, online transactions, and networking services.  This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society’s microelectronic-supported infrastructures.

  16. The Science of Computing: Expert Systems

    Science.gov (United States)

    Denning, Peter J.

    1986-01-01

    The creative urge of human beings is coupled with tremendous reverence for logic. The idea that the ability to reason logically--to be rational--is closely tied to intelligence was clear in the writings of Plato. The search for greater understanding of human intelligence led to the development of mathematical logic, the study of methods of proving the truth of statements by manipulating the symbols in which they are written without regard to the meanings of those symbols. By the nineteenth century a search was under way for a universal system of logic, one capable of proving anything provable in any other system.

  17. Architecture Research of Non-Stop Computer System

    Institute of Scientific and Technical Information of China (English)

    LIUXinsong; QIUYuanjie; YANGFeng; YANGongjun; GUPan; GAOKe

    2004-01-01

    A distributed and parallel server system with a distributed and parallel I/O interface has solved the bottleneck between the server system and the client system, and has also solved the problem of rebuilding after a system fault. However, the system still has some shortcomings: the switch is the system bottleneck, and the system is not adapted to WAN (Wide area network) environments. We therefore put forward a new system architecture to overcome these shortcomings and develop a non-stop computer system. The basis of a non-stop system is rebuilding after a system fault. The inner architecture of a non-stop system must be redundant, and this redundancy is fault-tolerance redundancy based on a distributed mechanism, not backup redundancy. Analysis and test results show that the rebuild time after a fault is on the scale of seconds and that the rebuild capability is strong enough for the system to be non-stop over its lifetime.

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  19. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  20. Evolution and development of complex computational systems using the paradigm of metabolic computing in Epigenetic Tracking

    Directory of Open Access Journals (Sweden)

    Alessandro Fontana

    2013-09-01

    Epigenetic Tracking (ET) is an Artificial Embryology system which allows for the evolution and development of large complex structures built from artificial cells. In terms of the number of cells, the complexity of the bodies generated with ET is comparable with the complexity of biological organisms. We have previously used ET to simulate the growth of multicellular bodies with arbitrary 3-dimensional shapes which perform computation using the paradigm of "metabolic computing". In this paper we investigate the memory capacity of such computational structures and analyse the trade-off between shape and computation. We now plan to build on these foundations to create a biologically-inspired model in which the encoding of the phenotype is efficient (in terms of the compactness of the genome) and evolvable in tasks involving non-trivial computation, robust to damage and capable of self-maintenance and self-repair.

  1. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems, detailing the practical challenges encountered and the solutions adopted.

  2. Efficient Data-parallel Computations on Distributed Systems

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Task scheduling determines the performance of NOW computing to a large extent. However, the computer system architecture, computing capability and system load are rarely considered together. In this paper, a biggest-heterogeneous scheduling algorithm is presented. It fully considers the system characteristics (from the application view), structure and state, so it can always utilize all processing resources under a reasonable premise. Experimental results show that the algorithm can significantly shorten the response time of jobs.

  3. Intrusion Detection System Inside Grid Computing Environment (IDS-IGCE

    Directory of Open Access Journals (Sweden)

    Basappa B. Kodada

    2012-01-01

    Grid Computing is an important information technology which enables global resource sharing to solve large scale problems. It is based on networks and enables large scale aggregation and sharing of computational, data, sensor and other resources across institutional boundaries. The Globus Toolkit, integrated with Web services, presents OGSA (Open Grid Services Architecture) as the standard service grid architecture. In OGSA, everything is abstracted as a service, including computers, applications, data and instruments. The services and resources in a Grid are heterogeneous and dynamic, and they belong to different domains. Grid services are still new to business systems, and as more systems are attached to the Grid, any threat could cause collapse and great harm; intruders may also come with new forms of attack. Grid Computing as a global infrastructure on the Internet has attracted security attacks on the computing infrastructure. A wide variety of IDS (Intrusion Detection Systems) are available which are designed to handle specific types of attacks. The technique of [27] protects against future attacks in a Service Grid Computing Environment at the Grid infrastructure level, but no existing technique can protect against these types of attacks inside the grid at the node level. This paper therefore proposes the architecture of IDS-IGCE (Intrusion Detection System - Inside Grid Computing Environment), which can provide protection against the complete range of threats inside the Grid Environment.

  4. Integrated computer control system architectural overview

    Energy Technology Data Exchange (ETDEWEB)

    Van Arsdall, P.

    1997-06-18

    This overview introduces the NIF Integrated Control System (ICCS) architecture. The design is abstract to allow the construction of many similar applications from a common framework. This summary lays the essential foundation for understanding the model-based engineering approach used to execute the design.

  5. Cloud computing principles, systems and applications

    CERN Document Server

    Antonopoulos, Nick

    2017-01-01

    This essential reference is a thorough and timely examination of the services, interfaces and types of applications that can be executed on cloud-based systems. Among other things, it identifies and highlights state-of-the-art techniques and methodologies.

  6. Soft computing in green and renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, Kasthurirangan [Iowa State Univ., Ames, IA (United States). Iowa Bioeconomy Inst.; US Department of Energy, Ames, IA (United States). Ames Lab; Kalogirou, Soteris [Cyprus Univ. of Technology, Limassol (Cyprus). Dept. of Mechanical Engineering and Materials Sciences and Engineering; Khaitan, Siddhartha Kumar (eds.) [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Electrical Engineering and Computer Engineering

    2011-07-01

    Soft Computing in Green and Renewable Energy Systems provides a practical introduction to the application of soft computing techniques and hybrid intelligent systems for designing, modeling, characterizing, optimizing, forecasting, and performance prediction of green and renewable energy systems. Research is proceeding at jet speed on renewable energy (energy derived from natural resources such as sunlight, wind, tides, rain, geothermal heat, biomass, hydrogen, etc.) as policy makers, researchers, economists, and world agencies have joined forces in finding alternative sustainable energy solutions to current critical environmental, economic, and social issues. The innovative models, environmentally benign processes, data analytics, etc. employed in renewable energy systems are computationally-intensive, non-linear and complex as well as involve a high degree of uncertainty. Soft computing technologies, such as fuzzy sets and systems, neural science and systems, evolutionary algorithms and genetic programming, and machine learning, are ideal in handling the noise, imprecision, and uncertainty in the data, and yet achieve robust, low-cost solutions. As a result, intelligent and soft computing paradigms are finding increasing applications in the study of renewable energy systems. Researchers, practitioners, undergraduate and graduate students engaged in the study of renewable energy systems will find this book very useful. (orig.)

  7. Computer support system for residential environment evaluation for citizen participation

    Institute of Scientific and Technical Information of China (English)

    GE Jian; TEKNOMO Kardi; LU Jiang; HOKAO Kazunori

    2005-01-01

    Although methods for citizen participation in urban planning are quite well established, existing participation systems have not coped adequately with the specific domain of the residential environment. The residential environment has detailed aspects that need positive, high-level involvement of citizens at all stages and in every field of the plan. One of the best and most systematic ways to obtain more involved citizens is through a citizen workshop. To get better informed citizens participating in the workshop, a special session informing them of what was previously gathered through a survey proved to be a prerequisite, and a computer support system is one of the best tools for this purpose. This paper describes the development of a computer support system for residential environment evaluation, an essential tool for giving more information to citizens before their participation in a public workshop. The significant contribution of this paper is the educational system framework involved in the workshop on the public participation system through computer support, especially for the residential environment. The framework, development and application of the computer support system are described. A workshop using the computer support system was rated by the audience as very valuable and helpful, as it resulted in a wider range of participation and a deeper level of citizen understanding.

  8. A Massive Data Parallel Computational Framework for Petascale/Exascale Hybrid Computer Systems

    CERN Document Server

    Blazewicz, Marek; Diener, Peter; Koppelman, David M; Kurowski, Krzysztof; Löffler, Frank; Schnetter, Erik; Tao, Jian

    2012-01-01

    Heterogeneous systems are becoming more common on High Performance Computing (HPC) systems. Even using tools like CUDA and OpenCL, it is a non-trivial task to obtain optimal performance on the GPU. Approaches to simplifying this task include Merge (a library based framework for heterogeneous multi-core systems), Zippy (a framework for parallel execution of codes on multiple GPUs), BSGP (a new programming language for general purpose computation on the GPU) and CUDA-lite (an enhancement to CUDA that transforms code based on annotations). In addition, efforts are underway to improve compiler tools for automatic parallelization and optimization of affine loop nests for GPUs and for automatic translation of OpenMP parallelized codes to CUDA. In this paper we present an alternative approach: a new computational framework for the development of massively data parallel scientific applications suitable for use on such petascale/exascale hybrid systems, built upon the highly scalable Cactus framework. As the first...

  9. Dynamic self-assembly in living systems as computation.

    Energy Technology Data Exchange (ETDEWEB)

    Bouchard, Ann Marie; Osbourn, Gordon Cecil

    2004-06-01

    Biochemical reactions taking place in living systems that map different inputs to specific outputs are intuitively recognized as performing information processing. Conventional wisdom distinguishes such proteins, whose primary function is to transfer and process information, from proteins that perform the vast majority of the construction, maintenance, and actuation tasks of the cell (assembling and disassembling macromolecular structures, producing movement, and synthesizing and degrading molecules). In this paper, we examine the computing capabilities of biological processes in the context of the formal model of computing known as the random access machine (RAM) [Dewdney AK (1993) The New Turing Omnibus. Computer Science Press, New York], which is equivalent to a Turing machine [Minsky ML (1967) Computation: Finite and Infinite Machines. Prentice-Hall, Englewood Cliffs, NJ]. When viewed from the RAM perspective, we observe that many of these dynamic self-assembly processes - synthesis, degradation, assembly, movement - do carry out computational operations. We also show that the same computing model is applicable at other hierarchical levels of biological systems (e.g., cellular or organism networks as well as molecular networks). We present stochastic simulations of idealized protein networks designed explicitly to carry out a numeric calculation. We explore the reliability of such computations and discuss error-correction strategies (algorithms) employed by living systems. Finally, we discuss some real examples of dynamic self-assembly processes that occur in living systems, and describe the RAM computer programs they implement. Thus, by viewing the processes of living systems from the RAM perspective, a far greater fraction of these processes can be understood as computing than has been previously recognized.
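
    The record above frames dynamic self-assembly in terms of the random access machine (RAM) model. As a point of reference only, the minimal register-machine interpreter below captures the kind of operations (increment, decrement, conditional jump) that the paper maps onto synthesis and degradation reactions; the instruction set and the small addition program are illustrative, not taken from the cited work.

        def run_ram(program, registers):
            """Execute a tiny register-machine program.

            program   : list of tuples such as ("INC", r), ("DEC", r), ("JZ", r, addr), ("HALT",)
            registers : dict mapping register names to non-negative integers
            """
            pc = 0
            while pc < len(program):
                op = program[pc]
                if op[0] == "INC":            # loosely analogous to synthesising one molecule
                    registers[op[1]] += 1
                    pc += 1
                elif op[0] == "DEC":          # loosely analogous to degrading one molecule
                    registers[op[1]] = max(0, registers[op[1]] - 1)
                    pc += 1
                elif op[0] == "JZ":           # branch when a register (species count) is zero
                    pc = op[2] if registers[op[1]] == 0 else pc + 1
                elif op[0] == "HALT":
                    break
            return registers

        # Example: move register "a" into register "b" (b := a + b, a := 0).
        program = [("JZ", "a", 4), ("DEC", "a"), ("INC", "b"), ("JZ", "z", 0), ("HALT",)]
        print(run_ram(program, {"a": 3, "b": 2, "z": 0}))  # -> {'a': 0, 'b': 5, 'z': 0}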

  10. The Rabi Oscillation in Subdynamic System for Quantum Computing

    Directory of Open Access Journals (Sweden)

    Bi Qiao

    2015-01-01

    A quantum computation scheme for the Rabi oscillation based on quantum dots in the subdynamic system is presented. The working states of the original Rabi oscillation are transformed to the eigenvectors of the subdynamic system. The dissipation and decoherence of the system then appear only as changes of the eigenvalues, i.e., as phase errors, since the eigenvectors are fixed. This makes both dissipation and decoherence easier to control, because only the relevant phase errors need to be corrected. The method can be extended to general quantum computation systems.

  11. One approach for evaluating the Distributed Computing Design System (DCDS)

    Science.gov (United States)

    Ellis, J. T.

    1985-01-01

    The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.

  12. Computer modeling of properties of complex molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Kulkova, E.Yu. [Moscow State University of Technology “STANKIN”, Vadkovsky per., 1, Moscow 101472 (Russian Federation); Khrenova, M.G.; Polyakov, I.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); Nemukhin, A.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); N.M. Emanuel Institute of Biochemical Physics, Russian Academy of Sciences, Kosygina 4, Moscow 119334 (Russian Federation)

    2015-03-10

    Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that assume treatment of a part of the system by quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting the structures and spectra of such complex molecular aggregates.

  13. Human computer interaction issues in Clinical Trials Management Systems.

    Science.gov (United States)

    Starren, Justin B; Payne, Philip R O; Kaufman, David R

    2006-01-01

    Clinical trials increasingly rely upon web-based Clinical Trials Management Systems (CTMS). As with clinical care systems, Human Computer Interaction (HCI) issues can greatly affect the usefulness of such systems. Evaluation of the user interface of one web-based CTMS revealed a number of potential human-computer interaction problems, in particular, increased workflow complexity associated with a web application delivery model and potential usability problems resulting from the use of ambiguous icons. Because these design features are shared by a large fraction of current CTMS, the implications extend beyond this individual system.

  14. Method to Compute CT System MTF

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-05-03

    The modulation transfer function (MTF) is the normalized spatial frequency representation of the point spread function (PSF) of the system. Point objects are hard to come by, so typically the PSF is determined by taking the numerical derivative of the system's response to an edge. This is the method we use, and we typically use it with cylindrical objects. Given a cylindrical object, we first put an active contour around it, as shown in Figure 1(a). The active contour lets us know where the boundary of the test object is. We next set a threshold (Figure 1(b)) and determine the center of mass of the above threshold voxels. For the purposes of determining the center of mass, each voxel is weighted identically (not by voxel value).
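
    As a rough illustration of the edge-response procedure described above (not the laboratory's actual code), the line spread function can be taken as the numerical derivative of an edge spread function sampled across the boundary of the test object, and the MTF as the magnitude of its Fourier transform normalized to unity at zero frequency. The synthetic tanh edge and the sample spacing below are assumptions used only to exercise the function.

        import numpy as np

        def mtf_from_edge(edge_profile, dx=1.0):
            """Estimate the MTF from a sampled edge spread function (ESF).

            edge_profile : 1-D samples across an edge, e.g. along a line crossing
                           the boundary of a cylindrical test object
            dx           : sample spacing (e.g. millimetres per sample)
            Returns (spatial_frequencies, mtf).
            """
            esf = np.asarray(edge_profile, dtype=float)
            lsf = np.gradient(esf, dx)               # line spread function = d(ESF)/dx
            lsf /= lsf.sum()                         # normalise so that MTF(0) = 1
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(lsf.size, d=dx)  # cycles per unit length
            return freqs, mtf

        # Quick check with a synthetic blurred edge.
        x = np.linspace(-5.0, 5.0, 256)
        esf = 0.5 * (1.0 + np.tanh(x / 0.8))
        freqs, mtf = mtf_from_edge(esf, dx=x[1] - x[0])
        print(round(mtf[0], 3))  # ~1.0 at zero frequency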

  15. COMPUTER SIMULATION SYSTEM OF STRETCH REDUCING MILL

    Institute of Scientific and Technical Information of China (English)

    B.Y. Sun; S.J. Yuan

    2007-01-01

    The principle of the stretch reducing process is analyzed and three models of pass design are established. Simulations are carried out for variables such as stress, strain, the stretches between the stands, the size parameters of the steel tube, and the roll force parameters. According to its product catalogs the system can automatically divide the pass series, formulate the rolling table, and simulate the basic technological parameters in the stretch reducing process. All modules are integrated in the developing environment of VB6. The system can draw simulation curves and pass pictures. Three kinds of database, including the material database, pass design database, and product database, are devised using Microsoft Access and can be directly edited, corrected, and searched.

  16. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    Science.gov (United States)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  17. STUDY ON HUMAN-COMPUTER SYSTEM FOR STABLE VIRTUAL DISASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    Guan Qiang; Zhang Shensheng; Liu Jihong; Cao Pengbing; Zhong Yifang

    2003-01-01

    The cooperative work between human beings and computers based on virtual reality (VR) is investigated to plan disassembly sequences more efficiently. A three-layer model of human-computer cooperative virtual disassembly is built, and the corresponding human-computer system for stable virtual disassembly is developed. In this system, an immersive and interactive virtual disassembly environment has been created to provide planners with a more visual working scene. For cooperative disassembly, an intelligent module for stability analysis of disassembly operations is embedded into the human-computer system to help planners carry out disassembly tasks better. The supporting matrix for stability analysis of disassembly operations is defined and the method of stability analysis is detailed. Based on this approach, the stability of any disassembly operation can be analyzed to guide manual virtual disassembly. Finally, a disassembly case in the virtual environment is given to prove the validity of the above ideas.

  18. Computational Modeling, Formal Analysis, and Tools for Systems Biology.

    Science.gov (United States)

    Bartocci, Ezio; Lió, Pietro

    2016-01-01

    As the amount of biological data in the public domain grows, so does the range of modeling and analysis techniques employed in systems biology. In recent years, a number of theoretical computer science developments have enabled modeling methodology to keep pace. The growing interest in systems biology in executable models and their analysis has necessitated the borrowing of terms and methods from computer science, such as formal analysis, model checking, static analysis, and runtime verification. Here, we discuss the most important and exciting computational methods and tools currently available to systems biologists. We believe that a deeper understanding of the concepts and theory highlighted in this review will produce better software practice, improved investigation of complex biological processes, and even new ideas and better feedback into computer science.

  19. A Computer-Mediated Instruction System, Applied to Its Own Operating System and Peripheral Equipment.

    Science.gov (United States)

    Winiecki, Roger D.

    Each semester students in the School of Health Sciences of Hunter College learn how to use a computer, how a computer system operates, and how peripheral equipment can be used. To overcome inadequate computer center services and equipment, programed subject matter and accompanying reference material were developed. The instructional system has a…

  20. Research of the grid computing system applied in optical simulation

    Science.gov (United States)

    Jin, Wei-wei; Wang, Yu-dong; Liu, Qiangsheng; Cen, Zhao-feng; Li, Xiao-tong; Lin, Yi-qun

    2008-03-01

    A grid computing approach in the field of optics is presented in this paper. First, the basic principles and research background of grid computing are outlined, along with an overview of its applications and current state of development. The paper also discusses several typical task scheduling algorithms. Second, it focuses on describing a task scheduling scheme for grid computing applied to optical computation. The paper gives details about the task scheduling system, including task partitioning, granularity selection and task allocation, and especially the structure of the system. In addition, some details of communication in grid computing are also illustrated. In this system, the "makespan" and "load balancing" are considered together. Finally, we build a grid model to test the task scheduling strategy, and the results are analyzed in detail. Compared to one isolated computer, a grid comprised of one server and four processors can shorten the "makespan" to 1/4. At the same time, the simulation results also show that the proposed scheduling system is able to balance the loads of all processors. In short, the system performs scheduling well in the grid environment.
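
    The record does not spell out the scheduling algorithm itself. As a hedged sketch of the makespan/load-balancing trade-off it mentions, a greedy longest-processing-time heuristic over the one-server/four-processor configuration described above could look as follows; the task durations are invented for illustration.

        import heapq

        def greedy_schedule(task_costs, n_processors=4):
            """Assign each task to the currently least-loaded processor (LPT heuristic).

            Returns (makespan, per-processor loads). Sorting tasks longest-first tends
            to shorten the makespan while keeping the processor loads balanced.
            """
            loads = [(0.0, p) for p in range(n_processors)]  # (current load, processor id)
            heapq.heapify(loads)
            final_load = [0.0] * n_processors
            for cost in sorted(task_costs, reverse=True):
                load, p = heapq.heappop(loads)               # least-loaded processor so far
                final_load[p] = load + cost
                heapq.heappush(loads, (load + cost, p))
            return max(final_load), final_load

        # Hypothetical optical-computation subtasks (arbitrary time units).
        makespan, loads = greedy_schedule([8, 7, 6, 5, 4, 4, 3, 2, 2, 1])
        print(makespan, loads)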

  1. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-08-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.
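
    The manager-worker communication strategy mentioned in item (2) is sketched below in generic Python rather than in the PVM calls the TEAM solver actually used; the zone decomposition and the relax_zone work function are placeholders standing in for real flow-solver sweeps.

        from multiprocessing import Pool

        def relax_zone(zone):
            """Placeholder for one solver sweep over a single grid zone."""
            zone_id, cells = zone
            return zone_id, sum(c * 0.5 for c in cells)  # stand-in for the real flow work

        def manager(zones, n_workers=4):
            """Manager-worker pattern: the manager farms zones out and gathers results.

            Dynamic dispatch (imap_unordered) gives a crude form of load balancing,
            since a faster worker simply pulls the next zone as soon as it finishes one.
            """
            with Pool(processes=n_workers) as pool:
                return dict(pool.imap_unordered(relax_zone, zones))

        if __name__ == "__main__":
            fake_zones = [(i, [1.0] * (100 * (i + 1))) for i in range(8)]
            print(manager(fake_zones))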

  2. Computational strategies for three-dimensional flow simulations on distributed computer systems

    Science.gov (United States)

    Sankar, Lakshmi N.; Weed, Richard A.

    1995-01-01

    This research effort is directed towards an examination of issues involved in porting large computational fluid dynamics codes in use within the industry to a distributed computing environment. This effort addresses strategies for implementing the distributed computing in a device independent fashion and load balancing. A flow solver called TEAM presently in use at Lockheed Aeronautical Systems Company was acquired to start this effort. The following tasks were completed: (1) The TEAM code was ported to a number of distributed computing platforms including a cluster of HP workstations located in the School of Aerospace Engineering at Georgia Tech; a cluster of DEC Alpha Workstations in the Graphics visualization lab located at Georgia Tech; a cluster of SGI workstations located at NASA Ames Research Center; and an IBM SP-2 system located at NASA ARC. (2) A number of communication strategies were implemented. Specifically, the manager-worker strategy and the worker-worker strategy were tested. (3) A variety of load balancing strategies were investigated. Specifically, the static load balancing, task queue balancing and the Crutchfield algorithm were coded and evaluated. (4) The classical explicit Runge-Kutta scheme in the TEAM solver was replaced with an LU implicit scheme. And (5) the implicit TEAM-PVM solver was extensively validated through studies of unsteady transonic flow over an F-5 wing, undergoing combined bending and torsional motion. These investigations are documented in extensive detail in the dissertation, 'Computational Strategies for Three-Dimensional Flow Simulations on Distributed Computing Systems', enclosed as an appendix.

  3. Diabetes Monitoring System Using Mobile Computing Technologies

    Directory of Open Access Journals (Sweden)

    Mashael Saud Bin-Sabbar

    2013-03-01

    Diabetes is a chronic disease that needs to be monitored regularly to keep blood sugar levels within normal ranges. This monitoring depends on the diabetic treatment plan, which is periodically reviewed by the endocrinologist. Frequent visits to the main hospital are tiring and time consuming for both the endocrinologist and diabetes patients; the patient may have to travel to the main city, pay for a ticket and reserve a place to stay. Those expenses can be reduced by monitoring the diabetes patients remotely with the help of mobile devices. In this paper, we introduce our implementation of an integrated monitoring tool for diabetes patients. The designed system provides daily monitoring and monthly services. The daily monitoring includes recording the results of daily analyses and activities, which are transmitted from a patient's mobile device to a central database. The monthly services require the patient to visit a nearby care center in the patient's home town for medical examinations and checkups. The results of this visit are entered into the system and then synchronized with the central database. Finally, the endocrinologist can remotely monitor the patient record and adjust the treatment plan and the insulin doses if needed.

  4. Computational Control of Flexible Aerospace Systems

    Science.gov (United States)

    Sharpe, Lonnie, Jr.; Shen, Ji Yao

    1994-01-01

    The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years of the project. The main accomplishments can be summarized as follows. A new version of the PDEMOD Code has been completed. A theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modelling technique has been conducted. A new mathematical treatment for dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future modified versions of the PDEMOD Codes.

  5. Information and computer-aided system for structural materials

    Energy Technology Data Exchange (ETDEWEB)

    Nekrashevitch, Yu.G.; Nizametdinov, Sh.U.; Polkovnikov, A.V.; Rumjantzev, V.P.; Surina, O.N. (Engineering Physics Inst., Moscow (Russia)); Kalinin, G.M.; Sidorenkov, A.V.; Strebkov, Yu.S. (Research and Development Inst. of Power Engineering, Moscow (Russia))

    1992-09-01

    An information and computer-aided system for structural materials data has been developed to provide data for fusion and fission reactor system design. It is designed for designers, industrial engineers, and material science specialists and provides a friendly interface in an interactive mode. The database for structural materials contains the master files: chemical composition; physical, mechanical, corrosion, and technological properties; and regulatory and technical documentation. The system is implemented on a PC/AT running the PS/2 operating system. (orig.).

  6. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  7. Patterns of Programmers' Use of Computer-Mediated Communications Systems

    Directory of Open Access Journals (Sweden)

    Chatpong Tangmanee

    2003-11-01

    Communication behavior of programmers plays an essential role in the success of software development. Computer-mediated communication (CMC) systems, such as e-mail or the World Wide Web (WWW), have substantial implications for coordinating the work of programmers. Yet, no studies have dealt systematically with the CMC behaviors of programmers. Drawing upon theories in organizational studies, information science, computer-mediated communication and software engineering, this research examines what programmers accomplish through CMC systems. Data were gathered from survey questionnaires mailed to 730 programmers, who are members of the Association for Computing Machinery (ACM) and are involved in a variety of programming work. Based on factor analysis, the study found that programmers use CMC systems (1) to achieve progress in work-related tasks (i.e., task-related purposes), (2) to satisfy their social and emotional needs (i.e., socio-emotional purposes), and (3) to explore for information (i.e., exploring purposes). The findings of this research extend insight into the important patterns for which programmers use CMC systems. This insight advances theories of computer-mediated communication in the context of computer programmers. Practitioners, especially in software development, may also use the results as guidelines in fostering a firm's network policy that fits what their programming staff accomplish through computer-mediated communication.

  8. Complex system modelling and control through intelligent soft computations

    CERN Document Server

    Azar, Ahmad

    2015-01-01

    The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...

  9. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation—based on the von Neumann architecture—has allowed, until now, the development of continuous improvements, as predicted by Moore’s law. However, computation using current architectures and materials will inevitably—within the next 10 years—reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: The development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer. To address this challenge, the following issues were considered: (1) the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; (2) new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; (3) device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and (4) comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  10. Semantic Computation in a Chinese Question-Answering System

    Institute of Scientific and Technical Information of China (English)

    李素建; 张健; 黄雄; 白硕; 刘群

    2002-01-01

    This paper introduces a kind of semantic computation and presents how to combine it into our Chinese Question-Answering (QA) system. Based on two kinds of language resources, Hownet and Cilin, we present an approach to computing the similarity and relevancy between words. Using these results, we can calculate the relevancy between two sentences and then get the optimal answer for the query in the system. The calculation adopts quantitative methods and can be incorporated into QA systems easily, avoiding some difficulties in conventional NLP (Natural Language Processing) problems. The experiments show that the results are satisfactory.
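
    The record does not give the similarity formula itself; the following is only a stand-in sketch in which each word is represented by a set of semantic features (HowNet sememes or Cilin category codes), word similarity is a Dice-style set overlap, and sentence relevancy averages each word's best match in the other sentence. The toy feature sets are invented.

        def word_similarity(sememes_a, sememes_b):
            """Set-overlap (Dice) similarity between two words' semantic features."""
            if not sememes_a or not sememes_b:
                return 0.0
            return 2.0 * len(sememes_a & sememes_b) / (len(sememes_a) + len(sememes_b))

        def sentence_relevancy(words_a, words_b):
            """Average, over the words of one sentence, of the best match in the other."""
            scores = [max(word_similarity(sa, sb) for sb in words_b) for sa in words_a]
            return sum(scores) / len(scores)

        # Toy example with invented feature sets for a query and a candidate answer.
        query = [{"human", "ask"}, {"place", "capital"}]
        answer = [{"place", "capital", "city"}, {"country"}]
        print(sentence_relevancy(query, answer))  # -> 0.4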

  11. Modern Embedded Computing Designing Connected, Pervasive, Media-Rich Systems

    CERN Document Server

    Barry, Peter

    2012-01-01

    Modern embedded systems are used for connected, media-rich, and highly integrated handheld devices such as mobile phones, digital cameras, and MP3 players. All of these embedded systems require networking, graphic user interfaces, and integration with PCs, as opposed to traditional embedded processors that can perform only limited functions for industrial applications. While most books focus on these controllers, Modern Embedded Computing provides a thorough understanding of the platform architecture of modern embedded computing systems that drive mobile devices. The book offers a comprehen

  12. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T

    2014-01-01

    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un

  13. SLA for E-Learning System Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Doaa Elmatary

    2015-10-01

    The Service Level Agreement (SLA) has become an important issue, especially for Cloud Computing and online services based on the ‘pay-as-you-use’ model. Establishing Service Level Agreements (SLAs), which can be defined as a negotiation between the service provider and the user, is needed for many types of current applications, such as E-Learning systems. The work in this paper presents an approach to optimizing the SLA parameters to serve an E-Learning system over a Cloud Computing platform, defining the negotiation process, a suitable framework, and the sequence diagram to accommodate E-Learning systems.

  14. An E-learning System based on Affective Computing

    Science.gov (United States)

    Duo, Sun; Song, Lu Xue

    In recent years, e-learning has become very popular as a learning system. But current e-learning systems cannot instruct students effectively, since they do not consider the students' emotional state in the context of instruction. The emergence of the theory of "affective computing" can solve this problem: it means the computer's intelligence is no longer purely cognitive. In this paper, we construct an emotionally intelligent e-learning system based on affective computing. A dimensional model is put forward to recognize and analyze the student's emotional state, and a virtual teacher's avatar is offered to regulate the student's learning psychology, taking the teaching style into account based on the student's personality traits. A "man-to-man" learning environment is built to simulate the pedagogy of the traditional classroom in the system.
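
    The dimensional model referred to above is not specified in the record; a common two-dimensional (valence-arousal) reading of such models is sketched below, with the quadrant labels, thresholds and tutor responses being assumptions rather than the authors' design.

        def classify_emotion(valence, arousal):
            """Map a (valence, arousal) pair in [-1, 1] x [-1, 1] to a coarse label.

            The four quadrant labels are illustrative; a real affective e-learning
            system would be calibrated against labelled student data.
            """
            if valence >= 0 and arousal >= 0:
                return "engaged"      # pleasant, activated
            if valence >= 0:
                return "relaxed"      # pleasant, deactivated
            if arousal >= 0:
                return "frustrated"   # unpleasant, activated
            return "bored"            # unpleasant, deactivated

        def tutor_response(valence, arousal):
            """Pick a simple regulation strategy for the virtual teacher avatar."""
            return {
                "engaged": "continue at the current pace",
                "relaxed": "raise the difficulty slightly",
                "frustrated": "offer a hint and slow down",
                "bored": "switch to a more interactive exercise",
            }[classify_emotion(valence, arousal)]

        print(tutor_response(-0.4, 0.7))  # -> "offer a hint and slow down"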

  15. The engineering design integration (EDIN) system. [digital computer program complex

    Science.gov (United States)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  16. Automatic behaviour analysis system for honeybees using computer vision

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Hansen, Mikkel Kragh; Kryger, Per

    2016-01-01

    ... a low-cost embedded computer with very limited computational resources compared to an ordinary PC. The system succeeds in counting honeybees, identifying their position and measuring their in-and-out activity. Our algorithm uses a background subtraction method to segment the images; after the segmentation stage ..., the methods are primarily based on statistical analysis and inference. The regression statistics (i.e. R2) of the comparisons of system predictions and manual counts are 0.987 for counting honeybees, and 0.953 and 0.888 for measuring in-activity and out-activity, respectively. The experimental results ... demonstrate that this system can be used as a tool to detect the behaviour of honeybees and assess their state at the beehive entrance. In addition, the computation time results show that the Raspberry Pi is a viable solution for such a real-time video processing system....
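
    The background subtraction step named above can be sketched with OpenCV as below. This is an assumed re-implementation, not the authors' code: the blob-area threshold for what counts as one honeybee and the pre-processing choices are invented.

        import cv2

        def count_bees(frame, subtractor, min_area=150):
            """Segment moving honeybees in one frame and return their centroids.

            frame      : BGR image from the hive-entrance camera
            subtractor : a persistent cv2 background subtractor (see usage below)
            min_area   : minimum blob area in pixels accepted as a bee (an assumption)
            """
            mask = subtractor.apply(frame)              # foreground mask
            mask = cv2.medianBlur(mask, 5)              # suppress speckle noise
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
            centroids = []
            for c in contours:
                if cv2.contourArea(c) >= min_area:
                    x, y, w, h = cv2.boundingRect(c)
                    centroids.append((x + w // 2, y + h // 2))
            return centroids

        # Typical use: one subtractor shared across the whole video stream.
        # subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        # for frame in frames_from_camera():          # frames_from_camera is hypothetical
        #     bees = count_bees(frame, subtractor)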

  17. Computer Aided Design System for Developing Musical Fountain Programs

    Institute of Scientific and Technical Information of China (English)

    刘丹; 张乃尧; 朱汉城

    2003-01-01

    A computer aided design system for developing musical fountain programs was developed with multiple functions, such as intelligent design, 3-D animation, manual modification and synchronized motion, to make the development process more efficient. The system first analyzes the musical form and sentiment, using many basic features of the music, to select a basic fountain program. This program is then simulated with 3-D animation and modified manually to achieve the desired results. Finally, the program is transformed into a computer control program to control the musical fountain in time with the music. A prototype system for the musical fountain was also developed. It was tested with many styles of music, and users were quite satisfied with its performance. By integrating these functions, the proposed computer aided design system greatly simplifies the design of musical fountain programs.

  18. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
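    A minimal sketch of the rsync-style idea described above, under an assumed block size and checksum choice (this is not the patented implementation): each node's checkpoint is split into fixed-size blocks, and only blocks whose checksums differ from the stored template are kept for transmission.

```python
# Minimal sketch: keep only checkpoint blocks whose checksum differs from the
# stored template (rsync-like delta). Block size and MD5 are assumptions.
import hashlib

BLOCK = 4096

def split_blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def delta_checkpoint(node_state: bytes, template: bytes) -> dict:
    template_sums = [hashlib.md5(b).digest() for b in split_blocks(template)]
    delta = {}
    for idx, blk in enumerate(split_blocks(node_state)):
        unchanged = (idx < len(template_sums)
                     and hashlib.md5(blk).digest() == template_sums[idx])
        if not unchanged:
            delta[idx] = blk        # only changed blocks are stored/sent
    return delta

# One modified region in 64 KiB of state yields a single-block delta.
template = bytes(64 * 1024)
state = bytearray(template)
state[8000:8004] = b"beef"
print(len(delta_checkpoint(bytes(state), template)), "changed block(s)")
```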

  19. Dynamic detection for computer virus based on immune system

    Institute of Scientific and Technical Information of China (English)

    LI Tao

    2008-01-01

    Inspired by the biological immune system, a new dynamic detection model for computer viruses based on an immune system is proposed. A quantitative description of the model is given. The problem of dynamically describing self and nonself in a computer virus immune system is solved, which reduces the size of the self set. The new concept of dynamic tolerance, as well as new mechanisms of gene evolution and gene coding for immature detectors, is presented, improving the generating efficiency of mature detectors and reducing the false-negative and false-positive rates. The difficult problem that the detector training cost is exponentially related to the size of the self set in a traditional computer immune system is thus overcome. Theoretical analysis and experimental results show that the proposed model has better time efficiency and detection ability than the classic model ARTIS.
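    As a hedged illustration of the detector-generation step in such immune-inspired models (the bit-string representation, r-contiguous matching rule and all parameters are assumptions, not the paper's definitions), a basic negative-selection loop looks like this:

```python
# Hedged sketch of negative selection: random candidate detectors that match
# any "self" string are censored; survivors become mature detectors. The
# bit-string encoding and r-contiguous matching rule are assumptions.
import random

def matches(detector: str, string: str, r: int = 4) -> bool:
    # aligned r-contiguous-bits matching rule (assumed)
    return any(detector[i:i + r] == string[i:i + r]
               for i in range(len(detector) - r + 1))

def generate_detectors(self_set, n_detectors=3, length=16):
    mature = []
    while len(mature) < n_detectors:
        candidate = ''.join(random.choice('01') for _ in range(length))
        if not any(matches(candidate, s) for s in self_set):
            mature.append(candidate)        # tolerant of self -> keep
    return mature

self_set = ['0000111100001111', '1111000011110000']
print(generate_detectors(self_set))
```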

  20. A Computer System for a Faculty of Education.

    Science.gov (United States)

    Hallworth, Herbert J.

    A computer system, introduced for use in statistics courses within a college of education, features the performance of a variety of functions, a relatively economic operation, and the facilitation of placing remote terminals in schools. The system provides an interactive statistics laboratory in which the student learns to write programs for the…

  1. Computer System Reliability Allocation Method and Supporting Tool

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents a computer system reliability allocation method based on statistical theory and Markov chains, which can be used to allocate reliability to subsystems, to hybrid systems and to software modules. A relevant supporting tool built by us is introduced.

  2. A review of residential computer oriented energy control systems

    Energy Technology Data Exchange (ETDEWEB)

    North, Greg

    2000-07-01

    The purpose of this report is to bring together as much information on Residential Computer Oriented Energy Control Systems as possible within a single document. This report identifies the main elements of the system and is intended to provide many technical options for the design and implementation of various energy related services.

  3. Load flow computations in hybrid transmission - distributed power systems

    NARCIS (Netherlands)

    Wobbes, E.D.; Lahaye, D.J.P.

    2013-01-01

    We interconnect transmission and distribution power systems and perform load flow computations in the hybrid network. In the largest example we managed to build, fifty copies of a distribution network consisting of fifteen nodes are connected to the UCTE study model, resulting in a system consisting

  4. Computer-Aided Communication Satellite System Analysis and Optimization.

    Science.gov (United States)

    Stagl, Thomas W.; And Others

    Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…

  5. Motivating Constraints of a Pedagogy-Embedded Computer Algebra System

    Science.gov (United States)

    Dana-Picard, Thierry

    2007-01-01

    The constraints of a computer algebra system (CAS) generally induce limitations on its usage. Via the pedagogical features implemented in such a system, "motivating constraints" can appear, encouraging advanced theoretical learning, providing a broader mathematical knowledge and more profound mathematical understanding. We discuss this issue,…

  6. Improving Computer Based Speech Therapy Using a Fuzzy Expert System

    OpenAIRE

    Ovidiu Andrei Schipor; Stefan Gheorghe Pentiuc; Maria Doina Schipor

    2012-01-01

    In this paper we present our work about Computer Based Speech Therapy systems optimization. We focus especially on using a fuzzy expert system in order to determine specific parameters of personalized therapy, i.e. the number, length and content of training sessions. The efficiency of this new approach was tested during an experiment performed with our CBST, named LOGOMON.

  7. Demonstrating Operating System Principles via Computer Forensics Exercises

    Science.gov (United States)

    Duffy, Kevin P.; Davis, Martin H., Jr.; Sethi, Vikram

    2010-01-01

    We explore the feasibility of sparking student curiosity and interest in the core required MIS operating systems course through inclusion of computer forensics exercises into the course. Students were presented with two in-class exercises. Each exercise demonstrated an aspect of the operating system, and each exercise was written as a computer…

  9. Optical character recognition systems for different languages with soft computing

    CERN Document Server

    Chaudhuri, Arindam; Badelia, Pratixa; K Ghosh, Soumya

    2017-01-01

    The book offers a comprehensive survey of soft-computing models for optical character recognition systems. The various techniques, including fuzzy and rough sets, artificial neural networks and genetic algorithms, are tested on real texts written in different languages, such as English, French, German, Latin, Hindi and Gujarati, extracted from publicly available datasets. The simulation studies, which are reported in detail here, show that soft-computing-based modeling of OCR systems performs consistently better than traditional models. Mainly intended as a state-of-the-art survey for postgraduates and researchers in pattern recognition, optical character recognition and soft computing, this book will also be useful for professionals in computer vision and image processing dealing with different issues related to optical character recognition.

  10. 8th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzynski, Marek; Wozniak, Michał; Zolnierek, Andrzej

    2013-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 86 carefully selected articles contributed by experts in pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Biometrics; Data Stream Classification and Big Data Analytics; Features, learning, and classifiers; Image processing and computer vision; Medical applications; Miscellaneous applications; Pattern recognition and image processing in robotics; Speech and word recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.

  11. Experimental quantum computing to solve systems of linear equations.

    Science.gov (United States)

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
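    The quantum circuit itself cannot be reproduced in a few lines, but a classical NumPy check (with invented matrix values, not the paper's) shows the normalized solution vector that the four-qubit experiment approximates:

```python
# Classical check with NumPy of what the 2x2 experiment computes: the
# normalized solution of A x = b. A and b here are invented examples,
# not the paper's values.
import numpy as np

A = np.array([[1.5, 0.5],
              [0.5, 1.5]])      # assumed Hermitian 2x2 matrix
b = np.array([1.0, 0.0])

x = np.linalg.solve(A, b)
x_normalized = x / np.linalg.norm(x)   # the quantum state encodes |x> up to norm
print(x_normalized)
```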

  12. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, big SRS data processing is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to exploit applications in both academia and industry, and it has the potential to support large-scale work in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then set up comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  13. 9th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzyński, Marek; Woźniak, Michał; Żołnierek, Andrzej

    2016-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 79 carefully selected articles contributed by experts in pattern recognition. It reports on current research with respect to both methodology and applications. In particular, it includes the following sections: Features, learning, and classifiers; Biometrics; Data Stream Classification and Big Data Analytics; Image processing and computer vision; Medical applications; Applications; RGB-D perception: recent developments and applications. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.

  14. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, and capable of self-healing under improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems; (2) secure information-flow microarchitecture; (3) memory-centric security architecture; (4) authentication control and its implications for security; (5) digital rights management; (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  15. Fault-tolerant clock synchronization validation methodology. [in computer systems

    Science.gov (United States)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
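    A rough sketch of the experimental step described above, using synthetic placeholder measurements and an assumed bound: the probability that the clock-read error exceeds the design-proof upper bound is estimated empirically from the samples.

```python
# Sketch with synthetic data: estimate the probability that the clock-read
# error exceeds an assumed upper bound, as in the validation step above.
import random

random.seed(1)
read_errors = [abs(random.gauss(0.0, 2e-6)) for _ in range(10_000)]  # seconds
epsilon_bound = 6e-6                                                 # assumed bound

p_exceed = sum(e > epsilon_bound for e in read_errors) / len(read_errors)
print(f"estimated P(read error > bound) = {p_exceed:.4f}")
```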

  16. SD-CAS: Spin Dynamics by Computer Algebra System.

    Science.gov (United States)

    Filip, Xenia; Filip, Claudiu

    2010-11-01

    A computer algebra tool for describing the Liouville-space quantum evolution of nuclear spin-1/2 systems is introduced and implemented within a computational framework named Spin Dynamics by Computer Algebra System (SD-CAS). A distinctive feature compared with numerical and previous computer algebra approaches to solving spin dynamics problems is that no matrix representation of the spin operators is used in SD-CAS, which gives the performed computations a fully symbolic character. Spin correlations are stored in SD-CAS as four-entry nested lists whose size increases linearly with the number of spins in the system, and they are easily mapped into analytical expressions in terms of spin operator products. For the SD-CAS spin correlations so defined, a set of specialized functions and procedures is introduced that is essential for implementing basic spin algebra operations, such as spin operator products, commutators, and scalar products. They provide results in an abstract algebraic form; specific procedures to quantitatively evaluate such symbolic expressions with respect to the involved spin interaction parameters and experimental conditions are also discussed. Although the main focus of the present work is on laying the foundation for spin dynamics symbolic computation in NMR based on a non-matrix formalism, practical aspects are also considered throughout the theoretical development process. In particular, specific SD-CAS routines have been implemented using the YACAS computer algebra package (http://yacas.sourceforge.net), and their functionality was demonstrated on a few illustrative examples.
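    SD-CAS itself is built on YACAS and deliberately avoids matrix representations; purely as a rough analogue of symbolic spin-1/2 algebra, this SymPy sketch uses explicit Pauli-based operators to verify the commutator [Sx, Sy] = i·hbar·Sz:

```python
# SymPy analogue (not SD-CAS/YACAS): explicit spin-1/2 operators built from
# Pauli matrices, verifying [Sx, Sy] = I*hbar*Sz symbolically.
import sympy as sp

hbar = sp.symbols('hbar', positive=True)
sx = sp.Rational(1, 2) * hbar * sp.Matrix([[0, 1], [1, 0]])
sy = sp.Rational(1, 2) * hbar * sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Rational(1, 2) * hbar * sp.Matrix([[1, 0], [0, -1]])

commutator = sx * sy - sy * sx
print(sp.simplify(commutator - sp.I * hbar * sz))   # zero matrix if identity holds
```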

  17. Radiation Tolerant, FPGA-Based SmallSat Computer System

    Science.gov (United States)

    LaMeres, Brock J.; Crum, Gary A.; Martinez, Andres; Petro, Andrew

    2015-01-01

    The Radiation Tolerant, FPGA-based SmallSat Computer System (RadSat) computing platform exploits a commercial off-the-shelf (COTS) Field Programmable Gate Array (FPGA) with real-time partial reconfiguration to provide increased performance, power efficiency and radiation tolerance at a fraction of the cost of existing radiation hardened computing solutions. This technology is ideal for small spacecraft that require state-of-the-art on-board processing in harsh radiation environments but where using radiation hardened processors is cost prohibitive.

  18. COMPUTER VISION APPLIED IN THE PRECISION CONTROL SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Computer vision and its application in precision control systems are discussed. In the fabrication process, the accuracy of products should be controlled reasonably and completely. The precision should be maintained and adjusted according to feedback information obtained from on-line or off-line measurement in different procedures. Computer vision is one of the useful methods to do this. Computer vision and image manipulation are presented, and on this basis an n-dimensional vector for appraising machining precision is given.

  19. Research on the Teaching System of the University Computer Foundation

    OpenAIRE

    2016-01-01

    … For different students, the teaching content is classified and hierarchical teaching methods are combined with professional-level training; top-notch students are additionally given comprehensive after-class training. An online Q&A and test platform is established to strengthen the integration of professional education and computer education, supporting the study and exploration of a training system for the university computer foundation course and the popularization and application of the basic …

  20. Object-oriented models of functionally integrated computer systems

    OpenAIRE

    Kaasbøll, Jens

    1994-01-01

    Functional integration is the compatibility between the structure, culture and competence of an organization and its computer systems, specifically the availability of data and functionality and the consistency of user interfaces. Many people use more than one computer program in their work, and they experience problems relating to functional integration. Various solutions can be considered for different tasks and technologies, e.g. designing a common user-interface shell for several application

  1. Software Requirements for a System to Compute Mean Failure Cost

    Energy Technology Data Exchange (ETDEWEB)

    Aissa, Anis Ben [University of Tunis, Belvedere, Tunisia; Abercrombie, Robert K [ORNL; Sheldon, Frederick T [ORNL; Mili, Ali [New Jersey Insitute of Technology

    2010-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain. We also demonstrated this infrastructure through the results of security breakdowns for an e-commerce case. In this paper, we illustrate this infrastructure by an application that supports the computation of the Mean Failure Cost (MFC) for each stakeholder.
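    A hedged numeric sketch of the matrix chain usually associated with the MFC model (stakes × dependency × impact × threat emergence); the structure is assumed from the MFC literature rather than taken from this paper, and all matrix values below are placeholders:

```python
# Placeholder matrices only; the ST -> DP -> IM -> PT chain is the commonly
# published MFC structure, assumed here rather than taken from this paper.
import numpy as np

ST = np.array([[100.0, 20.0],     # stakes: stakeholders x requirements ($)
               [ 50.0, 80.0]])
DP = np.array([[0.6, 0.4],        # dependency: requirements x components
               [0.3, 0.7]])
IM = np.array([[0.5, 0.2],        # impact: components x threats
               [0.1, 0.8]])
PT = np.array([0.05, 0.02])       # threat emergence probabilities

MFC = ST @ DP @ IM @ PT           # expected loss per stakeholder
print(MFC)
```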

  2. A distributed deadlock detection algorithm for mobile computing system

    Institute of Scientific and Technical Information of China (English)

    CHENG Xin; LIU Hong-wei; ZUO De-cheng; JIN Feng; YANG Xiao-zong

    2005-01-01

    The mode of mobile computing originated from distributed computing and has the non-idempotent operation property; therefore a deadlock detection algorithm designed for mobile computing systems faces challenges with regard to correctness and efficiency. This paper attempts a fundamental study of deadlock detection for the AND model of mobile computing systems. First, the existing deadlock detection algorithms for distributed systems are classified into the resource-node dependent (RD) and the resource-node independent (RI) categories, and their corresponding weaknesses are discussed. A new RI algorithm based on the AND model of mobile computing systems is then presented. The novelties of our algorithm are that: 1) the blocked nodes inform their predecessors and successors simultaneously; 2) the detection messages (agents) hold the predecessor information of their originator; 3) no agent is stored midway. Additionally, a quit-inform scheme is introduced to treat the excessive victim-quitting problem raised by overlapped cycles. By these methods the proposed algorithm can detect a cycle of size n within n − 2 steps and with (n² − n − 2)/2 agents. The performance of our algorithm is compared with the most competitive RD and RI algorithms for distributed systems on a mobile agent simulation platform. Experimental results show that our algorithm outperforms the two algorithms under the vast majority of resource configurations and concurrent workloads. The correctness of the proposed algorithm is formally proven by the invariant verification technique.
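    As a worked check of the stated complexity bounds (taking them at face value from the abstract), the step and agent counts for a wait-for cycle of size n can be tabulated directly:

```python
# Taking the abstract's bounds at face value: at most n - 2 steps and
# (n^2 - n - 2) / 2 agents to detect a wait-for cycle of size n.
def detection_cost(n: int):
    assert n >= 3, "a deadlock cycle involves at least three nodes"
    steps = n - 2
    agents = (n * n - n - 2) // 2
    return steps, agents

for n in (3, 5, 10):
    steps, agents = detection_cost(n)
    print(f"cycle size {n}: {steps} steps, {agents} agents")
```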

  3. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. the Cray proprietary processor used in the Cray X1; 2. the IBM Power 3 and Power 4 used in the IBM SP 3 and SP 4 systems; 3. the Intel Itanium and Xeon, used in the SGI Altix systems and in clusters, respectively; 4. the IBM System-on-a-Chip used in IBM BlueGene/L; 5. the HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. the SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. an NEC proprietary processor, which is used in the NEC SX-6/7; 8. the Power 4+ processor, which is used in the Hitachi SR11000; 9. the NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by the interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  4. Architectural requirements for the Red Storm computing system.

    Energy Technology Data Exchange (ETDEWEB)

    Camp, William J.; Tomkins, James Lee

    2003-10-01

    This report is based on the Statement of Work (SOW) describing the various requirements for delivering a new supercomputer system to Sandia National Laboratories (Sandia) as part of the Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program. This system is named Red Storm and will be a distributed memory, massively parallel processor (MPP) machine built primarily out of commodity parts. The requirements presented here distill extensive architectural and design experience accumulated over a decade and a half of research, development and production operation of similar machines at Sandia. Red Storm will have an unusually high bandwidth, low latency interconnect, specially designed hardware and software reliability features, a lightweight-kernel compute-node operating system and the ability to rapidly switch major sections of the machine between classified and unclassified computing environments. Particular attention has been paid to architectural balance in the design of Red Storm, and it is therefore expected to achieve an atypically high fraction of its peak speed of 41 TeraOPS on real scientific computing applications. In addition, Red Storm is designed to be upgradeable to many times this initial peak capability while still retaining appropriate balance in key design dimensions. Installation of the Red Storm computer system at Sandia's New Mexico site is planned for 2004, and it is expected that the system will be operated for a minimum of five years following installation.

  5. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  6. Cluster based parallel database management system for data intensive computing

    Institute of Scientific and Technical Information of China (English)

    Jianzhong LI; Wei ZHANG

    2009-01-01

    This paper describes a computer-cluster based parallel database management system (DBMS), InfiniteDB, developed by the authors. InfiniteDB aims to efficiently support data intensive computing in response to the rapid growth in database size and the need for high performance analysis of massive databases. It can be efficiently executed in computing systems composed of thousands of computers, such as a cloud computing system. It supports intra-query, inter-query, intra-operation, inter-operation and pipelined parallelism. It provides effective strategies for managing massive databases, including multiple data declustering methods, declustering-aware algorithms for relational and other database operations, and an adaptive query optimization method. It also provides the functions of parallel data warehousing and data mining, a coordinator-wrapper mechanism to support the integration of heterogeneous information resources on the Internet, and fault-tolerant and resilient infrastructures. It has been used in many applications and has proved quite effective for data intensive computing.

  7. Computational requirements for on-orbit identification of space systems

    Science.gov (United States)

    Hadaegh, Fred Y.

    1988-01-01

    For future space systems, on-orbit identification (ID) capability will be required to complement on-orbit control, because the dynamics of large space structures, spacecraft, and antennas will not be known sufficiently from ground modeling and testing. The computational requirements for ID of flexible structures such as the space station (SS) or the large deployable reflectors (LDR) are, however, extensive due to the large number of modes, sensors, and actuators. For these systems the ID algorithm operations need not be computed in real time, only in near real time, or an appropriate mission time. Consequently, space systems will need advanced processors and efficient parallel processing algorithm designs and architectures to implement the identification algorithms in near real time. The MAX computer currently being developed may handle such computational requirements. The purpose here is to specify the on-board computational requirements for dynamic and static identification of large space structures. The computational requirements for six ID algorithms are presented in the context of three examples: the JPL/AFAL ground antenna facility, the space station (SS), and the large deployable reflector (LDR).

  8. 78 FR 18353 - Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility...

    Science.gov (United States)

    2013-03-26

    ... HUMAN SERVICES Food and Drug Administration Guidance for Industry: Blood Establishment Computer System... ``Guidance for Industry: Blood Establishment Computer System Validation in the User's Facility'' dated April... establishment computer system validation program, consistent with recognized principles of software...

  9. Interactive Rhythm Learning System by Combining Tablet Computers and Robots

    Directory of Open Access Journals (Sweden)

    Chien-Hsing Chou

    2017-03-01

    Full Text Available This study proposes a percussion learning device that combines tablet computers and robots. This device comprises two systems: a rhythm teaching system, in which users can compose and practice rhythms by using a tablet computer, and a robot performance system. First, teachers compose the rhythm training contents on the tablet computer. Then, the learners practice these percussion exercises by using the tablet computer and a small drum set. The teaching system provides a new and user-friendly score editing interface for composing a rhythm exercise. It also provides a rhythm rating function to facilitate percussion training for children and improve the stability of rhythmic beating. To encourage children to practice percussion exercises, a robotic performance system is used to interact with the children; this system can perform percussion exercises for students to listen to and then help them practice the exercise. This interaction enhances children’s interest and motivation to learn and practice rhythm exercises. The results of experimental course and field trials reveal that the proposed system not only increases students’ interest and efficiency in learning but also helps them in understanding musical rhythms through interaction and composing simple rhythms.

  10. A cognitive computational model inspired by the immune system response.

    Science.gov (United States)

    Abdo Abd Al-Hady, Mohamed; Badr, Amr Ahmed; Mostafa, Mostafa Abd Al-Azim

    2014-01-01

    The immune system has a cognitive ability to differentiate between healthy and unhealthy cells. The immune system response (ISR) is stimulated by a disorder in the temporary fuzzy state that oscillates between the healthy and unhealthy states. However, modeling the immune system is an enormous challenge; the paper introduces an extensive summary of how the immune system response functions, as an overview of a complex topic, to present the immune system as a cognitive intelligent agent. The homogeneity and perfection of the natural immune system have always stood out as the sought-after model we attempted to imitate while building our proposed model of cognitive architecture. The paper divides the ISR into four logical phases: setting a computational architectural diagram for each phase, proceeding from functional perspectives (input, process, and output), and their consequences. The proposed architecture components are defined by matching biological operations with computational functions and hence with the framework of the paper. On the other hand, the architecture focuses on the interoperability of the main theoretical immunological perspectives (classic, cognitive, and danger theory) as related to computer science terminology. The paper presents a descriptive model of the immune system to figure out the nature of the response, deemed to be intrinsic for building a hybrid computational model based on a cognitive intelligent agent perspective and inspired by natural biology. To that end, this paper highlights the ISR phases as applied to a case study on the hepatitis C virus, meanwhile illustrating our proposed architecture perspective.

  11. MENTAL SHIFT TOWARDS SYSTEMS THINKING SKILLS IN COMPUTER SCIENCE

    Directory of Open Access Journals (Sweden)

    MILDEOVÁ, Stanislava

    2012-03-01

    Full Text Available When seeking solutions to current problems in the field of computer science – and other fields – we encounter situations where traditional approaches no longer bring the desired results. Our cognitive skills also limit the implementation of reliable mental simulation within the basic set of relations. The world around us is becoming more complex and mutually interdependent, and this is reflected in the demands on computer support. Thus, today's education and science in the field of computer science, and all other disciplines and areas of life, need to address the issue of a paradigm shift, which is generally accepted by experts. The goal of the paper is to present systems thinking, which facilitates and extends the understanding of the world through relations and linkages. Moreover, the paper introduces the essence of systems thinking and the possibilities of achieving a mental shift toward systems thinking skills. At the same time, the link between systems thinking and functional literacy is presented. We adopted the "Bathtub Test" from the variety of systems thinking tests that allow people to assess their understanding of basic systemic concepts, in order to assess the level of systems thinking. University students (potential information managers) were the subjects of an examination of systems thinking that was conducted over a longer time period and whose aim was to determine the status of systems thinking. The paper demonstrates that some pedagogical concepts and activities, in our case the subject of System Dynamics, lead to the appropriate integration of systems thinking in education. There is some evidence that basic knowledge of system dynamics and systems thinking principles will affect students, and their thinking will contribute to an improved approach to solving problems of computer science both in theory and practice.

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data). In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operations. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  13. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

    Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Contents: Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics. For all readers interested in developing programming habits in the context of doing phy...

  14. Computational Intelligence and its Role in Enhancing Sustainable Transport Systems

    Directory of Open Access Journals (Sweden)

    Eric Goodyer

    2011-09-01

    Full Text Available DeMontfort University's (DMU) Centre for Computational Intelligence (CCI) is engaged in a range of programmes applying modern Computational Intelligence (CI) techniques to provide superior analysis of complex real-time data sets that arise within transport systems. Better use of existing transport infrastructures can achieve positive sustainable outcomes, reducing congestion, improving air quality, providing real-time travel information and supporting low carbon vehicles. This is exemplified by the following examples: • ITRAQ, an integrated CI system that uses live feeds to determine the optimum use of the road system to reduce congestion and to improve air quality. • Sustainable airport development decision support systems: a CI-based model that interfaces with a GIS system to model the environmental impact of flight paths. • The application of CI to solve multi-variable systems, logistics and passenger information. • VenusSim: the use of CI to model the dynamics of customer flows in transport terminals.

  15. Fault Detection of Computer Communication Networks Using an Expert System

    Directory of Open Access Journals (Sweden)

    Ibrahiem M.M. El Emary

    2005-01-01

    Full Text Available The main objective of this study was to build an expert system for assisting the network administrator in the management and administration of the computer communication network. The operation of the proposed expert system depends on a time series model capable of forecasting the various performance parameters, such as delay, utilization and collision frequency. When the expert system finds a difference (with a certain tolerance) between the predicted value and the measured value, it informs the network administrator that a problem exists in the network, either in a switch, a link or a router. We examined two types of network with our proposed expert system: the first is a token bus and the second is a token ring. When we ran the expert system on these two types of computer network, it captured the problem whenever there was an excessive deviation in the network performance parameters.
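    A minimal sketch of the alerting rule described above, with an assumed relative-tolerance threshold and invented example values: a fault is flagged when the measured parameter deviates from the time-series forecast by more than the tolerance.

```python
# Assumed 15% relative tolerance and invented example values.
def check_parameter(name: str, predicted: float, measured: float,
                    tolerance: float = 0.15) -> str:
    deviation = abs(measured - predicted) / max(abs(predicted), 1e-9)
    if deviation > tolerance:
        return f"ALERT: {name} deviates {deviation:.0%} from forecast"
    return f"{name}: within tolerance ({deviation:.0%})"

print(check_parameter("utilization", predicted=0.62, measured=0.81))
print(check_parameter("delay_ms", predicted=12.0, measured=12.9))
```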

  16. DOC-a file system cache to support mobile computers

    Science.gov (United States)

    Huizinga, D. M.; Heflinger, K.

    1995-09-01

    This paper identifies design requirements of system-level support for mobile computing in small form-factor battery-powered portable computers and describes their implementation in DOC (Disconnected Operation Cache). DOC is a three-level client caching system designed and implemented to allow mobile clients to transition between connected, partially disconnected and fully disconnected modes of operation with minimal user involvement. Implemented for notebook computers, DOC addresses not only typical issues of mobile elements such as resource scarcity and fluctuations in service quality but also deals with the pitfalls of MS-DOS, the operating system which prevails in the commercial notebook market. Our experiments performed in the software engineering environment of AST Research indicate not only considerable performance gains for connected and partially disconnected modes of DOC, but also the successful operation of the disconnected mode.

  17. Compute Element and Interface Box for the Hazard Detection System

    Science.gov (United States)

    Villalpando, Carlos Y.; Khanoyan, Garen; Stern, Ryan A.; Some, Raphael R.; Bailey, Erik S.; Carson, John M.; Vaughan, Geoffrey M.; Werner, Robert A.; Salomon, Phil M.; Martin, Keith E.; Spaulding, Matthew D.; Luna, Michael E.; Motaghedi, Shui H.; Trawny, Nikolas; Johnson, Andrew E.; Ivanov, Tonislav I.; Huertas, Andres; Whitaker, William D.; Goldberg, Steven B.

    2013-01-01

    The Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is building a sensor that enables a spacecraft to evaluate autonomously a potential landing area to generate a list of hazardous and safe landing sites. It will also provide navigation inputs relative to those safe sites. The Hazard Detection System Compute Element (HDS-CE) box combines a field-programmable gate array (FPGA) board for sensor integration and timing, with a multicore computer board for processing. The FPGA does system-level timing and data aggregation, and acts as a go-between, removing the real-time requirements from the processor and labeling events with a high resolution time. The processor manages the behavior of the system, controls the instruments connected to the HDS-CE, and services the "heavy lifting" computational requirements for analyzing the potential landing spots.

  18. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain motivated situation control system for complex technical system behavior. The conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in the system memory in the form of a situational agent set. The model and method of computer-aided instruction represent a formalization based on the nondistinct theories of physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on different parameters affecting education, such as the reinforcement value and the time between the stimulus, the action and the reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results are given of computer instruction experiments with the robot device "LEGO MINDSTORMS NXT", equipped with ultrasonic distance, touch and light sensors.

  19. A Wearable Computing System for Dynamic Locating of Parking Spaces

    OpenAIRE

    Damian Mrugala; Alexander Dannies; Walter Lang

    2010-01-01

    This paper describes a dynamic locating system implemented in an autonomous wearable computing system for the automobile warehouse management application. Since the first prototype was developed as a jacket [1], this prototype has been miniaturized and is therefore realized as a holster which consists of several modules for identification, communication and localization. It is worn by employees during warehousing of automobiles. The modules collect data, which are used by the operating system to calculate ...

  20. A Wearable Computing System for Dynamic Locating of Parking Spaces

    Directory of Open Access Journals (Sweden)

    Damian Mrugala

    2010-07-01

    Full Text Available This paper describes a dynamic locating system implemented in an autonomous wearable computing system for the automobile warehouse management application. Since the first prototype was developed as a jacket [1], this prototype has been miniaturized and is therefore realized as a holster which consists of several modules for identification, communication and localization. It is worn by employees during warehousing of automobiles. The modules collect data, which are used by the operating system to calculate the location of parking spaces dynamically.

  1. Computational Biomathematics: Toward Optimal Control of Complex Biological Systems

    Science.gov (United States)

    2016-09-26

    Computational Biomathematics: Toward Optimal Control of Complex Biological Systems. See attached report. "... substantially lowered. Since the equations depend on what information we are interested in, automatic conversion of agent-based models to systems of ..."

  2. Technology transfer of the Computer-Aided Prototyping System (CAPS)

    OpenAIRE

    Cooke, Robert P.

    1996-01-01

    The inability of the Department of Defense (DOD) to accurately and completely specify requirements for hard real-time software systems has resulted in poor productivity, schedule overruns, and software that is unmaintainable and unreliable. The Computer-Aided Prototyping System (CAPS) provides a capability to quickly develop functional prototypes to verify feasibility of system requirements early in the software development process. It was built to help program managers and software engineers...

  3. Development of Interactive Courseware for Learning Basic Computer System Components

    Directory of Open Access Journals (Sweden)

    Ida A. Bahrudin

    2011-01-01

    Full Text Available A computer-assisted learning approach was developed to enhance course material for students learning about computer components, through the use of multimedia courseware. Recent advances in software authoring packages have made the production of CD-ROMs an efficient and effective educational strategy. Problem statement: This study reports on the development and evaluation of courseware (iC-COM) for learning basic computer system components. The purpose of the study was to evaluate a courseware for learning basic computer system components. The basic components of iC-COM included an interface, graphics, sound effects, narration and video. Approach: iC-COM was developed based on the ADDIE instructional system design model. The research instrument was a courseware evaluation questionnaire covering attributes such as content, interactivity, navigation and screen design, distributed to 50 computer system and support programme students of Kolej Komuniti Jasin. Results: The results showed that the mean of each questionnaire item ranged from 3.5 to 3.8. Conclusion/Recommendations: For future enhancement, it is recommended that iC-COM be integrated into a web-based platform so it can be accessed easily anywhere.

  4. Biological Computation as the Revolution of Complex Engineered Systems

    CERN Document Server

    Gómez-Cruz, Nelson Alfonso

    2011-01-01

    Provided that there is no theoretical frame for complex engineered systems (CES) as yet, this paper claims that bio-inspired engineering can help provide such a frame. Within CES, bio-inspired systems play a key role. However, what bio-inspired systems and biological computation disclose has not been sufficiently worked out. Biological computation is to be taken as the processing of information by living systems that is carried out in polynomial time, i.e., efficiently; such processing, however, is treated by current science and research as an intractable problem (for instance, the protein folding problem). A remark is needed here: P versus NP problems should be well defined and delimited, but biological computation problems are not. The shift from conventional engineering to bio-inspired engineering needs to bring the subject (or problem) of computability to a new level. Within the frame of computation, so far, the prevailing paradigm is still the Turing-Church thesis. In other words, conventional engineering...

  5. National Ignition Facility sub-system design requirements computer system SSDR 1.5.1

    Energy Technology Data Exchange (ETDEWEB)

    Spann, J.; VanArsdall, P.; Bliss, E.

    1996-09-05

    This System Design Requirement document establishes the performance, design, development and test requirements for the Computer System, WBS 1.5.1 which is part of the NIF Integrated Computer Control System (ICCS). This document responds directly to the requirements detailed in ICCS (WBS 1.5) which is the document directly above.

  6. Microeconomic theory and computation applying the maxima open-source computer algebra system

    CERN Document Server

    Hammock, Michael R

    2014-01-01

    This book provides a step-by-step tutorial for using Maxima, an open-source multi-platform computer algebra system, to examine the economic relationships that form the core of microeconomics in a way that complements traditional modeling techniques.

  7. Cloud Computing in the Curricula of Schools of Computer Science and Information Systems

    Science.gov (United States)

    Lawler, James P.

    2011-01-01

    The cloud continues to be a developing area of information systems. Evangelistic literature in the practitioner field indicates benefit for business firms but disruption for technology departments of the firms. Though the cloud currently is immature in methodology, this study defines a model program by which computer science and information…

  8. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
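    The paper uses the Java Parallel Processing Framework; purely as a language-neutral analogue of the same embarrassingly parallel pattern, this sketch farms independent Monte Carlo realizations out to a local process pool (the per-realization work is a placeholder, not an actual MODFLOW run):

```python
# Process-pool analogue of distributing 500 stochastic realizations; the
# per-realization work is a placeholder, not an actual MODFLOW run.
from multiprocessing import Pool
import random

def run_realization(seed: int) -> float:
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(10_000))

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        results = pool.map(run_realization, range(500))
    print(len(results), "realizations completed")
```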

  9. TRIP: General computer algebra system for celestial mechanics

    Science.gov (United States)

    Laskar, J.; Gastineau, M.

    2012-10-01

    TRIP is an interactive computer algebra system that is devoted to perturbation series computations, and specially adapted to celestial mechanics. Its development started in 1988, as an upgrade of the special purpose FORTRAN routines elaborated by J. Laskar for the demonstration of the chaotic behavior of the Solar System. TRIP is a mature and efficient tool for handling multivariate generalized power series, and embeds two kernels, a symbolic and a numerical kernel. This numerical kernel communicates with Gnuplot or Grace to plot the graphics and allows one to plot the numerical evaluation of symbolic objects.

  10. Clone Selection Algorithm with Niching Strategy for Computer Immune System

    Institute of Scientific and Technical Information of China (English)

    张雅静; 侯朝桢; 薛阳

    2004-01-01

    A clone selection algorithm for a computer immune system is presented. Clone selection principles from the biological immune system are applied to the domain of computer virus detection. Based on the negative selection algorithm proposed by Stephanie Forrest, a mutation operator from genetic algorithms is combined with a niching strategy from biology; the number of detectors is decreased effectively and the ability of self–nonself discrimination is improved. Simulation experiments show that the algorithm is simple, practical and well suited to discriminating long files.
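    A toy clone-selection step in the spirit of the algorithm described above (the bit-string representation, affinity measure, clone count and mutation rate are all assumptions):

```python
# Toy clone selection: pick the detector with highest affinity to a nonself
# sample, clone and mutate it, keep the best variant. All parameters assumed.
import random

def affinity(detector: str, sample: str) -> int:
    return sum(a == b for a, b in zip(detector, sample))   # matching bits

def mutate(detector: str, rate: float) -> str:
    return ''.join(b if random.random() > rate else random.choice('01')
                   for b in detector)

def clone_select(detectors, nonself_sample, clones=5, rate=0.1):
    best = max(detectors, key=lambda d: affinity(d, nonself_sample))
    variants = [mutate(best, rate) for _ in range(clones)] + [best]
    return max(variants, key=lambda d: affinity(d, nonself_sample))

detectors = [''.join(random.choice('01') for _ in range(16)) for _ in range(8)]
print(clone_select(detectors, nonself_sample='1010101010101010'))
```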

  11. Computation system for nuclear reactor core analysis. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.; Petrie, L.M.

    1977-04-01

    This report documents a system which contains computer codes as modules developed to evaluate nuclear reactor core performance. The diffusion theory approximation to neutron transport may be applied with the VENTURE code, treating up to three dimensions. The effect of exposure may be determined with the BURNER code, allowing depletion calculations to be made. The features and requirements of the system are discussed, as are aspects common to the computational modules; the latter are documented elsewhere. User input data requirements, data file management, control, and the modules which perform general functions are described. Continuing development and implementation effort is enhancing the analysis capability available locally and to other installations from remote terminals.

  12. Local rollback for fault-tolerance in parallel computing systems

    Science.gov (United States)

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel super computing system. The super computing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.

  13. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
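    An illustrative sketch of the aggregation idea (not the actual implementation): small files are packed into one blob while (offset, length) metadata is recorded, so any individual file can be unpacked later. File names below are invented.

```python
# Pack small files into one blob and record (offset, length) metadata; the
# metadata is enough to unpack any single file later. Names are invented.
def aggregate(files: dict) -> tuple:
    blob, metadata, offset = bytearray(), {}, 0
    for name, data in files.items():
        metadata[name] = (offset, len(data))
        blob += data
        offset += len(data)
    return bytes(blob), metadata

def unpack(blob: bytes, metadata: dict, name: str) -> bytes:
    offset, length = metadata[name]
    return blob[offset:offset + length]

blob, meta = aggregate({"rank0.out": b"alpha", "rank1.out": b"bravo"})
print(unpack(blob, meta, "rank1.out"))   # b'bravo'
```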

  14. Universal computer control system (UCCS) for space telerobots

    Science.gov (United States)

    Bejczy, Antal K.; Szakaly, Zoltan

    1987-01-01

A universal computer control system (UCCS) is under development for all motor elements of a space telerobot. The basic hardware architecture and software design of UCCS are described, together with the rich motor sensing, control, and self-test capabilities of this all-computerized motor control system. UCCS is integrated into a multibus computer environment with a direct interface to higher-level control processors and uses pulsewidth multiplier power amplifiers; one unit can control up to sixteen different motors simultaneously at a high I/O rate. UCCS performance capabilities are illustrated with representative data.

  15. A Synthesized Framework for Formal Verification of Computing Systems

    Directory of Open Access Journals (Sweden)

    Nikola Bogunovic

    2003-12-01

Full Text Available The design process of computing systems has gradually evolved to a level that encompasses formal verification techniques. However, the integration of formal verification techniques into a methodical design procedure entails many inherent misconceptions and problems. The paper explicates the discrepancy between the real system implementation and the abstracted model that is actually used in the formal verification procedure. Particular attention is paid to the seamless integration of all phases of the verification procedure, encompassing definition of the specification language and denotation and execution of the conformance relation between the abstracted model and its intended behavior. The concealed obstacles are exposed, computationally expensive steps identified, and possible improvements proposed.

  16. Computer controlled MHD power consolidation and pulse-generation system

    Science.gov (United States)

    Johnson, R.

The major goal of this project is to establish the feasibility of a power conversion technology which will permit the direct synthesis of computer-programmable pulse power. Feasibility will be established in this project by demonstration of direct synthesis of commercial-frequency power by means of computer control. The power input to the conversion system is assumed to be a magnetohydrodynamic (MHD) Faraday-connected generator, which may be viewed as a multi-terminal d.c. source. This consolidation/inversion process is referred to subsequently as Pulse-Amplitude-Synthesis-and-Control (PASC). A secondary goal is to deliver a controller subsystem consisting of a computer, software, and computer interface board which can serve as one of the building blocks for a possible Phase 2 prototype system. This report covers the initial six-month portion of the project and includes discussions on the following areas: (1) selection of a control computer with a software tool kit for development of the PASC controller contract requirement; (2) problem formulation considerations for simulation of the PASC technique on digital computers; (3) initial simulation results for the PASC transformer, including simulation results obtained using SPICE and the INTEG program; (4) a survey of available gate-turn-off devices (GTOs), power semiconductors, power field-effect transistors (PFETs), and fiber-optic signal cabling and transducers.

  17. A distributed spatial computing prototype system in grid environment

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

Digital Earth has been a hot topic and research trend since it was proposed, and Digital China has drawn much attention in China. As a key technique for implementing Digital China, grid is an excellent and promising concept for constructing a dynamic, inter-domain and distributed computing environment. It is appropriate for processing geographic information across dispersed computing resources in networks effectively and cooperatively. A distributed spatial computing prototype system is designed and implemented with the Globus Toolkit. Several important aspects are discussed in detail. The architecture is proposed according to the characteristics of grid first, and then the spatial resource query and access interfaces are designed for heterogeneous data sources. An open hierarchical architecture for resource discovery and management is presented to detect spatial and computing resources in the grid. A standard spatial job management mechanism is implemented by grid services for convenient use. In addition, the control mechanism for spatial dataset access is developed based on GSI. The prototype system uses the Globus Toolkit to implement a common distributed spatial computing framework, and it reveals the spatial computing ability of grid to support Digital China.

  18. Advances in computational design and analysis of airbreathing propulsion systems

    Science.gov (United States)

    Klineberg, John M.

    1989-01-01

    The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

  19. Computer systems and software description for gas characterization system

    Energy Technology Data Exchange (ETDEWEB)

    Vo, C.V.

    1997-04-01

The Gas Characterization System Project was commissioned by TWRS management, with funding from TWRS Safety, on December 1, 1994. The project objective is to establish an instrumentation system to measure flammable gas concentrations in the vapor space of selected watch-list tanks, starting with tanks AN-105 and AW-101. Data collected by this system are meant to support first tank characterization and then tank safety. The system design is premised upon characterization rather than mitigation; therefore redundancy is not required.

  20. DMG-α--a computational geometry library for multimolecular systems.

    Science.gov (United States)

    Szczelina, Robert; Murzyn, Krzysztof

    2014-11-24

The DMG-α library grants researchers in the fields of computational biology, chemistry, and biophysics access to an open-source, easy-to-use, and intuitive software package for performing fine-grained geometric analysis of molecular systems. The library is capable of computing power diagrams (weighted Voronoi diagrams) in three dimensions with 3D periodic boundary conditions, computing approximate projective 2D Voronoi diagrams on arbitrarily defined surfaces, performing shape-property recognition using α-shape theory, and carrying out exact Solvent Accessible Surface Area (SASA) computation. The software is written mainly as a template-based C++ library for greater performance, but a rich Python interface (pydmga) is provided as a convenient way to manipulate the DMG-α routines. To illustrate possible applications of the DMG-α library, we present results of sample analyses which allowed us to determine nontrivial geometric properties of two Escherichia coli-specific lipids as emerging from molecular dynamics simulations of relevant model bilayers.

  1. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2013-01-01

This textbook presents basic and advanced computational physics in a very didactic style. It contains clear and simple mathematical descriptions of many of the most important algorithms and techniques used in computational physics. The first part of the book discusses the basic numerical methods. A large number of exercises and computer experiments allows the reader to study the properties of these methods. The second part concentrates on simulation of classical and quantum systems. It uses a rather general concept for the equation of motion which can be applied to ordinary and partial differential equations. Several classes of integration methods are discussed, including not only the standard Euler and Runge-Kutta methods but also multistep methods and the class of Verlet methods, which is introduced by studying the motion in Liouville space. Besides the classical methods, inverse interpolation is discussed, together with the p...

  2. Development of a proton Computed Tomography Detector System

    CERN Document Server

    Naimuddin, Md; Blazey, G; Boi, S; Dyshkant, A; Erdelyi, B; Hedin, D; Johnson, E; Krider, J; Rukalin, V; Uzunyan, S A; Zutshi, V; Fordt, R; Sellberg, G; Rauch, J E; Roman, M; Rubinov, P; Wilson, P

    2015-01-01

    Computer tomography is one of the most promising new methods to image abnormal tissues inside the human body. Tomography is also used to position the patient accurately before radiation therapy. Hadron therapy for treating cancer has become one of the most advantageous and safe options. In order to fully utilize the advantages of hadron therapy, there is a necessity of performing radiography with hadrons as well. In this paper we present the development of a proton computed tomography system. Our second-generation proton tomography system consists of two upstream and two downstream trackers made up of fibers as active material and a range detector consisting of plastic scintillators. We present details of the detector system, readout electronics, and data acquisition system as well as the commissioning of the entire system. We also present preliminary results from the test beam of the range detector.

  3. Development of a proton Computed Tomography Detector System

    Energy Technology Data Exchange (ETDEWEB)

    Naimuddin, Md. [Delhi U.; Coutrakon, G. [Northern Illinois U.; Blazey, G. [Northern Illinois U.; Boi, S. [Northern Illinois U.; Dyshkant, A. [Northern Illinois U.; Erdelyi, B. [Northern Illinois U.; Hedin, D. [Northern Illinois U.; Johnson, E. [Northern Illinois U.; Krider, J. [Northern Illinois U.; Rukalin, V. [Northern Illinois U.; Uzunyan, S. A. [Northern Illinois U.; Zutshi, V. [Northern Illinois U.; Fordt, R. [Fermilab; Sellberg, G. [Fermilab; Rauch, J. E. [Fermilab; Roman, M. [Fermilab; Rubinov, P. [Fermilab; Wilson, P. [Fermilab

    2016-02-04

Computer tomography is one of the most promising new methods to image abnormal tissues inside the human body. Tomography is also used to position the patient accurately before radiation therapy. Hadron therapy for treating cancer has become one of the most advantageous and safe options. In order to fully utilize the advantages of hadron therapy, there is a necessity of performing radiography with hadrons as well. In this paper we present the development of a proton computed tomography system. Our second-generation proton tomography system consists of two upstream and two downstream trackers made up of fibers as active material and a range detector consisting of plastic scintillators. We present details of the detector system, readout electronics, and data acquisition system as well as the commissioning of the entire system. We also present preliminary results from the test beam of the range detector.

  4. B190 computer controlled radiation monitoring and safety interlock system

    Energy Technology Data Exchange (ETDEWEB)

    Espinosa, D L; Fields, W F; Gittins, D E; Roberts, M L

    1998-08-01

    The Center for Accelerator Mass Spectrometry (CAMS) in the Earth and Environmental Sciences Directorate at Lawrence Livermore National Laboratory (LLNL) operates two accelerators and is in the process of installing two new additional accelerators in support of a variety of basic and applied measurement programs. To monitor the radiation environment in the facility in which these accelerators are located and to terminate accelerator operations if predetermined radiation levels are exceeded, an updated computer controlled radiation monitoring system has been installed. This new system also monitors various machine safety interlocks and again terminates accelerator operations if machine interlocks are broken. This new system replaces an older system that was originally installed in 1988. This paper describes the updated B190 computer controlled radiation monitoring and safety interlock system.

  5. Development of a proton Computed Tomography detector system

    Science.gov (United States)

    Naimuddin, Md.; Coutrakon, G.; Blazey, G.; Boi, S.; Dyshkant, A.; Erdelyi, B.; Hedin, D.; Johnson, E.; Krider, J.; Rukalin, V.; Uzunyan, S. A.; Zutshi, V.; Fordt, R.; Sellberg, G.; Rauch, J. E.; Roman, M.; Rubinov, P.; Wilson, P.

    2016-02-01

Computer tomography is one of the most promising new methods to image abnormal tissues inside the human body. Tomography is also used to position the patient accurately before radiation therapy. Hadron therapy for treating cancer has become one of the most advantageous and safe options. In order to fully utilize the advantages of hadron therapy, there is a necessity of performing radiography with hadrons as well. In this paper we present the development of a proton computed tomography system. Our second-generation proton tomography system consists of two upstream and two downstream trackers made up of fibers as active material and a range detector consisting of plastic scintillators. We present details of the detector system, readout electronics, and data acquisition system as well as the commissioning of the entire system. We also present preliminary results from the test beam of the range detector.

  6. Computational singular perturbation analysis of stochastic chemical systems with stiffness

    Science.gov (United States)

    Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.

    2017-04-01

Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum and deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.

  7. Displacement measurement system for inverters using computer micro-vision

    Science.gov (United States)

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; Ge, Peng

    2016-06-01

We propose a practical system for noncontact displacement measurement of inverters using computer micro-vision at the sub-micron scale. The measuring method of the proposed system is based on a fast template matching algorithm with an optical microscope. A laser interferometer measurement (LIM) system is built for comparison. Experimental results demonstrate that the proposed system achieves the same performance as the LIM system while offering higher operability and stability. The measuring accuracy is 0.283 μm.
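
The paper's fast template-matching algorithm is not reproduced here, but a baseline normalized cross-correlation match of the kind it builds on can be sketched with OpenCV; the file names and the parabolic sub-pixel refinement are illustrative assumptions, not the authors' method.

```python
import cv2

# Reference template of an inverter feature and a new microscope frame
# (file names are placeholders).
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
frame = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation over the whole frame.
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(result)          # best integer-pixel match

# Crude sub-pixel refinement by a parabolic fit around the peak (assumed approach),
# with the peak clamped away from the borders so the fit stays in bounds.
x = min(max(max_loc[0], 1), result.shape[1] - 2)
y = min(max(max_loc[1], 1), result.shape[0] - 2)

def parabolic(v_m, v_0, v_p):
    denom = v_m - 2 * v_0 + v_p
    return 0.0 if denom == 0 else 0.5 * (v_m - v_p) / denom

dx = parabolic(result[y, x - 1], result[y, x], result[y, x + 1])
dy = parabolic(result[y - 1, x], result[y, x], result[y + 1, x])
print('displacement (pixels):', x + dx, y + dy)
```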

  8. AVES: A Computer Cluster System approach for INTEGRAL Scientific Analysis

    Science.gov (United States)

    Federici, M.; Martino, B. L.; Natalucci, L.; Umbertini, P.

The AVES computing system, based on a cluster architecture, is a fully integrated, low-cost computing facility dedicated to the archiving and analysis of INTEGRAL data. AVES is a modular system that uses the SLURM resource manager and allows almost unlimited expandability (65,536 nodes and hundreds of thousands of processors); it is currently composed of 30 personal computers with quad-core CPUs, able to reach a computing power of 300 gigaflops (300x10^9 floating-point operations per second), with 120 GB of RAM and 7.5 terabytes (TB) of storage memory in UFS configuration plus 6 TB for the users' area. AVES was designed and built to solve growing problems raised by the analysis of the large amount of data accumulated by the INTEGRAL mission (currently about 9 TB), which increases every year. The analysis software used is the OSA package, distributed by the ISDC in Geneva. This is a very complex package consisting of dozens of programs that cannot be converted to parallel computing. To overcome this limitation we developed a series of programs to distribute the analysis workload over the various nodes, making AVES automatically divide the analysis into N jobs sent to N cores. This solution thus produces a result similar to that obtained with a parallel computing configuration. In support of this we have developed tools that allow flexible use of the scientific software and quality control of on-line data storing. The AVES software package consists of about 50 specific programs. The whole computing time, compared to that of a personal computer with a single processor, has thus been reduced by up to a factor of 70.
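
A hedged sketch of the kind of workload splitting described above: a list of independent analysis tasks is divided into N chunks and each chunk is submitted as a separate SLURM batch job. The per-task command (`run_analysis`) and the script template are placeholders; only the generic `sbatch` submission is assumed.

```python
import subprocess

def split(tasks, n_jobs):
    """Divide a list of independent analysis tasks into n_jobs roughly equal chunks."""
    return [tasks[i::n_jobs] for i in range(n_jobs)]

def submit(chunks, work_dir='.'):
    """Write one batch script per chunk and submit it with sbatch."""
    for j, chunk in enumerate(chunks):
        commands = '\n'.join('run_analysis ' + task for task in chunk)
        script_path = f'{work_dir}/job_{j}.sh'
        with open(script_path, 'w') as f:
            f.write('#!/bin/bash\n')
            f.write(f'#SBATCH --job-name=analysis_{j}\n')
            # 'run_analysis' stands in for whatever per-task analysis command is used.
            f.write(commands + '\n')
        subprocess.run(['sbatch', script_path], check=True)

if __name__ == '__main__':
    science_windows = [f'scw_{i:04d}' for i in range(120)]   # toy task list
    submit(split(science_windows, n_jobs=30))
```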

  9. Image Interpretation Instruction Via A Computer-Based-Training System

    Science.gov (United States)

    Weisman, Melanie

    1988-02-01

    As newer and more sophisticated imagery collection systems rapidly increase the volume of imagery requiring thorough exploitation, the need for imagery analysts to acquire and maintain expertise increases accordingly. In response, Loral Systems Group (Arizona) has produced a computer-based-training (CBT) system that presents a series of lessons on radar imaging principles and their application to the various orders of battle. The training system is composed of two host computers, four student/instructor workstations, a printer, and lesson material. The computers control the imagery presentation, deliver twenty-eight interactive lessons of computer-assisted instruction, and generate reports. Each dual-screen workstation presents lessons consisting of instructional text coupled with representative imagery annotated with color graphics. Although the system is designed for the unique characteristics of radar interpretation, alternative courseware could instruct interpretation techniques for other imagery (photographic, electro-optical, infrared). Regardless of the sensor type and amount of available imagery, both commercial and military segments of the interpretation community will benefit only if the interpreter/analyst is successfully trained to translate image information into useful terms.

  10. A computer-aided drug discovery system for chemistry teaching.

    Science.gov (United States)

    Gledhill, Robert; Kent, Sarah; Hudson, Brian; Richards, W Graham; Essex, Jonathan W; Frey, Jeremy G

    2006-01-01

    The Schools Malaria Project (http://emalaria.soton.ac.uk/) brings together school students with university researchers in the hunt for a new antimalaria drug. The design challenge being offered to students is to use a distributed drug search and selection system to design potential antimalaria drugs. The system is accessed via a Web interface. This e-science project displays the results of the trials in an accessible manner, giving students an opportunity for discussion and debate both with peers and with the university contacts. The project has been implemented by using distributed computing techniques, spreading computer load over a network of machines that cross institutional boundaries, forming a grid. This provides access to greater computing power and allows a much more complex and detailed formulation of the drug design problem to be tackled for research, teaching, and learning.

  11. An Expert Fitness Diagnosis System Based on Elastic Cloud Computing

    Directory of Open Access Journals (Sweden)

    Kevin C. Tseng

    2014-01-01

Full Text Available This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to capture tightly the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service.
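
The elastic allocation idea, predicting future demand from an exponential moving average (EMA) of past request counts, can be sketched as follows; the smoothing factor, the per-request resource cost, and the provisioning margin are illustrative assumptions, not values from the paper.

```python
def ema(observations, alpha=0.3):
    """Exponential moving average of a sequence of request counts (alpha assumed)."""
    smoothed = observations[0]
    for x in observations[1:]:
        smoothed = alpha * x + (1 - alpha) * smoothed
    return smoothed

def predict_instances(request_history, requests_per_instance=50, margin=1.2):
    """Estimate how many compute instances to provision for the next period."""
    expected_requests = ema(request_history)
    return max(1, int(expected_requests * margin / requests_per_instance + 0.5))

if __name__ == '__main__':
    history = [120, 150, 180, 170, 220, 260]   # requests per period (toy data)
    print('provision', predict_instances(history), 'instances')
```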

  12. An Expert Fitness Diagnosis System Based on Elastic Cloud Computing

    Science.gov (United States)

    Tseng, Kevin C.; Wu, Chia-Chuan

    2014-01-01

    This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to capture tightly the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service. PMID:24723842

  13. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  14. Computational Design and Experimental Validation of New Thermal Barrier Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  16. Evaluation of computer-aided detection and diagnosis systems.

    Science.gov (United States)

    Petrick, Nicholas; Sahiner, Berkman; Armato, Samuel G; Bert, Alberto; Correale, Loredana; Delsanto, Silvia; Freedman, Matthew T; Fryd, David; Gur, David; Hadjiiski, Lubomir; Huo, Zhimin; Jiang, Yulei; Morra, Lia; Paquerault, Sophie; Raykar, Vikas; Samuelson, Frank; Summers, Ronald M; Tourassi, Georgia; Yoshida, Hiroyuki; Zheng, Bin; Zhou, Chuan; Chan, Heang-Ping

    2013-08-01

    Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for detection and interpretation of diseases. Computer-aided detection systems mark regions of an image that may reveal specific abnormalities and are used to alert clinicians to these regions during image interpretation. Computer-aided diagnosis systems provide an assessment of a disease using image-based information alone or in combination with other relevant diagnostic data and are used by clinicians as a decision support in developing their diagnoses. While CAD systems are commercially available, standardized approaches for evaluating and reporting their performance have not yet been fully formalized in the literature or in a standardization effort. This deficiency has led to difficulty in the comparison of CAD devices and in understanding how the reported performance might translate into clinical practice. To address these important issues, the American Association of Physicists in Medicine (AAPM) formed the Computer Aided Detection in Diagnostic Imaging Subcommittee (CADSC), in part, to develop recommendations on approaches for assessing CAD system performance. The purpose of this paper is to convey the opinions of the AAPM CADSC members and to stimulate the development of consensus approaches and "best practices" for evaluating CAD systems. Both the assessment of a standalone CAD system and the evaluation of the impact of CAD on end-users are discussed. It is hoped that awareness of these important evaluation elements and the CADSC recommendations will lead to further development of structured guidelines for CAD performance assessment. Proper assessment of CAD system performance is expected to increase the understanding of a CAD system's effectiveness and limitations, which is expected to stimulate further research and development efforts on CAD technologies, reduce problems due to improper use, and eventually improve the utility and efficacy of CAD in

  17. 3D measurement system based on computer-generated gratings

    Science.gov (United States)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

A new kind of 3D measurement system has been developed to obtain the 3D profile of complex objects. The principle of the measurement system is based on triangular measurement with digital fringe projection, and the fringes are fully generated by computer. Thus four computer-generated fringes form the data source for phase-shifting 3D profilometry. The hardware of the system includes the computer, video camera, projector, image grabber, and a VGA board with two ports (one port links to the screen, the other to the projector). The software of the system consists of a grating projection module, an image grabbing module, a phase reconstruction module, and a 3D display module. A software-based method for synchronizing grating projection and image capture is proposed. As for the nonlinear error of the captured fringes, a compensating method is introduced based on pixel-to-pixel gray correction. At the same time, least-squares phase unwrapping is used to solve the phase reconstruction problem, using the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as the weight. The system adopts an algorithm from the MATLAB toolbox for camera calibration. The 3D measurement system has an accuracy of 0.05 mm. The execution time of the system is 3-5 s for one measurement.
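
For the four computer-generated, phase-shifted fringes mentioned above, the wrapped phase is conventionally recovered with the four-step phase-shifting formula phi = arctan2(I4 - I2, I1 - I3); the sketch below shows that step only (fringe generation, nonlinearity correction, and unwrapping are omitted), and the image sizes are assumed.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shifting for fringes shifted by 0, pi/2, pi and 3*pi/2.

    Returns phi = arctan2(I4 - I2, I1 - I3), wrapped to (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)

if __name__ == '__main__':
    # Synthetic test: build four shifted fringe images and recover the phase.
    x = np.linspace(0, 4 * np.pi, 640)
    phi_true = np.tile(x, (480, 1))                     # assumed 480x640 frames
    shots = [np.cos(phi_true + k * np.pi / 2) for k in range(4)]
    phi = wrapped_phase(*shots)
    print(np.allclose(np.cos(phi), np.cos(phi_true)))   # phases agree modulo 2*pi
```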

  18. Supporting Privacy of Computations in Mobile Big Data Systems

    Directory of Open Access Journals (Sweden)

    Sriram Nandha Premnath

    2016-05-01

Full Text Available Cloud computing systems enable clients to rent and share computing resources of third-party platforms, and have gained widespread use in recent years. Numerous varieties of mobile, small-scale devices such as smartphones, e-health devices, etc., across users, are connected to one another through the massive internetwork of vastly powerful servers on the cloud. While mobile devices store "private information" of users such as location, payment, and health data, they may also contribute "semi-public information" (which may include crowdsourced data such as transit, traffic, nearby points of interest, etc.) for data analytics. In such a scenario, a mobile device may seek to obtain the result of a computation, which may depend on its private inputs, crowdsourced data from other mobile devices, and/or any "public inputs" from other servers on the Internet. We demonstrate a new method of delegating real-world computations of resource-constrained mobile clients using an encrypted program known as the garbled circuit. Using the garbled version of a mobile client's inputs, a server in the cloud executes the garbled circuit and returns the resulting garbled outputs. Our system assures privacy of the mobile client's input data and output of the computation, and also enables the client to verify that the evaluator actually performed the computation. We analyze the complexity of our system. We measure the time taken to construct the garbled circuit as well as to evaluate it for a varying number of servers. Using real-world data, we evaluate our system for a practical, privacy-preserving search application that locates the nearest point of interest for the mobile client to demonstrate feasibility.

  19. Automation of the CFD Process on Distributed Computing Systems

    Science.gov (United States)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational
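
Where no queueing software was available, the scripts fell back on a simple first-in-first-out queue; a minimal Python sketch of that idea is shown below (the original system used UNIX shell and Perl, and the job commands here are placeholders).

```python
import subprocess
import time
from collections import deque

class FifoJobQueue:
    """Minimal first-in-first-out job manager for hosts without batch queueing."""

    def __init__(self, max_running=1):
        self.pending = deque()
        self.running = []
        self.max_running = max_running

    def submit(self, command):
        self.pending.append(command)          # jobs start in arrival order

    def poll(self):
        """Reap finished jobs and launch pending ones up to max_running."""
        self.running = [p for p in self.running if p.poll() is None]
        while self.pending and len(self.running) < self.max_running:
            cmd = self.pending.popleft()
            self.running.append(subprocess.Popen(cmd, shell=True))

if __name__ == '__main__':
    q = FifoJobQueue(max_running=2)
    for case in ['case1', 'case2', 'case3']:
        q.submit(f'echo running flow solver on {case}')   # placeholder command
    while q.pending or q.running:
        q.poll()
        time.sleep(1)
```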

  20. Computation and brain processes, with special reference to neuroendocrine systems.

    Science.gov (United States)

    Toni, Roberto; Spaletta, Giulia; Casa, Claudia Della; Ravera, Simone; Sandri, Giorgio

    2007-01-01

The development of neural networks and brain automata has made neuroscientists aware that the performance limits of these brain-like devices lie, at least in part, in their computational power. The computational basis of a standard cybernetic design, in fact, refers to that of a discrete and finite state machine or Turing Machine (TM). In contrast, it has been suggested that a number of human cerebral activities, from feedback controls up to mental processes, rely on a mixing of both finitary, digital-like and infinitary, continuous-like procedures. Therefore, the central nervous system (CNS) of man would exploit a form of computation going beyond that of a TM. This "non-conventional" computation has been called hybrid computation. Some basic structures for hybrid brain computation are believed to be the brain computational maps, in which both Turing-like (digital) computation and continuous (analog) forms of calculus might occur. The cerebral cortex and brain stem appear to be primary candidates for this processing. However, neuroendocrine structures like the hypothalamus are also believed to exhibit hybrid computational processes, and might give rise to computational maps. Current theories on neural activity, including wiring and volume transmission, neuronal group selection and dynamic evolving models of brain automata, lend support to the existence of natural hybrid computation, stressing a cooperation between discrete and continuous forms of communication in the CNS. In addition, the recent advent of neuromorphic chips, like those to restore activity in damaged retina and visual cortex, suggests that assumption of a discrete-continuum polarity in designing biocompatible neural circuitries is crucial for their ensuing performance. In these bionic structures, in fact, a correspondence exists between the original anatomical architecture and synthetic wiring of the chip, resulting in a correspondence between natural and cybernetic neural activity. Thus, chip "form

  1. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)

    B P Kochurov; A P Knyazev; A Yu Kwaretzkheli

    2007-02-01

    Some numerical methods for reactor cell, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady states and space–time calculations. Computer code TRIFON solves space-energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN solving the 3D heterogeneous reactor equation for steady states and 3D space–time neutron processes simulation. Modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of SHERHAN code for the system with external sources is under development.

  2. Computing Architecture of the ALICE Detector Control System

    CERN Document Server

    Augustinus, A; Moreno, A; Kurepin, A N; De Cataldo, G; Pinazza, O; Rosinský, P; Lechman, M; Jirdén, L S

    2011-01-01

    The ALICE Detector Control System (DCS) is based on a commercial SCADA product, running on a large Windows computer cluster. It communicates with about 1200 network attached devices to assure safe and stable operation of the experiment. In the presentation we focus on the design of the ALICE DCS computer systems. We describe the management of data flow, mechanisms for handling the large data amounts and information exchange with external systems. One of the key operational requirements is an intuitive, error proof and robust user interface allowing for simple operation of the experiment. At the same time the typical operator task, like trending or routine checks of the devices, must be decoupled from the automated operation in order to prevent overload of critical parts of the system. All these requirements must be implemented in an environment with strict security requirements. In the presentation we explain how these demands affected the architecture of the ALICE DCS.

  3. Intelligent computer systems in engineering design principles and applications

    CERN Document Server

    Sunnersjo, Staffan

    2016-01-01

    This introductory book discusses how to plan and build useful, reliable, maintainable and cost efficient computer systems for automated engineering design. The book takes a user perspective and seeks to bridge the gap between texts on principles of computer science and the user manuals for commercial design automation software. The approach taken is top-down, following the path from definition of the design task and clarification of the relevant design knowledge to the development of an operational system well adapted for its purpose. This introductory text for the practicing engineer working in industry covers most vital aspects of planning such a system. Experiences from applications of automated design systems in practice are reviewed based on a large number of real, industrial cases. The principles behind the most popular methods in design automation are presented with sufficient rigour to give the user confidence in applying them on real industrial problems. This book is also suited for a half semester c...

  4. High performance computing for classic gravitational N-body systems

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2009-01-01

The role of gravity is crucial in astrophysics. It determines the evolution of any system, over an enormous range of time and space scales. Astronomical stellar systems, composed of N interacting bodies, are examples of self-gravitating systems, usually treatable with the aid of Newtonian gravity except in particular cases. In this note I briefly discuss some of the open problems in the dynamical study of classic self-gravitating N-body systems, over the astronomical range of N. I also point out how modern research in this field necessarily requires heavy use of large-scale computations, due to the simultaneous requirements of high precision and high computational speed.
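
The computational cost at the heart of such studies comes from the direct pairwise force sum, which scales as O(N^2); a minimal NumPy sketch of one acceleration evaluation is given below (the softening length and the unit system are illustrative assumptions).

```python
import numpy as np

def accelerations(positions, masses, G=1.0, softening=1e-3):
    """Direct-summation gravitational accelerations, O(N^2) in the particle number."""
    # Pairwise separation vectors r_ij = x_j - x_i, shape (N, N, 3).
    dx = positions[np.newaxis, :, :] - positions[:, np.newaxis, :]
    r2 = np.sum(dx * dx, axis=-1) + softening**2
    np.fill_diagonal(r2, np.inf)                     # exclude self-interaction
    inv_r3 = r2 ** (-1.5)
    weights = masses[np.newaxis, :, np.newaxis] * inv_r3[:, :, np.newaxis]
    return G * np.sum(dx * weights, axis=1)          # a_i = G * sum_j m_j r_ij / |r_ij|^3

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    pos = rng.standard_normal((1000, 3))
    m = np.ones(1000)
    print(accelerations(pos, m).shape)               # (1000, 3)
```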

  5. Computer simulation of confined and flexoelectric liquid crystalline systems

    CERN Document Server

    Barmes, F

    2003-01-01

In this Thesis, confined and flexoelectric liquid crystal systems have been studied using molecular computer simulations. The aim of this work was to provide a molecular model of a bistable display cell in which switching is induced through the application of directional electric field pulses. In the first part of this Thesis, the study of confined systems of liquid crystalline particles is addressed. Computation of the anchoring phase diagrams for three different surface interaction models showed that the hard needle wall and rod-surface potentials induce both planar and homeotropic alignment separated by a bistability region, this being stronger and wider for the rod-surface variant. The results obtained using the rod-sphere surface model, in contrast, showed that tilted surface arrangements can be induced by surface absorption mechanisms. Equivalent studies of hybrid anchored systems showed that a bend director structure can be obtained in a slab with monostable homeotropic anchoring at the...

  6. Dynamics of number systems computation with arbitrary precision

    CERN Document Server

    Kurka, Petr

    2016-01-01

    This book is a source of valuable and useful information on the topics of dynamics of number systems and scientific computation with arbitrary precision. It is addressed to scholars, scientists and engineers, and graduate students. The treatment is elementary and self-contained with relevance both for theory and applications. The basic prerequisite of the book is linear algebra and matrix calculus. .

  7. A computer-based registration system for geological collections

    NARCIS (Netherlands)

    Germeraad, J.H.; Freudenthal, M.; Boogaard, van den M.; Arps, C.E.S.

    1972-01-01

    The new computer-based registration system, a project of the National Museum of Geology and Mineralogy in the Netherlands, will considerably increase the accessibility of the Museum collection. This greater access is realized by computerisation of the data in great detail, so that an almost unlimite

  8. An Evaluation Methodology for Computer Mediated Teletraining Systems.

    Science.gov (United States)

    Sandoz-Guermond, Francoise; Beuchot, Gerard

    This paper proposes a contextual evaluation method for computer-mediated teletraining systems. The proposed methods include: (1) quantification of results of the work on a generally recognized scale, in order to evaluate performance reached by the trainees following the achievement of the training task; (2) analysis of communication types used…

  9. Chandrasekhar equations and computational algorithms for distributed parameter systems

    Science.gov (United States)

    Burns, J. A.; Ito, K.; Powers, R. K.

    1984-01-01

    The Chandrasekhar equations arising in optimal control problems for linear distributed parameter systems are considered. The equations are derived via approximation theory. This approach is used to obtain existence, uniqueness, and strong differentiability of the solutions and provides the basis for a convergent computation scheme for approximating feedback gain operators. A numerical example is presented to illustrate these ideas.

  10. Processor Management in the Tera MTA Computer System,

    Science.gov (United States)

    1993-01-01

    This paper describes the processor scheduling issues specific to the Tera MTA (Multi Threaded Architecture) computer system and presents solutions to...classic scheduling problems. The Tera MTA exploits parallelism at all levels, from fine-grained instruction-level parallelism within a single

  11. Software metrics for green parallel computing of big data systems

    NARCIS (Netherlands)

    Gurbuz, Havva Gulay; Tekinerdogan, Bedir

    2016-01-01

    Big Data is typically organized around a distributed file system on top of which the parallel algorithms can be executed for realizing the Big Data analytics. In general, the parallel algorithms can be mapped in different alternative ways to the computing platform. Hereby each alternative will

  12. The Influence of Computer-Mediated Communication Systems on Community

    Science.gov (United States)

    Rockinson-Szapkiw, Amanda J.

    2012-01-01

    As higher education institutions enter the intense competition of the rapidly growing global marketplace of online education, the leaders within these institutions are challenged to identify factors critical for developing and for maintaining effective online courses. Computer-mediated communication (CMC) systems are considered critical to…

  13. Computer decision support system for the stomach cancer diagnosis

    Science.gov (United States)

    Polyakov, E. V.; Sukhova, O. G.; Korenevskaya, P. Y.; Ovcharova, V. S.; Kudryavtseva, I. O.; Vlasova, S. V.; Grebennikova, O. P.; Burov, D. A.; Yemelyanova, G. S.; Selchuk, V. Y.

    2017-01-01

The paper considers the creation of a computer knowledge base containing data from histological, cytologic, and clinical studies. The system is focused on improving the quality of diagnostics of stomach cancer, one of the most frequent causes of death among oncologic patients.

  14. Client Anticipations about Computer-Assisted Career Guidance System Outcomes.

    Science.gov (United States)

    Osborn, Debra S.; Peterson, Gary W.; Sampson, James P., Jr.; Reardon, Robert C.

    2003-01-01

    This study describes how 55 clients from a career center at a large, southeastern university anticipated using computer-assisted career guidance (CACG) systems to help in their career decision making and problem solving. Responses to a cued and a free response survey indicated that clients' most frequent anticipations included increased career…

  15. Computational Structures Technology for Airframes and Propulsion Systems

    Science.gov (United States)

    Noor, Ahmed K. (Compiler); Housner, Jerrold M. (Compiler); Starnes, James H., Jr. (Compiler); Hopkins, Dale A. (Compiler); Chamis, Christos C. (Compiler)

    1992-01-01

    This conference publication contains the presentations and discussions from the joint University of Virginia (UVA)/NASA Workshops. The presentations included NASA Headquarters perspectives on High Speed Civil Transport (HSCT), goals and objectives of the UVA Center for Computational Structures Technology (CST), NASA and Air Force CST activities, CST activities for airframes and propulsion systems in industry, and CST activities at Sandia National Laboratory.

  16. Computer Algebra Systems and Theorems on Real Roots of Polynomials

    Science.gov (United States)

    Aidoo, Anthony Y.; Manthey, Joseph L.; Ward, Kim Y.

    2010-01-01

    A computer algebra system is used to derive a theorem on the existence of roots of a quadratic equation on any bounded real interval. This is extended to a cubic polynomial. We discuss how students could be led to derive and prove these theorems. (Contains 1 figure.)
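
As a flavour of the kind of computer algebra exploration described, the following SymPy sketch checks symbolically which real roots of a quadratic lie in a bounded interval; the interval endpoints and the coefficients are arbitrary illustrative choices, not examples from the article.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def real_roots_in_interval(poly, lo, hi):
    """Return the real roots of poly (in x) that lie in the closed interval [lo, hi]."""
    roots = sp.solveset(sp.Eq(poly, 0), x, domain=sp.S.Reals)
    return [r for r in roots if bool(lo <= r) and bool(r <= hi)]

if __name__ == '__main__':
    # Example quadratic: x**2 - 3*x + 2 has roots 1 and 2; check the interval [0, 3/2].
    print(real_roots_in_interval(x**2 - 3*x + 2, 0, sp.Rational(3, 2)))   # prints [1]
```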

  17. Computer program aids dual reflector antenna system design

    Science.gov (United States)

    Firnett, P.; Gerritsen, R.; Jarvie, P.; Ludwig, A.

    1968-01-01

    Computer program aids in the design of maximum efficiency dual reflector antenna systems. It designs a shaped cassegrainian antenna which has nearly 100 percent efficiency, and accepts input parameters specifying an existing conventional antenna and produces as output the modifications necessary to conform to a shaped design.

  19. A Computer Aided System for Simulating Weld Metal Solidification Crack

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

A computer-aided system for simulating weld metal solidification cracking has been developed, with which a welding engineer can carry out weld solidification crack simulation on the basis of a commercial finite element analysis software package. Its main functions include calculating the heat generation of the moving arc, generating meshes, and calculating stress-strain distributions with the element rebirth technique.

  20. Melting line of Yukawa system by computer simulation

    NARCIS (Netherlands)

    Meijer, E.J.; Frenkel, D.

    1991-01-01

    We located the melting line of the Yukawa system by determining the free energy of both fluid and solid phases by computer simulations. At the high densities the fluid freezes into a body-centered-cubic (bcc) solid, whereas for low densities it freezes into a face-centered-cubic (fcc) solid. For bot

  1. Large Scale Development of Computer-Based Instructional Systems.

    Science.gov (United States)

    Olivier, William P.; Scott, G.F.

    The Individualization Project at the Ontario Institute for Studies in Education (OISE) was organized on a cooperative basis with a federal agency and several community colleges to move smoothly from R&D to a production mode of operation, and finally to emphasize dissemination of computer courseware and systems. The key to the successful…

  2. An Intelligent Computer-Based System for Sign Language Tutoring

    Science.gov (United States)

    Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy

    2012-01-01

    A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…

  3. Computer Vision Systems for Hardwood Logs and Lumber

    Science.gov (United States)

    Philip A. Araman; Tai-Hoon Cho; D. Zhu; R. Conners

    1991-01-01

    Computer vision systems being developed at Virginia Tech University with the support and cooperation from the U.S. Forest Service are presented. Researchers at Michigan State University, West Virginia University, and Mississippi State University are also members of the research team working on various parts of this research. Our goals are to help U.S. hardwood...

  4. Investing in Computer Technology: Criteria and Procedures for System Selection.

    Science.gov (United States)

    Hofstetter, Fred T.

    The criteria used by the University of Delaware in selecting the PLATO computer-based educational system are discussed in this document. Consideration was given to support for instructional strategies, requirements of the student learning station, features for instructors and authors of instructional materials, general operational characteristics,…

  5. Computer Algebra Systems: Permitted but Are They Used?

    Science.gov (United States)

    Pierce, Robyn; Bardini, Caroline

    2015-01-01

    Since the 1990s, computer algebra systems (CAS) have been available in Australia as hand-held devices designed for students with the expectation that they will be used in the mathematics classroom. The data discussed in this paper was collected as part of a pilot study that investigated first year university mathematics and statistics students'…

  6. FPGAs for next gen DAQ and Computing systems at CERN

    CERN Document Server

    CERN. Geneva

    2016-01-01

The need for FPGAs in DAQ is a given, but newer systems need to be designed to meet the substantial increase in data rate and the challenges it brings. FPGAs are also power-efficient computing devices, so the work also looks at accelerating HEP algorithms and integrating FPGAs with CPUs, taking advantage of programming models like OpenCL. Other explorations involved using OpenCL to model a DAQ system.

  7. Computational Methods for Predictive Simulation of Stochastic Turbulence Systems

    Science.gov (United States)

    2015-11-05

AFRL-AFOSR-VA-TR-2015-0363, "Computational Methods for Predictive Simulation of Stochastic Turbulence Systems", AFOSR grant FA9550-12-1-0191, William Layton and Catalin Trenchea, Department of Mathematics, University of Pittsburgh. Students supported during the grant: Nan Jian, graduate student, University of Pittsburgh (currently a postdoc at FSU); Sarah Khankan, graduate student, University of Pittsburgh.

  8. Real-Time Visualization System for Computational Offloading

    Science.gov (United States)

    2015-01-01

ARL-TN-0655, "Real-Time Visualization System for Computational Offloading", by Bryan Dawson and David L. Doria, January 2015. Only fragments of the report body are recoverable: dependencies are hard-coded into the visualization system, and a driver has access to all the visualization functionality present in the visualization pane.

  9. Characterization of a new computer-ready photon counting system

    Science.gov (United States)

    Andor, Gyorgy

    1998-08-01

A photon-counting system seems to be the best solution for extremely low optical power measurements. The Hamamatsu HC135 photon-counting module has a built-in high-voltage power supply, amplifier, discriminator, and microcontroller with an RS-232 serial output. It requires only a +5 V supply voltage and an IBM PC or compatible computer to run. The system is supplied with application software. This talk describes the testing of the device.
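
Reading such a module from the controlling PC amounts to polling its serial port; the hypothetical pySerial sketch below illustrates that, but the port name, baud rate, and the assumption that the module streams one count per line are placeholders, not the documented HC135 protocol.

```python
import time

import serial   # pySerial

# Port name and settings are placeholders for wherever the module is attached.
with serial.Serial('/dev/ttyS0', baudrate=9600, timeout=1.0) as port:
    counts = []
    t_end = time.time() + 10.0           # integrate for 10 seconds (arbitrary)
    while time.time() < t_end:
        line = port.readline().decode(errors='ignore').strip()
        if line.isdigit():               # assume one count value per line
            counts.append(int(line))
    print('mean count per readout:', sum(counts) / max(len(counts), 1))
```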

  10. Computer recognition of slag property diagrams in ternary systems

    Institute of Scientific and Technical Information of China (English)

    Jinxiong Lu; Li Wang; Jiongming Zhang; Xinhua Wang

    2004-01-01

In order to extract data from the slag property diagram of a ternary system automatically and accurately, a picture recognition and drawing software package has been developed in Visual Basic 6.0, based on the image coding principle of the computer system and the graphics programming methods of VB. This software can transform a ternary system isopleth diagram from bitmap format to a data file and establish a corresponding database, which can be used to rapidly retrieve a mass of data and make related thermodynamic or kinetic calculations. In addition, it can draw ternary system diagrams, plotting different kinds of property parameters in the same diagram.

  11. Computational Proteomics: High-throughput Analysis for Systems Biology

    Energy Technology Data Exchange (ETDEWEB)

    Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

    2007-01-03

    High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scales of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems level investigations are relying more and more on computational analyses, especially in the field of proteomics generating large-scale global data.

  12. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then, in 2012, an average of 200 job slots has been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure, where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  13. Maxima Bridge System: A software interface between Stata and the Maxima computer algebra system

    OpenAIRE

    2013-01-01

    Maxima is a free and open-source computer algebra system (CAS), namely, software that can perform symbolic computations such as solving equations, determining derivatives of functions, obtaining Taylor series, and manipulating algebraic expressions. In this presentation, I discuss the Maxima Bridge System (MBS), a collection of software that allows Stata to interface with Maxima to use it as an engine for symbolic computation, transfer data from Stata to Maxima, and retrieve results from Maxi...
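
    For readers unfamiliar with what a CAS engine provides, the sketch below reproduces the operations listed above (equation solving, differentiation, Taylor series) using Python's sympy package. It is only an illustration of the kind of symbolic computation involved; it is not part of the Maxima Bridge System or its Stata interface.

    # Illustration of CAS-style symbolic operations using sympy; not part of the
    # Maxima Bridge System itself, which couples Stata to the Maxima CAS.
    import sympy as sp

    x = sp.symbols("x")

    # Solve a quadratic equation symbolically.
    roots = sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x)      # [2, 3]

    # Differentiate an expression.
    deriv = sp.diff(sp.sin(x) * sp.exp(x), x)          # exp(x)*sin(x) + exp(x)*cos(x)

    # Taylor-expand around 0 up to (but not including) x**5.
    series = sp.series(sp.cos(x), x, 0, 5)             # 1 - x**2/2 + x**4/24 + O(x**5)

    print(roots, deriv, series, sep="\n")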

  14. Windtalking Computers: Frequency Normalization, Binary Coding Systems and Encryption

    CERN Document Server

    Zirkind, Givon

    2009-01-01

    The goal of this paper is to discuss the application of known techniques, knowledge and technology in a novel way to encrypt computer and non-computer data. To date, most computers use base 2, and most encryption systems use ciphering and/or an encryption algorithm to convert data into a secret message. The method of having the computer "speak another secret language", as used in human military secret communications, has never been imitated. The author presents the theory and several possible implementations of a method for computer secret communications analogous to human beings using a secret language or speaking multiple languages. The kind of encryption scheme proposed significantly increases the complexity of, and the effort needed for, decryption. As every methodology has its drawbacks, so too the data representation of the proposed system has its drawbacks: it is not as compact as base 2 would be. However, this is manageable and acceptable if the goal is very strong encryption: At least two methods and their ...
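
    To make the idea of a computer "speaking" a different number system concrete, the toy sketch below re-expresses a byte string as digits in an arbitrary base and back. It is only an illustration of non-base-2 coding; as the abstract notes, such a representation is less compact than base 2, and this snippet is not the encryption scheme proposed in the paper and offers no security by itself.

    # Toy re-encoding of data in a non-binary base, to make the "different number
    # system" idea concrete. NOT the paper's encryption scheme; no security by itself.
    def bytes_to_base(data: bytes, base: int = 3) -> list[int]:
        """Re-express a byte string as digits in an arbitrary base (most significant first)."""
        n = int.from_bytes(data, "big")
        digits = []
        while n:
            n, d = divmod(n, base)
            digits.append(d)
        return list(reversed(digits)) or [0]

    def base_to_bytes(digits: list[int], base: int, length: int) -> bytes:
        """Invert bytes_to_base, given the original byte length."""
        n = 0
        for d in digits:
            n = n * base + d
        return n.to_bytes(length, "big")

    if __name__ == "__main__":
        msg = b"attack at dawn"
        digits = bytes_to_base(msg, base=3)
        assert base_to_bytes(digits, 3, len(msg)) == msg
        print(digits)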

  15. DNA-enabled integrated molecular systems for computation and sensing.

    Science.gov (United States)

    LaBoda, Craig; Duschl, Heather; Dwyer, Chris L

    2014-06-17

    CONSPECTUS: Nucleic acids have become powerful building blocks for creating supramolecular nanostructures with a variety of new and interesting behaviors. The predictable and guided folding of DNA, inspired by nature, allows designs to manipulate molecular-scale processes unlike any other material system. Thus, DNA can be co-opted for engineered and purposeful ends. This Account details a small portion of what can be engineered using DNA within the context of computer architectures and systems. Over a decade of work at the intersection of DNA nanotechnology and computer system design has shown several key elements and properties of how to harness the massive parallelism created by DNA self-assembly. This work is presented, naturally, from the bottom-up beginning with early work on strand sequence design for deterministic, finite DNA nanostructure synthesis. The key features of DNA nanostructures are explored, including how the use of small DNA motifs assembled in a hierarchical manner enables full-addressability of the final nanostructure, an important property for building dense and complicated systems. A full computer system also requires devices that are compatible with DNA self-assembly and cooperate at a higher level as circuits patterned over many, many replicated units. Described here is some work in this area investigating nanowire and nanoparticle devices, as well as chromophore-based circuits called resonance energy transfer (RET) logic. The former is an example of a new way to bring traditional silicon transistor technology to the nanoscale, which is increasingly problematic with current fabrication methods. RET logic, on the other hand, introduces a framework for optical computing at the molecular level. This Account also highlights several architectural system studies that demonstrate that even with low-level devices that are inferior to their silicon counterparts and a substrate that harbors abundant defects, self-assembled systems can still

  16. Computing the optimal path in stochastic dynamical systems.

    Science.gov (United States)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-08-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.
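
    The method combines finite-time Lyapunov exponents (FTLEs), statistical selection criteria, and a Newton-based minimizer; the sketch below illustrates only the FTLE ingredient for a simple two-dimensional flow. The example vector field, integrator, and step sizes are illustrative choices, not the systems studied in the article. Requires numpy only.

    # Finite-time Lyapunov exponent for a toy 2D flow (illustrative ingredient only).
    import numpy as np

    def velocity(t, z):
        """Toy 2D vector field (a damped Duffing-like system), chosen for illustration."""
        x, y = z
        return np.array([y, x - x**3 - 0.1 * y])

    def flow_map(z0, T=5.0, dt=0.01):
        """Integrate z' = velocity(t, z) over time T with a fixed-step RK4 scheme."""
        z, t = np.array(z0, dtype=float), 0.0
        for _ in range(int(T / dt)):
            k1 = velocity(t, z)
            k2 = velocity(t + dt/2, z + dt/2 * k1)
            k3 = velocity(t + dt/2, z + dt/2 * k2)
            k4 = velocity(t + dt, z + dt * k3)
            z = z + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
            t += dt
        return z

    def ftle(z0, T=5.0, eps=1e-6):
        """FTLE from a central-difference approximation of the flow-map Jacobian."""
        J = np.empty((2, 2))
        for j in range(2):
            dz = np.zeros(2)
            dz[j] = eps
            J[:, j] = (flow_map(z0 + dz, T=T) - flow_map(z0 - dz, T=T)) / (2 * eps)
        # Largest eigenvalue of the Cauchy-Green tensor J^T J gives the FTLE.
        lam_max = np.linalg.eigvalsh(J.T @ J).max()
        return np.log(lam_max) / (2 * T)

    if __name__ == "__main__":
        print(ftle([0.5, 0.0]))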

  17. New computer system for the Japan Tier-2 center

    CERN Multimedia

    Hiroyuki Matsunaga

    2007-01-01

    The ICEPP (International Center for Elementary Particle Physics) of the University of Tokyo has been operating an LCG Tier-2 center dedicated to the ATLAS experiment and is about to switch over to the newly installed production system. The system will be of great help to the exciting physics analyses of the coming years. The new computer system includes brand-new blade servers, RAID disks, a tape library system and Ethernet switches. The blade server is the DELL PowerEdge 1955, which contains two Intel dual-core Xeon (WoodCrest) CPUs running at 3 GHz, and a total of 650 servers will be used as compute nodes. Each RAID array is configured as RAID-6 with 16 Serial ATA HDDs. The equipment, as well as the cooling system, is placed in a new, large computer room, and both are hooked up to UPS (uninterruptible power supply) units for stable operation. As a whole, the system has been built with a redundant configuration in a cost-effective way. The next major upgrade will take place in thre...
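
    As a back-of-the-envelope check on the storage layout, RAID-6 reserves the equivalent of two disks per array for parity, so a 16-disk set yields 14 disks' worth of usable space. The per-disk capacity in the snippet below is an assumption for illustration; the record does not state it.

    # Usable capacity of one RAID-6 set as described above (16 SATA HDDs).
    # The per-disk size is an assumed example value, not taken from the record.
    def raid6_usable_tb(n_disks: int, disk_tb: float) -> float:
        return (n_disks - 2) * disk_tb   # RAID-6 reserves two disks' worth of space for parity

    if __name__ == "__main__":
        print(raid6_usable_tb(16, 0.75))   # e.g. 16 x 750 GB drives -> 10.5 TB usable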

  18. Computing the optimal path in stochastic dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora [Department of Mathematical Sciences, Montclair State University, 1 Normal Avenue, Montclair, New Jersey 07043 (United States)

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  19. Assessing the efficiency of information protection systems in the computer systems and networks

    OpenAIRE

    Nachev, Atanas; Zhelezov, Stanimir

    2015-01-01

    The specific features of information protection systems in computer systems and networks require the development of non-trivial methods for their analysis and assessment. This paper presents attempts at solutions in this area.

  20. Model of Integration of Material Flow Control System with MES/ERP System via Cloud Computing

    National Research Council Canada - National Science Library

    Peter Peniak

    2014-01-01

    This article deals with a model of an application gateway for integrating a Material Flow Control System with ERP/MES systems that are provided through Cloud Computing and the Software as a Service delivery model...
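
    A generic application gateway of this kind reduces to translating local material-flow events into calls against the cloud service's API. The sketch below shows that pattern over HTTPS using Python's requests package; the endpoint URL, payload fields, and token are placeholders, not the interface described in the article.

    # Generic application-gateway sketch: forward a material-flow event to a
    # cloud-hosted MES/ERP endpoint. URL, payload fields, and token are placeholders,
    # not the article's interface. Requires the requests package.
    import requests

    CLOUD_MES_URL = "https://mes.example.com/api/material-flow/events"   # placeholder
    API_TOKEN = "replace-me"                                             # placeholder

    def forward_event(pallet_id: str, station: str, quantity: int) -> bool:
        """Translate a local material-flow event into the cloud service's format."""
        payload = {"palletId": pallet_id, "station": station, "quantity": quantity}
        resp = requests.post(
            CLOUD_MES_URL,
            json=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=5,
        )
        return resp.status_code == 200

    if __name__ == "__main__":
        print(forward_event("PAL-0042", "packing", 12))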