WorldWideScience

Sample records for memory computational systems

  1. Memory architectures for exaflop computing systems

    OpenAIRE

    Pavlović, Milan

    2016-01-01

Most computing systems are heavily dependent on their main memories, as their primary storage or as an intermediate cache for slower storage systems (HDDs). The capacity of memory systems, as well as their performance, has a direct impact on the overall computing capabilities of the system, and memory is also a major contributor to its initial and operating costs. Dynamic Random Access Memory (DRAM) technology has dominated the main memory landscape from its beginnings in the 1970s until today. ...

  2. Memory systems, computation, and the second law of thermodynamics

    International Nuclear Information System (INIS)

    Wolpert, D.H.

    1992-01-01

A memory is a physical system for transferring information from one moment in time to another, where that information concerns something external to the system itself. This paper argues on information-theoretic and statistical mechanical grounds that useful memories must be of one of two types, exemplified by memory in abstract computer programs and by memory in photographs. Photograph-type memories work by exploiting a collapse of state space flow to an attractor state. (This attractor state is the "initialized" state of the memory.) The central assumption of the theory of reversible computation tells us that any such collapsing must increase the entropy of the system. In concert with the second law, this establishes the logical necessity of the empirical observation that photograph-type memories are temporally asymmetric (they can tell us about the past but not about the future). Under the assumption that human memory is a photograph-type memory, this result also explains why we humans can remember only our past and not our future. In contrast to photograph-type memories, computer-type memories do not require any initialization, and therefore are not directly affected by the second law. As a result, computer memories can be of the future as easily as of the past, even if the program running on the computer is logically irreversible. This is entirely in accord with the well-known temporal reversibility of the process of computation. This paper ends by arguing that the asymmetry of the psychological arrow of time is a direct consequence of the asymmetry of human memory. With the rest of this paper, this explains, explicitly and rigorously, why the psychological and thermodynamic arrows of time are correlated with one another. 24 refs

  3. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  4. Program partitioning for NUMA multiprocessor computer systems. [Nonuniform memory access

    Energy Technology Data Exchange (ETDEWEB)

    Wolski, R.M.; Feo, J.T. (Lawrence Livermore National Lab., CA (United States))

    1993-11-01

Program partitioning and scheduling are essential steps in programming non-shared-memory computer systems. Partitioning is the separation of program operations into sequential tasks, and scheduling is the assignment of tasks to processors. To be effective, automatic methods require an accurate representation of the model of computation and the target architecture. Current partitioning methods assume today's most prevalent models -- macro dataflow and a homogeneous, two-level multicomputer system. In terms of communication channels, neither model represents well the emerging class of NUMA multiprocessor computer systems consisting of hierarchical read/write memories. Consequently, the partitions generated by extant methods do not execute well on these systems. In this paper, the authors extend the conventional graph representation of the macro-dataflow model to enable mapping heuristics to consider the complex communication options supported by NUMA architectures. They describe two such heuristics. Simulated execution times of program graphs show that the model and heuristics generate higher-quality program mappings than current methods for NUMA architectures.
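The scheduling half of the problem can be illustrated with a toy greedy heuristic. This is a sketch only, not the authors' NUMA-aware heuristics: each task in a topologically ordered graph is placed on the processor that minimizes its estimated finish time, charging a single fixed delay (a simplifying assumption) whenever a predecessor's result resides on another processor.

```python
# Illustrative greedy list-scheduling sketch (not the paper's algorithm):
# assign each task to the processor that minimizes its estimated finish
# time, counting a communication delay whenever a predecessor's result
# lives on a different processor.

def schedule(tasks, deps, cost, n_procs, comm_delay):
    """tasks: topologically ordered task names
    deps: task -> list of predecessor tasks
    cost: task -> compute time
    comm_delay: time to move a result between processors"""
    proc_free = [0.0] * n_procs          # when each processor becomes idle
    placed, finish = {}, {}              # task -> processor, task -> finish time
    for t in tasks:
        best = None
        for p in range(n_procs):
            # a task may start once the processor is free and all inputs arrived
            ready = proc_free[p]
            for d in deps.get(t, []):
                arrival = finish[d] + (comm_delay if placed[d] != p else 0.0)
                ready = max(ready, arrival)
            done = ready + cost[t]
            if best is None or done < best[0]:
                best = (done, p)
        done, p = best
        placed[t], finish[t] = p, done
        proc_free[p] = done
    return placed, finish

# A diamond-shaped graph: a feeds b and c, which both feed d.
placed, finish = schedule(
    tasks=["a", "b", "c", "d"],
    deps={"b": ["a"], "c": ["a"], "d": ["b", "c"]},
    cost={"a": 1.0, "b": 2.0, "c": 2.0, "d": 1.0},
    n_procs=2, comm_delay=0.5)
```

In this example, task c migrates to the second processor because paying the 0.5-unit communication delay is cheaper than waiting behind b; a NUMA-aware heuristic would replace the single comm_delay constant with costs derived from the memory hierarchy.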

  5. Programs for Testing Processor-in-Memory Computing Systems

    Science.gov (United States)

    Katz, Daniel S.

    2006-01-01

The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
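The NASA microbenchmarks themselves target POSIX pthreads in C; as a loose, hypothetical analogue of the kind of basic-functionality check such a series begins with, here is a sketch using Python's standard threading module: spawn several threads, have each increment a shared counter under a lock, and verify the expected total.

```python
# Hypothetical analogue of a basic-functionality multithreaded test
# (sketched with Python's threading module rather than POSIX pthreads):
# N threads each add to a shared counter under a lock, and the final
# value is checked against the expected total.
import threading

def run_counter_test(n_threads=8, increments=1000):
    counter = {"value": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(increments):
            with lock:                 # protect the shared counter
                counter["value"] += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # True when threading primitives behave correctly
    return counter["value"] == n_threads * increments

print(run_counter_test())  # expected: True
```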

  6. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime. The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de ...

  7. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques ...

  8. Memory intensive functional architecture for distributed computer control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1983-10-01

A memory-intensive functional architecture for distributed data-acquisition, monitoring, and control systems with large numbers of nodes has been conceptually developed and applied in several large-scale and some smaller systems. This discussion concentrates on: (1) the basic architecture; (2) recent expansions of the architecture which have now become feasible in view of the rapidly developing component technologies in microprocessors and functional large-scale integration circuits; and (3) implementation of some key hardware and software structures and one system implementation: a system for performing control and data acquisition for a neutron spectrometer at the Brookhaven High Flux Beam Reactor. The spectrometer is equipped with a large-area position-sensitive neutron detector.

  9. Single-Chip Computers With Microelectromechanical Systems-Based Magnetic Memory

    NARCIS (Netherlands)

    Carley, L. Richard; Bain, James A.; Fedder, Gary K.; Greve, David W.; Guillou, David F.; Lu, Michael S.C.; Mukherjee, Tamal; Santhanam, Suresh; Abelmann, Leon; Min, Seungook

This article describes an approach for implementing a complete computer system (CPU, RAM, I/O, and nonvolatile mass memory) on a single integrated-circuit substrate (a chip)—hence, the name "single-chip computer." The approach presented combines advances in the field of microelectromechanical ...

  10. Memory management and compiler support for rapid recovery from failures in computer systems

    Science.gov (United States)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  11. System of common usage on the base of external memory devices and the SM-3 computer

    International Nuclear Information System (INIS)

    Baluka, G.; Vasin, A.Yu.; Ermakov, V.A.; Zhukov, G.P.; Zimin, G.N.; Namsraj, Yu.; Ostrovnoj, A.I.; Savvateev, A.S.; Salamatin, I.M.; Yanovskij, G.Ya.

    1980-01-01

An easily modified system of common usage, based on external memory devices and an SM-3 minicomputer, that replaces some pulse analysers is described. The system has the merits of pulse analysers and is more advantageous with regard to effective use of equipment, the possibility of changing configuration and functions, protection of data against losses due to user errors and some failures, cost per registration channel, and space occupied. The system of common usage is intended for the IBR-2 pulsed reactor computing centre. It is designed using the SANPO system means for the SM-3 computer.

  12. Self-Testing Computer Memory

    Science.gov (United States)

    Chau, Savio, N.; Rennels, David A.

    1988-01-01

Memory system for computer repeatedly tests itself during brief, regular interruptions of normal processing of data. Detects and corrects transient faults such as single-event upsets (changes in bits due to ionizing radiation) within milliseconds of their occurring. Self-testing concept surpasses conventional approaches by actively flushing latent defects out of memory and attempting to correct them before they accumulate beyond the capacity for self-correction or detection. Cost of improvement is a modest increase in complexity of circuitry and in operating time.
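The record gives no circuit details, but the scrubbing idea can be sketched in software. The scheme below is illustrative only (triple redundancy with a majority vote; the actual system may use a different error-correcting code): each word is stored three times, and a periodic scrub votes the copies and rewrites them so a single upset is flushed before a second can accumulate in the same word.

```python
# Software sketch of periodic memory scrubbing (illustrative scheme, not
# the flight hardware design): store each word three times, and on each
# scrub read all copies, take a 2-of-3 majority vote, and rewrite the
# copies so a single-event upset is corrected before faults accumulate.

class ScrubbedMemory:
    def __init__(self, size):
        self.copies = [[0] * size for _ in range(3)]  # triple redundancy

    def write(self, addr, value):
        for copy in self.copies:
            copy[addr] = value

    def scrub(self, addr):
        votes = [copy[addr] for copy in self.copies]
        majority = max(set(votes), key=votes.count)   # 2-of-3 vote
        for copy in self.copies:
            copy[addr] = majority                     # flush the latent fault
        return majority

mem = ScrubbedMemory(16)
mem.write(3, 0b1010)
mem.copies[1][3] ^= 0b0100     # simulate a single-event upset in one copy
print(mem.scrub(3) == 0b1010)  # the vote restores the original word: True
```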

  13. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of limited hardware resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper working. This is a very broad problem that poses many challenges in areas such as finance, transport, water and food supply, and health. We focus on computer systems, with attention paid to cache memory, and propose an analytical model that is able to connect non-extensive entropy formalism, long-range dependencies, management of system resources, and queuing theory. The analytical results obtained are related to a practical experiment, showing interesting and valuable results.
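The paper's modified model is not reproduced in this abstract; for orientation only, the classical (unmodified) stretched exponential that such models extend is the Kohlrausch form

```latex
F(t) = 1 - e^{-(t/\tau)^{\beta}}, \qquad 0 < \beta \le 1,
```

where $\tau$ is a characteristic time scale and $\beta < 1$ yields the heavy-tailed relaxation associated with long-range dependence ($\beta = 1$ recovers the ordinary exponential).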

  14. Data systems and computer science space data systems: Onboard memory and storage

    Science.gov (United States)

    Shull, Tom

    1991-01-01

    The topics are presented in viewgraph form and include the following: technical objectives; technology challenges; state-of-the-art assessment; mass storage comparison; SODR drive and system concepts; program description; vertical Bloch line (VBL) device concept; relationship to external programs; and backup charts for memory and storage.

  15. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  16. Computer technologies and institutional memory

    Science.gov (United States)

    Bell, Christopher; Lachman, Roy

    1989-01-01

NASA programs for manned space flight are in their 27th year. Scientists and engineers who worked continuously on the development of aerospace technology during that period are approaching retirement. The resulting loss to the organization will be considerable. Although this problem is general to the NASA community, it was explored in terms of the institutional memory and technical expertise of a single individual in the Man-Systems division. The main domain of the expert was spacecraft lighting, which became the subject area for analysis in these studies. The report starts with an analysis of the cumulative expertise and institutional memory of technical employees of organizations such as NASA. A set of solutions to this problem is examined and found inadequate. Two solutions were investigated at length: hypertext and expert systems. Illustrative examples are provided of hypertext and expert-system representations of spacecraft lighting. These computer technologies can be used to ameliorate the problem of the loss of invaluable personnel.

  17. The Science of Computing: Virtual Memory

    Science.gov (United States)

    Denning, Peter J.

    1986-01-01

In the March-April issue, I described how a computer's storage system is organized as a hierarchy consisting of cache, main memory, and secondary memory (e.g., disk). The cache and main memory form a subsystem that functions like main memory but attains speeds approaching cache. What happens if a program and its data are too large for the main memory? This is not a frivolous question. Every generation of computer users has been frustrated by insufficient memory. A new line of computers may have sufficient storage for the computations of its predecessor, but new programs will soon exhaust its capacity. In 1960, a long-range planning committee at MIT dared to dream of a computer with 1 million words of main memory. In 1985, the Cray-2 was delivered with 256 million words. Computational physicists dream of computers with 1 billion words. Computer architects have done an outstanding job of enlarging main memories, yet they have never kept up with demand. Only the shortsighted believe they can.
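The column describes the virtual memory mechanism in prose; as a minimal illustration of demand paging (with LRU replacement chosen here purely for the sketch, not prescribed by the column), a simulation can count faults as a program's pages compete for a small main memory:

```python
# Minimal demand-paging simulation with an LRU replacement policy
# (illustrative choice of policy). When a referenced page is absent from
# the small main memory, a fault is counted and the least recently used
# resident page is evicted to make room.
from collections import OrderedDict

def simulate(references, frames):
    resident = OrderedDict()     # page -> None, kept in LRU order (oldest first)
    faults = 0
    for page in references:
        if page in resident:
            resident.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1                       # page fault: bring the page in
            if len(resident) >= frames:
                resident.popitem(last=False)  # evict the least recently used
            resident[page] = None
    return faults

# A program touching 4 distinct pages with only 3 frames of main memory:
print(simulate([1, 2, 3, 1, 4, 2], frames=3))  # 5 faults
```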

  18. Dynamic computing random access memory

    International Nuclear Information System (INIS)

    Traversa, F L; Bonani, F; Pershin, Y V; Di Ventra, M

    2014-01-01

    The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200–2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology. (paper)

  19. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, a general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten ...

  20. Distributed-memory matrix computations

    DEFF Research Database (Denmark)

    Balle, Susanne Mølleskov

    1995-01-01

... in these algorithms is that many scientific applications rely heavily on the performance of the dense linear algebra building blocks involved. Even though we consider the distributed-memory as well as the shared-memory programming paradigm, the major part of the thesis is dedicated to distributed-memory architectures. ... We emphasize distributed-memory massively parallel computers, such as the Connection Machine models CM-200 and CM-5/CM-5E, available to us at UNI-C and at Thinking Machines Corporation. The CM-200 was, at the time this project started, one of the few existing massively parallel computers. ... algorithm is investigated. This algorithm is built on top of several scan operations. What difficulties occur when implementing this algorithm on massively parallel computers? ...

  1. Paging memory from random access memory to backing storage in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.

  2. The MUSOS (MUsic SOftware System) Toolkit: A computer-based, open source application for testing memory for melodies.

    Science.gov (United States)

    Rainsford, M; Palmer, M A; Paine, G

    2018-04-01

    Despite numerous innovative studies, rates of replication in the field of music psychology are extremely low (Frieler et al., 2013). Two key methodological challenges affecting researchers wishing to administer and reproduce studies in music cognition are the difficulty of measuring musical responses, particularly when conducting free-recall studies, and access to a reliable set of novel stimuli unrestricted by copyright or licensing issues. In this article, we propose a solution for these challenges in computer-based administration. We present a computer-based application for testing memory for melodies. Created using the software Max/MSP (Cycling '74, 2014a), the MUSOS (Music Software System) Toolkit uses a simple modular framework configurable for testing common paradigms such as recall, old-new recognition, and stem completion. The program is accompanied by a stimulus set of 156 novel, copyright-free melodies, in audio and Max/MSP file formats. Two pilot tests were conducted to establish the properties of the accompanying stimulus set that are relevant to music cognition and general memory research. By using this software, a researcher without specialist musical training may administer and accurately measure responses from common paradigms used in the study of memory for music.

  3. Memory interface simulator: A computer design aid

    Science.gov (United States)

    Taylor, D. S.; Williams, T.; Weatherbee, J. E.

    1972-01-01

    Results are presented of a study conducted with a digital simulation model being used in the design of the Automatically Reconfigurable Modular Multiprocessor System (ARMMS), a candidate computer system for future manned and unmanned space missions. The model simulates the activity involved as instructions are fetched from random access memory for execution in one of the system central processing units. A series of model runs measured instruction execution time under various assumptions pertaining to the CPU's and the interface between the CPU's and RAM. Design tradeoffs are presented in the following areas: Bus widths, CPU microprogram read only memory cycle time, multiple instruction fetch, and instruction mix.

  4. Resistive content addressable memory based in-memory computation architecture

    KAUST Repository

    Salama, Khaled N.

    2016-12-08

Various examples are provided related to resistive content addressable memory (RCAM) based in-memory computation architectures. In one example, a system includes a content addressable memory (CAM) including an array of cells having a memristor-based crossbar and an interconnection switch matrix having a gateless memristor array, which is coupled to an output of the CAM. In another example, a method includes comparing activated bit values stored in a key register with corresponding bit values in a row of a CAM, setting a tag bit value to indicate that the activated bit values match the corresponding bit values, and writing masked key bit values to corresponding bit locations in the row of the CAM based on the tag bit value.
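The patent's memristor crossbar realizes the CAM search in hardware; behaviorally, the key/mask comparison and per-row tag bits it describes can be sketched in a few lines (the function and variable names below are illustrative, not from the patent):

```python
# Behavioral sketch of a content addressable memory search: every stored
# row is compared against the key in parallel; a mask selects which bit
# positions participate, and each matching row raises its tag bit.

def cam_search(rows, key, mask):
    """rows: list of integers (stored words); key: search word;
    mask: 1-bits mark the positions that must match."""
    return [((row ^ key) & mask) == 0 for row in rows]  # one tag bit per row

rows = [0b1010, 0b1011, 0b0110]
tags = cam_search(rows, key=0b1010, mask=0b1110)  # ignore the lowest bit
print(tags)  # rows 0 and 1 match on the three high bits: [True, True, False]
```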

  5. Optical interconnection network for parallel access to multi-rank memory in future computing systems.

    Science.gov (United States)

    Wang, Kang; Gu, Huaxi; Yang, Yintang; Wang, Kun

    2015-08-10

    With the number of cores increasing, there is an emerging need for a high-bandwidth low-latency interconnection network, serving core-to-memory communication. In this paper, aiming at the goal of simultaneous access to multi-rank memory, we propose an optical interconnection network for core-to-memory communication. In the proposed network, the wavelength usage is delicately arranged so that cores can communicate with different ranks at the same time and broadcast for flow control can be achieved. A distributed memory controller architecture that works in a pipeline mode is also designed for efficient optical communication and transaction address processes. The scaling method and wavelength assignment for the proposed network are investigated. Compared with traditional electronic bus-based core-to-memory communication, the simulation results based on the PARSEC benchmark show that the bandwidth enhancement and latency reduction are apparent.

  6. Associative Memory Computing Power and Its Simulation

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130,000 pre-calculated patterns and large numbers of chips can easily be assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed on commercial computers based on normal CPUs. The algorithm performance is limited due to the lack of parallelism, and in addition the memory requirement is very large. In fact, the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...

  7. Associative Memory computing power and its simulation

    CERN Document Server

    Ancu, L S; The ATLAS collaboration; Britzger, D; Giannetti, P; Howarth, J W; Luongo, C; Pandini, C; Schmitt, S; Volpi, G

    2014-01-01

The associative memory (AM) system is a computing device made of hundreds of AM ASIC chips designed to perform “pattern matching” at very high speed. Since each AM chip stores a database of 130,000 pre-calculated patterns and large numbers of chips can easily be assembled together, it is possible to produce huge AM banks. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS Fast TracKer (FTK) Processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 microseconds. The simulation of such a parallelized system is an extremely complex task if executed on commercial computers based on normal CPUs. The algorithm performance is limited due to the lack of parallelism, and in addition the memory requirement is very large. In fact, the AM chip uses a content addressable memory (CAM) architecture. Any data inquiry is broadcast to all memory elements simultaneously, thus data retrieval time is independent of the database size. The gr...

  8. Quantum computation by measurement and quantum memory

    International Nuclear Information System (INIS)

    Nielsen, Michael A.

    2003-01-01

What resources are universal for quantum computation? In the standard model of a quantum computer, a computation consists of a sequence of unitary gates acting coherently on the qubits making up the computer. This requirement for coherent unitary dynamical operations is widely believed to be the critical element of quantum computation. Here we show that a very different model involving only projective measurements and quantum memory is also universal for quantum computation. In particular, no coherent unitary dynamics are involved in the computation.

  9. The computational nature of memory modification.

    Science.gov (United States)

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-03-15

    Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature.

  10. Reliability of computer memories in radiation environment

    Directory of Open Access Journals (Sweden)

    Fetahović Irfan S.

    2016-01-01

The aim of this paper is to examine the radiation hardness of magnetic (Toshiba MK4007 GAL) and semiconductor (AT 27C010 EPROM and AT 28C010 EEPROM) computer memories. The magnetic memories were examined in a neutron radiation field, and the semiconductor memories in a gamma radiation field. The obtained results show a high radiation hardness of magnetic memories. On the other hand, semiconductor memories were shown to be significantly more sensitive, and radiation can lead to significant damage to their functionality. [Project of the Ministry of Science of the Republic of Serbia, No. 171007]

  11. Computer systems

    Science.gov (United States)

    Olsen, Lola

    1992-01-01

    In addition to the discussions, Ocean Climate Data Workshop hosts gave participants an opportunity to hear about, see, and test for themselves some of the latest computer tools now available for those studying climate change and the oceans. Six speakers described computer systems and their functions. The introductory talks were followed by demonstrations to small groups of participants and some opportunities for participants to get hands-on experience. After this familiarization period, attendees were invited to return during the course of the Workshop and have one-on-one discussions and further hands-on experience with these systems. Brief summaries or abstracts of introductory presentations are addressed.

  12. Parallel structures in human and computer memory

    Science.gov (United States)

    Kanerva, Pentti

    1986-08-01

    If we think of our experiences as being recorded continuously on film, then human memory can be compared to a film library that is indexed by the contents of the film strips stored in it. Moreover, approximate retrieval cues suffice to retrieve information stored in this library: We recognize a familiar person in a fuzzy photograph or a familiar tune played on a strange instrument. This paper is about how to construct a computer memory that would allow a computer to recognize patterns and to recall sequences the way humans do. Such a memory is remarkably similar in structure to a conventional computer memory and also to the neural circuits in the cortex of the cerebellum of the human brain. The paper concludes that the frame problem of artificial intelligence could be solved by the use of such a memory if we were able to encode information about the world properly.
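The memory Kanerva describes (sparse distributed memory) can be sketched at toy scale. The parameters below are illustrative and far smaller than the construction in the paper: addresses are binary vectors; a write updates bit counters at every randomly chosen "hard location" within a Hamming radius of the address, and a read sums those counters and thresholds them, so an approximate cue still retrieves the stored word.

```python
# Toy sketch of a Kanerva-style sparse distributed memory (illustrative
# parameters). Writes increment/decrement bit counters at all hard
# locations within a Hamming radius of the address; reads sum the
# counters over the active locations and threshold them.
import random

N, LOCATIONS, RADIUS = 16, 200, 8

random.seed(1)
hard_addrs = [[random.randint(0, 1) for _ in range(N)] for _ in range(LOCATIONS)]
counters = [[0] * N for _ in range(LOCATIONS)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def active(addr):
    # hard locations close enough to the address participate
    return [i for i, h in enumerate(hard_addrs) if hamming(h, addr) <= RADIUS]

def write(addr, word):
    for i in active(addr):
        for j, bit in enumerate(word):
            counters[i][j] += 1 if bit else -1

def read(addr):
    sums = [0] * N
    for i in active(addr):
        for j in range(N):
            sums[j] += counters[i][j]
    return [1 if s > 0 else 0 for s in sums]

word = [random.randint(0, 1) for _ in range(N)]
addr = [random.randint(0, 1) for _ in range(N)]
write(addr, word)

noisy = list(addr)
for j in random.sample(range(N), 2):   # corrupt 2 bits of the retrieval cue
    noisy[j] ^= 1
print(read(noisy) == word)             # the approximate cue still recalls the word
```

The fuzzy-photograph analogy in the abstract corresponds to the noisy cue: because the active location sets of nearby addresses overlap, retrieval degrades gracefully rather than failing outright.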

  13. (U) Computation acceleration using dynamic memory

    Energy Technology Data Exchange (ETDEWEB)

    Hakel, Peter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-10-24

    Many computational applications require the repeated use of quantities, whose calculations can be expensive. In order to speed up the overall execution of the program, it is often advantageous to replace computation with extra memory usage. In this approach, computed values are stored and then, when they are needed again, they are quickly retrieved from memory rather than being calculated again at great cost. Sometimes, however, the precise amount of memory needed to store such a collection is not known in advance, and only emerges in the course of running the calculation. One problem accompanying such a situation is wasted memory space in overdimensioned (and possibly sparse) arrays. Another issue is the overhead of copying existing values to a new, larger memory space, if the original allocation turns out to be insufficient. In order to handle these runtime problems, the programmer therefore has the extra task of addressing them in the code.
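The compute-versus-store trade-off the report describes is classic memoization; a minimal sketch uses a dynamically growing dictionary, so the store's final size need not be known in advance (Python dicts resize themselves, sidestepping the overdimensioned-array and copy-on-grow problems the report mentions):

```python
# Memoization sketch: cache each expensive result the first time it is
# computed, then retrieve it from memory on every later request. The
# cache grows dynamically, so its final size need not be known up front.
cache = {}

def memoized(n):
    """Naive Fibonacci as a stand-in for an expensive calculation."""
    if n not in cache:             # compute once ...
        if n < 2:
            cache[n] = n
        else:
            cache[n] = memoized(n - 1) + memoized(n - 2)
    return cache[n]                # ... then retrieve from memory

print(memoized(30))   # 832040, using 31 cache entries instead of ~2.7M calls
```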

  14. Computational modelling of memory retention from synapse to behaviour

    International Nuclear Information System (INIS)

    Van Rossum, Mark C W; Shippi, Maria

    2013-01-01

    One of our most intriguing mental abilities is the capacity to store information and recall it from memory. Computational neuroscience has been influential in developing models and concepts of learning and memory. In this tutorial review we focus on the interplay between learning and forgetting. We discuss recent advances in the computational description of the learning and forgetting processes on synaptic, neuronal, and systems levels, as well as recent data that open up new challenges for statistical physicists. (paper)

  15. Memory Reconsolidation and Computational Learning

    Science.gov (United States)

    2010-03-01

    Siegelmann-Danieli and H.T. Siegelmann, "Robust Artificial Life Via Artificial Programmed Death," Artificial Intelligence 172(6-7), April 2008: 884-898. ... Memory models are central to Artificial Intelligence and Machine... beyond [1]. The advances cited are a significant step toward creating Artificial Intelligence via neural networks at the human level. Our network...

  16. Interfacing laboratory instruments to multiuser, virtual memory computers

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    Incentives, problems and solutions associated with interfacing laboratory equipment with multiuser, virtual memory computers are presented. The major difficulty concerns how to utilize these computers effectively in a medium sized research group. This entails optimization of hardware interconnections and software to facilitate multiple instrument control, data acquisition and processing. The architecture of the system that was devised, and associated programming and subroutines are described. An example program involving computer controlled hardware for ultrasonic scan imaging is provided to illustrate the operational features.

  17. Mapping Computation with No Memory

    Science.gov (United States)

    Burckel, Serge; Gioan, Emeric; Thomé, Emmanuel

    We investigate the computation of mappings from a set S^n to itself with in situ programs, that is, using no variables other than the input and performing modifications of one component at a time. We consider several types of mappings and obtain effective computation and decomposition methods, together with upper bounds on the program length (number of assignments). Our techniques are combinatorial and algebraic (graph coloring, partition ordering, modular arithmetic).
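As an illustration of the in-situ model just described (no variables beyond the input, one component modified per assignment), the classic three-assignment exchange of two integer components can be written without a temporary. The example is illustrative, not taken from the paper:

```python
def swap_in_situ(x):
    """An in-situ program for the mapping (a, b) -> (b, a):
    three assignments, each modifying one component of the input,
    with no extra variables (XOR trick; works for integers)."""
    x[0] = x[0] ^ x[1]
    x[1] = x[0] ^ x[1]
    x[2 - 2] = x[0] ^ x[1] if False else x[0] ^ x[1]  # same as x[0] = x[0] ^ x[1]
    x[0] = x[0]  # no-op to keep the three-assignment count explicit
    return x
```

A cleaner form of the same three-assignment program:

```python
def swap_in_situ(x):
    """(a, b) -> (b, a) in situ: three single-component assignments."""
    x[0] = x[0] ^ x[1]
    x[1] = x[0] ^ x[1]
    x[0] = x[0] ^ x[1]
    return x
```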

  18. Synthetic analog and digital circuits for cellular computation and memory.

    Science.gov (United States)

    Purcell, Oliver; Lu, Timothy K

    2014-10-01

    Biological computation is a major area of focus in synthetic biology because it has the potential to enable a wide range of applications. Synthetic biologists have applied engineering concepts to biological systems in order to construct progressively more complex gene circuits capable of processing information in living cells. Here, we review the current state of computational genetic circuits and describe artificial gene circuits that perform digital and analog computation. We then discuss recent progress in designing gene networks that exhibit memory, and how memory and computation have been integrated to yield more complex systems that can both process and record information. Finally, we suggest new directions for engineering biological circuits capable of computation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Know Your Personal Computer Memory Organization

    Indian Academy of Sciences (India)

    Know Your Personal Computer Memory Organization. Siddhartha Kumar Ghoshal. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 2, February 1997, pp. 25-33.

  20. Evaluating operating system vulnerability to memory errors.

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  1. Memory-Based Expert Systems

    Science.gov (United States)

    1992-12-01

    relevant cases quickly from a large memory, plus the requirement for an explicit theory of index content in the complex social domain where relevance often... (Author: Dr. Roger C. Schank) ...three problems: (1) the development of a robust memory-based parsing technology (Direct Memory Access Parsing, or DMP), (2) the development of case...

  2. Computing betweenness centrality in external memory

    DEFF Research Database (Denmark)

    Arge, Lars; Goodrich, Michael T.; Walderveen, Freek van

    2013-01-01

    Betweenness centrality is one of the most well-known measures of the importance of nodes in a social-network graph. In this paper we describe the first known external-memory and cache-oblivious algorithms for computing betweenness centrality. We present four different external-memory algorithms...... exhibiting various tradeoffs with respect to performance. Two of the algorithms are cache-oblivious. We describe general algorithms for networks with weighted and unweighted edges and a specialized algorithm for networks with small diameters, as is common in social networks exhibiting the “small worlds...

  3. Advanced topics in security computer system design

    International Nuclear Information System (INIS)

    Stachniak, D.E.; Lamb, W.R.

    1989-01-01

    The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features

  4. Optical computing, optical memory, and SBIRs at Foster-Miller

    Science.gov (United States)

    Domash, Lawrence H.

    1994-03-01

    A desktop design and manufacturing system for binary diffractive elements, MacBEEP, was developed with the optical researcher in mind. Optical processing systems for specialized tasks such as cellular automation computation and fractal measurement were constructed. A new family of switchable holograms has enabled several applications for control of laser beams in optical memories. New spatial light modulators and optical logic elements have been demonstrated based on a more manufacturable semiconductor technology. Novel synthetic and polymeric nonlinear materials for optical storage are under development in an integrated memory architecture. SBIR programs enable creative contributions from smaller companies, both product oriented and technology oriented, and support advances that might not otherwise be developed.

  5. Computer System Design System-on-Chip

    CERN Document Server

    Flynn, Michael J

    2011-01-01

    The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th

  6. Associative Memory computing power and its simulation.

    CERN Document Server

    Volpi, G; The ATLAS collaboration

    2014-01-01

    The associative memory (AM) chip is an ASIC device specifically designed to perform ``pattern matching'' at very high speed and with parallel access to memory locations. The most extensive use for such a device will be the ATLAS Fast Tracker (FTK) processor, where more than 8000 chips will be installed in 128 VME boards specifically designed for high throughput in order to exploit the chip's features. Each AM chip will store a database of about 130000 pre-calculated patterns, allowing FTK to use about 1 billion patterns for the whole system, with any data inquiry broadcast to all memory elements simultaneously within the same clock cycle (10 ns); data retrieval time is thus independent of the database size. Speed and size of the system are crucial for real-time High Energy Physics applications, such as the ATLAS FTK processor. Using 80 million channels of the ATLAS tracker, FTK finds tracks within 100 $\mathrm{\mu s}$. The simulation of such a parallelized system is an extremely complex task when executed in comm...
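Functionally, an AM chip behaves like a content-addressable store: a query is compared against every stored pattern at once. A sequential Python sketch of that behaviour (the pattern format and match rule here are illustrative assumptions, not the FTK specification):

```python
def am_match(patterns, query, wildcard=None):
    """Return the indices of stored patterns matching the query.
    In hardware every comparison happens within the same clock cycle;
    here they are sequential, which is one reason simulating the chip
    in software is so expensive."""
    hits = []
    for i, pat in enumerate(patterns):
        # A component matches if it equals the query value or is a wildcard.
        if all(p == wildcard or p == q for p, q in zip(pat, query)):
            hits.append(i)
    return hits

# Toy pattern bank: the second pattern has a "don't care" component.
bank = [(1, 2, 3), (1, None, 3), (7, 8, 9)]
```

For example, the query `(1, 2, 3)` matches both the exact pattern and the wildcarded one.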

  7. Noradrenergic System and Memory

    KAUST Repository

    Zenger, Manuel

    2017-07-22

    There is ample evidence indicating that noradrenaline plays an important role in memory mechanisms. Noradrenaline is thought to modulate these processes through activation of adrenergic receptors in neurons. Astrocytes, which form essential partners for synaptic function, also express alpha- and beta-adrenergic receptors. In astrocytes, noradrenaline triggers metabolic actions such as glycogenolysis, leading to an increase in l-lactate formation and release. l-Lactate can be used by neurons as a source of energy during memory tasks and can also induce transcription of plasticity genes in neurons. Activation of β-adrenergic receptors can also trigger gliotransmitter release resulting from intracellular calcium waves. These gliotransmitters modulate synaptic activity and can thereby modulate long-term potentiation mechanisms. In summary, recent evidence indicates that noradrenaline exerts its memory-promoting effects through different modes of action on both neurons and astrocytes.

  8. Working Memory Systems in the Rat.

    Science.gov (United States)

    Bratch, Alexander; Kann, Spencer; Cain, Joshua A; Wu, Jie-En; Rivera-Reyes, Nilda; Dalecki, Stefan; Arman, Diana; Dunn, Austin; Cooper, Shiloh; Corbin, Hannah E; Doyle, Amanda R; Pizzo, Matthew J; Smith, Alexandra E; Crystal, Jonathon D

    2016-02-08

    A fundamental feature of memory in humans is the ability to simultaneously work with multiple types of information using independent memory systems. Working memory is conceptualized as two independent memory systems under executive control [1, 2]. Although there is a long history of using the term "working memory" to describe short-term memory in animals, it is not known whether multiple, independent memory systems exist in nonhumans. Here, we used two established short-term memory approaches to test the hypothesis that spatial and olfactory memory operate as independent working memory resources in the rat. In the olfactory memory task, rats chose a novel odor from a gradually incrementing set of old odors [3]. In the spatial memory task, rats searched for a depleting food source at multiple locations [4]. We presented rats with information to hold in memory in one domain (e.g., olfactory) while adding a memory load in the other domain (e.g., spatial). Control conditions equated the retention interval delay without adding a second memory load. In a further experiment, we used proactive interference [5-7] in the spatial domain to compromise spatial memory and evaluated the impact of adding an olfactory memory load. Olfactory and spatial memory are resistant to interference from the addition of a memory load in the other domain. Our data suggest that olfactory and spatial memory draw on independent working memory systems in the rat. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
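The locality idea in the abstract above, keeping physically adjacent particles near each other in the array, can be sketched as a periodic sort by grid-cell index. The cell size and one-dimensional particle layout are illustrative assumptions:

```python
def sort_particles(positions, cell_size):
    """Reorder particles so that those in the same spatial cell occupy
    neighbouring array slots. This keeps the working set compact during
    charge accumulation and particle pushing, reducing random accesses
    to slow (paged-out) memory."""
    return sorted(positions, key=lambda x: int(x // cell_size))
```

In a real simulation the sort key would be the multi-dimensional cell index of each particle, and the sort would be repeated only every few timesteps, since particles drift between cells slowly.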

  10. Development scenarios for organizational memory information systems

    NARCIS (Netherlands)

    Wijnhoven, Alphonsus B.J.M.

    1999-01-01

    Well-managed organizational memories have been emphasized in the recent management literature as important sources of business success. Organizational memory information systems (OMIS) have been conceptualized as a framework for information technologies to support these organizational memories.

  11. Spin-transfer torque magnetoresistive random-access memory technologies for normally off computing (invited)

    International Nuclear Information System (INIS)

    Ando, K.; Yuasa, S.; Fujita, S.; Ito, J.; Yoda, H.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.

    2014-01-01

    Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer: normally off computers. Critical tasks to achieve normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and the remaining challenges for normally off computers are discussed.

  12. Spin-transfer torque magnetoresistive random-access memory technologies for normally off computing (invited)

    Energy Technology Data Exchange (ETDEWEB)

    Ando, K., E-mail: ando-koji@aist.go.jp; Yuasa, S. [National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8568 (Japan); Fujita, S.; Ito, J.; Yoda, H. [Toshiba Corporation, Kawasaki 212-8582 (Japan); Suzuki, Y. [Graduate School of Engineering Science, Osaka University, Toyonaka 560-8531 (Japan); Nakatani, Y. [Department of Communication Engineering and Informatics, University of Electro-Communication, Chofu 182-8585 (Japan); Miyazaki, T. [WPI-AIMR, Tohoku University, Sendai 980-8577 (Japan)

    2014-05-07

    Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer: normally off computers. Critical tasks to achieve normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and the remaining challenges for normally off computers are discussed.

  13. Amorphous Semiconductors: From Photocatalyst to Computer Memory

    Science.gov (United States)

    Sundararajan, Mayur

    encouraging but inconclusive. Then the method was successfully demonstrated on mesoporous TiO2-SiO2 by showing a shift in its optical bandgap. One special class of amorphous semiconductors is chalcogenide glasses, which exhibit high ionic conductivity even at room temperature. When metal-doped chalcogenide glasses are under an electric field, they become electronically conductive. These properties are exploited in the computer memory storage application of Conductive Bridging Random Access Memory (CBRAM). CBRAM is a non-volatile memory that is a strong contender to replace conventional volatile RAMs such as DRAM, SRAM, etc. This technology has already been commercialized, but the working mechanism is still not clearly understood, especially the nature of the conductive bridge filament. In this project, the CBRAM memory cells were fabricated by the thermal evaporation method with Agx(GeSe2)1-x as the solid electrolyte layer, Ag as the active electrode, and Au as the inert electrode. By careful use of cyclic voltammetry, conductive filaments were grown on the surface and in the bulk of the solid electrolyte. The comparison between the two filaments revealed major differences, leading to contradiction with the existing working mechanism. After compiling all the results, a modified working mechanism is proposed. SAXS is a powerful tool to characterize the nanostructure of glasses. The analysis of SAXS data to extract useful information is usually performed by different programs. In this project, the Irena and GIFT programs were compared by analyzing SAXS data of glass and glass-ceramic samples. Irena was shown to be unsuitable for the analysis of SAXS data with a significant contribution from interparticle interactions; GIFT was demonstrated to be better suited for such analysis. Additionally, the results obtained by both programs for samples with low interparticle interactions were shown to be consistent.

  14. A simplified computational memory model from information processing.

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-11-23

    This paper is intended to propose a computational model for memory from the view of information processing. The model, called simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network by abstracting memory function and simulating memory information processing. At first meta-memory is defined to express the neuron or brain cortices based on the biology and graph theories, and we develop an intra-modular network with the modeling algorithm by mapping the node and edge, and then the bi-modular network is delineated with intra-modular and inter-modular. At last a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with the memory phenomena from information processing view.

  15. A simplified computational memory model from information processing

    Science.gov (United States)

    Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang

    2016-01-01

    This paper is intended to propose a computational model for memory from the view of information processing. The model, called simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network by abstracting memory function and simulating memory information processing. At first meta-memory is defined to express the neuron or brain cortices based on the biology and graph theories, and we develop an intra-modular network with the modeling algorithm by mapping the node and edge, and then the bi-modular network is delineated with intra-modular and inter-modular. At last a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with the memory phenomena from information processing view. PMID:27876847

  16. Multiple core computer processor with globally-accessible local memories

    Science.gov (United States)

    Shalf, John; Donofrio, David; Oliker, Leonid

    2016-09-20

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.

  17. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
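The page-detection step that SMD moves offline can be sketched by hashing page contents and keeping a single copy per distinct hash. The page representation and hashing scheme here are illustrative assumptions; a real deduplicator would compare bytes on hash collision and handle copy-on-write:

```python
import hashlib

def deduplicate_pages(pages):
    """Collapse identical pages into shared copies.
    Returns (unique_pages, mapping), where mapping[i] is the index in
    unique_pages that page i now points to."""
    unique = []        # one physical copy per distinct content
    index_of = {}      # content hash -> index in `unique`
    mapping = []       # logical page -> shared physical page
    for page in pages:
        h = hashlib.sha256(page).hexdigest()
        if h not in index_of:
            index_of[h] = len(unique)
            unique.append(page)
        mapping.append(index_of[h])
    return unique, mapping
```

Restricting the input to code-segment pages, as SMD does, keeps this scan small while still capturing the pages most likely to be shared across VMs.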

  18. The computational nature of memory modification

    OpenAIRE

    Gershman, Samuel J; Monfils, Marie-H; Norman, Kenneth A; Niv, Yael

    2017-01-01

    eLife digest Our memories contain our expectations about the world that we can retrieve to make predictions about the future. For example, most people would expect a chocolate bar to taste good, because they have previously learned to associate chocolate with pleasure. When a surprising event occurs, such as tasting an unpalatable chocolate bar, the brain therefore faces a dilemma. Should it update the existing memory and overwrite the association between chocolate and pleasure? Or should it ...

  19. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  20. Visual software system for memory interleaving simulation

    Directory of Open Access Journals (Sweden)

    Milenković Katarina

    2017-01-01

    Full Text Available This paper describes the visual software system for memory interleaving simulation (VSMIS), implemented for the purposes of the course Computer Architecture and Organization 1 at the School of Electrical Engineering, University of Belgrade. The simulator enables students to expand their knowledge through practical work in the laboratory, as well as through independent work at home. VSMIS gives users the possibility to initialize parts of the system and to control simulation steps. The user can monitor the simulation through a graphical representation and navigate through the entire hierarchy of the system using simple navigation. During the simulation the user can observe and set the values of memory locations. At any time, the user can reset the simulation and observe it for different memory states; in addition, it is possible to save the current state of the simulation and continue with its execution later. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. III44009]

  1. System and method for programmable bank selection for banked memory subsystems

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton on Hudson, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hoenicke, Dirk (Seebruck-Seeon, DE); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY)

    2010-09-07

    A programmable memory system and method for enabling one or more processor devices access to shared memory in a computing environment, the shared memory including one or more memory storage structures having addressable locations for storing data. The system comprises: one or more first logic devices associated with a respective one or more processor devices, each first logic device for receiving physical memory address signals and programmable for generating a respective memory storage structure select signal upon receipt of pre-determined address bit values at selected physical memory address bit locations; and, a second logic device responsive to each of the respective select signal for generating an address signal used for selecting a memory storage structure for processor access. The system thus enables each processor device of a computing environment memory storage access distributed across the one or more memory storage structures.
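The core of such a scheme, deriving a bank select signal from programmable bit positions of the physical address, can be sketched as follows. The bit positions and bank count are illustrative assumptions, not the patented design:

```python
def select_bank(addr, bank_bits):
    """Extract a bank number from a physical address using a
    programmable list of address-bit positions. The first listed
    position supplies the least significant bit of the bank number."""
    bank = 0
    for i, bit in enumerate(bank_bits):
        bank |= ((addr >> bit) & 1) << i
    return bank
```

Reprogramming `bank_bits` changes how addresses distribute across banks, e.g. selecting low-order bits interleaves consecutive cache lines across banks, while high-order bits partition the address space into contiguous regions.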

  2. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  3. Operating systems. [of computers

    Science.gov (United States)

    Denning, P. J.; Brown, R. L.

    1984-01-01

    A computer operating system creates a hierarchy of levels of abstraction, so that at a given level all details concerning lower levels can be ignored. This hierarchical structure separates functions according to their complexity, characteristic time scale, and level of abstraction. The lowest levels include the system's hardware; concepts associated explicitly with the coordination of multiple tasks appear at intermediate levels, which conduct 'primitive processes'. The software semaphore is the mechanism controlling primitive processes that must be synchronized. At higher levels lie, in rising order, access to the secondary storage devices of a particular machine, a 'virtual memory' scheme for managing the main and secondary memories, communication between processes by way of a mechanism called a 'pipe', access to external input and output devices, and a hierarchy of directories cataloguing the hardware and software objects to which access must be controlled.
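The semaphore mechanism mentioned in the abstract above can be sketched with Python's threading primitives. This minimal producer/consumer hand-off is illustrative, not from the article:

```python
import threading

items = []
filled = threading.Semaphore(0)   # counts items available to consume

def producer():
    items.append("datum")
    filled.release()              # signal: one more item is ready

def consumer(results):
    filled.acquire()              # block until at least one item exists
    results.append(items.pop())

results = []
c = threading.Thread(target=consumer, args=(results,))
p = threading.Thread(target=producer)
c.start()
p.start()
c.join()
p.join()
```

Because the semaphore starts at zero, the consumer cannot proceed until the producer has released it, regardless of which thread is scheduled first.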

  4. Attacks on computer systems

    Directory of Open Access Journals (Sweden)

    Dejan V. Vuletić

    2012-01-01

    Full Text Available Computer systems are a critical component of human society in the 21st century. The economic sector, defense, security, energy, telecommunications, industrial production, finance, and other vital infrastructure depend on computer systems that operate at local, national, or global scales. A particular problem is that, due to the rapid development of ICT and the unstoppable growth of its application in all spheres of human society, the vulnerability and exposure of these systems to very serious potential dangers increase. This paper analyzes some typical attacks on computer systems.

  5. Stress Effects on Multiple Memory System Interactions

    Directory of Open Access Journals (Sweden)

    Deborah Ness

    2016-01-01

    Full Text Available Extensive behavioural, pharmacological, and neurological research reports stress effects on mammalian memory processes. While stress effects on memory quantity have been known for decades, the influence of stress on multiple memory systems and their distinct contributions to the learning process have only recently been described. In this paper, after summarizing the fundamental biological aspects of stress/emotional arousal and recapitulating functionally and anatomically distinct memory systems, we review recent animal and human studies exploring the effects of stress on multiple memory systems. Apart from discussing the interaction between distinct memory systems in stressful situations, we will also outline the fundamental role of the amygdala in mediating such stress effects. Additionally, based on the methods applied in the herein discussed studies, we will discuss how memory translates into behaviour.

  6. Stress Effects on Multiple Memory System Interactions.

    Science.gov (United States)

    Ness, Deborah; Calabrese, Pasquale

    2016-01-01

    Extensive behavioural, pharmacological, and neurological research reports stress effects on mammalian memory processes. While stress effects on memory quantity have been known for decades, the influence of stress on multiple memory systems and their distinct contributions to the learning process have only recently been described. In this paper, after summarizing the fundamental biological aspects of stress/emotional arousal and recapitulating functionally and anatomically distinct memory systems, we review recent animal and human studies exploring the effects of stress on multiple memory systems. Apart from discussing the interaction between distinct memory systems in stressful situations, we will also outline the fundamental role of the amygdala in mediating such stress effects. Additionally, based on the methods applied in the herein discussed studies, we will discuss how memory translates into behaviour.

  7. Low-Voltage Protection For Volatile Computer Memories

    Science.gov (United States)

    Detwiler, R. C.

    1985-01-01

Short-circuit current provides minimum memory power. Protective circuit includes dc-to-dc converter that supplies keep-alive voltage to memories when short circuit occurs in any of system loads. Converter powered by low voltage across two of three series diodes generated by short-circuit bus current. Relay switch is in open (short-circuit-detected) position. Protective circuit useful wherever necessary to improve reliability of volatile memories or other circuits that must not lose power.

  8. Stress Effects on Multiple Memory System Interactions

    OpenAIRE

    Ness, Deborah; Calabrese, Pasquale

    2016-01-01

    Extensive behavioural, pharmacological, and neurological research reports stress effects on mammalian memory processes. While stress effects on memory quantity have been known for decades, the influence of stress on multiple memory systems and their distinct contributions to the learning process have only recently been described. In this paper, after summarizing the fundamental biological aspects of stress/emotional arousal and recapitulating functionally and anatomically distinct memory syst...

  9. Resilient computer system design

    CERN Document Server

    Castano, Victor

    2015-01-01

    This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems.   §  Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models §  Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...

  10. Homodyne detection of holographic memory systems

    Science.gov (United States)

    Urness, Adam C.; Wilson, William L.; Ayres, Mark R.

    2014-09-01

    We present a homodyne detection system implemented for a page-wise holographic memory architecture. Homodyne detection by holographic memory systems enables phase quadrature multiplexing (doubling address space), and lower exposure times (increasing read transfer rates). It also enables phase modulation, which improves signal-to-noise ratio (SNR) to further increase data capacity. We believe this is the first experimental demonstration of homodyne detection for a page-wise holographic memory system suitable for a commercial design.
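
As a hedged illustration of the homodyne principle the record describes (the field amplitudes and phase values below are assumptions of ours, not parameters of the actual system), interfering a weak signal beam with a strong local oscillator at two reference phases recovers both phase quadratures, which is what enables phase quadrature multiplexing:

```python
import numpy as np

# Assumed signal: amplitude A with phase phi encoded on the hologram page.
A, phi = 0.8, 1.1
E_sig = A * np.exp(1j * phi)   # complex signal field
E_lo = 10.0                    # strong local oscillator, reference phase 0

# Balanced detection at LO phases 0 and pi/2 cancels the DC terms and
# isolates the interference cross terms: the I and Q quadratures.
I = np.abs(E_sig + E_lo)**2 - np.abs(E_sig - E_lo)**2         # = 4*E_lo*A*cos(phi)
Q = np.abs(E_sig + 1j*E_lo)**2 - np.abs(E_sig - 1j*E_lo)**2   # = 4*E_lo*A*sin(phi)

phase = np.arctan2(Q, I)           # recovered signal phase
amp = np.hypot(I, Q) / (4 * E_lo)  # recovered signal amplitude
```

Because both quadratures are measured, two data pages can occupy the same address at phases 0 and pi/2 and still be separated, doubling the address space as the abstract notes.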

  11. On-chip phase-change photonic memory and computing

    Science.gov (United States)

    Cheng, Zengguang; Ríos, Carlos; Youngblood, Nathan; Wright, C. David; Pernice, Wolfram H. P.; Bhaskaran, Harish

    2017-08-01

    The use of photonics in computing is a hot topic of interest, driven by the need for ever-increasing speed along with reduced power consumption. In existing computing architectures, photonic data storage would dramatically improve the performance by reducing latencies associated with electrical memories. At the same time, the rise of `big data' and `deep learning' is driving the quest for non-von Neumann and brain-inspired computing paradigms. To succeed in both aspects, we have demonstrated non-volatile multi-level photonic memory avoiding the von Neumann bottleneck in the existing computing paradigm and a photonic synapse resembling the biological synapses for brain-inspired computing using phase-change materials (Ge2Sb2Te5).

  12. A Layered Active Memory Architecture for Cognitive Vision Systems

    OpenAIRE

    Kolonias, Ilias; Christmas, William; Kittler, Josef

    2007-01-01

    Recognising actions and objects from video material has attracted growing research attention and given rise to important applications. However, injecting cognitive capabilities into computer vision systems requires an architecture more elaborate than the traditional signal processing paradigm for information processing. Inspired by biological cognitive systems, we present a memory architecture enabling cognitive processes (such as selecting the processes required for scene understanding, laye...

  13. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds when utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may offer better energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  14. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

Full Text Available Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same holds when utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may offer better energy efficiency when its computational performance is similar to or better than that of GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  15. Hybrid computing using a neural network with dynamic external memory.

    Science.gov (United States)

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
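
The content-based memory read the abstract describes can be sketched as follows (a minimal NumPy illustration of the idea, not the authors' implementation; the function name and sharpness parameter are ours): a read key is compared to every memory row by cosine similarity, a softmax turns the similarities into read weights, and the read vector is the weighted sum of rows.

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    # memory: (N, W) matrix of N slots; key: (W,) read key; beta: key strength.
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)  # cosine similarities
    w = np.exp(beta * sims)
    w /= w.sum()                  # softmax read weighting over slots
    return w @ memory             # read vector: weighted blend of rows

M = np.array([[1.0, 0.0, 0.0],    # three one-hot memory slots
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
r = content_read(M, np.array([0.9, 0.1, 0.0]))  # key closest to slot 0
```

Because every step is differentiable, gradients can flow through the addressing itself, which is what lets the network learn where to read and write from data.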

  16. Wearable Intrinsically Soft, Stretchable, Flexible Devices for Memories and Computing.

    Science.gov (United States)

    Rajan, Krishna; Garofalo, Erik; Chiolerio, Alessandro

    2018-01-27

    A recent trend in the development of high mass consumption electron devices is towards electronic textiles (e-textiles), smart wearable devices, smart clothes, and flexible or printable electronics. Intrinsically soft, stretchable, flexible, Wearable Memories and Computing devices (WMCs) bring us closer to sci-fi scenarios, where future electronic systems are totally integrated in our everyday outfits and help us in achieving a higher comfort level, interacting for us with other digital devices such as smartphones and domotics, or with analog devices, such as our brain/peripheral nervous system. WMC will enable each of us to contribute to open and big data systems as individual nodes, providing real-time information about physical and environmental parameters (including air pollution monitoring, sound and light pollution, chemical or radioactive fallout alert, network availability, and so on). Furthermore, WMC could be directly connected to human brain and enable extremely fast operation and unprecedented interface complexity, directly mapping the continuous states available to biological systems. This review focuses on recent advances in nanotechnology and materials science and pays particular attention to any result and promising technology to enable intrinsically soft, stretchable, flexible WMC.

  17. Graphical Visualization on Computational Simulation Using Shared Memory

    International Nuclear Information System (INIS)

    Lima, A B; Correa, Eberth

    2014-01-01

The Shared Memory technique is a powerful tool for parallelizing computer codes. In particular, it can be used to visualize the results 'on the fly', without stopping the simulation. In this presentation we discuss and show how to use the technique in conjunction with a visualization code using OpenGL.

  18. Computer network defense system

    Science.gov (United States)

    Urias, Vincent; Stout, William M. S.; Loverro, Caleb

    2017-08-22

    A method and apparatus for protecting virtual machines. A computer system creates a copy of a group of the virtual machines in an operating network in a deception network to form a group of cloned virtual machines in the deception network when the group of the virtual machines is accessed by an adversary. The computer system creates an emulation of components from the operating network in the deception network. The components are accessible by the group of the cloned virtual machines as if the group of the cloned virtual machines was in the operating network. The computer system moves network connections for the group of the virtual machines in the operating network used by the adversary from the group of the virtual machines in the operating network to the group of the cloned virtual machines, enabling protecting the group of the virtual machines from actions performed by the adversary.

  19. Computer system operation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae; Lee, Hae Cho; Lee, Ho Yeun; Kim, Young Taek; Lee, Sung Kyu; Park, Jeong Suk; Nam, Ji Wha; Kim, Soon Kon; Yang, Sung Un; Sohn, Jae Min; Moon, Soon Sung; Park, Bong Sik; Lee, Byung Heon; Park, Sun Hee; Kim, Jin Hee; Hwang, Hyeoi Sun; Lee, Hee Ja; Hwang, In A. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1993-12-01

The report describes the operation and trouble shooting of the main computer and KAERINet. The results of the project are as follows; 1. The operation and trouble shooting of the main computer system (Cyber 170-875, Cyber 960-31, VAX 6320, VAX 11/780). 2. The operation and trouble shooting of the KAERINet (PC to host connection, host to host connection, file transfer, electronic mail, X.25, CATV, etc.). 3. The development of applications: Electronic Document Approval and Delivery System, installation of the ORACLE Utility Program. 22 tabs., 12 figs. (Author)

  20. DYMAC computer system

    International Nuclear Information System (INIS)

    Hagen, J.; Ford, R.F.

    1979-01-01

    The DYnamic Materials ACcountability program (DYMAC) has been monitoring nuclear material at the Los Alamos Scientific Laboratory plutonium processing facility since January 1978. This paper presents DYMAC's features and philosophy, especially as reflected in its computer system design. Early decisions and tradeoffs are evaluated through the benefit of a year's operating experience

  1. Present SLAC accelerator computer control system features

    International Nuclear Information System (INIS)

    Davidson, V.; Johnson, R.

    1981-02-01

    The current functional organization and state of software development of the computer control system of the Stanford Linear Accelerator is described. Included is a discussion of the distribution of functions throughout the system, the local controller features, and currently implemented features of the touch panel portion of the system. The functional use of our triplex of PDP11-34 computers sharing common memory is described. Also included is a description of the use of pseudopanel tables as data tables for closed loop control functions

  2. Neuromorphic cognitive systems a learning and memory centered approach

    CERN Document Server

    Yu, Qiang; Hu, Jun; Tan Chen, Kay

    2017-01-01

This book presents neuromorphic cognitive systems from a learning and memory-centered perspective. It illustrates how to build a system network of neurons to perform spike-based information processing, computing, and high-level cognitive tasks. It is beneficial to a wide spectrum of readers, including undergraduate and postgraduate students and researchers who are interested in neuromorphic computing and neuromorphic engineering, as well as engineers and professionals in industry who are involved in the design and applications of neuromorphic cognitive systems, neuromorphic sensors and processors, and cognitive robotics. The book formulates a systematic framework, from the basic mathematical and computational methods in spike-based neural encoding, learning in both single and multi-layered networks, to a near cognitive level composed of memory and cognition. Since the mechanisms by which spiking neurons integrate to form cognitive functions in the brain are little understood, studies of neuromo...

  3. Optoelectronic Cache Memory System Architecture

    National Research Council Canada - National Science Library

    Chiarulli, Donald

    1999-01-01

    .... This technology enables the use of large page oriented optical memories in applications such as medical, image, and geo-spatial databases where high speed access to page structured data is essential.

  4. Embedded System Synthesis under Memory Constraints

    DEFF Research Database (Denmark)

    Madsen, Jan; Bjørn-Jørgensen, Peter

    1999-01-01

This paper presents a genetic algorithm to solve the system synthesis problem of mapping a time constrained single-rate system specification onto a given heterogeneous architecture which may contain irregular interconnection structures. The synthesis is performed under memory constraints; that is, the algorithm takes into account the memory size of processors and the size of interface buffers of communication links, and in particular the complicated interplay of these. The presented algorithm is implemented as part of the LYCOS cosynthesis system.
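
A toy sketch of the approach (our own construction with invented task sizes, not the LYCOS algorithm itself): a genetic algorithm searches mappings of tasks onto processors, scoring each mapping by its makespan plus a heavy penalty for exceeding a processor's memory capacity.

```python
import random

TASK_MEM  = [3, 2, 4, 1, 2, 3]    # memory needed by each task (assumed)
TASK_COST = [5, 3, 6, 2, 4, 5]    # execution time of each task (assumed)
PROC_MEM  = [8, 8]                # memory capacity of each processor

def fitness(mapping):
    load = [0, 0]
    mem = [0, 0]
    for task, proc in enumerate(mapping):
        load[proc] += TASK_COST[task]
        mem[proc] += TASK_MEM[task]
    penalty = sum(max(0, m - cap) for m, cap in zip(mem, PROC_MEM))
    return max(load) + 100 * penalty      # makespan plus memory-violation penalty

def evolve(generations=200, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(2) for _ in TASK_MEM] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]   # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(len(child))] ^= 1   # move one task elsewhere
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

The penalty term is what makes the memory constraint soft during search yet effectively hard in the result: any feasible mapping outscores any infeasible one.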

  5. The associative memory system for the FTK processor at ATLAS

    CERN Document Server

    Magalotti, D; The ATLAS collaboration; Donati, S; Luciano, P; Piendibene, M; Giannetti, P; Lanza, A; Verzellesi, G; Sakellariou, Andreas; Billereau, W; Combe, J M

    2014-01-01

    In high energy physics experiments, the most interesting processes are very rare and hidden in an extremely large level of background. As the experiment complexity, accelerator backgrounds, and instantaneous luminosity increase, more effective and accurate data selection techniques are needed. The Fast TracKer processor (FTK) is a real time tracking processor designed for the ATLAS trigger upgrade. The FTK core is the Associative Memory system. It provides massive computing power to minimize the processing time of complex tracking algorithms executed online. This paper reports on the results and performance of a new prototype of Associative Memory system.

  6. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. Availability: https://github.com/solonas13/maw (free software under the terms of the GNU GPL). Contact: alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
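
The definition underlying the algorithm can be checked with a small in-memory reference sketch (ours, not emMAW, and limited to short words): a word is a minimal absent word if it does not occur in the sequence while both the word minus its first letter and the word minus its last letter do occur.

```python
from itertools import product

def minimal_absent_words(seq, alphabet, max_len=4):
    # Collect every factor (substring) of seq up to length max_len.
    factors = {seq[i:j]
               for i in range(len(seq))
               for j in range(i + 1, min(i + max_len, len(seq)) + 1)}
    maws = []
    for k in range(2, max_len + 1):
        for w in map("".join, product(alphabet, repeat=k)):
            # w absent, but both maximal proper factors present:
            if w not in factors and w[1:] in factors and w[:-1] in factors:
                maws.append(w)
    return maws

maws = minimal_absent_words("abaab", "ab")
```

This brute-force check costs exponential time in `max_len` and exists only to make the definition concrete; the suffix-array algorithm cited above does the same job in O(n).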

  7. PEP computer control system

    International Nuclear Information System (INIS)

    1979-03-01

This paper describes the design and performance of the computer system that will be used to control and monitor the PEP storage ring. Since the design is essentially complete and much of the system is operational, the system is described as it is expected to be in 1979. Section 1 of the paper describes the system hardware, which includes the computer network, the CAMAC data I/O system, and the operator control consoles. Section 2 describes a collection of routines that provide general services to applications programs. These services include a graphics package, data base and data I/O programs, and a director program for use in operator communication. Section 3 describes a collection of automatic and semi-automatic control programs, known as SCORE, that contain mathematical models of the ring lattice and are used to determine, in real time, stable paths for changing beam configuration and energy and for orbit correction. Section 4 describes a collection of programs, known as CALI, that are used for calibration of ring elements.

  8. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

This paper presents the application of distributed memory parallel computers to field scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
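
The domain-decomposition pattern described here can be sketched sequentially (our illustration, not UTCHEM code): each subdomain is extended with one ghost cell per side, neighbours exchange boundary values, and each "processor" then applies its stencil purely locally.

```python
import numpy as np

def decomposed_smooth(field, nprocs=2, steps=1):
    # Split the 1-D field across virtual processors.
    chunks = np.array_split(field, nprocs)
    for _ in range(steps):
        new_chunks = []
        for p, chunk in enumerate(chunks):
            # Halo exchange: fetch one boundary value from each neighbour
            # (edge values are replicated at the global domain boundary).
            left = chunks[p - 1][-1] if p > 0 else chunk[0]
            right = chunks[p + 1][0] if p < nprocs - 1 else chunk[-1]
            ext = np.concatenate(([left], chunk, [right]))   # add ghost cells
            # Local 3-point averaging stencil on the extended subdomain.
            new_chunks.append((ext[:-2] + ext[1:-1] + ext[2:]) / 3.0)
        chunks = new_chunks
    return np.concatenate(chunks)

x = np.array([0.0, 0.0, 3.0, 0.0, 0.0, 0.0])
y = decomposed_smooth(x, nprocs=2)     # identical to the undecomposed stencil
```

On a real distributed machine the halo exchange becomes a message between processors; the point of the ghost cells is that the stencil update itself never reaches outside local memory.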

  9. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  10. Impulse: Memory System Support for Scientific Applications

    Directory of Open Access Journals (Sweden)

    John B. Carter

    1999-01-01

    Full Text Available Impulse is a new memory system architecture that adds two important features to a traditional memory controller. First, Impulse supports application‐specific optimizations through configurable physical address remapping. By remapping physical addresses, applications control how their data is accessed and cached, improving their cache and bus utilization. Second, Impulse supports prefetching at the memory controller, which can hide much of the latency of DRAM accesses. Because it requires no modification to processor, cache, or bus designs, Impulse can be adopted in conventional systems. In this paper we describe the design of the Impulse architecture, and show how an Impulse memory system can improve the performance of memory‐bound scientific applications. For instance, Impulse decreases the running time of the NAS conjugate gradient benchmark by 67%. We expect that Impulse will also benefit regularly strided, memory‐bound applications of commercial importance, such as database and multimedia programs.
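
The remapping idea can be sketched as follows (a conceptual NumPy analogy of ours, not the Impulse hardware interface): a table of remapped addresses gathers scattered physical words, here a matrix diagonal, into a dense shadow region, so that one cache line's worth of the shadow covers many strided elements.

```python
import numpy as np

# A 4x4 matrix stored row-major; walking its diagonal touches addresses
# 0, 5, 10, 15, i.e. one useful word per cache line in a real memory system.
mat = np.arange(16.0).reshape(4, 4)

# The "remapping table": physical addresses of the diagonal elements.
remap = np.ravel_multi_index((np.arange(4), np.arange(4)), mat.shape)

# The controller-side gather: a dense, cache-friendly copy of the diagonal.
shadow = mat.ravel()[remap]
```

The application then streams through `shadow` contiguously; in Impulse the gather happens in the memory controller, so the processor's caches and bus see only the dense view.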

  11. Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Friday, Adrian

    2009-01-01

applications; privacy protection in systems that connect personal devices and personal information; moving from the graphical to the ubiquitous computing user interface; techniques that are revolutionizing the way we determine a person’s location and understand other sensor measurements. While we needn’t become expert in every sub-discipline of ubicomp, it is necessary that we appreciate all the perspectives that make up the field and understand how our work can influence and be influenced by those perspectives. This is important if we are to encourage future generations to be as successfully innovative...

  12. MEMORY SYSTEMS AND THE ADDICTED BRAIN

    Directory of Open Access Journals (Sweden)

Jarid Goodman

    2016-02-01

Full Text Available The view that anatomically distinct memory systems differentially contribute to the development of drug addiction and relapse has received extensive support. The present brief review revisits this hypothesis as it was originally proposed twenty years ago (White, 1996) and highlights several recent developments. Extensive research employing a variety of animal learning paradigms indicates that dissociable neural systems mediate distinct types of learning and memory. Each memory system potentially contributes unique components to the learned behavior supporting drug addiction and relapse. In particular, the shift from recreational drug use to compulsive drug abuse may reflect a neuroanatomical shift from cognitive control of behavior mediated by the hippocampus/dorsomedial striatum toward habitual control of behavior mediated by the dorsolateral striatum (DLS). In addition, stress/anxiety may constitute a cofactor that facilitates DLS-dependent memory, and this may serve as a neurobehavioral mechanism underlying the increased drug use and relapse in humans following stressful life events. Evidence supporting the multiple systems view of drug addiction comes predominantly from studies of learning and memory that have employed as reinforcers addictive substances often considered within the context of drug addiction research, including cocaine, alcohol, and amphetamines. In addition, recent evidence suggests that the memory systems approach may also be helpful for understanding topical sources of addiction that reflect emerging health concerns, including marijuana use, high-fat diet, and video game playing.

  13. Common oscillatory mechanisms across multiple memory systems

    Science.gov (United States)

    Headley, Drew B.; Paré, Denis

    2017-01-01

    The cortex, hippocampus, and striatum support dissociable forms of memory. While each of these regions contains specialized circuitry supporting their respective functions, all structure their activities across time with delta, theta, and gamma rhythms. We review how these oscillations are generated and how they coordinate distinct memory systems during encoding, consolidation, and retrieval. First, gamma oscillations occur in all regions and coordinate local spiking, compressing it into short population bursts. Second, gamma oscillations are modulated by delta and theta oscillations. Third, oscillatory dynamics in these memory systems can operate in either a "slow" or "fast" mode. The slow mode happens during slow-wave sleep and is characterized by large irregular activity in the hippocampus and delta oscillations in cortical and striatal circuits. The fast mode occurs during active waking and rapid eye movement (REM) sleep and is characterized by theta oscillations in the hippocampus and its targets, along with gamma oscillations in the rest of cortex. In waking, the fast mode is associated with the efficacious encoding and retrieval of declarative and procedural memories. Theta and gamma oscillations have similar relationships with encoding and retrieval across multiple forms of memory and brain regions, despite regional differences in microcircuitry and information content. Differences in the oscillatory coordination of memory systems during sleep might explain why the consolidation of some forms of memory is sensitive to slow-wave sleep, while others depend on REM. In particular, theta oscillations appear to support the consolidation of certain types of procedural memories during REM, while delta oscillations during slow-wave sleep seem to promote declarative and procedural memories.

  14. A Radiation Hardened Spacecraft Mass Memory System

    Science.gov (United States)

    Dennehy, W. J.; Lawton, B.; Stufflebeam, J.

    The functional design of a Radiation Hardened Spacecraft Mass Memory System (RH/SMMS) is described. This system is configured around a 1 megabit memory device and incorporates various system and circuit design features to achieve radiation hardness. The system is modular and storage capacities of 16 to 32 megabits are achievable within modest size, weight, and power constraints. Estimates of physical characteristics (size, weight, and power) are presented for a 16 Mbit system. The RH/SMMS is organized in a disk-like architecture and offers the spacecraft designer several unique benefits such as: reduced software cost, increased autonomy and survivability, increased functionality and increased fault tolerance.

  15. Bidirectional Frontoparietal Oscillatory Systems Support Working Memory.

    Science.gov (United States)

    Johnson, Elizabeth L; Dewar, Callum D; Solbakk, Anne-Kristin; Endestad, Tor; Meling, Torstein R; Knight, Robert T

    2017-06-19

    The ability to represent and select information in working memory provides the neurobiological infrastructure for human cognition. For 80 years, dominant views of working memory have focused on the key role of prefrontal cortex (PFC) [1-8]. However, more recent work has implicated posterior cortical regions [9-12], suggesting that PFC engagement during working memory is dependent on the degree of executive demand. We provide evidence from neurological patients with discrete PFC damage that challenges the dominant models attributing working memory to PFC-dependent systems. We show that neural oscillations, which provide a mechanism for PFC to communicate with posterior cortical regions [13], independently subserve communications both to and from PFC, uncovering parallel oscillatory mechanisms for working memory. Fourteen PFC patients and 20 healthy, age-matched controls performed a working memory task where they encoded, maintained, and actively processed information about pairs of common shapes. In controls, the electroencephalogram (EEG) exhibited oscillatory activity in the low-theta range over PFC and directional connectivity from PFC to parieto-occipital regions commensurate with executive processing demands. Concurrent alpha-beta oscillations were observed over parieto-occipital regions, with directional connectivity from parieto-occipital regions to PFC, regardless of processing demands. Accuracy, PFC low-theta activity, and PFC → parieto-occipital connectivity were attenuated in patients, revealing a PFC-independent, alpha-beta system. The PFC patients still demonstrated task proficiency, which indicates that the posterior alpha-beta system provides sufficient resources for working memory. Taken together, our findings reveal neurologically dissociable PFC and parieto-occipital systems and suggest that parallel, bidirectional oscillatory systems form the basis of working memory. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Parallel calculations on shared memory, NUMA-based computers using MATLAB

    Science.gov (United States)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2014-05-01

    Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in the memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantage of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread-to-CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU
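    The OS-level tuning the authors describe (thread-to-CPU binding, alongside custom allocation and page migration) can be applied from outside any interpreter. Below is a minimal Python sketch, assuming a Linux system where the `os.sched_setaffinity` wrapper is available; the CPU id used is arbitrary:

```python
import os

def pin_to_cpu(cpu_id):
    """Pin the calling process to a single CPU (Linux-only affinity call).

    On NUMA machines, keeping a worker on the socket whose memory it
    touches avoids costly remote-node accesses -- the same placement
    principle the authors apply to MATLAB's internal threads.
    """
    if not hasattr(os, "sched_setaffinity"):
        return None  # platform without affinity control (e.g., macOS)
    os.sched_setaffinity(0, {cpu_id})   # 0 = the calling process
    return os.sched_getaffinity(0)      # read back the new CPU mask

mask = pin_to_cpu(0)
```

Memory page migration and custom allocators have no portable stdlib equivalent; on Linux they correspond to tools such as `numactl` and the `move_pages` syscall.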

  17. Configurable memory system and method for providing atomic counting operations in a memory device

    Science.gov (United States)

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.

  18. Translation Memory and Computer Assisted Translation Tool for Medieval Texts

    Directory of Open Access Journals (Sweden)

    Törcsvári Attila

    2013-05-01

    Full Text Available Translation memories (TMs), as part of Computer Assisted Translation (CAT) tools, support translators in reusing portions of formerly translated text. Fencing books are good candidates for using TMs due to the high number of repeated terms. Medieval texts suffer from a number of drawbacks that make even “simple” rewording into the modern version of the same language hard. The analyzed difficulties are: lack of systematic spelling, unusual word order, and typos in the original. A hypothesis is made and verified that even simple modernization increases legibility and is feasible, and that it is worthwhile to apply translation memories due to the numerous, and often extremely long, repeated terms. Therefore, methods and algorithms are presented 1. for automated transcription of medieval texts (when a limited training set is available, and 2. for collection of repeated patterns. The efficiency of the algorithms is analyzed for recall and precision.

  19. Towards Modeling False Memory With Computational Knowledge Bases.

    Science.gov (United States)

    Li, Justin; Kohanyi, Emma

    2017-01-01

    One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.
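    The spreading-activation mechanism behind such models can be sketched briefly. The network, weights, and word list below are hypothetical illustrations (not drawn from the paper or from WordNet/DBpedia); the point is only that an unpresented lure connected to many studied words accumulates the most activation:

```python
from collections import defaultdict

def spread_activation(edges, sources, decay=0.5, steps=2):
    """Propagate activation from studied words through a semantic network.

    edges maps a node to [(neighbor, association_strength), ...].
    Each step, every active node passes decayed activation to its neighbors.
    """
    activation = defaultdict(float)
    for s in sources:
        activation[s] = 1.0
    frontier = dict(activation)
    for _ in range(steps):
        incoming = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, weight in edges.get(node, []):
                incoming[neighbor] += act * weight * decay
        for node, act in incoming.items():
            activation[node] += act
        frontier = incoming
    return dict(activation)

# Hypothetical DRM-style list: studied associates of the unpresented lure "sleep"
edges = {
    "bed":   [("sleep", 0.8)],
    "rest":  [("sleep", 0.7)],
    "dream": [("sleep", 0.9)],
}
acts = spread_activation(edges, ["bed", "rest", "dream"])
```

The never-presented lure ends up more active than any studied word, which is the intuition behind modeling false recall; the noise problem the authors report arises when a large knowledge base adds many irrelevant edges to this graph.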

  20. Fluctuations in interacting particle systems with memory

    International Nuclear Information System (INIS)

    Harris, Rosemary J

    2015-01-01

    We consider the effects of long-range temporal correlations in many-particle systems, focusing particularly on fluctuations about the typical behaviour. For a specific class of memory dependence we discuss the modification of the large deviation principle describing the probability of rare currents and show how superdiffusive behaviour can emerge. We illustrate the general framework with detailed calculations for a memory-dependent version of the totally asymmetric simple exclusion process as well as indicating connections to other recent work

  1. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The approach is first to design a computational structure which is well suited for a wide range of vision tasks and then to develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  2. An Alternative Algorithm for Computing Watersheds on Shared Memory Parallel Computers

    NARCIS (Netherlands)

    Meijster, A.; Roerdink, J.B.T.M.

    1995-01-01

    In this paper a parallel implementation of a watershed algorithm is proposed. The algorithm can easily be implemented on shared memory parallel computers. The watershed transform is generally considered to be inherently sequential since the discrete watershed of an image is defined using recursion.

  3. The Computational Sensorimotor Systems Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Computational Sensorimotor Systems Lab focuses on the exploration, analysis, modeling and implementation of biological sensorimotor systems for both scientific...

  4. Robust dynamical decoupling for quantum computing and quantum memory.

    Science.gov (United States)

    Souza, Alexandre M; Alvarez, Gonzalo A; Suter, Dieter

    2011-06-17

    Dynamical decoupling (DD) is a popular technique for protecting qubits from the environment. However, unless special care is taken, experimental errors in the control pulses used in this technique can destroy the quantum information instead of preserving it. Here, we investigate techniques for making DD sequences robust against different types of experimental errors while retaining good decoupling efficiency in a fluctuating environment. We present experimental data from solid-state nuclear spin qubits and introduce a new DD sequence that is suitable for quantum computing and quantum memory.
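    As an illustration of what a robust DD sequence looks like, the widely used XY-4 cycle alternates pulse phases so that pulse imperfections partially cancel. The sketch below generates only the pulse schedule (times and phases); assessing robustness, as the paper does, requires simulating the qubit dynamics under the fluctuating environment:

```python
def xy4_sequence(cycle_time, repeats=1):
    """Pulse schedule for the XY-4 dynamical-decoupling cycle.

    Four pi-pulses per cycle, equally spaced, with phases alternating
    X, Y, X, Y; the phase alternation is what suppresses the effect of
    pulse-amplitude and off-resonance errors relative to plain CPMG.
    """
    phases = ["X", "Y", "X", "Y"]
    tau = cycle_time / 4.0                        # inter-pulse spacing
    pulses = []
    for r in range(repeats):
        for k, phase in enumerate(phases):
            t = r * cycle_time + (k + 0.5) * tau  # symmetric placement
            pulses.append((t, phase))
    return pulses

schedule = xy4_sequence(4.0, repeats=2)
```

Longer robust sequences (e.g., XY-8, KDD) are built by concatenating such cycles with additional phase patterns.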

  5. Forms of memory: Investigating the computational basis of semantic-episodic memory interactions

    NARCIS (Netherlands)

    Neville, D.A.

    2015-01-01

    The present thesis investigated how the memory systems related to the processing of semantic and episodic information combine to generate behavioural performance as measured in standard laboratory tasks. Across a series of behavioural experiments I looked at different types of interactions between

  6. A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing.

    Energy Technology Data Exchange (ETDEWEB)

    Vineyard, Craig Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    As high performance computing architectures pursue more computational power there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an unknown challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

  7. Applied computation and security systems

    CERN Document Server

    Saeed, Khalid; Choudhury, Sankhayan; Chaki, Nabendu

    2015-01-01

    This book contains the extended version of the works that have been presented and discussed in the First International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2014) held during April 18-20, 2014 in Kolkata, India. The symposium has been jointly organized by the AGH University of Science & Technology, Cracow, Poland and University of Calcutta, India. Volume I of this double-volume book contains fourteen high-quality chapters in three parts. Part 1, on Pattern Recognition, presents four chapters. Part 2, on Imaging and Healthcare Applications, contains four more chapters. Part 3 of this volume, on Wireless Sensor Networking, includes as many as six chapters. Volume II of the book has three parts presenting a total of eleven chapters. Part 4 consists of five excellent chapters on Software Engineering, ranging from cloud service design to transactional memory. Part 5 in Volume II is on Cryptography with two book...

  8. The Memory System You Can't Avoid it, You Can't Ignore it, You Can't Fake it

    CERN Document Server

    Jacob, Bruce

    2009-01-01

    Today, computer-system optimization, at both the hardware and software levels, must consider the details of the memory system in its analysis; failing to do so yields systems that are increasingly inefficient as those systems become more complex. This lecture seeks to introduce the reader to the most important details of the memory system; it targets both computer scientists and computer engineers in industry and in academia. Roughly speaking, computer scientists are the users of the memory system and computer engineers are the designers of the memory system. Both can benefit tremendously from

  9. Computer systems a programmer's perspective

    CERN Document Server

    Bryant, Randal E

    2016-01-01

    Computer systems: A Programmer’s Perspective explains the underlying elements common among all computer systems and how they affect general application performance. Written from the programmer’s perspective, this book strives to teach readers how understanding basic elements of computer systems and executing real practice can lead them to create better programs. Spanning computer science themes such as hardware architecture, the operating system, and systems software, the Third Edition serves as a comprehensive introduction to programming. This book strives to create programmers who understand all elements of computer systems and will be able to engage in any application of the field--from fixing faulty software, to writing more capable programs, to avoiding common flaws. It lays the groundwork for readers to delve into more intensive topics such as computer architecture, embedded systems, and cybersecurity. This book focuses on systems that execute x86-64 machine code, and recommends th...

  10. Threats to Computer Systems

    Science.gov (United States)

    1973-03-01

    subjects and objects of attacks contribute to the uniqueness of computer-related crime. For example, as the cashless, checkless society approaches...advancing computer technology and security methods, and proliferation of computers in bringing about the paperless society. The universal use of...organizations do to society. Jerry Schneider, one of the known perpetrators, said that he was motivated to perform his acts to make money, for the

  11. Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Friday, Adrian

    2009-01-01

    First introduced two decades ago, the term ubiquitous computing is now part of the common vernacular. Ubicomp, as it is commonly called, has grown not just quickly but broadly so as to encompass a wealth of concepts and technology that serves any number of purposes across all of human endeavor......, an original ubicomp pioneer, Ubiquitous Computing Fundamentals brings together eleven ubiquitous computing trailblazers who each report on his or her area of expertise. Starting with a historical introduction, the book moves on to summarize a number of self-contained topics. Taking a decidedly human...... perspective, the book includes discussion on how to observe people in their natural environments and evaluate the critical points where ubiquitous computing technologies can improve their lives. Among a range of topics this book examines: How to build an infrastructure that supports ubiquitous computing...

  12. Organization of the two-level memory in the image processing system on scanning measuring projectors

    International Nuclear Information System (INIS)

    Sychev, A.Yu.

    1977-01-01

    Discussed are the problems of improving the efficiency of the system for processing pictures taken in bubble chambers with the use of scanning measuring projectors. The system comprises 20 to 30 projectors linked with the ICL-1903A computer provided with a mainframe memory, 64 kilobytes in size. Because of the insufficient size of a mainframe memory, a part of the programs and data is located in a second-level memory, i.e. in an external memory. The analytical model described herein is used to analyze the effect of the memory organization on the characteristics of the system. It is shown that organization of pure procedures and introduction of the centralized control of the two-level memory result in substantial improvement of the efficiency of the picture processing system

  13. CAM: A Collaborative Object Memory System

    NARCIS (Netherlands)

    Vyas, Dhaval; Nijholt, Antinus; Kröner, Alexander

    2010-01-01

    Physical design objects such as sketches, drawings, collages, storyboards and models play an important role in supporting communication and coordination in design studios. CAM (Cooperative Artefact Memory) is a mobile-tagging based messaging system that allows designers to collaboratively store

  14. Inovation of the computer system for the WWER-440 simulator

    International Nuclear Information System (INIS)

    Schrumpf, L.

    1988-01-01

    The configuration of the WWER-440 simulator computer system consists of four SMEP computers. The basic data processing unit consists of two interlinked SM 52/11.M1 computers with 1 MB of main memory. This part of the computer system of the simulator controls the operation of the entire simulator, processes the programs of technology behavior simulation, of the unit information system and of other special systems, guarantees program support and the operation of the instructor's console. An SM 52/11 computer with 256 kB of main memory is connected to each unit. It is used as a communication unit for data transmission using the DASIO 600 interface. Semigraphic color displays are based on the microprocessor modules of the SM 50/40 and SM 53/10 kit supplemented with a modified TESLA COLOR 110 ST tv receiver. (J.B.). 1 fig

  15. Computer system performance measurement techniques for ARTS III computer systems.

    Science.gov (United States)

    1973-12-01

    Direct measurement of computer systems is of vital importance in: a) developing an intelligent grasp of the variables which affect overall performance; b) tuning the system for optimum benefit; c) determining under what conditions saturation thresholds...

  16. Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements

    Science.gov (United States)

    Sato, Naoyuki; Yamaguchi, Yoko

    Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.

  17. Computer Use and Its Effect on the Memory Process in Young and Adults

    Science.gov (United States)

    Alliprandini, Paula Mariza Zedu; Straub, Sandra Luzia Wrobel; Brugnera, Elisangela; de Oliveira, Tânia Pitombo; Souza, Isabela Augusta Andrade

    2013-01-01

    This work investigates the effect of computer use on the memory process in young people and adults under the Perceptual and Memory experimental conditions. The memory condition involved the phases of information acquisition and recovery over time intervals (2 min, 24 hours, and 1 week) in pre- and post-test situations (before and after the participants…

  18. Core status computing system

    International Nuclear Information System (INIS)

    Yoshida, Hiroyuki.

    1982-01-01

    Purpose: To calculate power distribution, flow rate and the like in the reactor core with high accuracy in a BWR type reactor. Constitution: Total flow rate signals, traversing in-core probe (TIP) signals as the neutron detector signals, thermal power signals and pressure signals are inputted into a process computer, where the power distribution and the flow rate distribution in the reactor core are calculated. A function generator connected to the process computer calculates the absolute flow rate passing through optional fuel assemblies using, as variables, flow rate signals from the introduction part for fuel assembly flow rate signals, data signals from the introduction part for the geometrical configuration data at the flow rate measuring site of fuel assemblies, total flow rate signals for the reactor core and the signals from the process computer. Numerical values thus obtained are given to the process computer as correction signals to perform correction for the experimental data. (Moriyama, K.)

  19. Capability-based computer systems

    CERN Document Server

    Levy, Henry M

    2014-01-01

    Capability-Based Computer Systems focuses on computer programs and their capabilities. The text first elaborates capability- and object-based system concepts, including capability-based systems, object-based approach, and summary. The book then describes early descriptor architectures and explains the Burroughs B5000, Rice University Computer, and Basic Language Machine. The text also focuses on early capability architectures. Dennis and Van Horn's Supervisor; CAL-TSS System; MIT PDP-1 Timesharing System; and Chicago Magic Number Machine are discussed. The book then describes Plessey System 25

  20. Computational implementation of a tunable multicellular memory circuit for engineered eukaryotic consortia

    Directory of Open Access Journals (Sweden)

    Josep eSardanyés

    2015-10-01

    Full Text Available Cells are complex machines capable of processing information by means of an entangled network of molecular interactions. A crucial component of these decision-making systems is the presence of memory and this is also a specially relevant target of engineered synthetic systems. A classic example of memory devices is a 1-bit memory element known as the flip-flop. Such system can be in principle designed using a single-cell implementation, but a direct mapping between standard circuit design and a living circuit can be cumbersome. Here we present a novel computational implementation of a 1-bit memory device using a reliable multicellular design able to behave as a set-reset flip-flop that could be implemented in yeast cells. The dynamics of the proposed synthetic circuit is investigated with a mathematical model using biologically-meaningful parameters. The circuit is shown to behave as a flip-flop in a wide range of parameter values. The repression strength for the NOT logics is shown to be crucial to obtain a good flip-flop signal. Our model also shows that the circuit can be externally tuned to achieve different memory states and dynamics, such as persistent and transient memory. We have characterised the parameter domains for robust memory storage and retrieval as well as the corresponding time response dynamics.
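    The set-reset latching described above rests on mutual repression. It can be sketched with a minimal two-variable toggle-switch model (a generic textbook form, not the authors' multicellular circuit; parameter values here are purely illustrative):

```python
def toggle_switch(u, v, alpha=4.0, n=2.0, dt=0.01, steps=5000):
    """Forward-Euler integration of two mutually repressing gene products.

        du/dt = alpha / (1 + v**n) - u
        dv/dt = alpha / (1 + u**n) - v

    Each species represses the other's production (Hill repression) and
    decays linearly, yielding two stable states: (u high, v low) or the
    reverse -- a biochemical set-reset flip-flop.
    """
    for _ in range(steps):
        du = alpha / (1.0 + v ** n) - u
        dv = alpha / (1.0 + u ** n) - v
        u, v = u + du * dt, v + dv * dt
    return u, v

# A transient "set" input (u starts high) is latched; "reset" latches the other state.
set_state = toggle_switch(3.0, 0.0)
reset_state = toggle_switch(0.0, 3.0)
```

Whichever species starts high wins and the state persists after the input is removed, which is the 1-bit memory the abstract describes; weakening the repression strength (lowering `alpha` or `n`) destroys bistability, matching the paper's observation that repression strength is crucial.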

  1. Metal oxide resistive random access memory based synaptic devices for brain-inspired computing

    Science.gov (United States)

    Gao, Bin; Kang, Jinfeng; Zhou, Zheng; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan

    2016-04-01

    The traditional Boolean computing paradigm based on the von Neumann architecture is facing great challenges for future information technology applications such as big data, the Internet of Things (IoT), and wearable devices, due to limited processing capability: binary data storage and computing, non-parallel data processing, and the bus requirement between memory units and logic units. The brain-inspired neuromorphic computing paradigm is believed to be one of the promising solutions for realizing more complex functions at a lower cost. To perform such brain-inspired computing with low cost and low power consumption, novel devices for use as electronic synapses are needed. Metal oxide resistive random access memory (ReRAM) devices have emerged as the leading candidate for electronic synapses. This paper comprehensively addresses the recent work on the design and optimization of metal oxide ReRAM-based synaptic devices. A performance enhancement methodology and optimized operation scheme to achieve analog resistive switching and low-energy training behavior are provided. A three-dimensional vertical synapse network architecture is proposed for high-density integration and low-cost fabrication. The impacts of the ReRAM synaptic device features on the performance of neuromorphic systems are also discussed on the basis of a constructed neuromorphic visual system with a pattern recognition function. Possible solutions to achieve high recognition accuracy and efficiency in neuromorphic systems are presented.
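    The core operation such ReRAM synaptic arrays accelerate is an analog vector-matrix multiply: row voltages drive devices whose conductances store the weights, and Kirchhoff's current law sums the products on each column. A schematic sketch (idealized: no wire resistance, device variability, or sneak-path currents):

```python
def crossbar_vmm(voltages, conductances):
    """One analog vector-matrix multiply on an idealized ReRAM crossbar.

    Row voltages V_i drive devices with conductances G_ij; Ohm's law and
    Kirchhoff's current law give column currents I_j = sum_i V_i * G_ij,
    i.e. a full synaptic weighted sum in a single parallel read step.
    """
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for v_i, row in zip(voltages, conductances):
        for j, g_ij in enumerate(row):
            currents[j] += v_i * g_ij
    return currents

# Two inputs feeding three output neurons (conductances = stored weights)
I = crossbar_vmm([1.0, 0.5], [[0.2, 0.4, 0.0],
                              [0.6, 0.0, 0.8]])
```

In hardware this sum happens in the analog domain in one step, which is why the multiply-accumulate cost is nearly independent of array size; the paper's device-level optimizations target how precisely and cheaply each G_ij can be programmed.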

  2. Secure computing on reconfigurable systems

    OpenAIRE

    Fernandes Chaves, R.J.

    2007-01-01

    This thesis proposes a Secure Computing Module (SCM) for reconfigurable computing systems. The SCM provides a protected and reliable computational environment, where data security and protection against malicious attacks to the system are assured. The SCM is strongly based on encryption algorithms and on the attestation of the executed functions. The use of the SCM on reconfigurable devices has the advantage of being highly adaptable to the application and the user requirements, while providing high performa...

  3. Novel spintronics devices for memory and logic: prospects and challenges for room temperature all spin computing

    Science.gov (United States)

    Wang, Jian-Ping

    An energy efficient memory and logic device for the post-CMOS era has been the goal of a variety of research fields. The limits of scaling, which we expect to reach by the year 2025, demand that future advances in computational power will not be realized from ever-shrinking device sizes, but rather by innovative designs and new materials and physics. Magnetoresistive devices have been a promising candidate for future integrated magnetic computation because of their unique non-volatility and functionalities. The application of perpendicular magnetic anisotropy for potential STT-RAM applications was demonstrated and has since been intensively investigated by both academia and industry groups, but there is no clear pathway for how scaling will eventually work for both memory and logic applications. One of the main reasons is that there is no demonstrated material stack candidate that could lead to a scaling scheme down to sub-10 nm. Another challenge for the use of magnetoresistive devices in logic applications is the available switching speed and writing energy. Although good progress has been made in demonstrating the fast switching of a thermally stable magnetic tunnel junction (MTJ) down to 165 ps, it is still several times slower than its CMOS counterpart. In this talk, I will review the recent progress by my research group and my C-SPIN colleagues, then discuss the opportunities, challenges, and some potential pathways for magnetoresistive devices for memory and logic applications and their integration for a room temperature all-spin computing system.

  4. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    Science.gov (United States)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input work load of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.
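    The matching idea in such a scheduler/allocator can be sketched as a greedy admission policy over two resources. This is an illustrative reconstruction, not the paper's actual algorithm; job names, requirement figures, and capacities are hypothetical:

```python
def schedule_jobs(jobs, mem_capacity, bw_capacity):
    """Greedy first-fit admission over two coupled resources.

    A job is admitted only if both its memory-space and its estimated
    memory-bandwidth requirements still fit under the capacities;
    otherwise it is deferred to a later allocation round.
    """
    admitted, deferred = [], []
    mem_used = bw_used = 0.0
    for name, mem, bw in jobs:
        if mem_used + mem <= mem_capacity and bw_used + bw <= bw_capacity:
            admitted.append(name)
            mem_used += mem
            bw_used += bw
        else:
            deferred.append(name)
    return admitted, deferred

# Hypothetical recurrent job set: (name, memory units, bandwidth units)
jobs = [("job_a", 4, 2), ("job_b", 3, 5), ("job_c", 2, 1)]
admitted, deferred = schedule_jobs(jobs, mem_capacity=8, bw_capacity=6)
```

A feedback version, as in the paper, would additionally revise the bandwidth estimates from observed utilization before each allocation round.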

  5. Open system evolution and 'memory dressing'

    International Nuclear Information System (INIS)

    Knezevic, Irena; Ferry, David K.

    2004-01-01

    Due to recent advances in quantum information, as well as in mesoscopic and nanoscale physics, the interest in the theory of open systems and decoherence has significantly increased. In this paper, we present an interesting approach to solving a time-convolutionless equation of motion for the open system reduced density matrix beyond the limit of weak coupling with the environment. Our approach is based on identifying an effective, memory-containing interaction in the equations of motion for the representation submatrices of the evolution operator (these submatrices are written in a special basis, adapted for the 'partial-trace-free' approach, in the system+environment Liouville space). We then identify the 'memory dressing', a quantity crucial for solving the equation of motion for the reduced density matrix, which separates the effective from the real physical interaction. The memory dressing obeys a self-contained nonlinear equation of motion, which we solve exactly. The solution can be represented in a diagrammatic fashion after introducing an 'information exchange propagator', a quantity that describes the transfer of information to and from the system, so the cumulative effect of the information exchange results in the memory dressing. In the case of weak system-environment coupling, we present the expansion of the reduced density matrix in terms of the physical interaction up to the third order. However, our approach is capable of going beyond the weak-coupling limit, and we show how short-time behavior of an open system can be analyzed for arbitrary coupling strength. We illustrate the approach with a simple numerical example of single-particle level broadening for a two-particle interacting system on short time scales. Furthermore, we point out a way to identify the structure of decoherence-free subspaces using the present approach

  6. Irrelevant sensory stimuli interfere with working memory storage: evidence from a computational model of prefrontal neurons.

    Science.gov (United States)

    Bancroft, Tyler D; Hockley, William E; Servos, Philip

    2013-03-01

    The encoding of irrelevant stimuli into the memory store has previously been suggested as a mechanism of interference in working memory (e.g., Lange & Oberauer, Memory, 13, 333-339, 2005; Nairne, Memory & Cognition, 18, 251-269, 1990). Recently, Bancroft and Servos (Experimental Brain Research, 208, 529-532, 2011) used a tactile working memory task to provide experimental evidence that irrelevant stimuli were, in fact, encoded into working memory. In the present study, we replicated Bancroft and Servos's experimental findings using a biologically based computational model of prefrontal neurons, providing a neurocomputational model of overwriting in working memory. Furthermore, our modeling results show that inhibition acts to protect the contents of working memory, and they suggest a need for further experimental research into the capacity of vibrotactile working memory.

  7. Computed tomography system

    International Nuclear Information System (INIS)

    Lambert, T.W.; Blake, J.E.

    1981-01-01

    This invention relates to computed tomography and is particularly concerned with determining the CT numbers of zones of interest in an image displayed on a cathode ray tube which zones lie in the so-called level or center of the gray scale window. (author)

  8. Attacker Modelling in Ubiquitous Computing Systems

    DEFF Research Database (Denmark)

    Papini, Davide

    Within the last five to ten years we have experienced an incredible growth of ubiquitous technologies which has allowed for improvements in several areas, including energy distribution and management, health care services, border surveillance, secure monitoring and management of buildings... ...in with our everyday life. This future is visible to everyone nowadays: terms like smartphone, cloud, sensor, network etc. are widely known and used in our everyday life. But what about the security of such systems? Ubiquitous computing devices can be limited in terms of energy, computing power and memory... The implementation of cryptographic mechanisms that comes from classical communication systems could be too heavy for the resources of such devices, thus forcing the use of lighter security measures, if any at all. The same goes for the implementation of security protocols. The protocols employed in classical...

  9. A learnable parallel processing architecture towards unity of memory and computing

    Science.gov (United States)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  10. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-08-14

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve speed by 76.8% and reduce power dissipation by 60.3%, together with an aggressive 700-fold reduction in circuit area.

  11. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  12. Factors that influence the relative use of multiple memory systems.

    Science.gov (United States)

    Packard, Mark G; Goodman, Jarid

    2013-11-01

    Neurobehavioral evidence supports the existence of at least two anatomically distinct "memory systems" in the mammalian brain that mediate dissociable types of learning and memory: a "cognitive" memory system dependent upon the hippocampus and a "stimulus-response/habit" memory system dependent upon the dorsolateral striatum. Several findings indicate that despite their anatomical and functional distinctiveness, hippocampal- and dorsolateral striatal-dependent memory systems may potentially interact and that, depending on the learning situation, this interaction may be cooperative or competitive. One approach to examining the neural mechanisms underlying these interactions is to consider how various factors influence the relative use of multiple memory systems. The present review examines several such factors, including information compatibility, temporal sequence of training, the visual sensory environment, reinforcement parameters, emotional arousal, and memory modulatory systems. Altering these parameters can lead to selective enhancements of either hippocampal-dependent or dorsolateral striatal-dependent memory, and bias animals toward the use of either cognitive or habit memory in dual-solution tasks that may be solved adequately with either memory system. In many learning situations, the influence of such experimental factors on the relative use of memory systems likely reflects a competitive interaction between the systems. Research examining how various factors influence the relative use of multiple memory systems may be a useful method for investigating how these systems interact with one another. Copyright © 2013 Wiley Periodicals, Inc.

  13. Proposing an Abstracted Interface and Protocol for Computer Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Resnick, David Richard [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Ignatowski, Mike [AMD Research

    2014-07-01

    While it made sense for historical reasons to develop different interfaces and protocols for memory channels, CPU to CPU interactions, and I/O devices, ongoing developments in the computer industry are leading to more converged requirements and physical implementations for these interconnects. As it becomes increasingly common for advanced components to contain a variety of computational devices as well as memory, the distinction between processors, memory, accelerators, and I/O devices becomes increasingly blurred. As a result, the interface requirements among such components are converging. There is also a wide range of new disruptive technologies that will impact the computer market in the coming years, including 3D integration and emerging NVRAM memory. Optimal exploitation of these technologies cannot be done with the existing memory, storage, and I/O interface standards. The computer industry has historically made major advances when industry players have been able to add innovation behind a standard interface. The standard interface provides a large market for their products and enables relatively quick and widespread adoption. To enable a new wave of innovation in the form of advanced memory products and accelerators, we need a new standard interface explicitly designed to provide both the performance and flexibility to support new system integration solutions.

  14. Proposing an Abstracted Interface and Protocol for Computer Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Resnick, David Richard [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ignatowski, Mike [AMD

    2014-07-01

    While it made sense for historical reasons to develop different interfaces and protocols for memory channels, CPU to CPU interactions, and I/O devices, ongoing developments in the computer industry are leading to more converged requirements and physical implementations for these interconnects. As it becomes increasingly common for advanced components to contain a variety of computational devices as well as memory, the distinction between processors, memory, accelerators, and I/O devices becomes increasingly blurred. As a result, the interface requirements among such components are converging. There is also a wide range of new disruptive technologies that will impact the computer market in the coming years, including 3D integration and emerging NVRAM memory. Optimal exploitation of these technologies cannot be done with the existing memory, storage, and I/O interface standards. The computer industry has historically made major advances when industry players have been able to add innovation behind a standard interface. The standard interface provides a large market for their products and enables relatively quick and widespread adoption. To enable a new wave of innovation in the form of advanced memory products and accelerators, we need a new standard interface explicitly designed to provide both the performance and flexibility to support new system integration solutions.

  15. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    Science.gov (United States)

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large share of overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors yields concurrency and better performance, but it also inflates the communication time spent in MPI_Allgather. This necessitates an improved communication methodology to decrease the spike exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication; the use of a recursive doubling mechanism achieves efficient communication between the processors in a fixed number of steps. This approach enhanced communication concurrency and improved overall runtime, making NEURON more efficient for simulation of large neuronal network models.
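
    The recursive doubling pattern referred to in this abstract can be sketched in plain Python as a simulation of the exchange rounds (all names here are ours, and this is not actual MPI/RMA code): in round k, rank r pairs with rank r XOR 2^k, doubling the data each rank holds, so P ranks complete an allgather in log2(P) rounds.

    ```python
    def allgather_recursive_doubling(local_data):
        """Simulate a recursive-doubling allgather over n ranks (n a power of two).

        local_data[r] is rank r's contribution (e.g. its spike buffer).
        After log2(n) pairwise exchange rounds, every rank holds all
        contributions, mirroring the communication structure the paper
        substitutes for a plain MPI_Allgather.
        """
        n = len(local_data)
        # Each rank starts with only its own block, keyed by origin rank.
        buffers = [{r: local_data[r]} for r in range(n)]
        step = 1
        while step < n:
            # Round: rank r swaps everything it has with rank r XOR step.
            new_buffers = []
            for r in range(n):
                partner = r ^ step
                merged = dict(buffers[r])
                merged.update(buffers[partner])
                new_buffers.append(merged)
            buffers = new_buffers
            step *= 2
        # Reassemble each rank's view in rank order.
        return [[buf[r] for r in range(n)] for buf in buffers]
    ```

    With 4 ranks the exchange completes in 2 rounds instead of the 3 sequential steps a ring would need, which is the concurrency gain the paper exploits.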

  16. Strobe-margin test for plated memory systems

    Science.gov (United States)

    Anspach, T. E.; Clarke, J. W.; Constable, R. C.

    1978-01-01

    Technique measures performance of plated-wire memories. Strobe-margin test (SMT) utilizes worst-case testing and automatically gives exact strobe margin. Test is automatic; thus, memory system-level test is superior to tests at component level that use artificial test conditions. Test is significant tool in design and test of plated-wire memory systems. It can rapidly quantify memory-system margin on each production unit and impact of any design changes.

  17. A Scalable Unsegmented Multiport Memory for FPGA-Based Systems

    Directory of Open Access Journals (Sweden)

    Kevin R. Townsend

    2015-01-01

    Full Text Available On-chip multiport memory cores are crucial primitives for many modern high-performance reconfigurable architectures and multicore systems. Previous approaches for scaling memory cores come at the cost of operating frequency, communication overhead, and logic resources without increasing the storage capacity of the memory. In this paper, we present two approaches for designing multiport memory cores that are suitable for reconfigurable accelerators with substantial on-chip memory or complex communication. Our design approaches tackle these challenges by banking RAM blocks and utilizing interconnect networks, which allows scaling without sacrificing logic resources. With banking, memory congestion is unavoidable, so we evaluate our multiport memory cores under different memory access patterns to gain insights into different design trade-offs. We demonstrate our implementation with up to 256 memory ports using a Xilinx Virtex-7 FPGA. Our experimental results report high throughput memories with resource usage that scales with the number of ports.
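
    The banking trade-off the abstract describes can be illustrated with a small sketch (a hypothetical helper of our own, not code from the paper): concurrent port requests map to banks by address modulo the bank count, and only same-bank requests must be serialized.

    ```python
    def schedule_accesses(addresses, num_banks):
        """Group concurrent port requests by bank (addr % num_banks).

        Requests hitting distinct banks proceed in one cycle; requests to
        the same bank conflict and are serialized, so the cycles needed
        equal the size of the largest per-bank group. This is the memory
        congestion effect that makes access patterns matter for banked
        multiport designs.
        """
        by_bank = {}
        for addr in addresses:
            by_bank.setdefault(addr % num_banks, []).append(addr)
        cycles = max((len(group) for group in by_bank.values()), default=0)
        return by_bank, cycles
    ```

    For example, four ports reading addresses 0..3 across 4 banks finish in one cycle, while three ports all reading addresses congruent mod 4 take three.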

  18. Computer-aided protective system (CAPS)

    International Nuclear Information System (INIS)

    Squire, R.K.

    1988-01-01

    A method of improving the security of materials in transit is described. The system provides a continuously monitored position location system for the transport vehicle, an internal computer-based geographic delimiter that makes continuous comparisons of actual positions with the preplanned routing and schedule, and a tamper detection/reaction system. The position comparison is utilized to institute preprogrammed reactive measures if the carrier is taken off course or schedule, penetrated, or otherwise interfered with. The geographic locater could be an independent internal platform or an external signal-dependent system utilizing GPS, Loran or a similar source of geographic information; a small (micro) computer could provide adequate memory and computational capacity; the insurance of integrity of the system indicates the need for a tamper-proof container and built-in intrusion sensors. A variant of the system could provide real-time transmission of the vehicle position and condition to a central control point; such transmission could be encrypted to preclude spoofing

  19. Working Memory Interventions with Children: Classrooms or Computers?

    Science.gov (United States)

    Colmar, Susan; Double, Kit

    2017-01-01

    The importance of working memory to classroom functioning and academic outcomes has led to the development of many interventions designed to enhance students' working memory. In this article we briefly review the evidence for the relative effectiveness of classroom and computerised working memory interventions in bringing about measurable and…

  20. Students "Hacking" School Computer Systems

    Science.gov (United States)

    Stover, Del

    2005-01-01

    This article deals with students hacking school computer systems. School districts are getting tough with students "hacking" into school computers to change grades, poke through files, or just pit their high-tech skills against district security. Dozens of students have been prosecuted recently under state laws on identity theft and unauthorized…

  1. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan eAgarwal

    2016-01-01

    Full Text Available The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
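
    The two crossbar kernels, the parallel read (a vector-matrix multiply) and the parallel write (a rank-1 update), can be sketched as an idealized numerical model (function names are ours; a physical crossbar computes these with analog currents via Ohm's and Kirchhoff's laws rather than explicit loops):

    ```python
    def crossbar_read(G, V):
        """Parallel read: column currents I_j = sum_i G[i][j] * V[i].

        G[i][j] models the conductance of the device at row i, column j.
        Driving all rows with voltages V produces every column current at
        once, which is the source of the O(N) energy advantage over
        fetching each operand from a digital memory.
        """
        n_rows, n_cols = len(G), len(G[0])
        return [sum(G[i][j] * V[i] for i in range(n_rows))
                for j in range(n_cols)]

    def crossbar_write(G, u, v, rate=1.0):
        """Parallel write: rank-1 update G += rate * outer(u, v), modeling
        simultaneous row pulses u and column pulses v across the array."""
        for i in range(len(G)):
            for j in range(len(G[0])):
                G[i][j] += rate * u[i] * v[j]
        return G
    ```

    These two kernels are exactly the operations a sparse coding dictionary needs for inference (read) and learning (update), which is why mapping the algorithm onto the crossbar yields the whole-algorithm energy reduction claimed above.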

  2. A scalable parallel black oil simulator on distributed memory parallel computers

    Science.gov (United States)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
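
    The inexact Newton technique mentioned in this abstract can be illustrated with a toy example of our own (not the simulator's code): the linear correction J(x) dx = -F(x) is solved only approximately at each step, standing in for an iterative, preconditioner-accelerated inner solver, yet the outer loop still converges.

    ```python
    def inexact_newton(F, approx_solve, x0, tol=1e-8, max_iter=50):
        """Outer Newton loop with an inexact inner linear solve.

        approx_solve(x, rhs) returns an approximate solution dx of
        J(x) dx = rhs; tolerating inner inaccuracy is what makes very
        large nonlinear systems (e.g. reservoir models) tractable.
        """
        x = x0
        for _ in range(max_iter):
            fx = F(x)
            if abs(fx) < tol:
                break
            x = x + approx_solve(x, -fx)
        return x

    # Find sqrt(2): the inner "solve" deliberately perturbs the true
    # Jacobian 2x, mimicking an approximately converged linear solver.
    root = inexact_newton(lambda x: x * x - 2.0,
                          lambda x, rhs: rhs / (2.0 * x + 0.01),
                          x0=1.0)
    ```

    The perturbed inner solve slows each step slightly but the iteration still drives the residual below tolerance, which is the essential robustness property exploited at scale.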

  3. Computer systems performance measurement techniques.

    Science.gov (United States)

    1971-06-01

    Computer system performance measurement techniques, tools, and approaches are presented as a foundation for future recommendations regarding the instrumentation of the ARTS ATC data processing subsystem for purposes of measurement and evaluation.

  4. Continuous-variable quantum computing in optical time-frequency modes using quantum memories.

    Science.gov (United States)

    Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A

    2014-09-26

    We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.

  5. Generalization through the Recurrent Interaction of Episodic Memories: A Model of the Hippocampal System

    Science.gov (United States)

    Kumaran, Dharshan; McClelland, James L.

    2012-01-01

    In this article, we present a perspective on the role of the hippocampal system in generalization, instantiated in a computational model called REMERGE (recurrency and episodic memory results in generalization). We expose a fundamental, but neglected, tension between prevailing computational theories that emphasize the function of the hippocampus…

  6. Computational mechanics of classical spin systems

    Science.gov (United States)

    Feldman, David Polant

    How does nature self-organize and how can scientists discover such organization? Is there an objective notion of pattern, or is the discovery of patterns a purely subjective process? And what mathematical vocabulary is appropriate for describing and quantifying pattern, structure, and organization? This dissertation compares and contrasts the way in which statistical mechanics, information theory, and computational mechanics address these questions. After an in-depth review of the statistical mechanical, information theoretic, and computational mechanical approaches to structure and pattern, I present exact analytic results for the excess entropy and ɛ-machines for one-dimensional, finite-range discrete classical spin systems. The excess entropy, a form of mutual information, is an information theoretic measure of apparent spatial memory. The ɛ-machine, the central object of computational mechanics, is defined as the minimal model capable of statistically reproducing a given configuration, where the model is chosen to belong to the least powerful model class(es) in a stochastic generalization of the discrete computation hierarchy. These results for one-dimensional spin systems demonstrate that the measures of pattern from information theory and computational mechanics differ from known thermodynamic and statistical mechanical functions. Moreover, they capture important structural features that are otherwise missed. In particular, the excess entropy serves to detect ordered, low entropy density patterns. It is superior in many respects to other functions used to probe the structure of a distribution, such as structure factors and the specific heat. More generally, ɛ-machines are seen to be the most direct approach to revealing the group and semigroup symmetries possessed by the spatial patterns and to estimating the minimum amount of memory required to reproduce the configuration ensemble, a quantity known as the statistical complexity. It is shown that the
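
    The excess entropy described here can be estimated from data in a few lines. This is an illustrative finite-L estimator of our own, not the dissertation's exact analytic results: with block entropies H(L) and entropy rate h approximated by H(L) - H(L-1), the estimate is E ≈ H(L) - L·h.

    ```python
    from collections import Counter
    from math import log2

    def block_entropy(seq, L):
        """Shannon entropy H(L), in bits, of the length-L blocks in seq."""
        counts = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L + 1))
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def excess_entropy_estimate(seq, L):
        """Finite-L estimate of the excess entropy E = H(L) - L*h, with
        the entropy rate h approximated by the increment H(L) - H(L-1)."""
        h = block_entropy(seq, L) - block_entropy(seq, L - 1)
        return block_entropy(seq, L) - L * h
    ```

    For a period-2 configuration (…0101…) the entropy rate is zero while one bit of apparent spatial memory remains, so the estimate approaches E = 1 bit, the kind of ordered, low-entropy-density pattern the excess entropy is said to detect.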

  7. Exploring the Future of Out-of-Core Computing with Compute-Local Non-Volatile Memory

    Directory of Open Access Journals (Sweden)

    Myoungsoo Jung

    2014-01-01

    Full Text Available Drawing parallels to the rise of general purpose graphical processing units (GPGPUs as accelerators for specific high-performance computing (HPC workloads, there is a rise in the use of non-volatile memory (NVM as accelerators for I/O-intensive scientific applications. However, existing works have explored use of NVM within dedicated I/O nodes, which are distant from the compute nodes that actually need such acceleration. As NVM bandwidth begins to out-pace point-to-point network capacity, we argue for the need to break from the archetype of completely separated storage. Therefore, in this work we investigate co-location of NVM and compute by varying I/O interfaces, file systems, types of NVM, and both current and future SSD architectures, uncovering numerous bottlenecks implicit in these various levels in the I/O stack. We present novel hardware and software solutions, including the new Unified File System (UFS, to enable fuller utilization of the new compute-local NVM storage. Our experimental evaluation, which employs a real-world Out-of-Core (OoC HPC application, demonstrates throughput increases in excess of an order of magnitude over current approaches.

  8. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple instruction multiple data machines are indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
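
    The paper's scheme is based on homotopic relations; as a generic stand-in (our own sketch, not the authors' formulation), a simple algebraic transfinite (Coons-patch) blend shows why such schemes parallelize with minimal interprocessor communication: each grid point depends only on the boundary curves, never on its neighbors.

    ```python
    def coons_point(u, v, bottom, top, left, right):
        """One grid point from an algebraic (transfinite / Coons-patch)
        blend of four boundary curves, each mapping a parameter in
        [0, 1] to an (x, y) pair.

        Because every interior point is a closed-form function of the
        boundaries alone, processors can generate disjoint index ranges
        independently, with no halo exchange.
        """
        def blend(k):  # k = 0 selects x, k = 1 selects y
            return ((1 - v) * bottom(u)[k] + v * top(u)[k]
                    + (1 - u) * left(v)[k] + u * right(v)[k]
                    - (1 - u) * (1 - v) * bottom(0.0)[k]
                    - u * (1 - v) * bottom(1.0)[k]
                    - (1 - u) * v * top(0.0)[k]
                    - u * v * top(1.0)[k])
        return blend(0), blend(1)
    ```

    On the unit square (straight boundary edges) the blend reduces to bilinear interpolation, e.g. the center parameters (0.5, 0.5) map to the point (0.5, 0.5).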

  9. Multiscale computer modeling of textured shape memory material

    International Nuclear Information System (INIS)

    Makarenkov, D.Yu.

    2000-01-01

    The general aim of this work was to create a computer model predicting the strain accumulated and then recovered by nitinol superelastic textured sheets upon the reversible martensitic transformation. With the aid of an experimental orientation distribution function (ODF), connecting the microscale (grain) and macroscale (semiproduct) levels, this was realized through the following steps. Tensile loading was consecutively applied to the shape memory nitinol sheet in all directions from the rolling to the transverse direction. The external stress was transferred to the micro level (each grain), where the crystallographic strain obeying the minimal strain energy condition was chosen. These accumulated deformations were then translated back to the macrolevel through the orientation distribution function. At this point, to obtain the macrostrain accumulated by the whole sheet, direct weighted summation of grain-accumulated strains was used, i.e., the input from each grain orientation is assumed to be proportional to the corresponding ODF coefficient. The new HELENE model was then validated for its isotropy in the case of a constant ODF, and also for anisotropy effects arising from the typical experimental ODF. It was also demonstrated how step-by-step texture sharpening continuously increases the strain anisotropy until it reaches the single-crystal strain distribution of the unique grain orientation in the sheet plane. (author)

  10. Retrofitting of NPP Computer systems

    International Nuclear Information System (INIS)

    Pettersen, G.

    1994-01-01

    Retrofitting of nuclear power plant control rooms is a continuing process for most utilities. This involves introducing and/or extending computer-based solutions for surveillance and control as well as improving the human-computer interface. The paper describes typical requirements when retrofitting NPP process computer systems, and focuses on the activities of the Institutt for energiteknikk, OECD Halden Reactor Project with respect to such retrofitting, using examples from actual delivery projects. In particular, a project carried out for Forsmarksverket in Sweden comprising an upgrade of the operator system in the control rooms of units 1 and 2 is described. As many of the problems of retrofitting NPP process computer systems are similar to such work in other kinds of process industries, an example from a non-nuclear application area is also given

  11. Memory controllers for real-time embedded systems predictable and composable real-time systems

    CERN Document Server

    Akesson, Benny

    2012-01-01

      Verification of real-time requirements in systems-on-chip becomes more complex as more applications are integrated. Predictable and composable systems can manage the increasing complexity using formal verification and simulation.  This book explains the concepts of predictability and composability and shows how to apply them to the design and analysis of a memory controller, which is a key component in any real-time system. This book is generally intended for readers interested in Systems-on-Chips with real-time applications.   It is especially well-suited for readers looking to use SDRAM memories in systems with hard or firm real-time requirements. There is a strong focus on real-time concepts, such as predictability and composability, as well as a brief discussion about memory controller architectures for high-performance computing. Readers will learn step-by-step how to go from an unpredictable SDRAM memory, offering highly variable bandwidth and latency, to a predictable and composable shared memory...

  12. Backup computer theory for process control computer systems

    International Nuclear Information System (INIS)

    Davidson, K.E.

    1977-01-01

    Process control computer systems should have some type of backup system which could automatically assume control of a process if the main computer system should fail. This subject is discussed both in general and as it relates to a specific process: the computer control system for the Almaraz Nuclear Power Plant in Spain. The system configuration includes two main computer control systems, with one backup system which can replace either of the main systems. Criteria for determining an adequate backup system are discussed, including cost, reliability, and criticality of the computer system and process

  13. Limbic systems for emotion and for memory, but no single limbic system.

    Science.gov (United States)

    Rolls, Edmund T

    2015-01-01

    The concept of a (single) limbic system is shown to be outmoded. Instead, anatomical, neurophysiological, functional neuroimaging, and neuropsychological evidence is described that anterior limbic and related structures including the orbitofrontal cortex and amygdala are involved in emotion, reward valuation, and reward-related decision-making (but not memory), with the value representations transmitted to the anterior cingulate cortex for action-outcome learning. In this 'emotion limbic system' a computational principle is that feedforward pattern association networks learn associations from visual, olfactory and auditory stimuli, to primary reinforcers such as taste, touch, and pain. In primates including humans this learning can be very rapid and rule-based, with the orbitofrontal cortex overshadowing the amygdala in this learning important for social and emotional behaviour. Complementary evidence is described showing that the hippocampus and limbic structures to which it is connected including the posterior cingulate cortex and the fornix-mammillary body-anterior thalamus-posterior cingulate circuit are involved in episodic or event memory, but not emotion. This 'hippocampal system' receives information from neocortical areas about spatial location, and objects, and can rapidly associate this information together by the different computational principle of autoassociation in the CA3 region of the hippocampus involving feedback. The system can later recall the whole of this information in the CA3 region from any component, a feedback process, and can recall the information back to neocortical areas, again a feedback (to neocortex) recall process. Emotion can enter this memory system from the orbitofrontal cortex etc., and be recalled back to the orbitofrontal cortex etc. during memory recall, but the emotional and hippocampal networks or 'limbic systems' operate by different computational principles, and operate independently of each other except insofar as an
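The autoassociative computation attributed to CA3 above — storing a pattern by associating its elements with one another, then recalling the whole from any part — can be illustrated with a minimal Hebbian network. This is a generic Hopfield-style sketch, not Rolls's actual model; the pattern and network size are arbitrary.

```python
# Store a pattern with a Hebbian outer-product rule, then complete it
# from a degraded cue -- the autoassociative (recall-from-part) principle.

def train(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]   # recurrent (feedback) weights
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:              # no self-connections
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, cue, steps=5):
    s = list(cue)
    n = len(s)
    for _ in range(steps):              # iterate the feedback dynamics
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
W = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]                    # corrupt one element of the cue
print(recall(W, noisy) == pattern)      # -> True: the whole is recalled
```

With a single stored pattern and one corrupted element, the first update already restores the flipped component, because the intact majority of the cue drives it back through the feedback weights.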

  14. Distributed computing system with dual independent communications paths between computers and employing split tokens

    Science.gov (United States)

    Rasmussen, Robert D. (Inventor); Manning, Robert M. (Inventor); Lewis, Blair F. (Inventor); Bolotin, Gary S. (Inventor); Ward, Richard S. (Inventor)

    1990-01-01

    This is a distributed computing system providing flexible fault tolerance; ease of software design and concurrency specification; and dynamic load balancing. The system comprises a plurality of computers, each having a first input/output interface and a second input/output interface for interfacing to communications networks, each second input/output interface including a bypass for bypassing the associated computer. A global communications network interconnects the first input/output interfaces, providing each computer the ability to broadcast messages simultaneously to the remainder of the computers. A meshwork communications network interconnects the second input/output interfaces, providing each computer with the ability to establish a communications link with another of the computers, bypassing the remainder of the computers. Each computer is controlled by a resident copy of a common operating system. Communications between respective computers are by means of split tokens, each having a moving first portion which is sent from computer to computer and a resident second portion which is disposed in the memory of at least one of the computers, wherein the location of the second portion is part of the first portion. The split tokens represent both functions to be executed by the computers and data to be employed in the execution of the functions. The first input/output interfaces each include logic for detecting a collision between messages and for terminating the broadcasting of a message, whereby collisions between messages are detected and avoided.
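The split-token scheme can be paraphrased in code: a moving portion travels between computers and carries the location of a resident portion held in one computer's memory. All names here (`MovingToken`, `home_node`, `slot`) are illustrative inventions, not terms from the patent.

```python
from dataclasses import dataclass

@dataclass
class MovingToken:
    function: str        # function the receiving computer should execute
    home_node: int       # node holding the resident (data) portion
    slot: int            # where the resident portion lives on that node

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.memory = {}                 # slot -> resident data

    def store_resident(self, slot, data):
        self.memory[slot] = data

network = {0: Node(0), 1: Node(1), 2: Node(2)}
network[1].store_resident(7, {"operand": 21})                 # resident portion
token = MovingToken(function="double", home_node=1, slot=7)   # moving portion

def execute(token, network):
    # The receiving node fetches the data via the location carried in the
    # moving portion, then applies the function the token names.
    data = network[token.home_node].memory[token.slot]
    if token.function == "double":
        return data["operand"] * 2

print(execute(token, network))   # -> 42
```

The key property sketched is that the moving portion is small (a name plus an address) while the bulky data stays resident, and the location travels as part of the token itself.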

  15. Generation-based memory synchronization in a multiprocessor system with weakly consistent memory accesses

    Science.gov (United States)

    Ohmacht, Martin

    2014-09-09

    In a multiprocessor system, a central memory synchronization module coordinates memory synchronization requests responsive to memory access requests in flight, a generation counter, and a reclaim pointer. The central module communicates via point-to-point communication. The module includes a global OR reduce tree for each memory access requesting device, for detecting memory access requests in flight. An interface unit is implemented associated with each processor requesting synchronization. The interface unit includes multiple generation completion detectors. The generation count and reclaim pointer do not pass one another.
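A software paraphrase of the generation mechanism, under simplifying assumptions: accesses are tagged with the current generation, a synchronization request opens a new generation, and the reclaim pointer advances only over fully drained generations, so it never passes the generation counter. The class and method names are invented for illustration; the patented hardware detects in-flight accesses with OR-reduce trees rather than per-generation counters.

```python
class GenerationTracker:
    def __init__(self):
        self.generation = 0      # current generation counter
        self.reclaim = 0         # trails generation; never passes it
        self.in_flight = {}      # generation -> outstanding access count

    def start_access(self):
        self.in_flight[self.generation] = self.in_flight.get(self.generation, 0) + 1
        return self.generation   # tag the access with its generation

    def finish_access(self, gen):
        self.in_flight[gen] -= 1

    def request_sync(self):
        """Close the current generation and return it; later accesses
        belong to a new generation and need not be waited for."""
        g = self.generation
        self.generation += 1
        return g

    def sync_complete(self, g):
        done = all(self.in_flight.get(k, 0) == 0 for k in range(self.reclaim, g + 1))
        if done:
            self.reclaim = max(self.reclaim, g + 1)   # never passes generation
        return done

t = GenerationTracker()
a = t.start_access()
g = t.request_sync()          # accesses after this point are a new generation
b = t.start_access()
print(t.sync_complete(g))     # False: access `a` is still in flight
t.finish_access(a)
print(t.sync_complete(g))     # True: generation g has drained
```

Access `b` never delays the synchronization, which is the point of tagging by generation: only accesses in flight at or before the sync request must drain.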

  16. Interactive Electronic Circuit Simulation on Small Computer Systems

    Science.gov (United States)

    1979-11-01

    State Circuits, SC-11, No. 5, 730-732, October 1976. 3. A. R. Newton and G. L. Taylor, BIASL.25, A MOS Circuit Simulator, Tenth Annual Asilomar ...Analysis Time, Accuracy, and Memory Requirement Tradeoffs in SPICE2, Eleventh Annual Asilomar Conference on Circuits, Systems and Computers

  17. Survivable Avionics Computer System.

    Science.gov (United States)

    1980-11-01

    T. HALL, AFWAL/AAA-1. Contract F33-615-80-C-1014, SRI Project 1314. Approved by: CHARLES J. SHOENS, Director, Systems Techniques Laboratory; DAVID A... [remainder of the scanned record is illegible]

  18. Management Information System & Computer Applications

    OpenAIRE

    Sreeramana Aithal

    2017-01-01

    The book contains following Chapters : Chapter 1 : Introduction to Management Information Systems, Chapter 2 : Structure of MIS, Chapter 3 : Planning for MIS, Chapter 4 : Introduction to Computers Chapter 5 : Decision Making Process in MIS Chapter 6 : Approaches for System Development Chapter 7 : Form Design Chapter 8 : Charting Techniques Chapter 9 : System Analysis & Design Chapter 10 : Applications of MIS in Functional Areas Chapter 11 : System Implement...

  19. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.

    2011-05-14

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  20. A system for simulating shared memory in heterogeneous distributed-memory networks with specialization for robotics applications

    Energy Technology Data Exchange (ETDEWEB)

    Jones, J.P.; Bangs, A.L.; Butler, P.L.

    1991-01-01

    Hetero Helix is a programming environment which simulates shared memory on a heterogeneous network of distributed-memory computers. The machines in the network may vary with respect to their native operating systems and internal representation of numbers. Hetero Helix presents a simple programming model to developers, and also considers the needs of designers, system integrators, and maintainers. The key software technology underlying Hetero Helix is the use of a "compiler" which analyzes the data structures in shared memory and automatically generates code which translates data representations from the format native to each machine into a common format, and vice versa. The design of Hetero Helix was motivated in particular by the requirements of robotics applications. Hetero Helix has been used successfully in an integration effort involving 27 CPUs in a heterogeneous network and a body of software totaling roughly 100,000 lines of code. 25 refs., 6 figs.
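The effect of the marshalling code that such a "compiler" generates can be sketched with Python's `struct` module: each machine converts its native record layout to a common (here big-endian) wire format and back. The two-field record layout is an invented example, not Hetero Helix's actual format.

```python
import struct

RECORD = "if"   # sample layout: one 32-bit int and one 32-bit float

def to_common(values):
    """Translate native values into the network-wide common format."""
    return struct.pack(">" + RECORD, *values)     # ">" = big-endian

def from_common(buf):
    """Translate common-format bytes back into native values."""
    return struct.unpack(">" + RECORD, buf)

# A little-endian node's native byte layout differs from the common one...
native_le = struct.pack("<" + RECORD, 42, 1.5)
common = to_common((42, 1.5))
print(native_le == common)       # False: the representations differ
# ...but every node recovers identical values after translation.
print(from_common(common))       # (42, 1.5)
```

Generating these `pack`/`unpack` pairs automatically from a declaration of the shared data structures is exactly the job the abstract assigns to the Hetero Helix compiler.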

  1. Computational Intelligence for Engineering Systems

    CERN Document Server

    Madureira, A; Vale, Zita

    2011-01-01

    "Computational Intelligence for Engineering Systems" provides an overview and original analysis of new developments and advances in several areas of computational intelligence. Computational Intelligence have become the road-map for engineers to develop and analyze novel techniques to solve problems in basic sciences (such as physics, chemistry and biology) and engineering, environmental, life and social sciences. The contributions are written by international experts, who provide up-to-date aspects of the topics discussed and present recent, original insights into their own experien

  2. Mass memory formatter subsystem of the adaptive intrusion data system

    International Nuclear Information System (INIS)

    Corlis, N.E.

    1980-09-01

    The Mass Memory Formatter was developed as part of the Adaptive Intrusion Data System (AIDS) to control a 2.4-megabit mass memory. The data from a Memory Controlled Processor is formatted before it is stored in the memory and reformatted during the readout mode. The data is then transmitted to a NOVA 2 minicomputer-controlled magnetic tape recorder for storage. Techniques and circuits are described

  3. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    International Nuclear Information System (INIS)

    Nash, T.

    1989-05-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event-parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost-effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point-to-point, rather than bussed, communication will be required. Developments in this direction are described. 6 figs

  4. Parallel Breadth-First Search on Distributed Memory Systems

    Energy Technology Data Exchange (ETDEWEB)

    Computational Research Division; Buluc, Aydin; Madduri, Kamesh

    2011-04-15

    Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly-tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex-based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
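The level-synchronous strategy expands one frontier of vertices per step, with a barrier between levels. A serial sketch of that structure (omitting the vertex partitioning and multithreading the paper is actually about):

```python
from collections import defaultdict

def bfs_levels(adj, source):
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:                     # one synchronized step per level
        depth += 1
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if v not in level:      # first visit assigns the level
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier        # barrier: swap in the new frontier
    return level

adj = defaultdict(list)
for u, v in [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]:
    adj[u].append(v)                    # undirected: add both directions
    adj[v].append(u)
print(bfs_levels(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

In the distributed version, each node owns a slice of the frontier and the per-level barrier becomes the communication step whose cost the paper's 2D partitioning reduces.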

  5. Hybrid Systems: Computation and Control.

    Science.gov (United States)

    1999-02-17

    DAC/EuroVHDL, Geneva, Switzerland, September 1996. [Har78] D. Harel. Statecharts: A visual formalism for complex systems. Science of Computer...possible discrete state behaviors is finite. This concerns trajectories that are periodic both precisely and approximately. To precise details

  6. Cognitive memory.

    Science.gov (United States)

    Widrow, Bernard; Aragon, Juan Carlos

    2013-05-01

    Regarding the workings of the human mind, memory and pattern recognition seem to be intertwined. You generally do not have one without the other. Taking inspiration from life experience, a new form of computer memory has been devised. Certain conjectures about human memory are keys to the central idea. The design of a practical and useful "cognitive" memory system is contemplated, a memory system that may also serve as a model for many aspects of human memory. The new memory does not function like a computer memory where specific data is stored in specific numbered registers and retrieval is done by reading the contents of the specified memory register, or done by matching key words as with a document search. Incoming sensory data would be stored at the next available empty memory location, and indeed could be stored redundantly at several empty locations. The stored sensory data would neither have key words nor would it be located in known or specified memory locations. Sensory inputs concerning a single object or subject are stored together as patterns in a single "file folder" or "memory folder". When the contents of the folder are retrieved, sights, sounds, tactile feel, smell, etc., are obtained all at the same time. Retrieval would be initiated by a query or a prompt signal from a current set of sensory inputs or patterns. A search through the memory would be made to locate stored data that correlates with or relates to the prompt input. The search would be done by a retrieval system whose first stage makes use of autoassociative artificial neural networks and whose second stage relies on exhaustive search. Applications of cognitive memory systems have been made to visual aircraft identification, aircraft navigation, and human facial recognition. Concerning human memory, reasons are given why it is unlikely that long-term memory is stored in the synapses of the brain's neural networks. Reasons are given suggesting that long-term memory is stored in DNA or RNA.
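The retrieval scheme described — a prompt pattern correlated against stored "memory folders", with the best match returning all of its modalities at once — can be caricatured in a few lines. This collapses the abstract's two stages (autoassociative networks, then exhaustive search) into a single exhaustive correlation search; the folders and patterns are invented.

```python
def correlation(a, b):
    # unnormalized correlation (dot product) between two patterns
    return sum(x * y for x, y in zip(a, b))

folders = [
    {"sight": [1, 0, 1, 0], "sound": "jet engine", "label": "aircraft"},
    {"sight": [0, 1, 0, 1], "sound": "purring",    "label": "cat"},
]

def retrieve(prompt_sight):
    # exhaustive search for the folder whose visual pattern best
    # correlates with the prompt; the whole folder is recalled together,
    # so sound, label, etc. come back along with the matched sight
    return max(folders, key=lambda f: correlation(f["sight"], prompt_sight))

match = retrieve([1, 0, 1, 1])   # a noisy, partial visual prompt
print(match["label"])            # -> aircraft
print(match["sound"])            # -> jet engine
```

No folder has an address or key word: the prompt's content alone selects it, which is the contrast the abstract draws with conventional addressed memory.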

  7. Computer Sciences and Data Systems, volume 1

    Science.gov (United States)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  8. Emotional Arousal and Multiple Memory Systems in the Mammalian Brain

    Directory of Open Access Journals (Sweden)

    Mark G. Packard

    2012-03-01

    Full Text Available Emotional arousal induced by stress and/or anxiety can exert complex effects on learning and memory processes in mammals. Recent studies have begun to link study of the influence of emotional arousal on memory with earlier research indicating that memory is organized in multiple systems in the brain that differ in terms of the type of memory they mediate. Specifically, these studies have examined whether emotional arousal may have a differential effect on the cognitive and stimulus-response habit memory processes subserved by the hippocampus and dorsal striatum, respectively. Evidence indicates that stress or the peripheral injection of anxiogenic drugs can bias animals and humans towards the use of striatal-dependent habit memory in dual-solution tasks in which both hippocampal- and striatal-based strategies can provide an adequate solution. A bias towards the use of habit memory can also be produced by intra-basolateral amygdala administration of anxiogenic drugs, consistent with the well-documented role of efferent projections of this brain region in mediating the modulatory influence of emotional arousal on memory. In some learning situations, the bias towards the use of habit memory produced by emotional arousal appears to result from an impairing effect on hippocampus-dependent cognitive memory. Further research examining the neural mechanisms linking emotion and the relative use of multiple memory systems should prove useful in view of the potential role for maladaptive habitual behaviors in various human psychopathologies.

  9. FPGA-based prototype storage system with phase change memory

    Science.gov (United States)

    Li, Gezi; Chen, Xiaogang; Chen, Bomy; Li, Shunfen; Zhou, Mi; Han, Wenbing; Song, Zhitang

    2016-10-01

    With the ever-increasing amount of data being stored via social media, mobile telephony base stations, network devices, etc., database systems face severe bandwidth bottlenecks when moving vast amounts of data from storage to the processing nodes. At the same time, Storage Class Memory (SCM) technologies such as Phase Change Memory (PCM), with unique features like fast read access, high density, non-volatility, byte-addressability, positive response to increasing temperature, superior scalability, and zero standby leakage, have changed the landscape of modern computing and storage systems. In such a scenario, we present a storage system called FLEET which can off-load partial or whole SQL queries from the CPU to the storage engine. FLEET uses an FPGA rather than conventional CPUs to implement the off-load engine due to its highly parallel nature. We have implemented an initial prototype of FLEET with PCM-based storage. The results demonstrate that significant performance and CPU utilization gains can be achieved by pushing selected query processing components inside the PCM-based storage.

  10. The Glass Computer

    Science.gov (United States)

    Paesler, M. A.

    2009-01-01

    Digital computers use different kinds of memory, each of which is either volatile or nonvolatile. On most computers only the hard drive memory is nonvolatile, i.e., it retains all information stored on it when the power is off. When a computer is turned on, an operating system stored on the hard drive is loaded into the computer's memory cache and…

  11. A Case for Tamper-Resistant and Tamper-Evident Computer Systems

    National Research Council Canada - National Science Library

    Solihin, Yan

    2007-01-01

    .... These attacks attempt to snoop or modify data transfer between various chips in a computer system such as between the processor and memory, and between processors in a multiprocessor interconnect network...

  12. Gamma spectrometric system based on the personal computer Pravetz-83

    International Nuclear Information System (INIS)

    Yanakiev, K; Grigorov, T.; Vuchkov, M.

    1985-01-01

    A gamma spectrometric system based on the personal microcomputer Pravets-85 is described. The analog modules are NIM standard. ADC data are stored in the memory of the computer via a DMA channel, and real-time data processing is possible. The results from a series of tests indicate that the performance of the system is comparable with that of commercially available computerized spectrometers from Ortec and Canberra

  13. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  14. Computer-aided design systems (CADS) for power units

    International Nuclear Information System (INIS)

    Rozov, S.S.

    1989-01-01

    Main functions and peculiarities of the computer-aided design systems (CADS) used in NPP power unit design are considered. Today's CADS are based on developed computer complexes (for example, 2 VAX-type computers with up to 100 Mbyte of immediate access memory and a few tens of terminals). The efficiency of such CADS is 3000-5000 drawings a month. CADS covers all design steps, from the preliminary project up to economic justification. Selection of the plant type and its site, and justification of the safety, reliability, and economic efficiency of subsystems under normal and emergency conditions are included

  15. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    Science.gov (United States)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive array-based climate data is being generated from global surveillance systems and model simulations. It is widely used to analyze environmental problems such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging due to both data- and computing-intensive issues in data processing and analysis. To tackle these challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g. netCDF4, HDF4) as native formats, which are stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve datasets according to a defined geospatial and temporal bounding box. The data subsets are read out, and a data partition strategy is applied to split the queried data equally across the computing nodes and store it in memory as climateRDDs for processing. By leveraging Spark SQL and User Defined Functions (UDFs), climate data analysis operations can be conducted in the intuitive SQL language. ClimateSpark is evaluated with two use cases using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. One use case conducts spatiotemporal queries and visualizes the subset results as animations; the other compares different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark can significantly accelerate data query and processing, and enables complex analysis services to be served in a SQL-style fashion.
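The spatiotemporal query and partition steps can be sketched as a bounding-box filter over chunk metadata followed by an even round-robin split across nodes. The chunk layout and field names are illustrative, not ClimateSpark's actual index format.

```python
chunks = [  # (chunk_id, lat_min, lat_max, lon_min, lon_max)
    (0, -90, 0, -180, 0), (1, -90, 0, 0, 180),
    (2, 0, 90, -180, 0),  (3, 0, 90, 0, 180),
]

def query(chunks, lat0, lat1, lon0, lon1):
    """Return ids of chunks whose bounding box intersects the query box."""
    return [c for c, a, b, x, y in chunks
            if a <= lat1 and b >= lat0 and x <= lon1 and y >= lon0]

def partition(ids, n_nodes):
    """Deal matching chunks round-robin so each node gets an equal share."""
    return [ids[i::n_nodes] for i in range(n_nodes)]

hits = query(chunks, 10, 80, -20, 20)   # northern band straddling 0 deg lon
print(hits)                  # [2, 3]
print(partition(hits, 2))    # [[2], [3]] -- one chunk per computing node
```

The index lets the filter run on small metadata rather than on the arrays themselves; only the surviving chunks are read from HDFS and pinned in memory.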

  16. Computer-aided system design

    Science.gov (United States)

    Walker, Carrie K.

    1991-01-01

    A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.

  17. Transactive memory systems scale for couples: development and validation.

    Science.gov (United States)

    Hewitt, Lauren Y; Roberts, Lynne D

    2015-01-01

    People in romantic relationships can develop shared memory systems by pooling their cognitive resources, allowing each person access to more information but with less cognitive effort. Research examining such memory systems in romantic couples largely focuses on remembering word lists or performing lab-based tasks, but these types of activities do not capture the processes underlying couples' transactive memory systems, and may not be representative of the ways in which romantic couples use their shared memory systems in everyday life. We adapted an existing measure of transactive memory systems for use with romantic couples (TMSS-C), and conducted an initial validation study. In total, 397 participants who each identified as being a member of a romantic relationship of at least 3 months duration completed the study. The data provided a good fit to the anticipated three-factor structure of the components of couples' transactive memory systems (specialization, credibility and coordination), and there was reasonable evidence of both convergent and divergent validity, as well as strong evidence of test-retest reliability across a 2-week period. The TMSS-C provides a valuable tool that can quickly and easily capture the underlying components of romantic couples' transactive memory systems. It has potential to help us better understand this intriguing feature of romantic relationships, and how shared memory systems might be associated with other important features of romantic relationships.

  18. Transactive Memory Systems Scale for Couples: Development and Initial Validation

    Directory of Open Access Journals (Sweden)

    Lauren Y. Hewitt

    2015-05-01

    Full Text Available People in romantic relationships can develop shared memory systems by pooling their cognitive resources, allowing each person access to more information but with less cognitive effort. Research examining such memory systems in romantic couples largely focuses on remembering word lists or performing lab-based tasks, but these types of activities do not capture the processes underlying couples’ transactive memory systems, and may not be representative of the ways in which romantic couples use their shared memory systems in everyday life. We adapted an existing measure of transactive memory systems for use with romantic couples (TMSS-C, and conducted an initial validation study. In total, 397 participants who each identified as being a member of a romantic relationship of at least 3 months duration completed the study. The data provided a good fit to the anticipated three-factor structure of the components of couples’ transactive memory systems (specialization, credibility and coordination, and there was reasonable evidence of both convergent and divergent validity, as well as strong evidence of test-retest reliability across a two-week period. The TMSS-C provides a valuable tool that can quickly and easily capture the underlying components of romantic couples’ transactive memory systems. It has potential to help us better understand this intriguing feature of romantic relationships, and how shared memory systems might be associated with other important features of romantic relationships.

  19. Computer-Presented Organizational/Memory Aids as Instruction for Solving Pico-Fomi Problems.

    Science.gov (United States)

    Steinberg, Esther R.; And Others

    1985-01-01

    Describes investigation of effectiveness of computer-presented organizational/memory aids (matrix and verbal charts controlled by computer or learner) as instructional technique for solving Pico-Fomi problems, and the acquisition of deductive inference rules when such aids are present. Results indicate chart use control should be adapted to…

  20. Memory protection

    Science.gov (United States)

    Denning, Peter J.

    1988-01-01

    Accidental overwriting of files or of memory regions belonging to other programs, browsing of personal files by superusers, Trojan horses, and viruses are examples of breakdowns in workstations and personal computers that would be significantly reduced by memory protection. Memory protection is the capability of an operating system and supporting hardware to delimit segments of memory, to control whether segments can be read from or written into, and to confine accesses of a program to its segments alone. The absence of memory protection in many operating systems today is the result of a bias toward a narrow definition of performance as maximum instruction-execution rate. A broader definition, including the time to get the job done, makes clear that cost of recovery from memory interference errors reduces expected performance. The mechanisms of memory protection are well understood, powerful, efficient, and elegant. They add to performance in the broad sense without reducing instruction execution rate.
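The mechanism described — delimited segments, per-segment read/write rights, and confinement of a program to its own segments — amounts to a check on every access. A toy software sketch (segment addresses and layout invented for illustration; real systems do this in hardware):

```python
class Segment:
    def __init__(self, base, length, readable, writable):
        self.base, self.length = base, length
        self.readable, self.writable = readable, writable

def check_access(segments, addr, write=False):
    """Allow the access only if it falls within one of the program's own
    segments and that segment grants the requested right."""
    for s in segments:
        if s.base <= addr < s.base + s.length:
            return s.writable if write else s.readable
    return False   # address outside all of the program's segments: confined

prog = [Segment(0x1000, 0x100, readable=True, writable=False),   # code
        Segment(0x2000, 0x200, readable=True, writable=True)]    # data

print(check_access(prog, 0x1010))               # True: read own code
print(check_access(prog, 0x1010, write=True))   # False: code is read-only
print(check_access(prog, 0x3000))               # False: not our segment
```

The second and third checks are precisely the failures that stop accidental overwriting and cross-program browsing: a write to a read-only segment and any access outside the program's own segments are both denied.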

  1. Computer-aided instruction system

    International Nuclear Information System (INIS)

    Teneze, Jean Claude

    1968-01-01

    This research thesis addresses the use of teleprocessing and time sharing by the RAX IBM system and the possibility to introduce a dialog with the machine to develop an application in which the computer plays the role of a teacher for different pupils at the same time. Two operating modes are thus exploited: a teacher-mode and a pupil-mode. The developed CAI (computer-aided instruction) system comprises a checker to check the course syntax in teacher-mode, a translator to transcode the course written in teacher-mode into a form which can be processed by the execution programme, and the execution programme which presents the course in pupil-mode

  2. Novel procedure for characterizing nonlinear systems with memory: 2017 update

    Science.gov (United States)

    Nuttall, Albert H.; Katz, Richard A.; Hughes, Derke R.; Koch, Robert M.

    2017-05-01

    The present article discusses novel improvements in nonlinear signal processing made by the prime algorithm developer, Dr. Albert H. Nuttall, and co-authors, a consortium of research scientists from the Naval Undersea Warfare Center Division, Newport, RI. The algorithm, called the Nuttall-Wiener-Volterra or 'NWV' algorithm, is named for its principal contributors [1], [2], [3]. The NWV algorithm significantly reduces the computational workload for characterizing nonlinear systems with memory. Following this formulation, two measurement waveforms are required in order to characterize a specified nonlinear system under consideration: (1) an excitation input waveform, x(t) (the transmitted signal); and, (2) a response output waveform, z(t) (the received signal). Given these two measurement waveforms for a given propagation channel, a 'kernel' or 'channel response', h = [h0, h1, h2, h3], between the two measurement points is computed via a least squares approach that optimizes modeled kernel values by performing a best fit between measured response z(t) and a modeled response y(t). New techniques significantly diminish the exponential growth of the number of computed kernel coefficients at second and third order and alleviate the Curse of Dimensionality (COD) in order to realize practical nonlinear solutions of scientific and engineering interest.
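The least-squares kernel fit can be sketched at first order: build rows of lagged input samples from x(t), then solve the normal equations so the modeled response best fits the measured response z(t). The NWV algorithm's substance lies in the second- and third-order terms (where the dimensionality problem arises); this tiny pure-Python version only shows the first-order (linear FIR) structure, with an invented two-tap kernel and signal.

```python
def fit_fir_kernel(x, z, taps):
    """Least-squares fit of an FIR kernel h via the normal equations
    A^T A h = A^T z, where each row of A is [x[n], x[n-1], ...]."""
    rows = [[x[n - k] for k in range(taps)] for n in range(taps - 1, len(x))]
    rhs = z[taps - 1:]
    m = taps
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    atz = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(m)]
    for c in range(m):                 # tiny Gauss-Jordan elimination
        p = ata[c][c]
        for j in range(m):
            ata[c][j] /= p
        atz[c] /= p
        for r2 in range(m):
            if r2 != c:
                f = ata[r2][c]
                for j in range(m):
                    ata[r2][j] -= f * ata[c][j]
                atz[r2] -= f * atz[c]
    return atz                         # the fitted kernel h

x = [1.0, -1.0, 2.0, 0.5, -0.5, 1.5, -2.0, 0.25]   # excitation x(t)
h_true = [0.5, -0.25]                               # "channel" to recover
z, prev = [], 0.0
for v in x:                                         # noiseless response z(t)
    z.append(h_true[0] * v + h_true[1] * prev)
    prev = v
h_est = fit_fir_kernel(x, z, taps=2)
print([round(v, 6) for v in h_est])   # [0.5, -0.25]
```

At order two and three the rows would also contain products of lagged samples, and their number grows combinatorially with memory length, which is the growth the NWV techniques are designed to curb.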

  3. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model.

    Science.gov (United States)

    Aberg, Kristoffer C; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improves memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward-delivery and reward-anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h. 
In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of
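The trial-by-trial quantities the abstract refers to (expected value and reward prediction error) can be illustrated with a minimal delta-rule learner. This is a generic sketch, not the authors' model; the function name, learning rate, and reward coding are our assumptions.

```python
# Generic delta-rule sketch (not the authors' model): track an expected
# value V (reward anticipation) and compute a prediction error PE = R - V
# for each trial's delivered reward R.
def run_trials(rewards, alpha=0.1):
    """Return per-trial expected values and prediction errors."""
    V = 0.0                      # expected value before the first trial
    expected, errors = [], []
    for R in rewards:
        expected.append(V)       # anticipation at encoding time
        pe = R - V               # prediction error at reward delivery
        errors.append(pe)
        V += alpha * pe          # delta-rule update of the expected value
    return expected, errors
```

Per-trial memory performance could then be regressed on the magnitudes of these two sequences, which is the kind of trial-by-trial analysis the abstract describes.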

  4. A computer vision-based automated Figure-8 maze for working memory test in rodents.

    Science.gov (United States)

    Pedigo, Samuel F; Song, Eun Young; Jung, Min Whan; Kim, Jeansok J

    2006-09-30

    The benchmark test for prefrontal cortex (PFC)-mediated working memory in rodents is a delayed alternation task utilizing variations of T-maze or Figure-8 maze, which requires the animals to make specific arm entry responses for reward. In this task, however, manual procedures involved in shaping target behavior, imposing delays between trials and delivering rewards can potentially influence the animal's performance on the maze. Here, we report an automated Figure-8 maze which does not necessitate experimenter-subject interaction during shaping, training or testing. This system incorporates a computer vision system for tracking, motorized gates to impose delays, and automated reward delivery. The maze is controlled by custom software that records the animal's location and activates the gates according to the animal's behavior and a control algorithm. The program performs calculations of task accuracy, tracks movement sequence through the maze, and provides other dependent variables (such as running speed, time spent in different maze locations, activity level during delay). Testing in rats indicates that the performance accuracy is inversely proportional to the delay interval, decreases with PFC lesions, and that animals anticipate timing during long delays. Thus, our automated Figure-8 maze is effective at assessing working memory and provides novel behavioral measures in rodents.
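For illustration, the task-accuracy statistic mentioned above can be computed directly from the logged sequence of arm entries. This sketch is ours, not the system's actual code, and assumes entries are coded as 'L'/'R' labels.

```python
# Hypothetical sketch: accuracy on a delayed-alternation rule, where an
# arm entry is scored correct iff it differs from the previous entry.
def alternation_accuracy(entries):
    """entries: sequence of arm labels, e.g. ['L', 'R', 'L', ...]."""
    if len(entries) < 2:
        return None                        # no scorable responses yet
    correct = sum(cur != prev for prev, cur in zip(entries, entries[1:]))
    return correct / (len(entries) - 1)
```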

  5. Computer Networks A Systems Approach

    CERN Document Server

    Peterson, Larry L

    2011-01-01

    This best-selling and classic book teaches you the key principles of computer networks with examples drawn from the real world of network and protocol design. Using the Internet as the primary example, the authors explain various protocols and networking technologies. Their systems-oriented approach encourages you to think about how individual network components fit into a larger, complex system of interactions. Whatever your perspective, whether it be that of an application developer, network administrator, or a designer of network equipment or protocols, you will come away with a "big picture" view.

  6. Automated validation of a computer operating system

    Science.gov (United States)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  7. Local rollback for fault-tolerance in parallel computing systems

    Science.gov (United States)

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system that includes at least one cache memory device. The control logic device determines a local rollback interval and runs at least one instruction in that interval. It evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval, and checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if an error occurs and no unrecoverable condition occurred during the interval.
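The control flow the abstract describes can be sketched in software. This is our illustration of the idea, not the patented hardware logic; checkpointing a dict stands in for preserving pre-interval cache state.

```python
import copy

# Sketch of the local-rollback control flow: run an interval, and if a
# recoverable error occurs, restore the checkpoint and retry the interval.
def run_with_local_rollback(state, instructions, max_retries=3):
    """Each instruction is a callable: instr(state) -> (error, unrecoverable)."""
    checkpoint = copy.deepcopy(state)        # pre-interval memory state
    for _ in range(max_retries + 1):
        error = unrecoverable = False
        for instr in instructions:
            error, unrecoverable = instr(state)
            if error or unrecoverable:
                break
        if unrecoverable:
            return state, False              # local rollback cannot recover
        if not error:
            return state, True               # interval completed successfully
        state.clear()
        state.update(copy.deepcopy(checkpoint))  # roll back and retry
    return state, False
```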

  8. Data systems and computer science programs: Overview

    Science.gov (United States)

    Smith, Paul H.; Hunter, Paul

    1991-01-01

    An external review of the Integrated Technology Plan for the Civil Space Program is presented. The topics are presented in viewgraph form and include the following: onboard memory and storage technology; advanced flight computers; special purpose flight processors; onboard networking and testbeds; information archive, access, and retrieval; visualization; neural networks; software engineering; and flight control and operations.

  9. A computational predictor of human episodic memory based on a theta phase precession network.

    Directory of Open Access Journals (Sweden)

    Naoyuki Sato

    In the rodent hippocampus, a phase precession phenomenon of place cell firing relative to the local field potential (LFP) theta rhythm is called "theta phase precession" and is considered to contribute to memory formation with spike-timing-dependent plasticity (STDP). On the other hand, in the primate hippocampus, the existence of theta phase precession is unclear. Our computational studies have demonstrated that theta phase precession dynamics could contribute to primate-hippocampal dependent memory formation, such as object-place association memory. In this paper, we evaluate human theta phase precession by using a combined theory-experiment analysis. Human memory recall of object-place associations was analyzed by an individual hippocampal network simulated by theta phase precession dynamics of human eye movement and EEG data during memory encoding. It was found that the computational recall of the resultant network is significantly correlated with human memory recall performance, while other computational predictors without theta phase precession are not significantly correlated with subsequent memory recall. Moreover, the correlation is larger than the correlation between human recall and traditional experimental predictors. These results indicate that theta phase precession dynamics are necessary for better prediction of human recall performance with eye movement and EEG data. In this analysis, theta phase precession dynamics appear useful for the extraction of memory-dependent components from the spatio-temporal pattern of eye movement and EEG data as an associative network. Theta phase precession may be a common neural dynamic between rodents and humans for the formation of environmental memories.
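The STDP rule invoked above can be written down in a few lines. The exponential window below is the textbook form of the rule, with illustrative parameter values rather than ones taken from the paper.

```python
import math

# Textbook STDP window (illustrative parameters, not the authors' values):
# the weight change depends exponentially on the pre/post spike time
# difference, in milliseconds.
def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Return the weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fires before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

Theta phase precession matters for such a rule because it compresses the order of experienced items into each theta cycle, so earlier items consistently spike before later ones and potentiation becomes directionally asymmetric.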

  10. Scripting for construction of a transactive memory system in multidisciplinary CSCL environments

    NARCIS (Netherlands)

    Noroozi, O.; Biemans, H.J.A.; Weinberger, A.; Mulder, M.; Chizari, M.

    2013-01-01

    Establishing a Transactive Memory System (TMS) is essential for groups of learners, when they are multidisciplinary and collaborate online. Environments for Computer-Supported Collaborative Learning (CSCL) could be designed to facilitate the TMS. This study investigates how various aspects of a TMS

  11. Computer aided training system development

    International Nuclear Information System (INIS)

    Midkiff, G.N.

    1987-01-01

    The first three phases of Training System Development (TSD) -- job and task analysis, curriculum design, and training material development -- are time consuming and labor intensive. The use of personal computers with a combination of commercial and custom-designed software resulted in a significant reduction in the man-hours required to complete these phases for a Health Physics Technician Training Program at a nuclear power station. This paper reports that each step in the training program project involved the use of personal computers: job survey data were compiled with a statistical package; task analysis was performed with custom software designed to interface with a commercial database management program; Job Performance Measures (tests) were generated by a custom program from data in the task analysis database; and training materials were drafted, edited, and produced using commercial word processing software.

  12. CAESY - COMPUTER AIDED ENGINEERING SYSTEM

    Science.gov (United States)

    Wette, M. R.

    1994-01-01

    Many developers of software and algorithms for control system design have recognized that current tools have limits in both flexibility and efficiency. Many forces drive the development of new tools including the desire to make complex system modeling design and analysis easier and the need for quicker turnaround time in analysis and design. Other considerations include the desire to make use of advanced computer architectures to help in control system design, adopt new methodologies in control, and integrate design processes (e.g., structure, control, optics). CAESY was developed to provide a means to evaluate methods for dealing with user needs in computer-aided control system design. It is an interpreter for performing engineering calculations and incorporates features of both Ada and MATLAB. It is designed to be reasonably flexible and powerful. CAESY includes internally defined functions and procedures, as well as user defined ones. Support for matrix calculations is provided in the same manner as MATLAB. However, the development of CAESY is a research project, and while it provides some features which are not found in commercially sold tools, it does not exhibit the robustness that many commercially developed tools provide. CAESY is written in C-language for use on Sun4 series computers running SunOS 4.1.1 and later. The program is designed to optionally use the LAPACK math library. The LAPACK math routines are available through anonymous ftp from research.att.com. CAESY requires 4Mb of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. CAESY was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  13. Attention and visual memory in visualization and computer graphics.

    Science.gov (United States)

    Healey, Christopher G; Enns, James T

    2012-07-01

    A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see" details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

  14. RAPID: A random access picture digitizer, display, and memory system

    Science.gov (United States)

    Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.

    1976-01-01

    RAPID is a system capable of providing convenient digital analysis of video data in real-time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory so that it can be displayed on a monitor.

  15. Computed radiography systems performance evaluation

    International Nuclear Information System (INIS)

    Xavier, Clarice C.; Nersissian, Denise Y.; Furquim, Tania A.C.

    2009-01-01

    The performance of a computed radiography system was evaluated according to the AAPM Report No. 93. The evaluation tests proposed by the publication were performed, and the following nonconformities were found: imaging plate (IP) dark noise, which compromises the clinical image acquired using the IP; an uncalibrated exposure indicator, which can cause underexposure of the IP; nonlinearity of the system response, which causes overexposure; a resolution limit below that declared by the manufacturer and uncalibrated erasure thoroughness, impairing the visualization of structures; a Moire pattern visible in the grid response; and IP throughput above that specified by the manufacturer. These nonconformities indicate that a lack of calibration in digital imaging systems can cause an increase in dose in order to solve image problems. (author)

  16. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.

    2016-08-08

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. We investigate the latter scenario, which performs more floating-point operations but exposes at the same time more parallelism, and therefore runs closer to the theoretical peak performance of the system, thanks to more compute-bound matrix operations. Profiling results show the performance scalability of QDWH for calculating the polar decomposition using around 9200 MPI processes on well- and ill-conditioned matrices of 100K×100K problem size. We then study the performance impact of the QDWH-based polar decomposition as a pre-processing step toward calculating the SVD itself. The new distributed-memory implementation of the QDWH-SVD solver achieves up to five-fold speedup against current state-of-the-art vendor SVD implementations. © Springer International Publishing Switzerland 2016.
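At the core of QDWH is a Halley-type iteration. The sketch below shows the basic fixed-weight variant (weights 3, 1, 3) with an explicit inverse; the actual QDWH algorithm replaces these with dynamically computed weights and inverse-free QR-based steps for speed and numerical stability.

```python
import numpy as np

# Basic (unweighted) Halley iteration underlying QDWH:
#   X_{k+1} = X_k (3I + X_k^T X_k)(I + 3 X_k^T X_k)^{-1},
# which drives X toward the orthogonal polar factor U of A = U H.
def polar_halley(A, iters=30):
    X = A / np.linalg.norm(A, 2)     # scale so singular values are <= 1
    I = np.eye(A.shape[1])
    for _ in range(iters):
        G = X.T @ X
        X = X @ (3 * I + G) @ np.linalg.inv(I + 3 * G)
    U = X
    H = U.T @ A                      # symmetric positive-semidefinite factor
    return U, H
```

Once U is available, computing the SVD reduces to an eigendecomposition of the symmetric factor H, which is the pre-processing step the abstract evaluates.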

  17. The computational implementation of the landscape model: modeling inferential processes and memory representations of text comprehension.

    Science.gov (United States)

    Tzeng, Yuhtsuen; van den Broek, Paul; Kendeou, Panayiota; Lee, Chengyuan

    2005-05-01

    The complexity of text comprehension demands a computational approach to describe the cognitive processes involved. In this article, we present the computational implementation of the landscape model of reading. This model captures both on-line comprehension processes during reading and the off-line memory representation after reading is completed, incorporating both memory-based and coherence-based mechanisms of comprehension. The overall architecture and specific parameters of the program are described, and a running example is provided. Several studies comparing computational and behavioral data indicate that the implemented model is able to account for cycle-by-cycle comprehension processes and memory for a variety of text types and reading situations.
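A toy version of the model's core bookkeeping can be sketched as follows. The parameter names and values here are ours for illustration, not those of the implemented model: per reading cycle, concept activations combine decayed carry-over, current-cycle input, and spread through a connection matrix that is itself strengthened by coactivation.

```python
import numpy as np

# Toy landscape-style simulation (illustrative parameters): `cycles` lists,
# per reading cycle, the indices of the concepts present in the text.
def read_text(cycles, n_concepts, decay=0.5, rate=0.1):
    act = np.zeros(n_concepts)                 # on-line activation vector
    conn = np.zeros((n_concepts, n_concepts))  # off-line memory representation
    history = []
    for cycle in cycles:
        inp = np.zeros(n_concepts)
        inp[list(cycle)] = 1.0
        # decayed carry-over + current input + spread through connections
        act = np.clip(decay * act + inp + 0.1 * conn @ act, 0.0, 1.0)
        conn += rate * np.outer(act, act)      # Hebbian-style strengthening
        history.append(act.copy())
    return history, conn
```

The `history` trace is the "landscape" of activations across cycles, and the final `conn` matrix plays the role of the off-line memory representation.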

  18. Automated Computer Access Request System

    Science.gov (United States)

    Snook, Bryan E.

    2010-01-01

    The Automated Computer Access Request (AutoCAR) system is a Web-based account provisioning application that replaces the time-consuming paper-based computer-access request process at Johnson Space Center (JSC). AutoCAR combines rules-based and role-based functionality in one application to provide a centralized system that is easily and widely accessible. The system features a work-flow engine that facilitates request routing, a user registration directory containing contact information and user metadata, an access request submission and tracking process, and a system administrator account management component. This provides full, end-to-end disposition approval chain accountability from the moment a request is submitted. By blending both rules-based and role-based functionality, AutoCAR has the flexibility to route requests based on a user's nationality, JSC affiliation status, and other export-control requirements, while ensuring a user's request is addressed by either a primary or backup approver. All user accounts that are tracked in AutoCAR are recorded and mapped to the native operating system schema on the target platform where user accounts reside. This allows for future extensibility for supporting creation, deletion, and account management directly on the target platforms by way of AutoCAR. The system's directory-based lookup and day-to-day change analysis of directory information determines personnel moves, deletions, and additions, and automatically notifies a user via e-mail to revalidate his/her account access as a result of such changes. AutoCAR is a Microsoft classic active server page (ASP) application hosted on a Microsoft Internet Information Server (IIS).

  19. Computers as components principles of embedded computing system design

    CERN Document Server

    Wolf, Marilyn

    2012-01-01

    Computers as Components: Principles of Embedded Computing System Design, 3e, presents essential knowledge on embedded systems technology and techniques. Updated for today's embedded systems design methods, this edition features new examples including digital signal processing, multimedia, and cyber-physical systems. Author Marilyn Wolf covers the latest processors from Texas Instruments, ARM, and Microchip Technology plus software, operating systems, networks, consumer devices, and more. Like the previous editions, this textbook: Uses real processors to demonstrate both technology and tec

  20. A selective logging mechanism for hardware transactional memory systems

    OpenAIRE

    Lupon Navazo, Marc; Magklis, Grigorios; González Colás, Antonio María

    2011-01-01

    Log-based Hardware Transactional Memory (HTM) systems offer an elegant solution to handle speculative data that overflow transactional L1 caches. By keeping the pre-transactional values on a software-resident log, speculative values can be safely moved across the memory hierarchy, without requiring expensive searches on L1 misses or commits.
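A software analogue of the pre-transactional log can make the idea concrete. The class below is our sketch (the names are invented), with a dict standing in for memory; only the first write to each address is logged, and None marks addresses that did not exist before the transaction (a real log would distinguish "absent" from a stored None).

```python
# Sketch of undo logging as used by log-based HTM: speculative values live
# in place, pre-transactional values live in a software-resident log.
class UndoLogTx:
    def __init__(self, memory):
        self.mem = memory          # dict: address -> value
        self.log = []              # (address, old_value), append-only
        self.logged = set()        # addresses already logged (selective)

    def write(self, addr, value):
        if addr not in self.logged:          # log only the first write
            self.log.append((addr, self.mem.get(addr)))
            self.logged.add(addr)
        self.mem[addr] = value               # speculative value in place

    def abort(self):
        for addr, old in reversed(self.log): # restore pre-tx values
            if old is None:
                self.mem.pop(addr, None)
            else:
                self.mem[addr] = old
        self.log.clear(); self.logged.clear()

    def commit(self):
        self.log.clear(); self.logged.clear()  # values are already in place
```

This is why commits and L1 misses need no expensive searches: committed values are already in memory, and only an abort walks the log.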

  1. Multiple Systems of Spatial Memory: Evidence from Described Scenes

    Science.gov (United States)

    Avraamides, Marios N.; Kelly, Jonathan W.

    2010-01-01

    Recent models in spatial cognition posit that distinct memory systems are responsible for maintaining transient and enduring spatial relations. The authors used perspective-taking performance to assess the presence of these enduring and transient spatial memories for locations encoded through verbal descriptions. Across 3 experiments, spatial…

  2. Research on computer systems benchmarking

    Science.gov (United States)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized more specifically in this report, as well as those smaller in magnitude supported by this grant.
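The merging step described above reduces, in its simplest form, to a dot product: characterize the machine by a per-operation time vector and the program by per-operation counts. The operation names and timings below are illustrative, not measured values from the reported work.

```python
# Abstract-machine sketch: estimated runtime = sum over abstract operations
# of (program's count for that op) x (machine's time per op).
def estimate_runtime(op_counts, machine_times):
    """Estimated seconds for program `op_counts` on machine `machine_times`."""
    return sum(n * machine_times[op] for op, n in op_counts.items())

# Hypothetical program and machine characterizations:
program = {"fp_add": 2_000_000, "fp_mul": 1_000_000, "mem_load": 3_000_000}
machine = {"fp_add": 2e-9, "fp_mul": 3e-9, "mem_load": 5e-9}
```

Measuring `machine` once per system and `program` once per benchmark then lets execution time be estimated for any machine/program pairing.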

  3. Computer vision in control systems

    CERN Document Server

    Jain, Lakhmi

    2015-01-01

    Volume 1: This book is focused on the recent advances in computer vision methodologies and technical solutions using conventional and intelligent paradigms. The contributions include: · Morphological Image Analysis for Computer Vision Applications. · Methods for Detecting of Structural Changes in Computer Vision Systems. · Hierarchical Adaptive KL-based Transform: Algorithms and Applications. · Automatic Estimation for Parameters of Image Projective Transforms Based on Object-invariant Cores. · A Way of Energy Analysis for Image and Video Sequence Processing. · Optimal Measurement of Visual Motion Across Spatial and Temporal Scales. · Scene Analysis Using Morphological Mathematics and Fuzzy Logic. · Digital Video Stabilization in Static and Dynamic Scenes. · Implementation of Hadamard Matrices for Image Processing. · A Generalized Criterion ...

  4. When does a physical system compute?

    Science.gov (United States)

    Horsman, Clare; Stepney, Susan; Wagner, Rob C.; Kendon, Viv

    2014-01-01

    Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper, we introduce a formal framework that can be used to determine whether a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, in comparison with the use of mathematical models in experimental science. This powerful formulation allows a precise description of experiments, technology, computation and simulation, giving our central conclusion: physical computing is the use of a physical system to predict the outcome of an abstract evolution. We give conditions for computing, illustrated using a range of non-standard computing scenarios. The framework also covers broader computing contexts, where there is no obvious human computer user. We introduce the notion of a ‘computational entity’, and its critical role in defining when computing is taking place in physical systems. PMID:25197245

  5. '95 computer system operation project

    International Nuclear Information System (INIS)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung

    1995-12-01

    This report describes overall project works related to the operation of mainframe computers, the management of nuclear computer codes and the project of nuclear computer code conversion. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The finishing of the computer codes conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  6. `95 computer system operation project

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Taek; Lee, Hae Cho; Park, Soo Jin; Kim, Hee Kyung; Lee, Ho Yeun; Lee, Sung Kyu; Choi, Mi Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-12-01

    This report describes overall project works related to the operation of mainframe computers, the management of nuclear computer codes and the project of nuclear computer code conversion. The results of the project are as follows: 1. The operation and maintenance of the three mainframe computers and other utilities. 2. The management of the nuclear computer codes. 3. The finishing of the computer codes conversion project. 26 tabs., 5 figs., 17 refs. (Author)

  7. A homotopy method for solving Riccati equations on a shared memory parallel computer

    International Nuclear Information System (INIS)

    Zigic, D.; Watson, L.T.; Collins, E.G. Jr.; Davis, L.D.

    1993-01-01

    Although there are numerous algorithms for solving Riccati equations, there still remains a need for algorithms which can operate efficiently on large problems and on parallel machines. This paper gives a new homotopy-based algorithm for solving Riccati equations on a shared memory parallel computer. The central part of the algorithm is the computation of the kernel of the Jacobian matrix, which is essential for the corrector iterations along the homotopy zero curve. Using a Schur decomposition the tensor product structure of various matrices can be efficiently exploited. The algorithm allows for efficient parallelization on shared memory machines

  8. Memory

    Science.gov (United States)

    ... it has to decide what is worth remembering. Memory is the process of storing and then remembering this information. There are different types of memory. Short-term memory stores information for a few ...

  9. Interacting Brain Systems Modulate Memory Consolidation

    Science.gov (United States)

    McIntyre, Christa K.; McGaugh, James L.; Williams, Cedric L.

    2011-01-01

    Emotional arousal influences the consolidation of long-term memory. This review discusses experimental approaches and relevant findings that provide the foundation for current understanding of coordinated interactions between arousal activated peripheral hormones and the brain processes that modulate memory formation. Rewarding or aversive experiences release the stress hormones epinephrine (adrenalin) and glucocorticoids from the adrenal glands into the bloodstream. The effect of these hormones on memory consolidation depends upon binding of norepinephrine to beta-adrenergic receptors in the basolateral complex of the amygdala (BLA). Much evidence indicates that the stress hormones influence release of norepinephrine in the BLA through peripheral actions on the vagus nerve which stimulates, through polysynaptic connections, cells of the locus coeruleus to release norepinephrine. The BLA influences memory storage by actions on synapses, distributed throughout the brain, that are engaged in sensory and cognitive processing at the time of amygdala activation. The implications of the activation of these stress-activated memory processes are discussed in relation to stress-related memory disorders. PMID:22085800

  10. Hydronic distribution system computer model

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.; Strasser, J.J.

    1994-10-01

    A computer model of a hot-water boiler and its associated hydronic thermal distribution loop has been developed at Brookhaven National Laboratory (BNL). It is intended to be incorporated as a submodel in a comprehensive model of residential-scale thermal distribution systems developed at Lawrence Berkeley Laboratory (LBL). This will give the combined model the capability of modeling forced-air and hydronic distribution systems in the same house using the same supporting software. This report describes the development of the BNL hydronics model, initial results and internal consistency checks, and its intended relationship to the LBL model. A method of interacting with the LBL model that does not require physical integration of the two codes is described. This will provide capability now, with reduced up-front cost, as long as the number of runs required is not large.

  11. Trusted computing for embedded systems

    CERN Document Server

    Soudris, Dimitrios; Anagnostopoulos, Iraklis

    2015-01-01

    This book describes the state-of-the-art in trusted computing for embedded systems. It shows how a variety of security and trusted computing problems are addressed currently and what solutions are expected to emerge in the coming years. The discussion focuses on attacks aimed at hardware and software for embedded systems, and the authors describe specific solutions to create security features. Case studies are used to present new techniques designed as industrial security solutions. Coverage includes development of tamper resistant hardware and firmware mechanisms for lightweight embedded devices, as well as those serving as security anchors for embedded platforms required by applications such as smart power grids, smart networked and home appliances, environmental and infrastructure sensor networks, etc. · Enables readers to address a variety of security threats to embedded hardware and software. · Describes design of secure wireless sensor networks, to address secure authen...

  12. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  13. Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Ivan K. [Univ. of California, San Diego, CA (United States); Stevens, Rick [Argonne National Lab. (ANL), Argonne, IL (United States); Univ. of Chicago, IL (United States); Pino, Robinson [Dept. of Energy (DOE) Office of Science, Washington, DC (United States); Pechan, Michael [Dept. of Energy (DOE) Office of Science, Washington, DC (United States)

    2015-10-29

    Computation in its many forms is the engine that fuels our modern civilization. Modern computation, based on the von Neumann architecture, has allowed, until now, the development of continuous improvements, as predicted by Moore's law. However, computation using current architectures and materials will inevitably, within the next 10 years, reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like ("neuromorphic") computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS-based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: the development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully "neuromorphic" computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; the new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; the device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.

  14. Memory systems, processes, and tasks: taxonomic clarification via factor analysis.

    Science.gov (United States)

    Bruss, Peter J; Mitchell, David B

    2009-01-01

    The nature of various memory systems was examined using factor analysis. We reanalyzed data from 11 memory tasks previously reported in Mitchell and Bruss (2003). Four well-defined factors emerged, closely resembling episodic and semantic memory and conceptual and perceptual implicit memory, in line with both memory systems and transfer-appropriate processing accounts. To explore taxonomic issues, we ran separate analyses on the implicit tasks. Using a cross-format manipulation (pictures vs. words), we identified 3 prototypical tasks. Word fragment completion and picture fragment identification tasks were "factor pure," tapping perceptual processes uniquely. Category exemplar generation revealed its conceptual nature, yielding both cross-format priming and a picture superiority effect. In contrast, word stem completion and picture naming were more complex, revealing attributes of both processes.
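
The factor-analytic logic here — tasks driven by the same underlying process correlate more strongly with each other than with tasks driven by a different process — can be illustrated with a small synthetic sketch (all task names and scores below are invented for illustration, not the Mitchell and Bruss data):

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)
n = 200
perceptual = [random.gauss(0, 1) for _ in range(n)]  # latent perceptual process
conceptual = [random.gauss(0, 1) for _ in range(n)]  # latent conceptual process

def task(latent):
    """A task score = latent process + task-specific noise."""
    return [v + random.gauss(0, 0.3) for v in latent]

word_fragment = task(perceptual)   # hypothetical perceptual task scores
pict_fragment = task(perceptual)
category_gen  = task(conceptual)   # hypothetical conceptual task scores

within = pearson(word_fragment, pict_fragment)  # same factor
cross  = pearson(word_fragment, category_gen)   # different factors
print(f"within-factor r = {within:.2f}, cross-factor r = {cross:.2f}")
```

With simulated latent abilities, the two perceptual tasks correlate strongly while a perceptual and a conceptual task do not — exactly the pattern a factor analysis would resolve into separate factors.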

  15. Cortical Thickness and Episodic Memory Impairment in Systemic Lupus Erythematosus.

    Science.gov (United States)

    Bizzo, Bernardo Canedo; Sanchez, Tiago Arruda; Tukamoto, Gustavo; Zimmermann, Nicolle; Netto, Tania Maria; Gasparetto, Emerson Leandro

    2017-01-01

    The purpose of this study was to investigate differences in brain cortical thickness between systemic lupus erythematosus (SLE) patients with and without episodic memory impairment and healthy controls. We studied 51 patients divided into 2 groups (SLE with episodic memory deficit, n = 17; SLE without episodic memory deficit, n = 34) by the Rey Auditory Verbal Learning Test, and 34 healthy controls. Groups were paired based on sex, age, education, Mini-Mental State Examination score, and accumulated disease burden. Cortical thickness from magnetic resonance imaging scans was determined using the FreeSurfer software package. With time since diagnosis of SLE as a covariate, SLE patients with episodic memory deficits presented reduced cortical thickness in the left supramarginal cortex and superior temporal gyrus when compared to the control group, and in the right superior frontal, caudal and rostral middle frontal, and precentral gyri when compared to the SLE group without episodic memory impairment. There were no significant differences in cortical thickness between the SLE group without episodic memory deficits and the control group. Thinning of different memory-related cortical regions was found in the episodic memory deficit group when individually compared to the patients without memory impairment and to healthy controls. Copyright © 2016 by the American Society of Neuroimaging.

  16. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems, (2) secure information-flow microarchitecture, (3) memory-centric security architecture, (4) authentication control and its implications for security, (5) digital rights management, and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  17. Computer assisted inventory control system | Dessalegn | Zede ...

    African Journals Online (AJOL)

    The information system in business and manufacturing organizations is better organized with the help of computers. In a computer based system, the flow of information within the different departments of an organization and with the external environment can easily be maintained. A computer based decision support system ...

  18. Software For Monitoring VAX Computer Systems

    Science.gov (United States)

    Farkas, Les; Don, Ken; Lavery, David; Baron, Amy

    1994-01-01

    VAX Continuous Monitoring System (VAXCMS) computer program developed at NASA Headquarters to aid system managers in monitoring performances of VAX computer systems through generation of graphic images summarizing trends in performance metrics over time. VAXCMS written in DCL and VAX FORTRAN for use with DEC VAX-series computers running VMS 5.1 or later.

  19. Dissociating response systems: erasing fear from memory.

    Science.gov (United States)

    Soeter, Marieke; Kindt, Merel

    2010-07-01

    In addition to the extensive evidence in animals, we previously showed that disrupting reconsolidation by noradrenergic blockade produced amnesia for the original fear response in humans. Interestingly, the declarative memory for the fear association remained intact. These results called for a solid replication. Moreover, given the constructive nature of memories, the intact recollection of the fear association could eventually 'rebuild' the fear memory, resulting in the spontaneous recovery of the fear response. Yet, persistence of the amnesic effects would have substantial clinical implications, as even the most effective treatments for psychiatric disorders display high percentages of relapse. Using a differential fear conditioning procedure in humans, we replicated our previous findings by showing that administering propranolol (40 mg) prior to memory reactivation eliminated the startle fear response 24 h later. Most importantly, this effect persisted at one-month follow-up. Notably, the propranolol manipulation not only left the declarative memory for the acquired contingency untouched, but also left skin conductance discrimination intact. In addition, a close association between declarative knowledge and skin conductance responses was found. These findings are in line with the supposed double dissociation of fear conditioning and declarative knowledge relative to the amygdala and hippocampus in humans. They support the view that skin conductance conditioning primarily reflects contingency learning, whereas the startle response is a rather specific measure of fear. Furthermore, the results indicate the absence of a causal link between the actual knowledge of a fear association and its fear response, even though they often operate in parallel. Interventions targeting the amygdalar fear memory may be essential in specifically and persistently dampening the emotional impact of fear. From a clinical and ethical perspective, disrupting reconsolidation points to promising

  20. The memory systems of children with (central) auditory disorder.

    Science.gov (United States)

    Pires, Mayra Monteiro; Mota, Mailce Borges; Pinheiro, Maria Madalena Canina

    2015-01-01

    This study aims to investigate working, declarative, and procedural memory in children with (central) auditory processing disorder who showed poor phonological awareness. Thirty 9- and 10-year-old children participated in the study and were distributed into two groups: a control group consisting of 15 children with typical development, and an experimental group consisting of 15 children with (central) auditory processing disorder who were classified according to three behavioral tests and who showed poor phonological awareness in the CONFIAS test battery. The memory systems were assessed through the adapted tests in the program E-PRIME 2.0. The working memory was assessed by the Working Memory Test Battery for Children (WMTB-C), whereas the declarative memory was assessed by a picture-naming test and the procedural memory was assessed by means of a morphosyntactic processing test. The results showed that, when compared to the control group, children with poor phonological awareness scored lower in the working, declarative, and procedural memory tasks. The results of this study suggest that in children with (central) auditory processing disorder, phonological awareness is associated with the analyzed memory systems.

  1. Portable computers - portable operating systems

    International Nuclear Information System (INIS)

    Wiegandt, D.

    1985-01-01

    Hardware development has made rapid progress over the past decade. Computers used to have attributes like "general purpose" or "universal"; nowadays they are labelled "personal" and "portable". Recently, a major manufacturing company started marketing a portable version of their personal computer. But even for these small computers the old truth still holds that the biggest disadvantage of a computer is that it must be programmed; hardware by itself does not make a computer. (orig.)

  2. Computational memory architectures for autobiographic agents interacting in a complex virtual environment: a working model

    Science.gov (United States)

    Ho, Wan Ching; Dautenhahn, Kerstin; Nehaniv, Chrystopher

    2008-03-01

    In this paper, we discuss the concept of autobiographic agent and how memory may extend an agent's temporal horizon and increase its adaptability. These concepts are applied to an implementation of a scenario where agents are interacting in a complex virtual artificial life environment. We present computational memory architectures for autobiographic virtual agents that enable agents to retrieve meaningful information from their dynamic memories which increases their adaptation and survival in the environment. The design of the memory architectures, the agents, and the virtual environment are described in detail. Next, a series of experimental studies and their results are presented which show the adaptive advantage of autobiographic memory, i.e. from remembering significant experiences. Also, in a multi-agent scenario where agents can communicate via stories based on their autobiographic memory, it is found that new adaptive behaviours can emerge from an individual's reinterpretation of experiences received from other agents whereby higher communication frequency yields better group performance. An interface is described that visualises the memory contents of an agent. From an observer perspective, the agents' behaviours can be understood as individually structured, and temporally grounded, and, with the communication of experience, can be seen to rely on emergent mixed narrative reconstructions combining the experiences of several agents. This research leads to insights into how bottom-up story-telling and autobiographic reconstruction in autonomous, adaptive agents allow temporally grounded behaviour to emerge. The article concludes with a discussion of possible implications of this research direction for future autobiographic, narrative agents.
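
As a purely conceptual sketch of such an agent memory (not the authors' architecture; the class name, event strings, and significance scores are invented for illustration), an autobiographic memory can be modeled as a bounded store of experiences in which recall prefers the most significant ones:

```python
from collections import deque

class AutobiographicMemory:
    """Bounded event memory; recall prefers the most significant experiences."""
    def __init__(self, capacity=5):
        self.events = deque(maxlen=capacity)  # oldest experiences fade out

    def remember(self, event, significance):
        self.events.append((significance, event))

    def recall(self, k=2):
        # the k most significant remembered experiences guide current behaviour
        return [e for _, e in sorted(self.events, reverse=True)[:k]]

m = AutobiographicMemory()
m.remember("found water at the tree", 0.9)
m.remember("idle wandering", 0.1)
m.remember("attacked near the rocks", 0.8)
print(m.recall())  # ['found water at the tree', 'attacked near the rocks']
```

Remembering significant experiences and dropping insignificant ones is what extends the agent's temporal horizon without unbounded memory growth; story-based communication would amount to inserting another agent's recalled events into one's own store.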

  3. Optimizing Memory Transactions for Multicore Systems

    Science.gov (United States)

    Adl-Tabatabai, Ali-Reza; Kozyrakis, Christos; Saha, Bratin

    The shift to multicore architectures will require new programming technologies that enable mainstream developers to write parallel programs that can safely take advantage of the parallelism offered by multicore processors. One challenging aspect of shared memory parallel programming is synchronization. Programmers have traditionally used locks for synchronization, but lock-based synchronization has well-known pitfalls that make it hard to use for building thread-safe and scalable software components. Memory transactions have emerged as a promising alternative to lock-based synchronization because they promise to eliminate many of the problems associated with locks. Transactional programming constructs, however, have overheads and require optimizations to make them practical. Transactions can also benefit significantly from hardware support, and multicore processors with their large transistor budgets and on-chip memory hierarchies have the opportunity to provide this support.
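
The transactional alternative to locks can be sketched with a tiny optimistic software transactional memory: reads record version stamps, writes are buffered, and commit validates the read set before publishing. This is an illustrative toy (the `TVar`/`atomically` names echo the STM literature, not any specific system discussed above):

```python
import threading

class TVar:
    """Transactional variable: a value plus a version stamp."""
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()

def atomically(txn):
    """Run txn(read, write) as an optimistic transaction; retry on conflict."""
    while True:
        read_set, write_set = {}, {}
        def read(tv):
            if tv in write_set:              # read-your-own-writes
                return write_set[tv]
            read_set.setdefault(tv, tv.version)
            return tv.value
        def write(tv, v):
            write_set[tv] = v
        result = txn(read, write)
        with _commit_lock:
            # validate: nothing we read was committed by another transaction
            if all(tv.version == ver for tv, ver in read_set.items()):
                for tv, v in write_set.items():
                    tv.value, tv.version = v, tv.version + 1
                return result
        # validation failed; rerun the transaction from scratch

a, b = TVar(100), TVar(0)

def transfer(read, write):
    """Move 30 units from a to b; either both writes commit or neither."""
    write(a, read(a) - 30)
    write(b, read(b) + 30)

atomically(transfer)
print(a.value, b.value)  # 70 30
```

If another commit changes a variable between a transaction's read and its commit, validation fails and the transaction transparently retries — the property that removes many of the composition pitfalls of manual locking.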

  4. Method of Computer-aided Instruction in Situation Control Systems

    Directory of Open Access Journals (Sweden)

    Anatoliy O. Kargin

    2013-01-01

    Full Text Available The article considers the problem of computer-aided instruction in a context-chain-motivated situation control system for complex technical system behavior. The conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory, in the form of a situational agent set. The model and method of computer-aided instruction formalize informal theories from physiology and cognitive psychology. The formal instruction model describes how situations and reactions are formed and how this depends on parameters affecting learning, such as the reinforcement value and the time between stimulus, action, and reinforcement. The change of the contextual link between situational elements during use is also formalized. Examples and results are given from computer-instruction experiments with a LEGO MINDSTORMS NXT robot equipped with ultrasonic distance, touch, and light sensors.

  5. Computational skills, working memory, and conceptual knowledge in older children with mathematics learning disabilities.

    Science.gov (United States)

    Mabbott, Donald J; Bisanz, Jeffrey

    2008-01-01

    Knowledge and skill in multiplication were investigated for late elementary-grade students with mathematics learning disabilities (MLD), typically achieving age-matched peers, low-achieving age-matched peers, and ability-matched peers by examining multiple measures of computational skill, working memory, and conceptual knowledge. Poor multiplication fact mastery and calculation fluency and general working memory discriminated children with MLD from typically achieving age-matched peers. Furthermore, children with MLD were slower in executing backup procedures than typically achieving age-matched peers. The performance of children with MLD on multiple measures of multiplication skill and knowledge was most similar to that of ability-matched younger children. MLD may be due to difficulties in computational skills and working memory. Implications for the diagnosis and remediation of MLD are discussed.

  6. Fostering multidisciplinary learning through computer-supported collaboration script: The role of a transactive memory script

    NARCIS (Netherlands)

    Noroozi, O.; Weinberger, A.; Biemans, H.J.A.; Teasley, S.D.; Mulder, M.

    2012-01-01

    For solving many of today's complex problems, professionals need to collaborate in multidisciplinary teams. Facilitation of knowledge awareness and coordination among group members, that is through a Transactive Memory System (TMS), is vital in multidisciplinary collaborative settings. Online

  7. Transient Faults in Computer Systems

    Science.gov (United States)

    Masson, Gerald M.

    1993-01-01

    A powerful technique particularly appropriate for the detection of errors caused by transient faults in computer systems was developed. The technique can be implemented in either software or hardware; the research conducted thus far primarily considered software implementations. The error detection technique developed has the distinct advantage of having provably complete coverage of all errors caused by transient faults that affect the output produced by the execution of a program. In other words, the technique does not have to be tuned to a particular error model to enhance error coverage. Also, the correctness of the technique can be formally verified. The technique uses time and software redundancy. The foundation for an effective, low-overhead, software-based certification trail approach to real-time error detection resulting from transient fault phenomena was developed.

  8. Memory architecture for efficient utilization of SDRAM: a case study of the computation/memory access trade-off

    DEFF Research Database (Denmark)

    Gleerup, Thomas Møller; Holten-Lund, Hans Erik; Madsen, Jan

    2000-01-01

    This paper discusses the trade-off between calculations and memory accesses in a 3D graphics tile renderer for visualization of data from medical scanners. The performance requirement of this application is a frame rate of 25 frames per second when rendering 3D models with 2 million triangles. In software, forward differencing is usually better, but in this hardware implementation the trade-off has made it possible to develop a very regular memory architecture with a buffering system, which can reach 95% bandwidth utilization using off-the-shelf SDRAM. This is achieved by changing the algorithm to use a memory access strategy with write-only and read-only phases, and a buffering system which uses round-robin bank write-access combined with burst read-access.
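
The round-robin bank strategy relies on address interleaving, so that consecutive burst-sized blocks land in different banks and one bank can precharge while another transfers. A minimal sketch of such a mapping (the bank count and burst length are illustrative, not taken from the paper):

```python
NUM_BANKS = 4   # illustrative; real SDRAM parts vary
BURST_LEN = 8   # words transferred per burst access

def bank_of(addr):
    """Low-order interleaving: consecutive burst-sized blocks rotate across banks."""
    return (addr // BURST_LEN) % NUM_BANKS

# A sequential write stream visits the banks round-robin, so each bank
# can precharge/activate while the others are transferring data.
stream = [bank_of(addr) for addr in range(0, 5 * BURST_LEN, BURST_LEN)]
print(stream)  # [0, 1, 2, 3, 0]
```

Separating write-only and read-only phases then keeps each bank streaming bursts in one direction at a time, which is what pushes utilization toward the quoted 95%.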

  9. Memory.

    Science.gov (United States)

    McKean, Kevin

    1983-01-01

    Discusses current research (including that involving amnesiacs and snails) into the nature of the memory process, differentiating between and providing examples of "fact" memory and "skill" memory. Suggests that three brain parts (thalamus, fornix, mammilary body) are involved in the memory process. (JN)

  10. Computer system reliability safety and usability

    CERN Document Server

    Dhillon, BS

    2013-01-01

    Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems.After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,

  11. Impact of new computing systems on finite element computations

    International Nuclear Information System (INIS)

    Noor, A.K.; Fulton, R.E.; Storaasi, O.O.

    1983-01-01

    Recent advances in computer technology that are likely to impact finite element computations are reviewed. The characteristics of supersystems, highly parallel systems, and small systems (mini and microcomputers) are summarized. The interrelations of numerical algorithms and software with parallel architectures are discussed. A scenario is presented for future hardware/software environment and finite element systems. A number of research areas which have high potential for improving the effectiveness of finite element analysis in the new environment are identified

  12. Memory as a Factor in the Computational Efficiency of Dyslexic Children with High Abstract Reasoning Ability.

    Science.gov (United States)

    Steeves, K. Joyce

    1983-01-01

    A study involving dyslexic children (10-14 years old) with average and high reasoning ability and nondyslexic children with and without superior mathematical ability suggested that the high reasoning dyslexic Ss had similar abstract reasoning ability but lower computation and memory skills than mathematically gifted nondyslexic Ss. (CL)

  13. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    Science.gov (United States)

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation based teaching methods, that is, realistic simulation and non-realistic simulation on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  14. From shoebox to performative agent: the computer as personal memory machine

    NARCIS (Netherlands)

    van Dijck, J.

    2005-01-01

    Digital technologies offer new opportunities in the everyday lives of people: with still expanding memory capacities, the computer is rapidly becoming a giant storage and processing facility for recording and retrieving ‘bits of life’. Software engineers and companies promise not only to expand the

  15. Computational Skills, Working Memory, and Conceptual Knowledge in Older Children with Mathematics Learning Disabilities

    Science.gov (United States)

    Mabbott, Donald J.; Bisanz, Jeffrey

    2008-01-01

    Knowledge and skill in multiplication were investigated for late elementary-grade students with mathematics learning disabilities (MLD), typically achieving age-matched peers, low-achieving age-matched peers, and ability-matched peers by examining multiple measures of computational skill, working memory, and conceptual knowledge. Poor…

  16. On the Universal Computing Power of Amorphous Computing Systems

    Czech Academy of Sciences Publication Activity Database

    Wiedermann, Jiří; Petrů, L.

    2009-01-01

    Roč. 45, č. 4 (2009), s. 995-1010 ISSN 1432-4350 R&D Projects: GA AV ČR 1ET100300517; GA ČR GD201/05/H014 Institutional research plan: CEZ:AV0Z10300504 Keywords : amorphous computing systems * universal computing * random access machine * simulation Subject RIV: IN - Informatics, Computer Science Impact factor: 0.726, year: 2009

  17. A study of standard building blocks for the design of fault-tolerant distributed computer systems

    Science.gov (United States)

    Rennels, D. A.; Avizienis, A.; Ercegovac, M.

    1978-01-01

    This paper presents the results of a study that has established a standard set of four semiconductor VLSI building-block circuits. These circuits can be assembled with off-the-shelf microprocessors and semiconductor memory modules into fault-tolerant distributed computer configurations. The resulting multi-computer architecture uses self-checking computer modules backed up by a limited number of spares. A redundant bus system is employed for communication between computer modules.
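
The idea of self-checking computer modules backed by spares can be sketched in a few lines: each module runs two copies of a computation and flags disagreement, and the system switches to a spare when the check fails. This is a conceptual sketch, not the building-block circuits of the study; the function names are invented:

```python
def self_checking(f, g):
    """Self-checking module: run two copies of a computation, flag disagreement."""
    def checked(x):
        a, b = f(x), g(x)
        return a, a == b          # (result, check-passed)
    return checked

def run(modules, x):
    """Use the first module whose internal check passes; fall back to spares."""
    for m in modules:
        out, ok = m(x)
        if ok:
            return out
    raise RuntimeError("all modules failed their self-checks")

square = lambda x: x * x
faulty = lambda x: x * x + 1      # injected fault in one copy

primary = self_checking(square, faulty)   # copies disagree -> flagged faulty
spare   = self_checking(square, square)

print(run([primary, spare], 5))  # 25: the spare takes over
```

Because a module detects its own faults by internal comparison, only a limited number of spares is needed rather than full triplication of every unit.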

  18. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  19. Conflict Resolution in Computer Systems

    Directory of Open Access Journals (Sweden)

    G. P. Mojarov

    2015-01-01

    Full Text Available A conflict situation in computer systems (CS) arises when processes share access to common resources and none of the involved processes can proceed, because each is waiting for resources locked by other processes which are, in turn, in the same position. Such a conflict situation is also called a deadlock, and it has a clear impact on the CS state. Finding practical algorithms for resolving deadlocks is of significant applied importance for ensuring the information security of the computing process, and the presented article is aimed at this problem. The severity of the situation depends on the types of processes in a deadlock, the types of resources used, the number of processes, and many other factors. A disadvantage of the deadlock-prevention method used in many modern operating systems, based on preliminary planning of the resources required by a process, is obvious: the waiting time can be very long. The prevention method that interrupts a process and deallocates its resources is very specific and not very effective when a set of heterogeneous resources is requested dynamically. The drawback of another method, preventing deadlock by ordering resources, is that it restricts the possible sequences of resource requests. A different way of "fighting" deadlocks is deadlock avoidance, which anticipates impending deadlocks. Methods are known [1,4,5] for detecting and preventing the conditions under which deadlocks may occur. These use preliminary information on which resources a running process may request. Before a free resource is allocated to a process, the resulting state is tested for a "safety" condition. The state is "safe" if no deadlock can occur in the future as a result of allocating the resource to the process. Otherwise the state is considered "hazardous", and resource allocation is postponed. The obvious
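
The "safe state" test is essentially the Banker's algorithm: a state is safe if some order exists in which every process can obtain its remaining need and finish, releasing what it holds. A minimal sketch (the resource vectors below are illustrative):

```python
def is_safe(available, allocation, need):
    """Banker's-style safety test: does an order exist in which every process
    can obtain its remaining need, finish, and release its allocation?"""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion and release what it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# three processes, two resource types (all numbers illustrative)
allocation = [[0, 1], [2, 0], [1, 1]]   # currently held
need       = [[1, 0], [0, 1], [2, 1]]   # still required

print(is_safe([1, 1], allocation, need))  # True: order P0, P1, P2 works
print(is_safe([0, 0], allocation, need))  # False: nobody can finish
```

An allocation request is granted only if the state it produces still passes this test; otherwise the request is postponed, which is the "hazardous state" case described above.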

  20. Buyer's Guide to Computer Based Instructional Systems.

    Science.gov (United States)

    Fratini, Robert C.

    1981-01-01

    Examines the advantages and disadvantages of shared multiterminal computer based instruction (CBI) systems, dedicated multiterminal CBI systems, and stand-alone CBI systems. A series of questions guide consumers in matching a system's capabilities with an organization's needs. (MER)

  1. A Development of Computer Controlled 5 Axis Ultrasonic Testing System

    International Nuclear Information System (INIS)

    Kim, Y. S.; Kim, J. G.; Park, J. C.; Kim, N. I.

    1990-01-01

    A computer-controlled 5-axis ultrasonic testing system was developed in order to detect flaws in special parts with complex shapes. Various kinds of ultrasonic tests can be performed automatically using a computer program developed by DHI (Daewoo Heavy Industries Ltd.). With this program, the detector location can be programmed and the amplitude signal of the echo can be processed digitally. Test results can be plotted graphically on a high-resolution display monitor in real time. Test data can also be saved to magnetic storage devices (HDD or FDD) as well as output in hard copy on a color printer. The software contains C-scan and C+A-scan processing programs as well as statistical analysis of test data.

  2. Dissociating response systems: erasing fear from memory

    NARCIS (Netherlands)

    Soeter, A.C.; Kindt, M.

    2010-01-01

    In addition to the extensive evidence in animals, we previously showed that disrupting reconsolidation by noradrenergic blockade produced amnesia for the original fear response in humans. Interestingly, the declarative memory for the fear association remained intact. These results asked for a solid

  3. Know Your Personal Computer The Personal Computer System ...

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 4. Know Your Personal Computer The Personal Computer System Software. Siddhartha Kumar Ghoshal. Series Article Volume 1 Issue 4 April 1996 pp 31-36. Fulltext. Click here to view fulltext PDF. Permanent link:

  4. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in the use of computational resources on the selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory-efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform are presented in this paper. This is accomplished by proposing a new memory-efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory-efficient architecture, along with proper input/output interfaces, is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering-based motion detection architecture. The new memory-efficient system robustly and automatically detects motion in real-world scenarios (both static and pseudo-stationary backgrounds) in real time for standard PAL (720 × 576) color video.
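
The paper's scheme is a clustering-based hardware architecture; as a much simpler software illustration of the underlying idea, motion can be detected by thresholded differencing of the current frame against a background frame (the frame size and threshold below are illustrative):

```python
THRESHOLD = 30  # minimum intensity change counted as motion (illustrative)

def detect_motion(background, frame):
    """Thresholded per-pixel differencing: 1 where intensity changed enough."""
    return [[int(abs(c - b) > THRESHOLD) for b, c in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

background = [[10, 10, 10],
              [10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 10, 10],
              [10, 200, 10],   # an object appears at the centre pixel
              [10, 10, 10]]

mask = detect_motion(background, frame)
print(mask)  # [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

The memory cost of any such scheme is dominated by the per-pixel background model that must be stored between frames, which is exactly what the proposed architecture reduces.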

  5. Process computer systems for fossil and nuclear power plants

    International Nuclear Information System (INIS)

    Iida, Hiroshi; Matsumura, Jube; Nakamura, Hideo; Nakano, Yoshiyuki

    1976-01-01

    Process computer systems are playing an increasingly important role in the automatic control of thermal power plants. In nuclear power plants, it is considered indispensable that reactor behavior be tracked with computers to improve plant safety. The application of computers to thermal power generating units has developed in four stages: plant performance monitoring, sequence monitoring, operation automation, and overall total automation. The overall total automation system is in the development stage. In nuclear power plants, although the plant control, instrumentation, and operation systems differ significantly depending upon the reactor type, computers are mainly employed to grasp accurately the complicated reactor conditions in real time in light water reactors, which form 77% of the nuclear power generating units now in operation. HITAC 7250 was adopted for the No. 1 plant of the Shimane Nuclear Power Station, Chugoku Electric Power Co.; it includes the functions of the standard computers for BWRs and, in addition, realized the introduction of colored cathode ray tubes for the first time in the world. Expansion of computer application in power plants is expected in the following areas: (1) CRT-incorporated central control panels, (2) plant accident protection, (3) computerization of sub-loop equipment, (4) storage, retrieval, and linking of plant operation data, (5) environmental monitoring in the periphery of power stations, and (6) control of radiation exposure (nuclear plants only). (Wakatsuki, Y.)

  6. Computation in the Learning System of Cephalopods.

    Science.gov (United States)

    Young, J Z

    1991-04-01

    The memory mechanisms of cephalopods consist of a series of matrices of intersecting axes, which find associations between the signals of input events and their consequences. The tactile memory is distributed among eight such matrices, and there is also some suboesophageal learning capacity. The visual memory lies in the optic lobe and four matrices, with some re-exciting pathways. In both systems, damage to any part reduces proportionally the effectiveness of the whole memory. These matrices are somewhat like those in mammals, for instance those in the hippocampus. The first matrix in both visual and tactile systems receives signals of vision and taste, and its output serves to increase the tendency to attack or to take with the arms. The second matrix provides for the correlation of groups of signals on its neurons, which pass signals to the third matrix. Here large cells find clusters in the sets of signals. Their output re-excites those of the first lobe, unless pain occurs. In that case, this set of cells provides a record that ensures retreat. There is experimental evidence that these distributed memory systems allow for the identification of categories of visual and tactile inputs, for generalization, and for decision on appropriate behavior in the light of experience. The evidence suggests that learning in cephalopods is not localized to certain layers or "grandmother cells" but is distributed with high redundance in serial networks, with recurrent circuits.
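
The matrix memories described here — associations distributed redundantly across intersecting axes, linking input signals to outcomes — behave much like a linear associative memory built from Hebbian outer products. A toy sketch (the patterns are invented for illustration):

```python
def train(pairs, size):
    """Hebbian outer-product learning: sum y * x^T over (input, outcome) pairs."""
    W = [[0.0] * size for _ in range(size)]
    for x, y in pairs:
        for i in range(size):
            for j in range(size):
                W[i][j] += y[i] * x[j]
    return W

def recall(W, x):
    """Matrix-vector product: present an input, read out the associated outcome."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

attack  = [1, 0, 0, 0]        # outcome patterns (invented)
retreat = [0, 0, 0, 1]
food    = [1, 1, -1, -1]      # input signal patterns (orthogonal by design)
pain    = [-1, 1, 1, -1]

W = train([(food, attack), (pain, retreat)], 4)
print(recall(W, food))  # [4.0, 0.0, 0.0, 0.0] -> strongest on "attack"
```

Because each association is spread across the whole matrix rather than stored in single "grandmother cells", zeroing a few weights only proportionally degrades recall, echoing the observation that damage to any part reduces the effectiveness of the whole memory.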

  7. A distributed computer system for digitising machines

    International Nuclear Information System (INIS)

    Bairstow, R.; Barlow, J.; Waters, M.; Watson, J.

    1977-07-01

    This paper describes a Distributed Computing System, based on micro computers, for the monitoring and control of digitising tables used by the Rutherford Laboratory Bubble Chamber Research Group in the measurement of bubble chamber photographs. (author)

  8. Memory Efficient Data Structures for Explicit Verification of Timed Systems

    DEFF Research Database (Denmark)

    Taankvist, Jakob Haahr; Srba, Jiri; Larsen, Kim Guldstrand

    2014-01-01

    Timed analysis of real-time systems can be performed using continuous (symbolic) or discrete (explicit) techniques. The explicit state-space exploration can be considerably faster for models with moderately small constants, however, at the expense of high memory consumption. In the setting of timed-arc Petri nets, we explore new data structures for lowering the used memory: PTries for efficient storing of configurations and time darts for semi-symbolic description of the state-space. Both methods are implemented as a part of the tool TAPAAL and the experiments document at least one order of magnitude of memory savings while preserving comparable verification times.

  9. Systemic Lisbon Battery: Normative Data for Memory and Attention Assessments.

    Science.gov (United States)

    Gamito, Pedro; Morais, Diogo; Oliveira, Jorge; Ferreira Lopes, Paulo; Picareli, Luís Felipe; Matias, Marcelo; Correia, Sara; Brito, Rodrigo

    2016-05-04

    Memory and attention are two cognitive domains pivotal for the performance of instrumental activities of daily living (IADLs). The assessment of these functions is still widely carried out with pencil-and-paper tests, which lack ecological validity. The evaluation of cognitive and memory functions while the patients are performing IADLs should contribute to the ecological validity of the evaluation process. The objective of this study is to establish normative data from virtual reality (VR) IADLs designed to activate memory and attention functions. A total of 243 non-clinical participants carried out a paper-and-pencil Mini-Mental State Examination (MMSE) and performed 3 VR activities: art gallery visual matching task, supermarket shopping task, and memory fruit matching game. The data (execution time and errors, and money spent in the case of the supermarket activity) was automatically generated from the app. Outcomes were computed using non-parametric statistics, due to non-normality of distributions. Age, academic qualifications, and computer experience all had significant effects on most measures. Normative values for different levels of these measures were defined. Age, academic qualifications, and computer experience should be taken into account while using our VR-based platform for cognitive assessment purposes. ©Pedro Gamito, Diogo Morais, Jorge Oliveira, Paulo Ferreira Lopes, Luís Felipe Picareli, Marcelo Matias, Sara Correia, Rodrigo Brito. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 04.05.2016.

  10. Memory allocation and computations for Laplace’s equation of 3-D arbitrary boundary problems

    Directory of Open Access Journals (Sweden)

    Tsay Tswn-Syau

    2017-01-01

    Computation iteration schemes and a memory allocation technique for the finite difference method were presented in this paper. The transformed form of a groundwater flow problem in generalized curvilinear coordinates was taken as the illustrating example, and a 3-dimensional second order accurate 19-point scheme was presented. Traditional element-by-element methods (e.g. SOR) are preferred since they are simple and memory efficient, but time consuming in computation. For efficient memory allocation, an index method was presented to store the sparse non-symmetric matrix of the problem. For computations, conjugate-gradient-like methods were reported to be computationally efficient; among them, using incomplete Choleski decomposition as a preconditioner was reported to be a good method for iteration convergence. In general, the index method developed in this paper has the following advantages: (1) adaptable to various governing and boundary conditions, (2) flexible for higher order approximation, (3) independent of problem dimension, (4) efficient for complex problems where the global matrix is not symmetric, (5) convenient for general sparse matrices, (6) computationally efficient in the most time consuming procedure of matrix multiplication, and (7) applicable to any developed matrix solver.
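
    The index-based storage the abstract describes can be illustrated with a compressed sparse row (CSR) layout, a common realization of such an index method. The layout and the small example matrix below are illustrative assumptions, not taken from the paper:

```python
# Illustrative CSR (compressed sparse row) realization of an index-based
# storage scheme for a sparse, non-symmetric matrix; the layout and the
# small 3x3 example matrix are assumptions, not from the paper.

def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a CSR-stored sparse matrix by the vector x."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):                          # loop over rows
        for k in range(row_ptr[i], row_ptr[i + 1]):  # nonzeros of row i
            y[i] += values[k] * x[col_idx[k]]
    return y

# Nonzero entries of the 3x3 matrix [[4, -1, 0], [-1, 4, -1], [0, -1, 4]]
values  = [4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0]
col_idx = [0, 1, 0, 1, 2, 1, 2]
row_ptr = [0, 2, 5, 7]  # row i occupies entries row_ptr[i]..row_ptr[i+1]-1

print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 2.0, 3.0]
```

    Only the nonzero entries are stored, so memory grows with the 19 nonzeros per row of the stencil rather than with the full matrix, and the matrix-vector product at the heart of conjugate-gradient-like iterations touches only those entries.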

  11. A revised limbic system model for memory, emotion and behaviour.

    Science.gov (United States)

    Catani, Marco; Dell'acqua, Flavio; Thiebaut de Schotten, Michel

    2013-09-01

    Emotion, memories and behaviour emerge from the coordinated activities of regions connected by the limbic system. Here, we propose an update of the limbic model based on the seminal work of Papez, Yakovlev and MacLean. In the revised model we identify three distinct but partially overlapping networks: (i) the hippocampal-diencephalic and parahippocampal-retrosplenial network dedicated to memory and spatial orientation; (ii) the temporo-amygdala-orbitofrontal network for the integration of visceral sensation and emotion with semantic memory and behaviour; (iii) the default-mode network involved in autobiographical memories and introspective self-directed thinking. The three networks share cortical nodes that are emerging as principal hubs in connectomic analysis. This revised network model of the limbic system reconciles recent functional imaging findings with anatomical accounts of clinical disorders commonly associated with limbic pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Computational Approach to Profit Optimization of a Loss-Queueing System

    Directory of Open Access Journals (Sweden)

    Dinesh Kumar Yadav

    2010-01-01

    The objective of this paper is the profit optimization of a loss queueing system with finite capacity. Here, we define and compute the total expected cost (TEC), the total expected revenue (TER), and consequently the total optimal profit (TOP) of the system. In order to compute the total optimal profit of the system, a computing algorithm has been developed and a fast-converging N-R method has been employed, which requires the least computing time and less memory space compared to other methods. Sensitivity analysis and observations based on graphics add significant value to this model.
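
    As an illustration of the TEC/TER/TOP computation, here is a minimal sketch for an M/M/1/N loss queue. The revenue and cost parameters and the cost model are hypothetical; the paper's exact definitions of TEC and TER may differ:

```python
# Minimal sketch of TEC/TER/TOP for an M/M/1/N loss queue. The revenue and
# cost parameters and the simple cost model are hypothetical assumptions;
# the paper's exact definitions may differ.

def blocking_probability(lam, mu, N):
    """P(arriving customer is lost) in an M/M/1/N queue."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (N + 1)
    return (1.0 - rho) * rho**N / (1.0 - rho**(N + 1))

def total_optimal_profit(lam, mu, N, revenue_per_customer, cost_per_hour):
    p_loss = blocking_probability(lam, mu, N)
    ter = lam * (1.0 - p_loss) * revenue_per_customer  # total expected revenue
    tec = cost_per_hour                                # total expected cost
    return ter - tec                                   # TOP = TER - TEC

print(round(total_optimal_profit(2.0, 3.0, 5, 10.0, 5.0), 4))  # 14.0376
```

    A numerical optimization (e.g. Newton-Raphson over N or the service rate) would then search for the capacity that maximizes this profit.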

  13. Computer control system of TARN-II

    International Nuclear Information System (INIS)

    Watanabe, S.

    1990-01-01

    This report describes the present status and future plan of the TARN-II computer control system. At present, the microcomputer M-16 is used as the main control computer to regulate the 64 kinds of power supplies coupled with the serial CAMAC interface system. Excitation control of the main ring magnets is performed with the aid of a self-learning technique to optimize the tracking error among them. RF pattern control is also performed with the microcomputer, as is the main ring control system. A new computer system linked via Ethernet is planned to improve the computing power and portability of the present control system. (author)

  14. Brains of verbal memory specialists show anatomical differences in language, memory and visual systems.

    Science.gov (United States)

    Hartzell, James F; Davis, Ben; Melcher, David; Miceli, Gabriele; Jovicich, Jorge; Nath, Tanmay; Singh, Nandini Chatterjee; Hasson, Uri

    2016-05-01

    We studied a group of verbal memory specialists to determine whether intensive oral text memory is associated with structural features of hippocampal and lateral-temporal regions implicated in language processing. Professional Vedic Sanskrit Pandits in India train from childhood for around 10 years in an ancient, formalized tradition of oral Sanskrit text memorization and recitation, mastering the exact pronunciation and invariant content of multiple 40,000-100,000 word oral texts. We conducted structural analysis of gray matter density, cortical thickness, local gyrification, and white matter structure, relative to matched controls. We found massive gray matter density and cortical thickness increases in Pandit brains in language, memory and visual systems, including i) bilateral lateral temporal cortices and ii) the anterior cingulate cortex and the hippocampus, regions associated with long and short-term memory. Differences in hippocampal morphometry matched those previously documented for expert spatial navigators and individuals with good verbal working memory. The findings provide unique insight into the brain organization implementing formalized oral knowledge systems. Copyright © 2015. Published by Elsevier Inc.

  15. Information processing in bacteria: memory, computation, and statistical physics: a key issues review

    International Nuclear Information System (INIS)

    Lan, Ganhui; Tu, Yuhai

    2016-01-01

    preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network—the main players (nodes) and their interactions (links)—in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions to guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high throughput methods (sequencing, gene expression microarray, etc) and development of quantitative in vivo techniques such as various fluorescence technologies, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory.
We will also

  16. Information processing in bacteria: memory, computation, and statistical physics: a key issues review

    Science.gov (United States)

    Lan, Ganhui; Tu, Yuhai

    2016-05-01

    preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network—the main players (nodes) and their interactions (links)—in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions to guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high throughput methods (sequencing, gene expression microarray, etc) and development of quantitative in vivo techniques such as various fluorescence technologies, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory.
We will also

  17. Universal blind quantum computation for hybrid system

    Science.gov (United States)

    Huang, He-Liang; Bao, Wan-Su; Li, Tan; Li, Feng-Guang; Fu, Xiang-Qun; Zhang, Shuo; Zhang, Hai-Long; Wang, Xiang

    2017-08-01

    As progress on building quantum computers continues, first-generation practical quantum computers will become available to ordinary users in the cloud, in a style similar to today's IBM Quantum Experience. Clients can remotely access the quantum servers using simple devices. In such a situation, it is of prime importance to protect the security of the client's information. Blind quantum computation protocols enable a client with limited quantum technology to delegate her quantum computation to a quantum server without leaking any privacy. To date, blind quantum computation has been considered only for an individual quantum system. However, a practical universal quantum computer is likely to be a hybrid system. Here, we take the first step toward constructing a framework of blind quantum computation for the hybrid system, which provides a more feasible way for scalable blind quantum computation.

  18. Software Coherence in Multiprocessor Memory Systems. Ph.D. Thesis

    Science.gov (United States)

    Bolosky, William Joseph

    1993-01-01

    Processors are becoming faster and multiprocessor memory interconnection systems are not keeping up. Therefore, it is necessary to have threads and the memory they access as near one another as possible. Typically, this involves putting memory or caches with the processors, which gives rise to the problem of coherence: if one processor writes an address, any other processor reading that address must see the new value. This coherence can be maintained by the hardware or with software intervention. Systems of both types have been built in the past; the hardware-based systems tended to outperform the software ones. However, the ratio of processor to interconnect speed is now so high that the extra overhead of the software systems may no longer be significant. This issue is explored both by implementing a software maintained system and by introducing and using the technique of offline optimal analysis of memory reference traces. It finds that in properly built systems, software maintained coherence can perform comparably to or even better than hardware maintained coherence. The architectural features necessary for efficient software coherence to be profitable include a small page size, a fast trap mechanism, and the ability to execute instructions while remote memory references are outstanding.

  19. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
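
    The fence semantics described above (a FENCE that completes only after every previously initiated DMA transfer between the two endpoints has drained, with no per-transfer FENCE accounting) can be sketched conceptually; this is an illustration of the idea, not the actual PAMI API:

```python
# Conceptual sketch of the fence semantics (not the actual PAMI API): DMA
# transfers between two endpoints are delivered deterministically in order,
# so a FENCE needs no per-transfer accounting; it completes exactly when
# every DMA instruction initiated before it has finished.

class FencedChannel:
    def __init__(self):
        self.initiated = 0   # DMA instructions issued so far
        self.completed = 0   # DMA instructions finished so far

    def dma_put(self):
        """Initiate one DMA transfer on this endpoint pair."""
        self.initiated += 1

    def complete_one(self):
        """The DMA controller reports one transfer finished (in order)."""
        self.completed += 1

    def fence_complete(self):
        # Two counters suffice because delivery through the shared-memory
        # segment is deterministic and ordered.
        return self.completed == self.initiated

ch = FencedChannel()
ch.dma_put()
ch.dma_put()
print(ch.fence_complete())  # False: transfers still in flight
ch.complete_one()
ch.complete_one()
print(ch.fence_complete())  # True: the fence may now complete
```
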

  20. Trust in Social Computing

    Science.gov (United States)

    2014-04-07

    [Recovered slide fragments:] Recommender systems suggest items that match users' preferences, depending only on users' past behaviors (memory-based CF and model-based CF). Topics include memory-based trust-aware recommender systems and their evaluation, and computational understanding of trust in the social sciences. (Arizona State University)

  1. The Northeast Utilities generic plant computer system

    International Nuclear Information System (INIS)

    Spitzner, K.J.

    1980-01-01

    A variety of computer manufacturers' equipment monitors plant systems in Northeast Utilities' (NU) nuclear and fossil power plants. The hardware configuration and the application software in each of these systems are essentially one of a kind. Over the next few years these computer systems will be replaced by the NU Generic System, whose prototype is under development now for Millstone III, a 1150 MWe Pressurized Water Reactor plant being constructed in Waterford, Connecticut. This paper discusses the Millstone III computer system design, concentrating on the special problems inherent in a distributed system configuration such as this. (auth)

  2. Distributed computer systems theory and practice

    CERN Document Server

    Zedan, H S M

    2014-01-01

    Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems such as Amoeba, Argus, Andrew, and Grapevine. One paper discusses the concepts and notations for concurrent programming, particularly language notation used in computer programming and synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely, static balancing and adaptive load balancing. For program effici

  3. Computational cost estimates for parallel shared memory isogeometric multi-frontal solvers

    KAUST Repository

    Woźniak, Maciej

    2014-06-01

    In this paper we present computational cost estimates for parallel shared memory isogeometric multi-frontal solvers. The estimates show that the ideal isogeometric shared memory parallel direct solver scales as O(p^2 log(N/p)) for one dimensional problems, O(Np^2) for two dimensional problems, and O(N^(4/3)p^2) for three dimensional problems, where N is the number of degrees of freedom and p is the polynomial order of approximation. The computational costs of the shared memory parallel isogeometric direct solver are compared with those of the sequential isogeometric direct solver, the latter being equal to O(Np^2) for the one dimensional case, O(N^1.5 p^3) for the two dimensional case, and O(N^2 p^3) for the three dimensional case. The shared memory version thus significantly reduces the computational cost in terms of both N and p. Theoretical estimates are compared with numerical experiments performed with linear, quadratic, cubic, quartic, and quintic B-splines, in one and two spatial dimensions. © 2014 Elsevier Ltd. All rights reserved.
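
    The quoted estimates imply a large asymptotic gain; for instance, in the two dimensional case the ratio of sequential to parallel cost is O(N^0.5 p). The sketch below evaluates that ratio for illustrative values of N and p (the values are arbitrary, not from the paper):

```python
# Illustrative comparison of the quoted two-dimensional costs: sequential
# O(N^1.5 * p^3) versus ideal shared-memory parallel O(N * p^2). The values
# of N and p below are arbitrary examples.

def seq_cost_2d(N, p):
    return N**1.5 * p**3

def par_cost_2d(N, p):
    return N * p**2

N, p = 10**6, 3
speedup = seq_cost_2d(N, p) / par_cost_2d(N, p)
print(round(speedup))  # ratio ~ sqrt(N) * p, i.e. 3000 here
```
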

  4. Intelligence quotient-adjusted memory impairment is associated with abnormal single photon emission computed tomography perfusion.

    Science.gov (United States)

    Rentz, Dorene M; Huh, Terri J; Sardinha, Lisa M; Moran, Erin K; Becker, John A; Daffner, Kirk R; Sperling, Reisa A; Johnson, Keith A

    2007-09-01

    Cognitive reserve among highly intelligent older individuals makes detection of early Alzheimer's disease (AD) difficult. We tested the hypothesis that mild memory impairment determined by IQ-adjusted norms is associated with single photon emission computed tomography (SPECT) perfusion abnormality at baseline and predictive of future decline. Twenty-three subjects with a Clinical Dementia Rating (CDR) score of 0, were reclassified after scores were adjusted for IQ into two groups, 10 as having mild memory impairments for ability (IQ-MI) and 13 as memory-normal (IQ-MN). Subjects underwent cognitive and functional assessments at baseline and annual follow-up for 3 years. Perfusion SPECT was acquired at baseline. At follow-up, the IQ-MI subjects demonstrated decline in memory, visuospatial processing, and phonemic fluency, and 6 of 10 had progressed to a CDR of 0.5, while the IQ-MN subjects did not show decline. The IQ-MI group had significantly lower perfusion than the IQ-MN group in parietal/precuneus, temporal, and opercular frontal regions. In contrast, higher perfusion was observed in IQ-MI compared with IQ-MN in the left medial frontal and rostral anterior cingulate regions. IQ-adjusted memory impairment in individuals with high cognitive reserve is associated with baseline SPECT abnormality in a pattern consistent with prodromal AD and predicts subsequent cognitive and functional decline.

  5. General-purpose interface bus for multiuser, multitasking computer system

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    The architecture of a multiuser, multitasking, virtual-memory computer system intended for use by a medium-size research group is described. There are three central processing units (CPUs) in the configuration, each with 16 MB of memory and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast Fourier transformations; in addition, CPU 1 shares display images with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.

  6. Propagating fronts in reaction-transport systems with memory

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, A. [Department of Chemistry, Southern Methodist University, Dallas, TX 75275-0314 (United States)], E-mail: ayadav1@lsu.edu; Fedotov, Sergei [School of Mathematics, University of Manchester, Manchester M60 1DQ (United Kingdom)], E-mail: sergei.fedotov@manchester.ac.uk; Mendez, Vicenc [Grup de Fisica Estadistica, Departament de Fisica, Universitat Autonoma de Barcelona, E-08193 Bellaterra (Spain)], E-mail: vicenc.mendez@uab.es; Horsthemke, Werner [Department of Chemistry, Southern Methodist University, Dallas, TX 75275-0314 (United States)], E-mail: whorsthe@smu.edu

    2007-11-26

    In reaction-transport systems with non-standard diffusion, the memory of the transport causes a coupling of reactions and transport. We investigate the effect of this coupling for systems with Fisher-type kinetics and obtain a general analytical expression for the front speed. We apply our results to the specific case of subdiffusion.
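
    For orientation, the classical memoryless baseline that the paper's general front-speed expression modifies can be written down directly; the parameter values below are arbitrary:

```python
import math

# Baseline against which memory effects are measured: for standard
# (memoryless) diffusion with Fisher kinetics u_t = D*u_xx + r*u*(1 - u),
# the classical pulled-front speed is v = 2*sqrt(r*D). The general
# expression with transport memory reduces to this in the Markovian limit.

def fisher_front_speed(r, D):
    """Classical Fisher-KPP front speed for growth rate r, diffusivity D."""
    return 2.0 * math.sqrt(r * D)

print(fisher_front_speed(1.0, 0.25))  # 1.0
```
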

  7. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.

  8. Learning to live independently with expert systems in memory rehabilitation.

    Science.gov (United States)

    Man, D W K; Tam, S F; Hui-Chan, C W Y

    2003-01-01

    Expert systems (ES), a branch of artificial intelligence, have been widely used in different applications, including medical consultation and, more recently, rehabilitation assessment and intervention. The development and validation of an expert system for memory rehabilitation (ES-MR) is reported here. Through a web-based platform, ES-MR can support experts in making better intervention decisions for persons with brain injuries, stroke, and dementia. The application and possible commercial production of a simultaneously developed version for "non-expert" users is proposed. This is especially useful for providing remote assistance to persons with permanent memory impairment when they reach a plateau in cognitive training and need a prosthetic system to enhance memory for day-to-day independence. The potential use of ES-MR as a cognitive aid in conjunction with WAP mobile phones, Bluetooth technology, and Personal Digital Assistants (PDAs) is suggested as an avenue for future study.

  9. Intelligent computing systems emerging application areas

    CERN Document Server

    Virvou, Maria; Jain, Lakhmi

    2016-01-01

    This book explores emerging scientific and technological areas in which Intelligent Computing Systems provide efficient solutions and, thus, may play a role in the years to come. It demonstrates how Intelligent Computing Systems make use of computational methodologies that mimic nature-inspired processes to address real-world problems of high complexity for which exact mathematical solutions, based on physical and statistical modelling, are intractable. Common intelligent computational methodologies are presented, including artificial neural networks, evolutionary computation, genetic algorithms, artificial immune systems, fuzzy logic, swarm intelligence, artificial life, virtual worlds and hybrid methodologies based on combinations of the previous. The book will be useful to researchers, practitioners and graduate students dealing with mathematically-intractable problems. It is intended for both the expert/researcher in the field of Intelligent Computing Systems, as well as for the general reader in t...

  10. Determination of strain fields in porous shape memory alloys using micro-computed tomography

    Science.gov (United States)

    Bormann, Therese; Friess, Sebastian; de Wild, Michael; Schumacher, Ralf; Schulz, Georg; Müller, Bert

    2010-09-01

    Shape memory alloys (SMAs) belong to 'intelligent' materials since the metal alloy can change its macroscopic shape as the result of the temperature-induced, reversible martensite-austenite phase transition. SMAs are often applied for medical applications such as stents, hinge-less instruments, artificial muscles, and dental braces. Rapid prototyping techniques, including selective laser melting (SLM), allow fabricating complex porous SMA microstructures. In the present study, the macroscopic shape changes of the SMA test structures fabricated by SLM have been investigated by means of micro computed tomography (μCT). For this purpose, the SMA structures are placed into the heating stage of the μCT system SkyScan 1172™ (SkyScan, Kontich, Belgium) to acquire three-dimensional datasets above and below the transition temperature, i.e. at room temperature and at about 80°C, respectively. The two datasets were registered on the basis of an affine registration algorithm with nine independent parameters - three for the translation, three for the rotation and three for the scaling in orthogonal directions. Essentially, the scaling parameters characterize the macroscopic deformation of the SMA structure of interest. Furthermore, applying the non-rigid registration algorithm, the three-dimensional strain field of the SMA structure on the micrometer scale comes to light. The strain fields obtained will serve for the optimization of the SLM-process and, more important, of the design of the complex shaped SMA structures for tissue engineering and medical implants.
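
    The nine-parameter affine registration can be sketched as a composition of scaling, rotation, and translation matrices. The composition order and angle conventions below are assumptions for illustration, not the authors' or SkyScan's implementation:

```python
import math

# Illustrative construction of a nine-parameter affine map: three
# translations, three Euler-angle rotations, three axis scalings. The
# composition order (scale, then rotate, then translate) and the angle
# conventions are assumptions, not the paper's implementation.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def affine9(tx, ty, tz, rx, ry, rz, sx, sy, sz):
    ca, sa = math.cos(rx), math.sin(rx)
    cb, sb = math.cos(ry), math.sin(ry)
    cg, sg = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0, 0], [0, ca, -sa, 0], [0, sa, ca, 0], [0, 0, 0, 1]]
    Ry = [[cb, 0, sb, 0], [0, 1, 0, 0], [-sb, 0, cb, 0], [0, 0, 0, 1]]
    Rz = [[cg, -sg, 0, 0], [sg, cg, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    S = [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]
    T = [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]
    return matmul(T, matmul(Rz, matmul(Ry, matmul(Rx, S))))

# A pure 2% isotropic expansion: the fitted scaling parameters directly
# report the macroscopic strain of the registered structure.
M = affine9(0, 0, 0, 0, 0, 0, 1.02, 1.02, 1.02)
print(M[0][0], M[1][1], M[2][2])  # 1.02 1.02 1.02
```

    Fitting these nine parameters between the cold and hot scans yields the macroscopic deformation; the non-rigid registration then refines this to a local strain field.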

  11. Parallel discrete ordinates algorithms on distributed and common memory systems

    International Nuclear Information System (INIS)

    Wienke, B.R.; Hiromoto, R.E.; Brickner, R.G.

    1987-01-01

    The S/sub n/ algorithm employs iterative techniques in solving the linear Boltzmann equation. These methods, both ordered and chaotic, were compared on both the Denelcor HEP and the Intel hypercube. Strategies are linked to the organization and accessibility of memory (common memory versus distributed memory architectures), with common concern for acquisition of global information. Apart from this, the inherent parallelism of the algorithm maps directly onto the two architectures. Results comparing execution times, speedup, and efficiency are based on a representative 16-group (full upscatter and downscatter) sample problem. Calculations were performed on both the Los Alamos National Laboratory (LANL) Denelcor HEP and the LANL Intel hypercube. The Denelcor HEP is a 64-bit multiple-instruction, multiple-data (MIMD) machine consisting of up to 16 process execution modules (PEMs), each capable of executing 64 processes concurrently. Each PEM can cooperate on a job, or run several unrelated jobs, and share a common global memory through a crossbar switch. The Intel hypercube, on the other hand, is a distributed memory system composed of 128 processing elements, each with its own local memory. Processing elements are connected in a nearest-neighbor hypercube configuration, and sharing of data among processors requires execution of explicit message-passing constructs

  12. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common simulators of computer systems are software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  13. Memory under stress: from single systems to network changes.

    Science.gov (United States)

    Schwabe, Lars

    2017-02-01

    Stressful events have profound effects on learning and memory. These effects are mainly mediated by catecholamines and glucocorticoid hormones released from the adrenals during stressful encounters. It has long been known that both catecholamines and glucocorticoids influence the functioning of the hippocampus, a critical hub for episodic memory. However, areas implicated in other forms of memory, such as the insula or the dorsal striatum, can be affected by stress as well. Beyond changes in single memory systems, acute stress triggers the reconfiguration of large-scale neural networks, which sets the stage for a shift from thoughtful, 'cognitive' control of learning and memory toward more reflexive, 'habitual' processes. Stress-related alterations in amygdala connectivity with the hippocampus, dorsal striatum, and prefrontal cortex seem to play a key role in this shift. The bias toward systems proficient in threat processing and the implementation of well-established routines may facilitate coping with an acute stressor. Overreliance on these reflexive systems, or the inability to shift flexibly between them, may however represent a risk factor for psychopathology in the long run. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Photopolymerized Thiol-Ene Systems as Shape Memory Polymers

    Science.gov (United States)

    Nair, Devatha P.; Cramer, Neil B.; Scott, Timothy F.; Bowman, Christopher N.; Shandas, Robin

    2010-01-01

    In this study we introduce the use of thiol-ene photopolymers as shape memory polymer systems. The thiol-ene polymer networks are compared to a commonly utilized acrylic shape memory polymer and shown to have significantly improved properties for two different thiol-ene based polymer formulations. Using thermomechanical and mechanical analysis, we demonstrate that thiol-ene based shape memory polymer systems have comparable thermomechanical properties while also exhibiting a number of advantageous properties due to the thiol-ene polymerization mechanism, which results in the formation of a homogeneous polymer network with low shrinkage stress and negligible oxygen inhibition. The resulting thiol-ene shape memory polymer systems are tough and flexible as compared to their acrylic counterparts. The polymers evaluated in this study were engineered to have a glass transition temperature between 30 and 40 °C, and exhibited free strain recovery of greater than 96% and constrained stress recovery of 100%. The thiol-ene polymers exhibited excellent shape fixity and a rapid and distinct shape memory actuation response. PMID:21072253

  15. A Survey of Civilian Dental Computer Systems.

    Science.gov (United States)

    1988-01-01

    The surveyed systems are compared against the requirements of a military dental system, per the DENTSS Functional Description. Many employ uncommon computer architectures... Systems surveyed include: Dentacomp; MBC - Legend Graphic; Shenandoah Microcomputer Services; Dental Management - Patient Service System; Medivation - Dental Coupler; Software Hows

  16. DDP-516 Computer Graphics System Capabilities

    Science.gov (United States)

    1972-06-01

    This report describes the capabilities of the DDP-516 Computer Graphics System. One objective of this report is to acquaint DOT management and project planners with the system's current capabilities, applications, hardware and software. The Appendix i...

  17. Computer Literacy in a Distance Education System

    Science.gov (United States)

    Farajollahi, Mehran; Zandi, Bahman; Sarmadi, Mohamadreza; Keshavarz, Mohsen

    2015-01-01

    In a Distance Education (DE) system, students must be equipped with seven skills of computer (ICDL) usage. This paper aims at investigating the effect of a DE system on the computer literacy of Master of Arts students at Tehran University. The design of this study is quasi-experimental. Pre-test and post-test were used in both control and…

  18. Preventive maintenance for computer systems - concepts & issues ...

    African Journals Online (AJOL)

    Performing preventive maintenance activities for the computer is not optional. The computer is a sensitive and delicate device that needs adequate time and attention to make it work properly. In this paper, the concept and issues on how to prolong the life span of the system, that is, the way to make the system last long and ...

  19. PREFACE: Special section on Computational Fluid Dynamics—in memory of Professor Kunio Kuwahara Special section on Computational Fluid Dynamics—in memory of Professor Kunio Kuwahara

    Science.gov (United States)

    Ishii, Katsuya

    2011-08-01

    This issue includes a special section on computational fluid dynamics (CFD) in memory of the late Professor Kunio Kuwahara, who passed away on 15 September 2008, at the age of 66. In this special section, five articles are included that are based on the lectures and discussions at `The 7th International Nobeyama Workshop on CFD: To the Memory of Professor Kuwahara' held in Tokyo on 23 and 24 September 2009. Professor Kuwahara started his research in fluid dynamics under Professor Imai at the University of Tokyo. His first paper was published in 1969 with the title 'Steady Viscous Flow within Circular Boundary', with Professor Imai. In this paper, he combined theoretical and numerical methods in fluid dynamics. Since that time, he made significant and seminal contributions to computational fluid dynamics. He undertook pioneering numerical studies on the vortex method in the 1970s. From then to the early nineties, he developed numerical analyses on a variety of three-dimensional unsteady phenomena of incompressible and compressible fluid flows and/or complex fluid flows using his own supercomputers, with academic and industrial co-workers and members of his private research institute, ICFD in Tokyo. In addition, a number of senior and young researchers of fluid mechanics around the world were invited to ICFD and the Nobeyama workshops, which were held near his villa, and they intensively discussed new frontier problems of fluid physics and fluid engineering, thanks to Professor Kuwahara's kind hospitality. At the memorial Nobeyama workshop held in 2009, 24 overseas speakers presented their papers, including the talks of Dr J P Boris (Naval Research Laboratory), Dr E S Oran (Naval Research Laboratory), Professor Z J Wang (Iowa State University), Dr M Meinke (RWTH Aachen), Professor K Ghia (University of Cincinnati), Professor U Ghia (University of Cincinnati), Professor F Hussain (University of Houston), Professor M Farge (École Normale Superieure), Professor J Y Yong (National

  20. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    Science.gov (United States)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.
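    The flavor of the simulation assertion can be illustrated with a toy coarse-grained system (our own example with assumed parameters, not the authors' construction): a noisy one-dimensional map whose state is quantized into finitely many bins carries only log2(BINS) bits from one instant to the next, so its dynamics reduce to a finite Markov chain whose long-term (stationary) behavior is directly computable.

```python
# Bounded-memory toy system: a noisy logistic map coarse-grained into BINS
# bins. The bin-to-bin transition matrix is estimated by sampling, and the
# long-term behaviour is obtained as the chain's stationary distribution.
import random

random.seed(0)
BINS = 20

def step(x):
    # Noisy logistic map on [0, 1); the noise keeps the chain mixing.
    x = 3.7 * x * (1.0 - x) + random.uniform(-0.02, 0.02)
    return min(max(x, 0.0), 1.0 - 1e-9)

def bin_of(x):
    return int(x * BINS)

# Estimate the transition matrix of the coarse-grained chain.
counts = [[0] * BINS for _ in range(BINS)]
x = 0.5
for _ in range(200000):
    b0 = bin_of(x)
    x = step(x)
    counts[b0][bin_of(x)] += 1
P = [[c / max(sum(row), 1) for c in row] for row in counts]

# Power iteration: the equilibrium regime of the finite-memory system.
pi = [1.0 / BINS] * BINS
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(BINS)) for j in range(BINS)]
total = sum(pi)
pi = [p / total for p in pi]    # renormalize over the recurrent bins
```

The point of the sketch is only that once memory is bounded, long-term prediction becomes a finite linear-algebra problem, which is the tractability the abstract's simulation assertion formalizes.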

  1. In-Depth Analysis of Computer Memory Acquisition Software for Forensic Purposes.

    Science.gov (United States)

    McDown, Robert J; Varol, Cihan; Carvajal, Leonardo; Chen, Lei

    2016-01-01

    The comparison studies on random access memory (RAM) acquisition tools are either limited in metrics or the selected tools were designed to be executed on older operating systems. Therefore, this study evaluates seven widely used shareware or freeware/open source RAM acquisition forensic tools that are compatible with the latest 64-bit Windows operating systems. These tools' user interface capabilities, platform limitations, reporting capabilities, total execution time, shared and proprietary DLLs, modified registry keys, and invoked files during processing were compared. We observed that Windows Memory Reader and Belkasoft's Live Ram Capturer leave the fewest fingerprints in memory when loaded. On the other hand, ProDiscover and FTK Imager perform poorly in memory usage, processing time, DLL usage, and unwanted artifacts introduced to the system. While Belkasoft's Live Ram Capturer is the fastest to obtain an image of the memory, ProDiscover takes the longest time to do the same job. © 2015 American Academy of Forensic Sciences.

  2. Experimental analyses of dynamical systems involving shape memory alloys

    DEFF Research Database (Denmark)

    Enemark, Søren; Savi, Marcelo A.; Santos, Ilmar F.

    2015-01-01

    The use of shape memory alloys (SMAs) in dynamical systems has an increasing importance in engineering, especially due to their capacity to provide vibration reduction. In this regard, experimental tests are essential in order to show all potentialities of this kind of system. In this work, SMA springs are incorporated in a dynamical system that consists of a one-degree-of-freedom oscillator connected to a linear spring and a mass, which is also connected to the SMA spring. Two types of springs are investigated, defining two distinct systems: a pseudoelastic and a shape memory system. The characterisation of the springs is evaluated by considering differential scanning calorimetry tests and also force-displacement tests at different temperatures. Free and forced vibration experiments are made in order to investigate the dynamical behaviour of the systems. For both systems, it is observed...

  3. A model for the neuronal substrate of dead reckoning and memory in arthropods: a comparative computational and behavioral study.

    Science.gov (United States)

    Bernardet, Ulysses; Bermúdez I Badia, Sergi; Verschure, Paul F M J

    2008-06-01

    Returning to the point of departure after exploring the environment is a key capability for most animals. In the absence of landmarks, this task is solved by integrating direction and distance traveled over time. This is referred to as path integration or dead reckoning. An important question is how the nervous systems of navigating animals, such as the 1 mm³ brain of an ant, can integrate local information in order to make global decisions. In this article we propose a neurobiologically plausible system for storing and retrieving direction and distance information. The path memory of our model builds on the well-established concept of population codes; moreover, our system does not rely on trigonometric functions or other complex non-linear operations such as multiplication, but only uses biologically plausible operations such as integration and thresholding. We test our model in two paradigms: in the first, the system receives input from a simulated compass; in the second, the model is tested against behavioral data recorded from 17 ants. We were able to show that our path memory system reliably encodes and computes the angle of the vector pointing to the start location, and that it stores the total length of the trajectory in a dependable way. From the structure and behavior of our model, we derive testable predictions both at the level of observable behavior and at the level of the anatomy and physiology of the underlying neuronal substrate.
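    The core idea of a population-coded path memory can be sketched in a few lines (a toy of our own, far simpler than the authors' model): each of N direction cells accumulates, by pure addition, the distance travelled while the heading falls in its preferred sector. Trigonometry appears only in the readout used here to check the stored home vector, not in the "neural" store itself.

```python
# Population-code path memory: distance is accumulated per direction cell.
import math

N = 36  # direction cells, each covering a 10-degree sector

def cell_index(heading):
    return min(int((heading % (2 * math.pi)) / (2 * math.pi) * N), N - 1)

def integrate_path(segments):
    # segments: (heading in radians, distance) pairs; the store only adds.
    store = [0.0] * N
    for heading, dist in segments:
        store[cell_index(heading)] += dist
    return store

def home_vector(store):
    # Readout for inspection only; the decoder may use trig, the store does not.
    x = sum(v * math.cos(i * 2 * math.pi / N) for i, v in enumerate(store))
    y = sum(v * math.sin(i * 2 * math.pi / N) for i, v in enumerate(store))
    return -x, -y

# Walk 3 units east, then 4 units north; home lies at (-3, -4).
store = integrate_path([(0.0, 3.0), (math.pi / 2, 4.0)])
hx, hy = home_vector(store)
total_path_length = sum(store)   # the trajectory length is stored for free
```

With headings that fall between preferred directions, the decoded vector is accurate only up to the sector width, which is exactly the kind of quantization error a finite population code predicts.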

  4. Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1

    International Nuclear Information System (INIS)

    Kawasaki, Shoji; Nakamura, Kazuo; Nakamura, Yukio; Hiraki, Naoharu; Toi, Kazuo

    1981-01-01

    A data processing system was designed and constructed for the purpose of analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 Kbyte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to store the computer's programs. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer can be connected as an I/O device. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown on one of the I/O devices. The results of a test run showed good performance. (Kato, T.)

  5. Computational Modeling of the Negative Priming Effect Based on Inhibition Patterns and Working Memory

    Directory of Open Access Journals (Sweden)

    Dongil eChung

    2013-11-01

    Negative priming (NP), the slowing of responses to target stimuli that have been previously exposed but ignored, has been reported in multiple psychological paradigms, including the Stroop task. Although NP likely results from the interplay of selective attention, episodic memory retrieval, working memory, and inhibition mechanisms, a comprehensive theoretical account of NP is currently unavailable. This lacuna may result from the complexity of stimulus combinations in NP. Thus, we aimed to investigate the presence of different degrees of the NP effect according to prime-probe combinations within a classic Stroop task. We recorded reaction times (RTs) from 66 healthy participants during Stroop task performance and examined three different NP subtypes, defined according to the type of the Stroop probe in prime-probe pairs. Our findings show significant RT differences among NP subtypes that are putatively due to the presence of differential disinhibition, i.e., release from inhibition. Among the several potential origins for differential subtypes of NP, we investigated the involvement of selective attention and/or working memory using a parallel distributed processing (PDP) model (employing selective attention only) and a modified PDP model with working memory (PDP-WM, employing both selective attention and working memory). Our findings demonstrate that, unlike the conventional PDP model, the PDP-WM successfully simulates different levels of NP effects that closely follow the behavioral data. This outcome suggests that working memory engages in the re-accumulation of the evidence for the target response and induces differential NP effects. Our computational model complements earlier efforts and may pave the road to further insights into an integrated theoretical account of complex NP effects.
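    The basic inhibition-carryover mechanism behind NP can be caricatured with a deterministic evidence accumulator (our deliberately simplified illustration, not the authors' PDP-WM model; all constants are assumed): if the probe target was the ignored distractor on the prime trial, it starts with residual negative activation, so more accumulation steps are needed to reach threshold, i.e. a longer RT.

```python
# Toy accumulator showing how residual inhibition from a prime trial
# produces a negative priming effect in reaction-time (step-count) units.

THRESHOLD = 1.0
DRIFT = 0.05        # evidence gained per step for the attended target
INHIBITION = 0.3    # residual suppression carried over from the prime

def steps_to_respond(target_was_inhibited_on_prime):
    # Ignored-repetition trials start below baseline activation.
    activation = -INHIBITION if target_was_inhibited_on_prime else 0.0
    steps = 0
    while activation < THRESHOLD:
        activation += DRIFT
        steps += 1
    return steps

control_rt = steps_to_respond(False)   # unrelated prime-probe pair
np_rt = steps_to_respond(True)         # ignored-repetition pair (NP trial)
```

Grading the inhibition that survives to the probe would yield graded NP subtypes, which is the phenomenon the abstract's working-memory re-accumulation account is meant to explain.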

  6. Memory and reward systems coproduce 'nostalgic' experiences in the brain.

    Science.gov (United States)

    Oba, Kentaro; Noriuchi, Madoka; Atomi, Tomoaki; Moriguchi, Yoshiya; Kikuchi, Yoshiaki

    2016-07-01

    People sometimes experience an emotional state known as 'nostalgia', which involves experiencing predominantly positive emotions while remembering autobiographical events. Nostalgia is thought to play an important role in psychological resilience. Previous neuroimaging studies have shown involvement of memory and reward systems in such experiences. However, it remains unclear how these two systems are collaboratively involved with nostalgia experiences. Here, we conducted a functional magnetic resonance imaging study of healthy females to investigate the relationship between memory-reward co-activation and nostalgia, using childhood-related visual stimuli. Moreover, we examined the factors constituting nostalgia and their neural correlates. We confirmed the presence of nostalgia-related activity in both memory and reward systems, including the hippocampus (HPC), substantia nigra/ventral tegmental area (SN/VTA), and ventral striatum (VS). We also found significant HPC-VS co-activation, with its strength correlating with individual 'nostalgia tendencies'. Factor analyses showed that two dimensions underlie nostalgia: emotional and personal significance and chronological remoteness, with the former correlating with caudal SN/VTA and left anterior HPC activity, and the latter correlating with rostral SN/VTA activity. These findings demonstrate the cooperative activity of memory and reward systems, where each system has a specific role in the construction of the factors that underlie the experience of nostalgia. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Generalised Computability and Applications to Hybrid Systems

    DEFF Research Database (Denmark)

    Korovina, Margarita V.; Kudinov, Oleg V.

    2001-01-01

    We investigate the concept of generalised computability of operators and functionals defined on the set of continuous functions, firstly introduced in [9]. By working in the reals, with equality and without equality, we study properties of generalised computable operators and functionals. We also propose an interesting application to the formalisation of hybrid systems. We obtain a class of hybrid systems whose trajectories are computable in the sense of computable analysis. This research was supported in part by the RFBR (grants N 99-01-00485, N 00-01-00810) and by the Siberian Branch of RAS (a grant for young researchers, 2000).

  8. Learning and Memory... and the Immune System

    Science.gov (United States)

    Marin, Ioana; Kipnis, Jonathan

    2013-01-01

    The nervous system and the immune system are two main regulators of homeostasis in the body. Communication between them ensures normal functioning of the organism. Immune cells and molecules are required for sculpting the circuitry and determining the activity of the nervous system. Within the parenchyma of the central nervous system (CNS),…

  9. Evolutionary computation for trading systems

    OpenAIRE

    Kaucic, Massimiliano

    2008-01-01

    Evolutionary computations, also called evolutionary algorithms, consist of several heuristics which are able to solve optimization tasks by imitating some aspects of natural evolution. They may use different levels of abstraction, but they always work on populations of possible solutions for a given task. The basic idea is that if only those individuals of a population which meet a certain selection criterion reproduce, while the remaining individuals die, the ...

  10. Computer-Supported Information Systems.

    Science.gov (United States)

    Mayhew, William H.

    1983-01-01

    The planning and implementation of a computerized management information system at a fictional small college is described. Nine key points are made regarding department involvement, centralization, gradual program implementation, lowering costs, system documentation, and upper-level administrative support. (MSE)

  11. The ACP (Advanced Computer Program) multiprocessor system at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Case, G.; Cook, A.; Fischler, M.; Gaines, I.; Hance, R.; Husby, D.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other AT&T's 32100; both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high speed "Branch Bus" to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  12. The ACP [Advanced Computer Program] multiprocessor system at Fermilab

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1986-09-01

    The Advanced Computer Program at Fermilab has developed a multiprocessor system which is easy to use and uniquely cost effective for many high energy physics problems. The system is based on single board computers which cost under $2000 each to build, including 2 Mbytes of on-board memory. These standard VME modules each run experiment reconstruction code in Fortran at speeds approaching that of a VAX 11/780. Two versions have been developed: one uses Motorola's 68020 32-bit microprocessor, the other AT&T's 32100; both include the corresponding floating-point coprocessor chip. The first system, when fully configured, uses 70 each of the two types of processors. A 53-processor system has been operated for several months with essentially no down time by computer operators in the Fermilab Computer Center, performing at nearly the capacity of 6 CDC Cyber 175 mainframe computers. The VME crates in which the processing "nodes" sit are connected via a high speed "Branch Bus" to one or more MicroVAX computers which act as hosts, handling system resource management and all I/O in offline applications. An interface from Fastbus to the Branch Bus has been developed for online use which has been tested error free at 20 Mbytes/sec for 48 hours. ACP hardware modules are now available commercially. A major package of software, including a simulator that runs on any VAX, has been developed. It allows easy migration of existing programs to this multiprocessor environment. This paper describes the ACP Multiprocessor System and early experience with it at Fermilab and elsewhere.

  13. MTA Computer Based Evaluation System.

    Science.gov (United States)

    Brenner, Lisa P.; And Others

    The MTA PLATO-based evaluation system, which has been implemented by a consortium of schools of medical technology, is designed to be general-purpose, modular, data-driven, and interactive, and to accommodate other national and local item banks. The system provides a comprehensive interactive item-banking system in conjunction with online student…

  14. Modeling Students' Memory for Application in Adaptive Educational Systems

    Science.gov (United States)

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…
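    A minimal example of the simple, robust model family such systems use (our generic ACT-R-style sketch with assumed constants, not necessarily any model the paper evaluates): each past practice leaves a trace that decays as a power law, and recall probability is a logistic function of the summed activation.

```python
# ACT-R-style forgetting/practice model: activation from power-law decaying
# practice traces, mapped to recall probability by a logistic function.
import math

DECAY = 0.5          # power-law decay rate d (assumed)
TAU, S = -0.7, 0.3   # recall threshold and noise scale (assumed)

def activation(practice_times, now):
    # practice_times: moments (e.g. in hours) at which the item was studied
    return math.log(sum((now - t) ** -DECAY for t in practice_times))

def p_recall(practice_times, now):
    a = activation(practice_times, now)
    return 1.0 / (1.0 + math.exp(-(a - TAU) / S))

# Recall drops as time passes after a single study at t = 0 ...
soon = p_recall([0.0], 1.0)
later = p_recall([0.0], 100.0)
# ... and an extra practice shortly before the test raises it back up.
restudied = p_recall([0.0, 99.0], 100.0)
```

Models of this shape stay robust in practice because they have few parameters and degrade gracefully when prior knowledge varies, which is the design constraint the abstract emphasizes.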

  15. Portable wireless neurofeedback system of EEG alpha rhythm enhances memory.

    Science.gov (United States)

    Wei, Ting-Ying; Chang, Da-Wei; Liu, You-De; Liu, Chen-Wei; Young, Chung-Ping; Liang, Sheng-Fu; Shaw, Fu-Zen

    2017-11-13

    The effect of neurofeedback training (NFT) on the enhancement of cognitive function or the amelioration of clinical symptoms is inconclusive. The trainability of brain rhythms using a neurofeedback system is uncertain because previous studies used varied experimental designs. The current study aimed to develop a portable wireless NFT system for the alpha rhythm and to validate the effect of the NFT system on memory against a sham-controlled group. The proposed system contained an EEG signal analysis device and a smartphone with wireless Bluetooth low-energy technology. Instantaneous 1-s EEG power and contiguous 5-min EEG power throughout the training were developed as feedback information. The training performance and its progression were recorded to boost the usability of our device. Participants were blinded and randomly assigned into either the control group, receiving random 4-Hz power, or the Alpha group, receiving 8-12-Hz power. Working memory and episodic memory were assessed by the backward digit span task and the word-pair task, respectively. The portable neurofeedback system had the advantages of a tiny size and long-term recording, and demonstrated trainability of the alpha rhythm in terms of a significant increase in the power and duration of 8-12 Hz activity. Moreover, accuracies on the backward digit span task and the word-pair task showed significant enhancement in the Alpha group after training compared to the control group. Our tiny portable device demonstrated successful training of the alpha rhythm and enhanced both kinds of memory. The present study suggests that the portable neurofeedback system provides an alternative intervention for memory enhancement.
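    The 1-s feedback quantity such a system computes can be sketched as band power from a short EEG window (our assumption of the details; the device's actual signal chain is not described at this level, and the 128 Hz sampling rate is invented): with a 1-second window, DFT bins fall on integer frequencies, so summing bins 8-12 gives alpha power directly.

```python
# Alpha-band (8-12 Hz) power from a 1-second window via a plain DFT.
import math

FS = 128  # assumed sampling rate in Hz; a 1 s window gives 1 Hz bins

def band_power(samples, lo_hz, hi_hz):
    n = len(samples)
    power = 0.0
    for k in range(lo_hz, hi_hz + 1):   # bin k is k Hz when n == FS
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power += (re * re + im * im) / (n * n)
    return power

# A 10 Hz oscillation carries far more alpha power than a 4 Hz one.
alpha_sig = [math.sin(2 * math.pi * 10 * t / FS) for t in range(FS)]
theta_sig = [math.sin(2 * math.pi * 4 * t / FS) for t in range(FS)]
alpha_in_alpha = band_power(alpha_sig, 8, 12)
alpha_in_theta = band_power(theta_sig, 8, 12)
```

Feeding a number like `alpha_in_alpha` back to the trainee once per second is the essence of the instantaneous-power feedback the abstract describes.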

  16. Scintillation camera-computer systems: General principles of quality control

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    Scintillation camera-computer systems are designed to allow the collection, digital analysis and display of the image data from a scintillation camera. The components of the computer in such a system are essentially the same as those of a computer used in any other application, i.e. a central processing unit (CPU), memory and magnetic storage. Additional hardware items necessary for nuclear medicine applications are an analogue-to-digital converter (ADC), which converts the analogue signals from the camera to digital numbers, and an image display. It is possible that the transfer of data from camera to computer degrades the information to some extent. The computer can generate the image for display, but it also provides the capability of manipulating the primary data to improve the display of the image. The first function, conversion from analogue to digital mode, is not within the control of the operator, but the second, manipulation of the data, is. Such manipulations should be done carefully, without sacrificing the integrity of the incoming information.
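    The fixed, operator-independent step, analogue-to-digital conversion, amounts to clipping the input voltage to the ADC's range and mapping it onto discrete codes (a toy sketch with assumed range and resolution, not the behaviour of any particular camera's ADC):

```python
# Toy n-bit ADC: clip an analogue voltage to the input range, then map it
# linearly onto 2**bits integer codes (assumed -1..+1 V range, 8 bits).
def adc(voltage, v_min=-1.0, v_max=1.0, bits=8):
    levels = 2 ** bits
    v = min(max(voltage, v_min), v_max)               # clip to input range
    code = int((v - v_min) / (v_max - v_min) * (levels - 1) + 0.5)
    return code                                        # integer in [0, levels-1]

low = adc(-1.0)    # bottom of range -> code 0
mid = adc(0.0)     # mid-scale input
high = adc(1.0)    # top of range -> code 255
```

The quantization step (here 2 V / 255 codes) is the irreducible information loss in camera-to-computer transfer that the passage alludes to; everything after it is operator-controlled processing.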

  17. Computational Intelligence in Information Systems Conference

    CERN Document Server

    Au, Thien-Wan; Omar, Saiful

    2017-01-01

    This book constitutes the Proceedings of the Computational Intelligence in Information Systems conference (CIIS 2016), held in Brunei, November 18–20, 2016. The CIIS conference provides a platform for researchers to exchange the latest ideas and to present new research advances in general areas related to computational intelligence and its applications. The 26 revised full papers presented in this book have been carefully selected from 62 submissions. They cover a wide range of topics and application areas in computational intelligence and informatics.

  18. Index : A Rule Based Expert System For Computer Network Maintenance

    Science.gov (United States)

    Chaganty, Srinivas; Pitchai, Anandhi; Morgan, Thomas W.

    1988-03-01

    Communications is an expert-intensive discipline. The application of expert systems for the maintenance of large and complex networks, mainly as an aid in troubleshooting, can simplify the task of network management. The important steps involved in troubleshooting are fault detection, fault reporting, fault interpretation and fault isolation. At present, network maintenance facilities are capable of detecting and reporting faults to network personnel. Fault interpretation refers to the next step in the process, which involves coming up with reasons for the failure. Fault interpretation can be characterized in two ways. First, it involves such a diversity of facts that it is difficult to predict. Secondly, it embodies a wealth of knowledge in the form of network management personnel. The application of expert systems to these interpretive tasks is an important step towards the automation of network maintenance. In this paper, INDEX (Intelligent Network Diagnosis Expediter), a rule-based production system for computer network alarm interpretation, is described. It acts as an intelligent filter for people analyzing network alarms. INDEX analyzes the alarms in the network and identifies the proper maintenance action to be taken. The important feature of this production system is that it is data-driven. Working memory is the principal data repository of production systems, and its contents represent the current state of the problem. Control is based upon which productions match the constantly changing working memory elements. The prototype is implemented in OPS83. Major issues in rule-based system development, such as rule base organization, implementation and efficiency, are discussed.
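    INDEX itself is written in OPS83; purely to illustrate the data-driven production-system idea described above (facts in working memory, rules firing on matches, control driven by the changing facts), here is a minimal forward-chaining sketch with invented alarm facts and rules:

```python
# Minimal forward-chaining production system: rules fire whenever their
# condition facts are all present in working memory, adding new facts,
# until the working memory stops changing.
def run(rules, wm):
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition <= wm and conclusion not in wm:
                wm.add(conclusion)     # firing a rule updates working memory
                changed = True
    return wm

# Hypothetical alarm-interpretation rules: condition facts -> conclusion.
rules = [
    (frozenset({"link_down", "power_ok"}), "suspect_modem"),
    (frozenset({"suspect_modem", "crc_errors"}), "replace_modem"),
]
wm = run(rules, {"link_down", "power_ok", "crc_errors"})
```

Note that control flow never names a rule explicitly: which rule fires next is determined entirely by the facts currently in working memory, which is what "data-driven" means here.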

  19. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan

    2016-01-01

    ...create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework which, by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects, and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark. ...1) to reduce the garbage collection time by up to 99.9%, 2) to achieve up to 22.7x speedup in terms of execution time in cases without data spilling and 41.6x speedup in cases with data spilling, and 3) to consume up to 46.6% less memory.
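    The underlying idea, allocate objects with a common lifetime into one region and release the region in a single step so the garbage collector never traces them individually, can be sketched conceptually (our own toy, vastly simpler than Deca, which does this with serialized bytes inside the JVM):

```python
# Region (arena) allocation sketch: objects that die together are packed
# into one buffer and freed with one cheap operation, instead of being
# tracked one by one by a garbage collector.
class Region:
    def __init__(self):
        self.buf = bytearray()
        self.offsets = []

    def alloc(self, payload: bytes) -> int:
        # Bump-pointer allocation inside the region's single buffer.
        self.offsets.append(len(self.buf))
        self.buf.extend(payload)
        return self.offsets[-1]

    def release(self):
        # One operation frees every object in the region at once.
        self.buf = bytearray()
        self.offsets = []

# Hypothetical use: all records of one processing stage share a lifetime.
shuffle_stage = Region()
for record in [b"key1:3", b"key2:7", b"key1:4"]:
    shuffle_stage.alloc(record)
n_objects = len(shuffle_stage.offsets)
shuffle_stage.release()                 # stage ends: reclaim everything
remaining = len(shuffle_stage.buf)
```

The lifetime analysis the abstract describes is what decides, ahead of time, which objects may safely share such a region.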

  20. Image detection and compression for memory efficient system analysis

    Science.gov (United States)

    Bayraktar, Mustafa

    2015-02-01

    The advances in digital signal processing have progressed toward efficient use of memory and processing. Both of these factors can be exploited by feasible image-storage techniques that compute the minimum information of an image, which enhances computation in later processes. The Scale Invariant Feature Transform (SIFT) can be utilized to estimate and retrieve an image. In computer vision, SIFT can be implemented to recognize an image by comparing its key features against saved SIFT keypoint descriptors. The main advantage of SIFT is that it not only removes redundant information from an image but also reduces the key points by matching their orientation and adding them together in different windows of the image [1]. Another key property of this approach is that it works more efficiently on highly contrasted images, because its design is based on collecting key points from the contrast shades of the image.
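    SIFT extraction itself is involved, but the retrieval step the passage alludes to, comparing a query keypoint's descriptor against saved descriptors, is commonly done with Lowe's ratio test. A sketch with tiny made-up 4-d descriptors standing in for real 128-d SIFT vectors:

```python
# Descriptor matching with the ratio test: accept a match only when the
# nearest stored descriptor is clearly closer than the second nearest.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_match(query, candidates, ratio=0.8):
    ds = sorted((dist(query, c), i) for i, c in enumerate(candidates))
    if len(ds) >= 2 and ds[0][0] < ratio * ds[1][0]:
        return ds[0][1]        # index of the matched descriptor
    return None                # ambiguous keypoint: no reliable match

db = [(0.0, 1.0, 0.0, 0.2), (1.0, 0.0, 0.5, 0.0), (0.9, 0.1, 0.5, 0.1)]
good = ratio_match((0.05, 0.95, 0.0, 0.2), db)        # close to db[0] only
ambiguous = ratio_match((0.95, 0.05, 0.5, 0.05), db)  # near db[1] and db[2]
```

Discarding ambiguous keypoints this way is part of what keeps the stored representation small, which is the memory-efficiency angle of the abstract.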

  1. Contrasting single and multi-component working-memory systems in dual tasking

    NARCIS (Netherlands)

    Nijboer, Menno; Borst, Jelmer; van Rijn, Hedderik; Taatgen, Niels

    2016-01-01

    Working memory can be a major source of interference in dual tasking. However, there is no consensus on whether this interference is the result of a single working memory bottleneck, or of interactions between different working memory components that together form a complete working-memory system.

  2. Assessing and Changing Self-Concept: Guidelines from the Memory System.

    Science.gov (United States)

    Nurius, Paula S.

    1994-01-01

    Draws on architecture and operation of human memory to better specify self-concept form and functioning. Translates these major components and processes of memory system into practice implications for targets and methods of change: declarative knowledge versus procedural knowledge, storage memory versus working memory, and role of sensory…

  3. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    Science.gov (United States)

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
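The row-then-column decomposition at the heart of this claim can be sketched on a single machine, with a transpose standing in for the all-to-all redistribution between nodes (a serial model for illustration only, not the patented parallel implementation):

```python
import numpy as np

def fft2_by_passes(a):
    """2-D FFT computed as two passes of 1-D FFTs. The transpose between the
    passes plays the role of the all-to-all redistribution across nodes:
    after it, each "node" (row) holds a full line of the second dimension."""
    step1 = np.fft.fft(a, axis=1)        # 1-D FFTs along the first distributed dimension
    step2 = np.fft.fft(step1.T, axis=1)  # "redistribute", then 1-D FFTs again
    return step2.T

a = np.arange(16.0).reshape(4, 4)
result = fft2_by_passes(a)               # agrees with a direct 2-D FFT
```

Because the multidimensional FFT is separable, the two-pass result matches `np.fft.fft2` exactly; in the patent, the interesting part is doing the "transpose" efficiently over the interconnect.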

  4. Computer-aided dispatching system design specification

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, M.G.

    1997-12-16

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP).

  5. Computer-aided dispatching system design specification

    International Nuclear Information System (INIS)

    Briggs, M.G.

    1997-01-01

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol Operations Center. This document reflects the as-built requirements for the system that was delivered by GTE Northwest, Inc. This system provided a commercial off-the-shelf computer-aided dispatching system and alarm monitoring system currently in operations at the Hanford Patrol Operations Center, Building 2721E. This system also provides alarm back-up capability for the Plutonium Finishing Plant (PFP)

  6. Computer-controlled environmental test systems - Criteria for selection, installation, and maintenance.

    Science.gov (United States)

    Chapman, C. P.

    1972-01-01

    Applications for presently marketed, new computer-controlled environmental test systems are suggested. It is shown that the capital costs of these systems follow an exponential cost-function curve that levels out as additional applications are implemented. Some test laboratory organization changes are recommended in terms of new personnel requirements, and facility modifications are considered in support of a computer-controlled test system. Software for computer-controlled test systems is discussed, and control-loop speed constraints are defined for real-time control functions. Suitable input and output devices and memory storage device tradeoffs are also considered.

  7. Logic computation in phase change materials by threshold and memory switching.

    Science.gov (United States)

    Cassinerio, M; Ciocchini, N; Ielmini, D

    2013-11-06

    Memristors, namely hysteretic devices capable of changing their resistance in response to applied electrical stimuli, may provide new opportunities for future memory and computation, thanks to their scalable size, low switching energy and nonvolatile nature. We have developed a functionally complete set of logic functions including NOR, NAND and NOT gates, each utilizing a single phase-change memristor (PCM) where resistance switching is due to the phase transformation of an active chalcogenide material. The logic operations are enabled by the high functionality of nanoscale phase change, featuring voltage comparison, additive crystallization and pulse-induced amorphization. The nonvolatile nature of memristive states provides the basis for developing reconfigurable hybrid logic/memory circuits featuring low-power and high-speed switching. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. A Compute Capable SSD Architecture for Next-Generation Non-volatile Memories

    Energy Technology Data Exchange (ETDEWEB)

    De, Arup [Univ. of California, San Diego, CA (United States)

    2014-01-01

    Existing storage technologies (e.g., disks and flash) are failing to cope with processor and main memory speeds and are limiting the overall performance of many large-scale I/O or data-intensive applications. Emerging fast byte-addressable non-volatile memory (NVM) technologies, such as phase-change memory (PCM), spin-transfer torque memory (STTM) and memristor, are very promising and are approaching DRAM-like performance with lower power consumption and higher density as process technology scales. These new memories are narrowing the performance gap between storage and main memory and are posing challenging problems for existing SSD architectures, I/O interfaces (e.g., SATA, PCIe) and software. This dissertation addresses those challenges and presents a novel SSD architecture called XSSD. XSSD offloads computation to storage to exploit fast NVMs and reduce redundant data traffic across the I/O bus. XSSD offers a flexible RPC-based programming framework that developers can use for application development on the SSD without dealing with the complications of the underlying architecture and communication management. We have built a prototype of XSSD on the BEE3 FPGA prototyping system. We implement various data-intensive applications and achieve speedups and energy efficiency of 1.5-8.9x and 1.7-10.27x, respectively. This dissertation also compares XSSD with previous work on intelligent storage and intelligent memory. The existing ecosystem and these new enabling technologies make this system more viable than earlier ones.

  9. Abstract Specification of the UBIFS File System for Flash Memory

    Science.gov (United States)

    Schierl, Andreas; Schellhorn, Gerhard; Haneberg, Dominik; Reif, Wolfgang

    Today we see an increasing demand for flash memory because it has certain advantages like resistance against kinetic shock. However, reliable data storage also requires a specialized file system knowing and handling the limitations of flash memory. This paper develops a formal, abstract model for the UBIFS flash file system, which has recently been included in the Linux kernel. We develop formal specifications for the core components of the file system: the inode-based file store, the flash index, its cached copy in the RAM and the journal to save the differences. Based on these data structures we give an abstract specification of the interface operations of UBIFS and prove some of the most important properties using the interactive verification system KIV.

  10. Maze learning by a hybrid brain-computer system

    Science.gov (United States)

    Wu, Zhaohui; Zheng, Nenggan; Zhang, Shaowu; Zheng, Xiaoxiang; Gao, Liqiang; Su, Lijuan

    2016-09-01

    The combination of biological and artificial intelligence is particularly driven by two major strands of research: one involves the control of mechanical, usually prosthetic, devices by conscious biological subjects, whereas the other involves the control of animal behaviour by stimulating nervous systems electrically or optically. However, to our knowledge, no study has demonstrated that spatial learning in a computer-based system can affect the learning and decision-making behaviour of the biological component, namely a rat, when these two types of intelligence are wired together to form a new intelligent entity. Here, we show how rule operations conducted by the computing component enable a novel hybrid brain-computer system, i.e., ratbots, to exhibit superior learning abilities in a maze learning task, even when the rats' vision and whisker sensation were blocked. We anticipate that our study will encourage other researchers to investigate combinations of various rule operations and other artificial intelligence algorithms with the learning and memory processes of organic brains to develop more powerful cyborg intelligence systems. Our results potentially have profound implications for a variety of applications in intelligent systems and neural rehabilitation.

  11. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  12. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej

    2015-02-01

    This paper derives theoretical estimates of the computational cost of an isogeometric multi-frontal direct solver executed on parallel distributed-memory machines. We show theoretically that for C^(p-1) global continuity of the isogeometric solution, both the computational cost and the communication cost of a direct solver are of order O(p^2 log N) for the one-dimensional (1D) case, O(N p^2) for the two-dimensional (2D) case, and O(N^(4/3) p^2) for the three-dimensional (3D) case, where N is the number of degrees of freedom and p is the polynomial order of the B-spline basis functions. The theoretical estimates are verified by numerical experiments performed with three parallel multi-frontal direct solvers: MUMPS, PaStiX and SuperLU, available through the PetIGA toolkit built on top of PETSc. Numerical results confirm these theoretical estimates both in terms of p and N. For a given problem size, the strong efficiency rapidly decreases as the number of processors increases, becoming about 20% for 256 processors for a 3D example with 128^3 unknowns and linear B-splines with C^0 global continuity, and 15% for a 3D example with 64^3 unknowns and quartic B-splines with C^3 global continuity. At the same time, one cannot arbitrarily increase the problem size, since the memory required by higher-order continuity spaces is large, quickly consuming all the available memory resources even in the parallel distributed-memory version. Numerical results also suggest that the use of distributed parallel machines is highly beneficial when solving higher-order continuity spaces, although the number of processors that one can efficiently employ is somewhat limited.
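The quoted complexity bounds can be turned into a small back-of-the-envelope scaling calculator (constants dropped; this sketches only the asymptotic scaling from the abstract, not the solver itself):

```python
import math

def solver_cost(N, p, dim):
    """Leading-order cost from the stated bounds, constants dropped:
    O(p^2 log N) in 1D, O(N p^2) in 2D, O(N^(4/3) p^2) in 3D."""
    if dim == 1:
        return p ** 2 * math.log(N)
    if dim == 2:
        return N * p ** 2
    if dim == 3:
        return N ** (4 / 3) * p ** 2
    raise ValueError("dim must be 1, 2 or 3")

# In 3D, doubling the problem size multiplies the estimate by 2^(4/3) ~ 2.52,
# while doubling the polynomial order multiplies it by 4.
ratio_N = solver_cost(2_000_000, 3, 3) / solver_cost(1_000_000, 3, 3)
ratio_p = solver_cost(1_000_000, 6, 3) / solver_cost(1_000_000, 3, 3)
```

The p^2 factor is why the memory pressure of high-continuity (large-p) spaces noted at the end of the abstract bites so quickly.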

  13. The Impact of Transactive Memory System and Interaction Platform in Collaborative Knowledge Construction on Social Presence and Self-Regulation

    Science.gov (United States)

    Yilmaz, Ramazan; Karaoglan Yilmaz, Fatma Gizem; Kilic Cakmak, Ebru

    2017-01-01

    The purpose of this study is to examine the impacts of transactive memory system (TMS) and interaction platforms in computer-supported collaborative learning (CSCL) on social presence perceptions and self-regulation skills of learners. Within the scope of the study, social presence perceptions and self-regulation skills of students in…

  14. Computing for Decentralized Systems (lecture 2)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    With the rise of Bitcoin, Ethereum, and other cryptocurrencies it is becoming apparent the paradigm shift towards decentralized computing. Computer engineers will need to understand this shift when developing systems in the coming years. Transferring value over the Internet is just one of the first working use cases of decentralized systems, but it is expected they will be used for a number of different services such as general purpose computing, data storage, or even new forms of governance. Decentralized systems, however, pose a series of challenges that cannot be addressed with traditional approaches in computing. Not having a central authority implies truth must be agreed upon rather than simply trusted and, so, consensus protocols, cryptographic data structures like the blockchain, and incentive models like mining rewards become critical for the correct behavior of decentralized system. This series of lectures will be a fast track to introduce these fundamental concepts through working examples and pra...

  15. Computing for Decentralized Systems (lecture 1)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    With the rise of Bitcoin, Ethereum, and other cryptocurrencies it is becoming apparent the paradigm shift towards decentralized computing. Computer engineers will need to understand this shift when developing systems in the coming years. Transferring value over the Internet is just one of the first working use cases of decentralized systems, but it is expected they will be used for a number of different services such as general purpose computing, data storage, or even new forms of governance. Decentralized systems, however, pose a series of challenges that cannot be addressed with traditional approaches in computing. Not having a central authority implies truth must be agreed upon rather than simply trusted and, so, consensus protocols, cryptographic data structures like the blockchain, and incentive models like mining rewards become critical for the correct behavior of decentralized system. This series of lectures will be a fast track to introduce these fundamental concepts through working examples and pra...

  16. Analyses of Markov decision process structure regarding the possible strategic use of interacting memory systems

    Directory of Open Access Journals (Sweden)

    Eric A Zilli

    2008-12-01

    Full Text Available Behavioral tasks are often used to study the different memory systems present in humans and animals. Such tasks are usually designed to isolate and measure some aspect of a single memory system. However, it is not necessarily clear that any given task actually does isolate a system or that the strategy used by a subject in the experiment is the one desired by the experimenter. We have previously shown that when tasks are written mathematically as a form of partially-observable Markov decision processes, the structure of the tasks provides information regarding the possible utility of certain memory systems. These previous analyses dealt with the disambiguation problem: given a specific ambiguous observation of the environment, is there information provided by a given memory strategy that can disambiguate that observation to allow a correct decision? Here we extend this approach to cases where multiple memory systems can be strategically combined in different ways. Specifically, we analyze the disambiguation arising from three ways by which episodic-like memory retrieval might be cued (by another episodic-like memory, by a semantic association, or by working memory for some earlier observation). We also consider the disambiguation arising from holding earlier working memories, episodic-like memories or semantic associations in working memory. From these analyses we can begin to develop a quantitative hierarchy among memory systems in which stimulus-response memories and semantic associations provide no disambiguation while the episodic memory system provides the most flexible

  17. Computer Systems for Distributed and Distance Learning.

    Science.gov (United States)

    Anderson, M.; Jackson, David

    2000-01-01

    Discussion of network-based learning focuses on a survey of computer systems for distributed and distance learning. Both Web-based systems and non-Web-based systems are reviewed in order to highlight some of the major trends of past projects and to suggest ways in which progress may be made in the future. (Contains 92 references.) (Author/LRW)

  18. Computer networks in future accelerator control systems

    International Nuclear Information System (INIS)

    Dimmler, D.G.

    1977-03-01

    Some findings of a study concerning a computer based control and monitoring system for the proposed ISABELLE Intersecting Storage Accelerator are presented. Requirements for development and implementation of such a system are discussed. An architecture is proposed where the system components are partitioned along functional lines. Implementation of some conceptually significant components is reviewed

  19. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems

    Science.gov (United States)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-01

    We report a new limitation on the ability of physical systems to perform computation—one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system—such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  20. Memory

    OpenAIRE

    Wager, Nadia

    2017-01-01

    This chapter will explore a response to traumatic victimisation which has divided the opinions of psychologists at an exponential rate. We will be examining amnesia for memories of childhood sexual abuse and the potential to recover these memories in adulthood. Whilst this phenomenon is generally accepted in clinical circles, it is seen as highly contentious amongst research psychologists, particularly experimental cognitive psychologists. The chapter will begin with a real case study of a wo...

  1. Control through a system of small computers

    International Nuclear Information System (INIS)

    The system status and executive programs are discussed for the computers used in the control of the linac at SLAC. A continuing traffic study is maintained of the flow of tasks within and between CPUs and of the use of resources such as the disk, links, interfaces, and internal I/O buffers. The study shows that the computers in the control system are idle much of the time. By controlling the peak traffic, many occasional tasks can be added as additional computer aids to the accelerator operators. (PMA)

  2. Computer-Based Wireless Advertising Communication System

    Directory of Open Access Journals (Sweden)

    Anwar Al-Mofleh

    2009-10-01

    Full Text Available In this paper we developed a computer-based wireless advertising communication system (CBWACS) that enables the user to advertise whatever he wants from his own office to the screen in front of the customer via a wireless communication system. This system consists of two PIC microcontrollers, a transmitter, a receiver, an LCD, a serial cable and an antenna. The main advantages of the system are its wireless structure and its reduced susceptibility to noise and other interference, because it uses digital communication techniques.

  3. Computer-Aided dispatching system design specification

    International Nuclear Information System (INIS)

    Briggs, M.G.

    1996-01-01

    This document defines the performance requirements for a graphic display dispatching system to support Hanford Patrol emergency response. This system is defined as a Commercial-Off the-Shelf computer dispatching system providing both text and graphical display information while interfacing with the diverse reporting system within the Hanford Facility. This system also provided expansion capabilities to integrate Hanford Fire and the Occurrence Notification Center and provides back-up capabilities for the Plutonium Processing Facility

  4. Shared visual attention and memory systems in the Drosophila brain.

    Directory of Open Access Journals (Sweden)

    Bruno van Swinderen

    Full Text Available BACKGROUND: Selective attention and memory seem to be related in human experience. This appears to be the case as well in simple model organisms such as the fly Drosophila melanogaster. Mutations affecting olfactory and visual memory formation in Drosophila, such as in dunce and rutabaga, also affect short-term visual processes relevant to selective attention. In particular, increased optomotor responsiveness appears to be predictive of visual attention defects in these mutants. METHODOLOGY/PRINCIPAL FINDINGS: To further explore the possible overlap between memory and visual attention systems in the fly brain, we screened a panel of 36 olfactory long term memory (LTM) mutants for visual attention-like defects using an optomotor maze paradigm. Three of these mutants yielded high dunce-like optomotor responsiveness. We characterized these three strains by examining their visual distraction in the maze, their visual learning capabilities, and their brain activity responses to visual novelty. We found that one of these mutants, D0067, was almost completely identical to dunce(1) for all measures, while another, D0264, was more like wild type. Exploiting the fact that the LTM mutants are also Gal4 enhancer traps, we explored the sufficiency for the cells subserved by these elements to rescue dunce attention defects and found overlap at the level of the mushroom bodies. Finally, we demonstrate that control of synaptic function in these Gal4 expressing cells specifically modulates a 20-30 Hz local field potential associated with attention-like effects in the fly brain. CONCLUSIONS/SIGNIFICANCE: Our study uncovers genetic and neuroanatomical systems in the fly brain affecting both visual attention and odor memory phenotypes. A common component to these systems appears to be the mushroom bodies, brain structures which have been traditionally associated with odor learning but which we propose might also be involved in generating oscillatory brain activity

  5. The endocannabinoid system and associative learning and memory in zebrafish.

    Science.gov (United States)

    Ruhl, Tim; Moesbauer, Kirstin; Oellers, Nadine; von der Emde, Gerhard

    2015-09-01

    In zebrafish the medial pallium of the dorsal telencephalon represents an amygdala homolog structure, which is crucially involved in emotional associative learning and memory. Similar to the mammalian amygdala, the medial pallium contains a high density of endocannabinoid receptor CB1. To elucidate the role of the zebrafish endocannabinoid system in associative learning, we tested the influence of acute and chronic administration of receptor agonists (THC, WIN55,212-2) and antagonists (Rimonabant, AM-281) on two different learning paradigms. In an appetitively motivated two-alternative choice paradigm, animals learned to associate a certain color with a food reward. In a second set-up, a fish shuttle-box, animals associated the onset of a light stimulus with the occurrence of a subsequent electric shock (avoidance conditioning). Once fish successfully had learned to solve these behavioral tasks, acute receptor activation or inactivation had no effect on memory retrieval, suggesting that established associative memories were stable and not alterable by the endocannabinoid system. In both learning tasks, chronic treatment with receptor antagonists improved acquisition learning, and additionally facilitated reversal learning during color discrimination. In contrast, chronic CB1 activation prevented aversively motivated acquisition learning, while different effects were found on appetitively motivated acquisition learning. While THC significantly improved behavioral performance, WIN55,212-2 significantly impaired color association. Our findings suggest that the zebrafish endocannabinoid system can modulate associative learning and memory. Stimulation of the CB1 receptor might play a more specific role in acquisition and storage of aversive learning and memory, while CB1 blocking induces general enhancement of cognitive functions. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Resummed memory kernels in generalized system-bath master equations

    Science.gov (United States)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
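The flavour of the Padé singularity issue can be seen already for a scalar series (a toy example for illustration, not the operator-valued kernels of the paper): the [1/1] Padé approximant of the truncated exp(x) series 1 + x + x^2/2 is (1 + x/2)/(1 - x/2), which reproduces the series near x = 0 but introduces a spurious pole at x = 2, analogous to the divergences the authors attribute to the Padé resummation:

```python
def pade_1_1(x, c):
    """[1/1] Padé approximant of the truncated series c[0] + c[1]*x + c[2]*x**2,
    i.e. (a0 + a1*x) / (1 + b1*x) with coefficients matched through second order."""
    b1 = -c[2] / c[1]
    a1 = c[1] + c[0] * b1
    return (c[0] + a1 * x) / (1.0 + b1 * x)

c = [1.0, 1.0, 0.5]        # truncated series of exp(x)
near = pade_1_1(0.1, c)    # close to the series for small x
far = pade_1_1(1.9, c)     # blows up near the spurious pole at x = 2
```

A resummation that is exact through the known orders can still misbehave far from the expansion point, which is why the authors prefer the non-divergent exponential form.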

  7. A HETEROGENEOUS MULTIPROCESSOR SYSTEM-ON-CHIP ARCHITECTURE INCORPORATING MEMORY ALLOCATION

    Directory of Open Access Journals (Sweden)

    T.Thillaikkarasi

    2010-06-01

    Full Text Available This paper describes the development of a Multiprocessor System-on-Chip (MPSoC) with a novel interconnect architecture incorporating memory allocation. It addresses the problem of mapping a process network with data-dependent behavior and soft real-time constraints onto heterogeneous multiprocessor System-on-Chip (SoC) architectures, and focuses on a memory allocation step based on an integer linear programming model. An application is modeled as a Kahn Process Network (KPN), which makes the parallelism present in the application explicit. The main contribution of our work is an MILP-based approach which can be used to map the KPN of streaming applications with data-dependent behavior and interleaved computation and communication. Our solution minimizes hardware cost while taking into account the performance constraints. One of the salient features of our work is that it takes into account the additional overheads caused by data communication conflicts. It permits obtaining an optimal distributed shared-memory architecture that minimizes the global cost of accessing the shared data in the application, as well as the memory cost. Our approach allows automatic generation of an architecture-level specification of the application.
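The memory-allocation problem described here can be illustrated with a brute-force stand-in for the MILP (the buffer sizes, memory capacities, and per-unit costs below are invented for illustration): choose a memory for each data buffer so that the total weighted access cost is minimal and no memory's capacity is exceeded:

```python
from itertools import product

buffers = {"A": 4, "B": 2, "C": 3}               # hypothetical buffer sizes
memories = {"local": (6, 1), "shared": (16, 3)}  # name: (capacity, cost per unit)

def best_assignment():
    """Exhaustively search all buffer-to-memory mappings and keep the cheapest
    one that respects every memory's capacity; an MILP solver would find the
    same optimum directly, and scales to realistic problem sizes."""
    best = None
    for choice in product(memories, repeat=len(buffers)):
        used = {m: 0 for m in memories}
        cost = 0
        for (name, size), mem in zip(buffers.items(), choice):
            used[mem] += size
            cost += size * memories[mem][1]
        if all(used[m] <= memories[m][0] for m in memories):
            if best is None or cost < best[0]:
                best = (cost, dict(zip(buffers, choice)))
    return best

cost, mapping = best_assignment()  # A and B fill the cheap local memory; C spills to shared
```

The real formulation additionally models communication conflicts and performance constraints, but the core trade-off (cheap-but-small versus large-but-expensive memory) is the same.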

  8. A Gamma Memory Neural Network for System Identification

    Science.gov (United States)

    Motter, Mark A.; Principe, Jose C.

    1992-01-01

    A gamma neural network topology is investigated for a system identification application. A discrete gamma memory structure is used in the input layer, providing delayed values of both the control inputs and the network output to the input layer. The discrete gamma memory structure implements a tapped dispersive delay line, with the amount of dispersion regulated by a single, adaptable parameter. The network is trained using static back propagation, but captures significant features of the system dynamics. The system dynamics identified with the network are the Mach number dynamics of the 16 Foot Transonic Tunnel at NASA Langley Research Center, Hampton, Virginia. The training data spans an operating range of Mach numbers from 0.4 to 1.3.
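The tapped dispersive delay line described in the abstract follows a well-known recursion; a minimal sketch (assuming the standard discrete gamma memory update, with mu playing the role of the single adaptable dispersion parameter):

```python
import numpy as np

def gamma_memory(u, taps=3, mu=0.5):
    """Discrete gamma memory: each tap is a leaky first-order stage,
        x_0(t) = u(t)
        x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1),  k >= 1,
    so the single parameter mu trades memory depth against resolution.
    mu = 1 recovers an ordinary tapped delay line."""
    x = np.zeros(taps + 1)
    outputs = []
    for sample in u:
        prev = x.copy()
        x[0] = sample
        for k in range(1, taps + 1):
            x[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        outputs.append(x.copy())
    return np.array(outputs)

# With mu = 1 an impulse marches one tap per time step, as in a plain delay line;
# with mu < 1 it disperses, smearing the input history across the taps.
out = gamma_memory([1.0, 0.0, 0.0, 0.0], taps=3, mu=1.0)
```

Feeding the tap outputs (for both control inputs and the fed-back network output) into a static network is what lets plain backpropagation capture the tunnel's Mach number dynamics.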

  9. Systematization of radiotherapy units by computer system

    International Nuclear Information System (INIS)

    Uchiyama, Yukio; Kimura, Chiaki; Ueda, Toshio; Morita, Kozo; Watanabe, Michiko.

    1986-01-01

    In order to carry out radiation therapy accurately, the linkage, or systematization, of several radiotherapy devices (the CT scanner, the computer system for Radiation Treatment Planning (RTP), and the 6 MeV linear accelerator with the conformation device) was performed with the aid of the computer system. The clinical experience gained in the routine work of our department over the past twenty years was useful in accomplishing this total treatment planning system. During six months of experience it turned out that this system was easy to use for the daily routine work without any trouble. (author)

  10. Information systems and computing technology

    CERN Document Server

    Zhang, Lei

    2013-01-01

    Invited papers: Incorporating the multi-cross-sectional temporal effect in Geographically Weighted Logit Regression (K. Wu, B. Liu, B. Huang & Z. Lei); One shot learning human actions recognition using key poses (W.H. Zou, S.G. Li, Z. Lei & N. Dai); Band grouping pansharpening for WorldView-2 satellite images (X. Li); Research on GIS based haze trajectory data analysis system (Y. Wang, J. Chen, J. Shu & X. Wang). Regular papers: A warning model of systemic financial risks (W. Xu & Q. Wang); Research on smart mobile phone user experience with grounded theory (J.P. Wan & Y.H. Zhu); The software reliability analysis based on

  11. Iterative schemes for parallel Sn algorithms in a shared-memory computing environment

    International Nuclear Information System (INIS)

    Haghighat, A.; Hunter, M.A.; Mattis, R.E.

    1995-01-01

    Several two-dimensional spatial-domain-partitioning Sn transport theory algorithms are developed on the basis of different iterative schemes. These algorithms are incorporated into TWOTRAN-II and tested on the shared-memory CRAY Y-MP C90 computer. For a series of fixed-source r-z geometry homogeneous problems, it is demonstrated that the concurrent red-black algorithms may achieve large parallel efficiencies (>60%) on the C90. It is also demonstrated that for a realistic shielding problem, the use of the negative-flux fixup causes high load imbalance, which results in a significant loss of parallel efficiency.
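
    The red-black idea, that cells of the same colour carry no data dependence on one another and can therefore be updated concurrently, can be illustrated on a toy problem (a 1-D red-black Gauss-Seidel sweep; this is an illustrative analogue, not the TWOTRAN-II Sn transport algorithm itself):

```python
import numpy as np

def red_black_sweeps(u, f, h, iters=200):
    """Red-black Gauss-Seidel for the 1-D Poisson equation u'' = f.

    Interior points are split into two colours; points of the same
    colour depend only on the other colour, so each half-sweep below
    could be distributed across processors without synchronization
    inside the half-sweep.
    """
    for _ in range(iters):
        for colour in (1, 2):  # odd ("red") then even ("black") interior points
            idx = np.arange(colour, len(u) - 1, 2)
            u[idx] = 0.5 * (u[idx - 1] + u[idx + 1] - h * h * f[idx])
    return u

# Laplace problem (f = 0) on [0, 1] with u(0) = 0 and u(1) = 1:
# the converged solution is the linear ramp u(x) = x.
n = 8
u = np.zeros(n + 1)
u[-1] = 1.0
u = red_black_sweeps(u, np.zeros(n + 1), 1.0 / n)
```

    Load imbalance of the kind the abstract describes appears when some cells (e.g. those needing a flux fixup) cost more to update than others, so processors finish their half-sweep at different times.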

  12. Artificial immune system applications in computer security

    CERN Document Server

    Tan, Ying

    2016-01-01

    This book provides state-of-the-art information on the use, design, and development of the Artificial Immune System (AIS) and AIS-based solutions to computer security issues. Artificial Immune System: Applications in Computer Security focuses on the technologies and applications of AIS in malware detection proposed in recent years by the Computational Intelligence Laboratory of Peking University (CIL@PKU). It offers a theoretical perspective as well as practical solutions for readers interested in AIS, machine learning, pattern recognition and computer security. The book begins by introducing the basic concepts, typical algorithms, important features, and some applications of AIS. The second chapter introduces malware and its detection methods, especially immune-based malware detection approaches. Successive chapters present a variety of advanced detection approaches for malware, including Virus Detection System, K-Nearest Neighbour (KNN), RBF networks, and Support Vector Machines (SVM), Danger theory, ...

  13. Computing experiments on stellar systems

    CERN Document Server

    Bouvier, P

    1972-01-01

    Since a stellar system is usually conceived, in a first approximation, as a group of point-like stars held together by their own mutual gravitational attraction, one may distinguish three or four different lines of attack on the problem of the dynamical evolution of such a system: straightforward integration of the n-body problem, the statistical model description, the Monte Carlo technique, and the Boltzmann moment approach. Direct numerical integration can now be applied to the dynamical evolution of star clusters containing up to 500 stars, which includes small to medium open stellar clusters, while statistical and Monte Carlo descriptions are better suited to systems of at least several thousand stars. The overall dynamical evolution of an isolated star cluster is characterized by the formation of a dense core surrounded by an extended halo, with some stars escaping with positive energy. This general feature has been confirmed in all the numerical experiments carried out in the last ten y...

  14. Network Memory Protocol

    National Research Council Canada - National Science Library

    Wilcox, D

    1997-01-01

    This report presents initial research into the design of a new computer system local area network transport layer protocol, designated the network memory protocol, which provides clients with direct...

  15. Role of computers in CANDU safety systems

    International Nuclear Information System (INIS)

    Hepburn, G.A.; Gilbert, R.S.; Ichiyen, N.M.

    1985-01-01

    Small digital computers are playing an expanding role in the safety systems of CANDU nuclear generating stations, both as active components in the trip logic, and as monitoring and testing systems. The paper describes three recent applications: (i) A programmable controller was retro-fitted to Bruce ''A'' Nuclear Generating Station to handle trip setpoint modification as a function of booster rod insertion. (ii) A centralized monitoring computer to monitor both shutdown systems and the Emergency Coolant Injection system, is currently being retro-fitted to Bruce ''A''. (iii) The implementation of process trips on the CANDU 600 design using microcomputers. While not truly a retrofit, this feature was added very late in the design cycle to increase the margin against spurious trips, and has now seen about 4 unit-years of service at three separate sites. Committed future applications of computers in special safety systems are also described. (author)

  16. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  17. Replacement of the JRR-3 computer system

    International Nuclear Information System (INIS)

    Kato, Tomoaki; Kobayashi, Kenichi; Suwa, Masayuki; Mineshima, Hiromi; Sato, Mitsugu

    2000-01-01

    The JRR-3 computer system has contributed to the stable operation of JRR-3 since 1990. However, about 10 years have now passed since it was designed, and some problems have occurred. Under these circumstances, the old computer system should be replaced to maintain safe and stable operation. In this replacement, the system is improved with respect to the man-machine interface and maintenance efficiency. The new system consists of three functions: 'the function of management for operation information' (renewed function), 'the function of management for facility information' (new function) and 'the function of management for information publication' (new function). Through this replacement, the new JRR-3 computer system can contribute to safe and stable operation. (author)

  18. Computer Automated Design of Systems

    Science.gov (United States)

    1976-06-01

    reduced order model for large order systems. MacNamara [5] went even further and used an iterative method to find the optimum compensation for an ... Table I. These blocks were selected because of their common usage in the modeling process. They were also found to be adequate either separately or in ...

  19. Searching for memories, Sudoku, implicit check bits, and the iterative use of not-always-correct rapid neural computation.

    Science.gov (United States)

    Hopfield, J J

    2008-05-01

    The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve easy problems but for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual pop-out. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of energy or Lyapunov functions, is described in detail.
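
    The energy-function view of associative retrieval mentioned in this abstract can be illustrated with a minimal, classical Hopfield-style network (a textbook sketch, not the excitatory-inhibitory circuit model of the paper): asynchronous updates never increase the energy, so the state slides toward a stored pattern, and a too-small clue can land in the wrong minimum.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hebbian(patterns):
    """Hebbian weight matrix storing the given +/-1 patterns (rows)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def energy(w, s):
    """Lyapunov (energy) function; it never increases under the updates below."""
    return -0.5 * s @ w @ s

def recall(w, s, steps=200):
    """Asynchronous updates descend the energy landscape toward a stored memory."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Store one pattern, then retrieve it from a corrupted "clue".
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
w = train_hebbian(pattern[None, :])
clue = pattern.copy()
clue[:2] *= -1                      # flip two bits
restored = recall(w, clue)
```

    Comparing the retrieved state against the clue (e.g. counting how many bits the dynamics changed) plays a role loosely analogous to the implicit check bits the paper describes: agreement signals a plausible retrieval, wholesale disagreement signals failure.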

  20. A FPGA-based Measurement System for Nonvolatile Semiconductor Memory Characterization

    Science.gov (United States)

    Bu, Jiankang; White, Marvin

    2002-03-01

    Low voltage, long retention, high density SONOS nonvolatile semiconductor memory (NVSM) devices are ideally suited for PCMCIA, FLASH and 'smart' cards. The SONOS memory transistor requires characterization with an accurate, rapid measurement system that minimally disturbs the device. The FPGA-based measurement system includes three parts: 1) a pattern generator implemented with XILINX FPGAs and corresponding software, 2) a high-speed, constant-current, threshold voltage detection circuit, and 3) a data evaluation program implemented in LABVIEW. Fig. 1 shows the general block diagram of the FPGA-based measurement system. The function generator is designed and simulated with XILINX Foundation Software. Under the control of the specific erase/write/read pulses, the analog detection circuit applies operational modes to the SONOS device under test (DUT) and determines the change of the memory state of the SONOS nonvolatile memory transistor. The TEK460 digitizes the analog threshold voltage output and sends it to the PC. The data is filtered and averaged with a LABVIEW program running on the PC and displayed on the monitor in real time. We have implemented the pattern generator with XILINX FPGAs. Fig. 2 shows the block diagram of the pattern generator. We realized the logic control by a state machine design method. Fig. 3 shows a small part of the state machine. The flexibility of the FPGAs enhances the capabilities of this system and allows measurement variations without hardware changes. The characterization of the DUT, as a function of programming voltage and time, is achieved by a high-speed, constant-current threshold voltage detection circuit. The analog detection circuit incorporates fast analog switches controlled digitally by the FPGAs. The schematic circuit diagram is shown in Fig. 4. The various operational modes for the DUT are realized with control signals applied to the

  1. Real time computer system with distributed microprocessors

    International Nuclear Information System (INIS)

    Heger, D.; Steusloff, H.; Syrbe, M.

    1979-01-01

    The usual centralized structure of computer systems, especially of process computer systems, cannot sufficiently exploit the progress of very large-scale integrated semiconductor technology with respect to increasing reliability and performance and to decreasing costs, especially of the external periphery. This, together with the increasing demands on process control systems, has led the authors to examine the structure of such systems in general and to adapt it to the new environment. Computer systems with distributed, optical-fibre-coupled microprocessors allow very favourable problem-solving with decentrally controlled buslines and functional redundancy with automatic fault diagnosis and reconfiguration. A suitable programming system supports these hardware properties: PEARL for multicomputer systems, a dynamic loader, and processor and network operating systems. The necessary design principles are proved mainly theoretically and by value analysis. An optimal overall system of this new generation of process control systems was established, supported by the results of 2 PDV projects (modular operating systems, input/output colour screen system as control panel), for the purpose of testing by applying the system to the control of 28 pit furnaces of a steelworks. (orig.) [de]

  2. Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment

    Science.gov (United States)

    Hancock, Thomas M., III

    1999-01-01

    This paper details the scientific objectives, experiment design, data collection method, and post-flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip". In its mildest form, a soft-event upset can cause software exceptions or unexpected events, trigger spacecraft safing (ending data collection), or corrupt fault protection and error recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment detectors. Post-flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.

  3. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations

    Science.gov (United States)

    Choi, Shinhyun; Tan, Scott H.; Li, Zefan; Kim, Yunjo; Choi, Chanyeol; Chen, Pai-Yu; Yeon, Hanwool; Yu, Shimeng; Kim, Jeehwan

    2018-01-01

    Although several types of architecture combining memory cells and transistors have been used to demonstrate artificial synaptic arrays, they usually present limited scalability and high power consumption. Transistor-free analog switching devices may overcome these limitations, yet the typical switching process they rely on—formation of filaments in an amorphous medium—is not easily controlled and hence hampers the spatial and temporal reproducibility of the performance. Here, we demonstrate analog resistive switching devices that possess desired characteristics for neuromorphic computing networks with minimal performance variations using a single-crystalline SiGe layer epitaxially grown on Si as a switching medium. Such epitaxial random access memories utilize threading dislocations in SiGe to confine metal filaments in a defined, one-dimensional channel. This confinement results in drastically enhanced switching uniformity and long retention/high endurance with a high analog on/off ratio. Simulations using the MNIST handwritten recognition data set prove that epitaxial random access memories can operate with an online learning accuracy of 95.1%.

  4. Computer Aided Implementation using Xilinx System Generator

    OpenAIRE

    Eriksson, Henrik

    2004-01-01

    The development in electronics increases the demand for good design methods and design tools in the field of electrical engineering. To improve their design methods, Ericsson Microwave Systems AB is interested in using computer tools to create a link between the specification and the implementation of a digital system in an FPGA. Xilinx System Generator for DSP is a tool for implementing a model of a digital signal-processing algorithm in a Xilinx FPGA. To evaluate Xilinx System Generator two t...

  5. Intelligent computational systems for space applications

    Science.gov (United States)

    Lum, Henry; Lau, Sonie

    Intelligent computational systems can be described as an adaptive computational system integrating both traditional computational approaches and artificial intelligence (AI) methodologies to meet the science and engineering data processing requirements imposed by specific mission objectives. These systems will be capable of integrating, interpreting, and understanding sensor input information; correlating that information to the "world model" stored within its data base and understanding the differences, if any; defining, verifying, and validating a command sequence to merge the "external world" with the "internal world model"; and, controlling the vehicle and/or platform to meet the scientific and engineering mission objectives. Performance and simulation data obtained to date indicate that the current flight processors baselined for many missions such as Space Station Freedom do not have the computational power to meet the challenges of advanced automation and robotics systems envisioned for the year 2000 era. Research issues which must be addressed to achieve greater than giga-flop performance for on-board intelligent computational systems have been identified, and a technology development program has been initiated to achieve the desired long-term system performance objectives.

  6. Konsep Memory Systems dalam Iklan ‘Diskon Ramadhan’

    Directory of Open Access Journals (Sweden)

    Elsye Rumondang Damanik

    2011-10-01

    Full Text Available The purpose of the article is to discuss the concept of memory systems and its role in marketing activity. Because information processing is central to marketing activity, this concept is important to discuss. To limit the scope of the discussion, the article considers only the human role as consumer in marketing activity, along with the effects of memory systems in helping people process marketing-related information. In preparing the article, the writer gathered data and information through a literature study of books and of information from the mass media. The conclusion is that it is important for marketers to understand the stages by which consumers process information, and how sellers can optimize, or perhaps manipulate, those stages to win the market.

  7. Ageing and memory effects in a mechanically alloyed nanoparticle system

    International Nuclear Information System (INIS)

    Osth, Michael; Herisson, Didier; Nordblad, Per; De Toro, Jose A.; Riveiro, Jose M.

    2007-01-01

    Ageing and memory experiments have been performed to explore the non-equilibrium dynamics of the mechanically alloyed nanoparticle system Fe30Ag40W30, which comprises a heterogeneous ensemble of magnetic particles with average moment ∼10^2 μB dispersed in a metallic non-magnetic matrix. This system has earlier, from critical slowing-down analysis, been reported to enter a spin-glass-like state at low temperatures [J. A. de Toro et al., Phys. Rev. B 69, (2004) 224407]. The wait-time dependence of the magnetic relaxation observed after the application of a weak magnetic field, and the memory of the thermal history in the low-temperature phase recorded on continuous heating in a weak applied field, show similar features to those observed in corresponding experiments on canonical spin glasses

  8. Novel tribological systems using shape memory alloys and thin films

    Science.gov (United States)

    Zhang, Yijun

    Shape memory alloys and thin films are shown to have robust indentation-induced shape memory and superelastic effects. Loading conditions that are similar to indentations are very common in tribological systems. Therefore novel tribological systems that have better wear resistance and stronger coating to substrate adhesion can be engineered using indentation-induced shape memory and superelastic effects. By incorporating superelastic NiTi thin films as interlayers between chromium nitride (CrN) and diamond-like carbon (DLC) hard coatings and aluminum substrates, it is shown that the superelasticity can improve tribological performance and increase interfacial adhesion. The NiTi interlayers were sputter deposited onto 6061 T6 aluminum and M2 steel substrates. CrN and DLC coatings were deposited by unbalanced magnetron sputter deposition. Temperature scanning X-ray diffraction and nanoindentation were used to characterize NiTi interlayers. Temperature scanning wear and scratch tests showed that superelastic NiTi interlayers improved tribological performance on aluminum substrates significantly. The two-way shape memory effect under contact loading conditions is demonstrated for the first time, which could be used to make novel tribological systems. Spherical indents in NiTi shape memory alloys and thin films had reversible depth changes that were driven by temperature cycling, after thermomechanical cycling, or one-cycle slip-plasticity deformation training. Reversible surface topography was realized after the indents were planarized. Micro- and nano- scale circular surface protrusions arose from planarized spherical indents in bulk and thin film NiTi alloy; line surface protrusions appeared from planarized scratch tracks. Functional surfaces with reversible surface topography can potentially result in novel tribological systems with reversible friction coefficient. A three dimensional constitutive model was developed to describe shape memory effects with slip

  9. Method and apparatus for managing access to a memory

    Science.gov (United States)

    DeBenedictis, Erik

    2017-08-01

    A method and apparatus for managing access to a memory of a computing system. A controller transforms a plurality of operations that represent a computing job into an operational memory layout that reduces a size of a selected portion of the memory that needs to be accessed to perform the computing job. The controller stores the operational memory layout in a plurality of memory cells within the selected portion of the memory. The controller controls a sequence by which a processor in the computing system accesses the memory to perform the computing job using the operational memory layout. The operational memory layout reduces an amount of energy consumed by the processor to perform the computing job.

  10. Cloud Computing for Standard ERP Systems

    DEFF Research Database (Denmark)

    Schubert, Petra; Adisa, Femi

    Cloud Computing is a topic that has gained momentum in the last years. Current studies show that an increasing number of companies are evaluating the promised advantages and considering making use of cloud services. In this paper we investigate the phenomenon of cloud computing and its importance for the operation of ERP systems. We argue that the phenomenon of cloud computing could lead to a decisive change in the way business software is deployed in companies. Our reference framework contains three levels (IaaS, PaaS, SaaS) and clarifies the meaning of public, private and hybrid clouds. The three levels of cloud computing and their impact on ERP systems operation are discussed. From the literature we identify areas for future research and propose a research agenda.

  11. Computer Application Systems at the University.

    Science.gov (United States)

    Bazewicz, Mieczyslaw

    1979-01-01

    The results of the WASC Project at the Technical University of Wroclaw have confirmed the possibility of constructing informatic systems based on the recognized size and specifics of user's needs (needs of the university) and provided some solutions to the problem of collaboration of computer systems at remote universities. (Author/CMV)

  12. Terrace Layout Using a Computer Assisted System

    Science.gov (United States)

    Development of a web-based terrace design tool based on the MOTERR program is presented, along with representative layouts for conventional and parallel terrace systems. Using digital elevation maps and geographic information systems (GIS), this tool utilizes personal computers to rapidly construct ...

  13. Honeywell Modular Automation System Computer Software Documentation

    International Nuclear Information System (INIS)

    CUNNINGHAM, L.T.

    1999-01-01

    This document provides the Computer Software Documentation for a new Honeywell Modular Automation System (MAS) being installed in the Plutonium Finishing Plant (PFP). This system will be used to control the new thermal stabilization furnaces in HA-211 and the vertical denitration calciner in HC-230C-2.

  14. Self-pacing direct memory access data transfer operations for compute nodes in a parallel computer

    Science.gov (United States)

    Blocksome, Michael A

    2015-02-17

    Methods, apparatus, and products are disclosed for self-pacing DMA data transfer operations for nodes in a parallel computer. These include: transferring, by an origin DMA on an origin node, an RTS message to a target node, the RTS message specifying a message on the origin node for transfer to the target node; receiving, in an origin injection FIFO for the origin DMA, from a target DMA on the target node in response to transferring the RTS message, a target RGET descriptor followed by a DMA transfer operation descriptor, the DMA descriptor being for transmitting a message portion to the target node, and the target RGET descriptor specifying an origin RGET descriptor on the origin node that in turn specifies an additional DMA descriptor for transmitting an additional message portion to the target node; processing, by the origin DMA, the target RGET descriptor; and processing, by the origin DMA, the DMA transfer operation descriptor.

  15. Applying improved instrumentation and computer control systems

    International Nuclear Information System (INIS)

    Bevilacqua, F.; Myers, J.E.

    1977-01-01

    In-core and out-of-core instrumentation systems for the Cherokee-I reactor are described. The reactor has 61 in-core instrument assemblies. Continuous computer monitoring and processing of data from over 300 fixed detectors will be used to improve the manoeuvring of core power. The plant protection system is a standard package for the Combustion Engineering System 80, consisting of two independent systems, the reactor protection system and the engineered safety features actuation system, both of which are designed to meet NRC, ANS and IEEE design criteria or standards. The plant protection system has its own computer which provides plant monitoring, alarming, logging and performance calculations. (U.K.)

  16. Distributed-Computer System Optimizes SRB Joints

    Science.gov (United States)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of redesign of joint on solid rocket booster (SRB) that failed during Space Shuttle tragedy showed redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features, effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreased overall time to completion.
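
    The parallelism the article exploits, that each finite-difference perturbation is an independent analysis run, can be sketched as follows. Threads stand in for the networked workstations, and the objective function is a hypothetical stand-in for the structural-analysis code; this is an illustrative sketch, not the NASA software.

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):
    """Hypothetical stand-in for one structural-analysis run of the joint design."""
    return sum(xi ** 2 for xi in x)

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient of f at x.

    Each perturbed evaluation f(x + h * e_i) depends only on the shared
    baseline f(x), so all components can be computed concurrently,
    one per workstation in the article's setup.
    """
    f0 = f(x)

    def component(i):
        xp = list(x)
        xp[i] += h
        return (f(xp) - f0) / h

    with ThreadPoolExecutor() as pool:
        return list(pool.map(component, range(len(x))))

grad = fd_gradient(objective, [1.0, 2.0, 3.0])
# analytic gradient of sum(x_i^2) is 2*x, so grad is close to [2, 4, 6]
```

    With an expensive analysis code, the wall-clock time per optimization step approaches the cost of a single analysis run plus communication overhead, which is the speedup the article reports exploiting.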

  17. Computer networks ISE a systems approach

    CERN Document Server

    Peterson, Larry L

    2007-01-01

    Computer Networks, 4E is the only introductory computer networking book written by authors who have had first-hand experience with many of the protocols discussed in the book, who have actually designed some of them as well, and who are still actively designing the computer networks today. This newly revised edition continues to provide an enduring, practical understanding of networks and their building blocks through rich, example-based instruction. The authors' focus is on the why of network design, not just the specifications comprising today's systems but how key technologies and p

  18. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John

    2010-01-01

    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  19. Dissipation Assisted Quantum Memory with Coupled Spin Systems

    Science.gov (United States)

    Jiang, Liang; Verstraete, Frank; Cirac, Ignacio; Lukin, Mikhail

    2009-05-01

    Dissipative dynamics often destroy quantum coherences. However, one can also use dissipation to suppress decoherence. A well-known example is the so-called quantum Zeno effect, in which one can freeze the evolution using dissipative processes (e.g., frequently projecting the system to its initial state). Similarly, the undesired decoherence of quantum bits can be suppressed using controlled dissipation. We propose and analyze the use of this generalization of the quantum Zeno effect for protecting the quantum information encoded in coupled spin systems. This new approach may potentially enhance the performance of quantum memories in systems such as nitrogen-vacancy color centers in diamond.

  20. Effect of yogic education system and modern education system on memory

    Directory of Open Access Journals (Sweden)

    Rangan R

    2009-01-01

    Full Text Available Background/Aim: Memory is more associated with the temporal cortex than with other cortical areas. The two main components of memory are spatial and verbal, which relate to the right and left hemispheres of the brain, respectively. Many investigations have shown the beneficial effects of yoga on memory and temporal functions of the brain. This study aimed to compare the effect on memory of one Gurukula Education System (GES) school, based on a yoga way of life, with that of a school using the Modern Education System (MES). Materials and Methods: Forty-nine boys aged 11-13 years were selected from each of two residential schools, one MES and the other GES, providing similar ambiance and daily routines. The boys were matched for age and socioeconomic status. The GES educational program is based around integrated yoga modules, while the MES provides a conventional modern education program. Memory was assessed by means of standard spatial and verbal memory tests applicable to Indian conditions, before and after an academic year. Results: The groups were matched at the start of the academic year, while after it the GES boys showed significantly greater enhancement in both verbal and visual memory scores than the MES boys (P < 0.001, Mann-Whitney test). Conclusions: The present study showed that the GES, meant for total personality development and adopting a yoga way of life, is more effective in enhancing visual and verbal memory scores than the MES.

  1. Effect of yogic education system and modern education system on memory.

    Science.gov (United States)

    Rangan, R; Nagendra, Hr; Bhat, G Ramachandra

    2009-07-01

    Memory is more associated with the temporal cortex than with other cortical areas. The two main components of memory are spatial and verbal, which relate to the right and left hemispheres of the brain, respectively. Many investigations have shown the beneficial effects of yoga on memory and temporal functions of the brain. This study aimed to compare the effect on memory of one Gurukula Education System (GES) school, based on a yoga way of life, with that of a school using the Modern Education System (MES). Forty-nine boys aged 11-13 years were selected from each of two residential schools, one MES and the other GES, providing similar ambiance and daily routines. The boys were matched for age and socioeconomic status. The GES educational program is based around integrated yoga modules, while the MES provides a conventional modern education program. Memory was assessed by means of standard spatial and verbal memory tests applicable to Indian conditions, before and after an academic year. The groups were matched at the start of the academic year, while after it the GES boys showed significantly greater enhancement in both verbal and visual memory scores than the MES boys (P < 0.001, Mann-Whitney test). The present study showed that the GES, meant for total personality development and adopting a yoga way of life, is more effective in enhancing visual and verbal memory scores than the MES.

  2. Computer surety: computer system inspection guidance. [Contains glossary]

    Energy Technology Data Exchange (ETDEWEB)

    1981-07-01

    This document discusses computer surety in NRC-licensed nuclear facilities from the perspective of physical protection inspectors. It gives background information and a glossary of computer terms, along with threats and computer vulnerabilities, methods used to harden computer elements, and computer audit controls.

  3. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical replacement interval when the computing demand, the cost and performance of the computer, etc., are known. The computing demand is assumed to increase monotonically every year. Four model variants are described. In model 1, the computer system is represented by only a central processing unit (CPU), and all of the computing demand must be processed on the present computer until the next replacement. In model 2, by contrast, excess demand is allowed and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all of the demand. Model 4 is the same as model 3, except that excess demand may be processed at another center. (1) The computing demand at JAERI, (2) the conformity of Grosch's law for recent computers, and (3) the replacement cost of computer systems, etc., are also described. (author)
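
    The model-2 idea can be sketched with a toy cost function (the numbers and the simplified cost structure below are illustrative assumptions, not those of the paper): excess demand beyond installed capacity is processed externally at a per-unit rate, and the replacement interval is chosen to minimize the average yearly cost of one cycle.

```python
def avg_yearly_cost(interval, demand, capacity, purchase_cost, outsource_rate):
    """Average cost per year over one replacement cycle of `interval` years:
    the purchase cost plus the cost of outsourcing excess demand."""
    cost = purchase_cost
    for year in range(interval):
        excess = max(0.0, demand(year) - capacity)
        cost += excess * outsource_rate
    return cost / interval

# Demand grows monotonically (20% per year), starting at 80% of capacity.
demand = lambda t: 80.0 * 1.2 ** t
best_interval = min(range(1, 11),
                    key=lambda T: avg_yearly_cost(T, demand, 100.0, 500.0, 2.0))
```

    With these numbers the cheapest cycle is 5 years: replacing earlier amortizes the purchase cost over too few years, while replacing later pays too much for outsourced overflow.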

  4. A distributed-memory hierarchical solver for general sparse linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chao [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering; Pouransari, Hadi [Stanford Univ., CA (United States). Dept. of Mechanical Engineering; Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Boman, Erik G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Darve, Eric [Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering and Dept. of Mechanical Engineering

    2017-12-20

    We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
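
    The quoted computation-to-communication ratio can be made concrete with a toy calculation (a cubic subdomain is an assumption for illustration; the paper's claim is the general volume-to-surface-area scaling):

```python
def comp_to_comm_ratio(n):
    """For an n x n x n subdomain, local computation scales with the
    volume n**3 while boundary communication scales with the surface
    area 6 * n**2, so the ratio grows linearly in n."""
    return n ** 3 / (6 * n ** 2)

# Doubling the subdomain edge doubles the ratio, which is why keeping
# work per processor fixed keeps communication overhead bounded.
ratios = [comp_to_comm_ratio(n) for n in (6, 12, 24)]
```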

  5. Local annealing of shape memory alloys using laser scanning and computer vision

    Science.gov (United States)

    Hafez, Moustapha; Bellouard, Yves; Sidler, Thomas C.; Clavel, Reymond; Salathe, Rene-Paul

    2000-11-01

    A complete set-up for local annealing of Shape Memory Alloys (SMA) is proposed. Such alloys, when plastically deformed at a given low temperature, have the ability to recover a previously memorized shape simply by being heated to a higher temperature. They are finding more and more applications in the fields of robotics and microengineering. Local annealing has a tremendous advantage: the process can produce monolithic parts that exhibit different mechanical behavior at different locations of the same body. Using this approach, it is possible to integrate all the functionality of a device within one piece of material. The set-up is based on a 2 W laser diode emitting at 805 nm and a scanner head. The laser beam is coupled into an optical fiber 60 µm in diameter. The fiber output is focused on the SMA workpiece using a relay lens system with 1:1 magnification, resulting in a spot diameter of 60 µm. An imaging system is used to control the position of the laser spot on the sample. To displace the spot over the surface, a tip/tilt laser scanner is used. The scanner is positioned in a pre-objective configuration and allows a scan field size of more than 10 x 10 mm2. A graphical user interface of the scan field allows the user to quickly set up marks and alter their placement and power density. This is achieved by computer control of the X and Y positions of the scanner as well as of the laser diode power. An SMA micro-gripper with a surface area of less than 1 mm2 and a jaw opening of 200 µm has been realized using this set-up. It is electrically actuated, and a controlled force of 16 mN can be applied to hold and release small objects such as graded-index micro-lenses at a cycle time of typically 1 s.

  6. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Science.gov (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads of hybrid computer systems (e.g., those using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems that use computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long-latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor with a set of new instructions that support software-implemented fault detection techniques (Ch. 7). This work has gained importance as heterogeneous processors have become an essential component of state-of-the-art supercomputers; GPUs were used in three of the five fastest supercomputers operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers.
In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  7. Infrastructure Support for Collaborative Pervasive Computing Systems

    DEFF Research Database (Denmark)

    Vestergaard Mogensen, Martin

    Collaborative Pervasive Computing Systems (CPCS) are currently being deployed to support areas such as clinical work, emergency situations, education, ad-hoc meetings, and other areas involving information sharing and collaboration. These systems allow users to work together synchronously, but from different places, by sharing information and coordinating activities. Several researchers have shown the value of such distributed collaborative systems. However, building these systems is by no means a trivial task and introduces many yet unanswered questions. The aforementioned areas are all characterized by unstable, volatile environments, either due to the underlying components changing or the nomadic work habits of users. A major challenge for the creators of collaborative pervasive computing systems is the construction of infrastructures supporting the system. The complexity...

  8. Operator support system using computational intelligence techniques

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez

    2015-01-01

    Computational Intelligence systems have been widely applied to monitoring and fault detection in several processes and in different kinds of applications. These systems use interdependent components organized in modules and are designed to ensure early detection and diagnosis of faults. Monitoring and fault detection techniques can be divided into two categories: estimative and pattern recognition methods. Estimative methods use a mathematical model that describes the process behavior; pattern recognition methods use a database that describes the process. In this work, an operator support system using Computational Intelligence techniques was developed. The system presents the information obtained by the different CI techniques in order to help operators make decisions in real time and to guide them in fault diagnosis before the normal alarm limits are reached. (author)
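
    The estimative category can be sketched in a few lines (illustrative values and thresholds, not the actual IPEN system): compare each measurement against a model prediction and flag residuals that exceed a limit set well inside the normal alarm band.

```python
def detect_faults(measured, predicted, threshold):
    """Return the indices where the residual |measured - predicted|
    exceeds the detection threshold (an early-warning limit, tighter
    than the normal alarm limits)."""
    return [i for i, (m, p) in enumerate(zip(measured, predicted))
            if abs(m - p) > threshold]

# Illustrative sensor trace versus a process-model prediction:
measured  = [5.0, 5.1, 5.2, 6.4, 5.2]
predicted = [5.0, 5.1, 5.1, 5.2, 5.2]
faulty = detect_faults(measured, predicted, threshold=0.5)
```

    Pattern recognition methods replace the model prediction with a lookup against a database of known-normal process states; the residual-thresholding step is analogous.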

  9. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence systems have been widely applied to monitoring and fault detection in several processes and in different kinds of applications. These systems use interdependent components organized in modules and are designed to ensure early detection and diagnosis of faults. Monitoring and fault detection techniques can be divided into two categories: estimative and pattern recognition methods. Estimative methods use a mathematical model that describes the process behavior; pattern recognition methods use a database that describes the process. In this work, an operator support system using Computational Intelligence techniques was developed. The system presents the information obtained by the different CI techniques in order to help operators make decisions in real time and to guide them in fault diagnosis before the normal alarm limits are reached. (author)

  10. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high-performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high-performance computing systems. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by the two databases are made using gnuplot and Ganglia's real-time graphical user interface.
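
    The idea of script-driven SQL storage for monitoring samples can be sketched as follows (sqlite3 stands in for MySQL so the example is self-contained, and the table layout is an assumption, not Ganglia's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metrics (host TEXT, metric TEXT, ts INTEGER, value REAL)")

# Samples as a collector script might insert them, kept at full resolution
# (a round-robin database would instead age and consolidate old points).
samples = [("node01", "load_one", 1, 0.42),
           ("node01", "load_one", 2, 0.58),
           ("node02", "load_one", 1, 0.91)]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?)", samples)

(avg_load,) = conn.execute(
    "SELECT AVG(value) FROM metrics WHERE host = 'node01'").fetchone()
```

    Keeping raw rows in SQL trades disk space for the data integrity and ad-hoc query flexibility that RRD's fixed consolidation lacks.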

  11. Memories.

    Science.gov (United States)

    Brand, Judith, Ed.

    1998-01-01

    This theme issue of the journal "Exploring" covers the topic of "memories" and describes an exhibition at San Francisco's Exploratorium that ran from May 22, 1998 through January 1999 and that contained over 40 hands-on exhibits, demonstrations, artworks, images, sounds, smells, and tastes that demonstrated and depicted the biological,…

  12. On the Performance of Three In-Memory Data Systems for On Line Analytical Processing

    Directory of Open Access Journals (Sweden)

    Ionut HRUBARU

    2017-01-01

    Full Text Available In-memory database systems are among the most recent and most promising Big Data technologies, being developed and released either as brand-new distributed systems or as extensions of older monolithic (centralized) database systems. As the name suggests, in-memory systems cache all data in special memory structures. Many are part of the NewSQL strand and aim to bridge the gap between OLTP and OLAP in so-called Hybrid Transactional/Analytical Processing (HTAP) systems. This paper tests the performance of such systems on TPC-H analytical workloads. Performance is analyzed in terms of data loading, memory footprint and execution time of the TPC-H query set for three in-memory data systems: Oracle, SQL Server and MemSQL. The tests are subsequently deployed on classical on-disk architectures and the results compared to the in-memory solutions. As in-memory operation is an enterprise-edition feature, the associated costs are also considered.
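
    The three metrics measured in the paper (load time, memory footprint, query execution time) can be illustrated with a toy in-memory table (a Python list of tuples stands in for an in-memory engine; the table and query are invented, not TPC-H):

```python
import sys
import time

# "Load": build the in-memory table.
t0 = time.perf_counter()
lineitem = [(i, i % 7, float(i % 100)) for i in range(100_000)]
load_seconds = time.perf_counter() - t0
footprint_bytes = sys.getsizeof(lineitem)   # container only, not the rows

# "Query": a filtered aggregate, the shape of a TPC-H revenue query.
t0 = time.perf_counter()
revenue = sum(price for _, flag, price in lineitem if flag == 3)
query_seconds = time.perf_counter() - t0
```

    Real benchmarks would repeat each query, discard warm-up runs, and measure resident set size rather than container overhead, but the three quantities being timed are the same.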

  13. Metasynthetic computing and engineering of complex systems

    CERN Document Server

    Cao, Longbing

    2015-01-01

    Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro

  14. Reliable computer systems design and evaluation

    CERN Document Server

    Siewiorek, Daniel

    2014-01-01

    Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  15. A computer based Moessbauer spectrometer system

    International Nuclear Information System (INIS)

    Jin Ge; Li Yuzhi; Yin Zejie; Yao Chunbo; Li Tie; Tan Yexian; Wang Jian

    1999-01-01

    A computer-based Moessbauer spectrometer system with a single-chip processor for online control and data acquisition has been developed. The spectrometer is designed as a single-width NIM module and can be operated directly in a NIM crate. Because the structure of the spectrometer is quite flexible, the system is easy to configure with other kinds of Moessbauer drivers and can be used in other data acquisition systems.

  16. Reward-related learning via multiple memory systems.

    Science.gov (United States)

    Delgado, Mauricio R; Dickerson, Kathryn C

    2012-07-15

    The application of a neuroeconomic approach to the study of reward-related processes has provided significant insights into our understanding of human learning and decision making. Much of this research has focused primarily on the contributions of the corticostriatal circuitry, involved in trial-and-error reward learning. As a result, less consideration has been given to the potential influence of other neural mechanisms, such as the hippocampus, or to more common ways in which information is acquired and utilized in human society to reach a decision, such as through explicit instruction rather than trial-and-error learning. This review examines the individual contributions of multiple learning and memory neural systems and their interactions during human decision making in both normal and neuropsychiatric populations. Specifically, the anatomical and functional connectivity across multiple memory systems is highlighted to suggest that probing the role of the hippocampus and its interactions with the corticostriatal circuitry via the application of model-based neuroeconomic approaches may provide novel insights into neuropsychiatric populations that suffer from damage to one of these structures and as a consequence have deficits in learning, memory, or decision making. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  17. The endocannabinoid system in anxiety, fear memory and habituation

    Science.gov (United States)

    Ruehle, S; Rey, A Aparisi; Remmers, F

    2012-01-01

    Evidence for the involvement of the endocannabinoid system (ECS) in anxiety and fear has accumulated, providing leads for novel therapeutic approaches. In anxiety, a bidirectional influence of the ECS has been reported, whereby anxiolytic and anxiogenic responses have been obtained after both increases and decreases of the endocannabinoid tone. The recently developed genetic tools have revealed different but complementary roles for the cannabinoid type 1 (CB1) receptor on GABAergic and glutamatergic neuronal populations. This dual functionality, together with the plasticity of CB1 receptor expression, particularly on GABAergic neurons, as induced by stressful and rewarding experiences, gives the ECS a unique regulatory capacity for maintaining emotional homeostasis. However, the promiscuity of the endogenous ligands of the CB1 receptor complicates the interpretation of experimental data concerning ECS and anxiety. In fear memory paradigms, the ECS is mostly involved in the two opposing processes of reconsolidation and extinction of the fear memory. Whereas ECS activation impairs reconsolidation, proper extinction depends on intact CB1 receptor signalling. Thus, for both anxiety and fear memory processing, endocannabinoid signalling may ensure an appropriate reaction to stressful events. The ECS can therefore be considered a regulatory buffer system for emotional responses. PMID:21768162

  18. Large computer systems and new architectures

    International Nuclear Information System (INIS)

    Bloch, T.

    1978-01-01

    The super-computers of today are becoming quite specialized and one can no longer expect to get all the state-of-the-art software and hardware facilities in one package. In order to achieve faster and faster computing it is necessary to experiment with new architectures, and the cost of developing each experimental architecture into a general-purpose computer system is too high when one considers the relatively small market for these computers. The result is that such computers are becoming 'back-ends' either to special systems (BSP, DAP) or to anything (CRAY-1). Architecturally the CRAY-1 is the most attractive today since it guarantees a speed gain of a factor of two over a CDC 7600 thus allowing us to regard any speed up resulting from vectorization as a bonus. It looks, however, as if it will be very difficult to make substantially faster computers using only pipe-lining techniques and that it will be necessary to explore multiple processors working on the same problem. The experience which will be gained with the BSP and the DAP over the next few years will certainly be most valuable in this respect. (Auth.)

  19. Framework for computer-aided systems design

    International Nuclear Information System (INIS)

    Esselman, W.H.

    1992-01-01

    Advanced computer technology, analytical methods, graphics capabilities, and expert systems are contributing to significant changes in the design process, and continued progress is expected. Achieving the ultimate benefits of these computer-based design tools depends on successful research and development on a number of key issues, and a fundamental understanding of the design process is a prerequisite to developing them. In this paper a hierarchical systems design approach is described, and methods by which computers can assist the designer are examined. A framework is presented for developing computer-based design tools for power plant design. These tools include expert experience bases, tutorials, aids to decision making, and tools to develop the requirements, constraints, and interactions among subsystems and components. Early consideration of the functional tasks is encouraged. How to acquire an expert's experience base is a fundamental research problem. Computer-based guidance should be provided in a manner that supports the creativity, heuristic approaches, decision making, and meticulousness of a good designer.

  20. Linear filtering of systems with memory and application to finance

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We study the linear filtering problem for systems driven by continuous Gaussian processes V(1) and V(2) with memory described by two parameters. The processes V(j) have the virtue that they simultaneously possess stationary increments and simple semimartingale representations, and they allow for straightforward parameter estimation. After giving the semimartingale representations of V(j) by innovation theory, we derive Kalman-Bucy-type filtering equations for the systems. We apply the result to the optimal portfolio problem for an investor with partial observations, and we illustrate the tractability of the filtering algorithm by numerical implementations.
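
    The paper's filter is continuous-time and driven by Gaussian processes with memory, but the structure of a Kalman-Bucy-type recursion is easiest to see in a scalar discrete-time sketch (the standard textbook predict/update step, not the paper's equations):

```python
def kalman_step(x, p, z, q, r):
    """One predict/update step for a random-walk state observed in noise:
    x_k = x_{k-1} + w (variance q),  z_k = x_k + v (variance r)."""
    p = p + q                 # predict: variance grows by the process noise
    gain = p / (p + r)        # Kalman gain
    x = x + gain * (z - x)    # correct with the innovation z - x
    p = (1.0 - gain) * p      # posterior variance
    return x, p

# Filter three noisy observations of a constant state (q = 0).
x, p = 0.0, 1.0
for z in (1.0, 1.2, 0.9):
    x, p = kalman_step(x, p, z, q=0.0, r=1.0)
```

    With q = 0 and unit noise variances this collapses to Bayesian averaging of the observations with the prior, so p shrinks as 1/(1 + n).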

  1. Analogous Mechanisms of Selection and Updating in Declarative and Procedural Working Memory: Experiments and a Computational Model

    Science.gov (United States)

    Oberauer, Klaus; Souza, Alessandra S.; Druey, Michel D.; Gade, Miriam

    2013-01-01

    The article investigates the mechanisms of selecting and updating representations in declarative and procedural working memory (WM). Declarative WM holds the objects of thought available, whereas procedural WM holds representations of what to do with these objects. Both systems consist of three embedded components: activated long-term memory, a…

  2. Phase change memory

    CERN Document Server

    Qureshi, Moinuddin K

    2011-01-01

    As conventional memory technologies such as DRAM and Flash run into scaling challenges, architects and system designers are forced to look at alternative technologies for building future computer systems. This synthesis lecture begins by listing the requirements for a next generation memory technology and briefly surveys the landscape of novel non-volatile memories. Among these, Phase Change Memory (PCM) is emerging as a leading contender, and the authors discuss the material, device, and circuit advances underlying this exciting technology. The lecture then describes architectural solutions t

  3. Resolving time of scintillation camera-computer system and methods of correction for counting loss, 2

    International Nuclear Information System (INIS)

    Iinuma, Takeshi; Fukuhisa, Kenjiro; Matsumoto, Toru

    1975-01-01

    Following the previous work, the counting-rate performance of camera-computer systems was investigated for two modes of data acquisition. The first was the ''LIST'' mode, in which image data and timing signals were sequentially stored on magnetic disk or tape via a buffer memory. The second was the ''HISTOGRAM'' mode, in which image data were stored in a core memory as digital images and the images were then transferred to magnetic disk or tape at the frame-timing signal. Firstly, the counting-rates stored in the buffer memory were measured as a function of the display event-rates of the scintillation camera for the two modes. For both modes, the stored counting-rates (M) were expressed by the formula M = N(1 - N·τ), where N is the display event-rate of the camera and τ is the resolving time, including analog-to-digital conversion time and memory cycle time. The resolving time for each mode may have been different, but it was about 10 μs for both modes in our computer system (TOSBAC 3400 model 31). Secondly, the data transfer speed from the buffer memory to the external memory, such as magnetic disk or tape, was considered for the two modes. For the ''LIST'' mode, the maximum stored counting-rate from the camera was expressed in terms of the size of the buffer memory and the access time and data transfer-rate of the external memory. For the ''HISTOGRAM'' mode, the minimum frame time was determined by the size of the buffer memory and the access time and transfer rate of the external memory. In our system, the maximum stored counting-rate was about 17,000 counts/sec with a buffer size of 2,000 words, and the minimum frame time was about 130 msec with a buffer size of 1,024 words. These values agree well with the calculated ones. The present analysis makes it possible to design camera-computer systems for quantitative dynamic imaging, and future improvements are suggested. (author)
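
    The counting-loss formula M = N(1 - N·τ) is directly usable for dead-time correction. A short sketch with τ = 10 μs as quoted (the inversion, taking the smaller quadratic root, is a standard addition for illustration, not taken from the paper):

```python
def stored_rate(n, tau):
    """Stored counting rate M = N(1 - N*tau) for display event rate N."""
    return n * (1.0 - n * tau)

def display_rate(m, tau):
    """Invert M = N(1 - N*tau), taking the smaller root (valid while
    losses are moderate, i.e. N*tau < 0.5)."""
    return (1.0 - (1.0 - 4.0 * tau * m) ** 0.5) / (2.0 * tau)

tau = 10e-6                               # ~10 microsecond resolving time
m = stored_rate(20_000, tau)              # 20% lost: about 16,000 counts/s
peak = stored_rate(1.0 / (2.0 * tau), tau)  # ceiling 1/(4*tau) = 25,000/s
```

    The ceiling 1/(4τ) shows why a 10 μs resolving time caps usable rates well below the camera's raw event rate.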

  4. Expert-systems and computer-based industrial systems

    International Nuclear Information System (INIS)

    Terrien, J.F.

    1987-01-01

    Framatome makes wide use of expert systems, computer-assisted engineering, production management and personnel training. It has set up separate business units and subsidiaries, and also participates in other companies which have the relevant expertise. Five examples of the products and services available in these fields are discussed: applied artificial intelligence and expert systems, integrated computer-aided design and engineering, structural analysis, computer-related products and services, and document management systems. The structure of the companies involved and the work they are doing are discussed. (UK)

  5. Architecture, systems research and computational sciences

    CERN Document Server

    2012-01-01

    The Winter 2012 (vol. 14 no. 1) issue of the Nexus Network Journal is dedicated to the theme “Architecture, Systems Research and Computational Sciences”. This is an outgrowth of the session by the same name which took place during the eighth international, interdisciplinary conference “Nexus 2010: Relationships between Architecture and Mathematics, held in Porto, Portugal, in June 2010. Today computer science is an integral part of even strictly historical investigations, such as those concerning the construction of vaults, where the computer is used to survey the existing building, analyse the data and draw the ideal solution. What the papers in this issue make especially evident is that information technology has had an impact at a much deeper level as well: architecture itself can now be considered as a manifestation of information and as a complex system. The issue is completed with other research papers, conference reports and book reviews.

  6. Non-volatile main memory management methods based on a file system.

    Science.gov (United States)

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are examples. Such NV memory can be used as storage, because its data persist without a power supply, and it can also be used as main memory, because its performance matches that of DRAM. A number of studies have investigated its use for main memory and for storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as the basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed an evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects of the longer access latency of NV memory by cycle-accurate full-system simulation; the results show that the effect on page allocation cost is limited if the increase in latency is moderate.
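
    The core idea, one byte-addressable region serving as both main memory and storage, can be sketched with a toy bump allocator (an anonymous mmap stands in for NV memory here; the paper's implementation instead manages real NV memory through Linux file-system data structures):

```python
import mmap

class NvRegion:
    """Toy allocator over a byte-addressable region: an allocation's
    offset doubles as its persistent 'storage' address."""
    def __init__(self, size):
        self.buf = mmap.mmap(-1, size)  # anonymous region (NV stand-in)
        self.next = 0                   # bump pointer
    def alloc(self, data):
        off = self.next
        self.buf[off:off + len(data)] = data
        self.next += len(data)
        return off
    def read(self, off, length):
        return bytes(self.buf[off:off + length])

region = NvRegion(4096)
off = region.alloc(b"persistent bytes")
```

    On real NV memory the same bytes would survive power-off, so the allocator's metadata must also be persistent; that is why the choice of on-media data structures (inodes and extents designed for disks versus byte-granular layouts) matters, as the paper's third finding notes.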

  7. Thermodynamic framework for information in nanoscale systems with memory.

    Science.gov (United States)

    Arias-Gonzalez, J Ricardo

    2017-11-28

    Information is represented by linear strings of symbols with memory that carry errors as a result of their stochastic nature. Proofreading and editing are assumed to improve certainty, although such processes may not be effective. Here, we develop a thermodynamic theory for material chains made up of nanoscopic subunits with symbolic meaning in the presence of memory. This framework is based on the characterization of single sequences of symbols constructed under a protocol, and it is used to derive the behavior of ensembles of sequences similarly constructed. We then analyze the role of proofreading and editing in the presence of memory, finding conditions that make revision an effective process, namely, one that decreases the entropy of the chain. Finally, we apply our formalism to DNA replication and RNA transcription, finding that the Watson-Crick hybridization energies with which nucleotides are attached to the template strand during the copying process are optimal for regulating fidelity in proofreading. These results are important in applications of information theory to a variety of solid-state physical systems and other biomolecular processes.

  8. An integrated radiation physics computer code system.

    Science.gov (United States)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  9. Towards a new PDG computing system

    International Nuclear Information System (INIS)

    Beringer, J; Dahl, O; Zyla, P; Jackson, K; McParland, C; Poon, S; Robertson, D

    2011-01-01

    The computing system that supports the worldwide Particle Data Group (PDG) of over 170 authors in the production of the Review of Particle Physics was designed more than 20 years ago. It has reached its scalability and usability limits and can no longer satisfy the requirements and wishes of PDG collaborators and users alike. We discuss the ongoing effort to modernize the PDG computing system, including requirements, architecture and status of implementation. The new system will provide improved user features and will fully support the PDG collaboration from distributed web-based data entry, work flow management, authoring and refereeing to data verification and production of the web edition and manuscript for the publisher. Cross-linking with other HEP information systems will be greatly improved.

  10. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  11. Lumber Grading With A Computer Vision System

    Science.gov (United States)

    Richard W. Conners; Tai-Hoon Cho; Philip A. Araman

    1989-01-01

    Over the past few years significant progress has been made in developing a computer vision system for locating and identifying defects on surfaced hardwood lumber. Unfortunately, until September of 1988 little research had gone into developing methods for analyzing rough lumber. This task is arguably more complex than the analysis of surfaced lumber. The prime...

  12. Model Checking - Automated Verification of Computational Systems

    Indian Academy of Sciences (India)

    Model Checking - Automated Verification of Computational Systems. Madhavan Mukund. General Article, Resonance – Journal of Science Education, Volume 14, Issue 7, July 2009, pp. 667-681.

  13. Prestandardisation Activities for Computer Based Safety Systems

    DEFF Research Database (Denmark)

    Taylor, J. R.; Bologna, S.; Ehrenberger, W.

    1981-01-01

    Questions of technical safety are becoming more and more important. Due to the higher complexity of their functions, computer-based safety systems have special problems. Researchers, producers, licensing personnel and customers have met on a European basis to exchange knowledge and formulate positions...

  14. Space systems computer-aided design technology

    Science.gov (United States)

    Garrett, L. B.

    1984-01-01

    The interactive Design and Evaluation of Advanced Spacecraft (IDEAS) system is described, together with planned capability increases in the IDEAS system. The system's disciplines consist of interactive graphics and interactive computing. A single user at an interactive terminal can create, design, analyze, and conduct parametric studies of earth-orbiting satellites, which represents a timely and cost-effective method during the conceptual design phase where various missions and spacecraft options require evaluation. Spacecraft concepts evaluated include microwave radiometer satellites, communication satellite systems, solar-powered lasers, power platforms, and orbiting space stations.

  15. Personal computer and workstation operating systems tutorial

    OpenAIRE

    Frame, Charles E.

    1994-01-01

    This thesis is a review and analysis of personal computer and workstation operating systems. The emphasis is placed on the UNIX, MS DOS, MS Windows and OS/2 operating systems. UNIX is covered under the U.S. Government POSIX standard, which dictates its use when practical. MS DOS is the most widely used operating system worldwide. OS/2 was developed to combat some of the shortcomings of MS DOS. Each operating system discussed has a design philosophy that fulfills specific users' needs. UNIX was de...

  16. Computer-aided control system design

    International Nuclear Information System (INIS)

    Lebenhaft, J.R.

    1986-01-01

    Control systems are typically implemented using conventional PID controllers, which are then tuned manually during plant commissioning to compensate for interactions between feedback loops. As plants increase in size and complexity, such controllers can fail to provide adequate process regulation. Multivariable methods can be utilized to overcome these limitations. At the Chalk River Nuclear Laboratories, modern control systems are designed and analyzed with the aid of MVPACK, a system of computer programs that appears to the user like a high-level calculator. The software package solves complicated control problems and provides useful insight into the dynamic response and stability of multivariable systems.
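A small sketch of the kind of multivariable analysis such a package automates: checking the stability of a coupled two-loop plant x' = Ax from the eigenvalues of its state matrix. The 2x2 matrix entries below are invented for illustration; this is not MVPACK code.

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of a 2x2 real matrix [[a, b], [c, d]] from the
    characteristic polynomial s^2 - (a+d)s + (ad - bc)."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return [(tr + r) / 2, (tr - r) / 2]            # real pair
    r = math.sqrt(-disc)
    return [complex(tr / 2, r / 2), complex(tr / 2, -r / 2)]

# Hypothetical coupled two-loop plant: off-diagonal terms are the
# loop interactions a manually tuned PID design would struggle with.
A = (-1.0, 0.5,
     0.2, -2.0)
eigs = eig2(*A)
stable = all((e.real if isinstance(e, complex) else e) < 0 for e in eigs)
print(eigs, stable)
```

Stability of the coupled system requires every eigenvalue to have a negative real part, which is what the final check asserts.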

  17. Adaptive Fuzzy Systems in Computational Intelligence

    Science.gov (United States)

    Berenji, Hamid R.

    1996-01-01

    In recent years, interest in computational intelligence techniques, which currently include neural networks, fuzzy systems, and evolutionary programming, has grown significantly, and a number of their applications have been developed in government and industry. In the future, an essential element of these systems will be fuzzy systems that can learn from experience by using neural networks to refine their performance. The GARIC architecture, introduced earlier, is an example of a fuzzy reinforcement learning system that has been applied in several control domains, such as cart-pole balancing, simulation of Space Shuttle orbital operations, and tether control. A number of examples from GARIC's applications in these domains will be demonstrated.

  18. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  19. Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.

    Science.gov (United States)

    Dworak, Markus; Schierl, Thomas; Bruns, Thomas; Strüder, Heiko Klaus

    2007-11-01

    Television and computer game consumption are a powerful influence in the lives of most children. Previous evidence has supported the notion that media exposure could impair a variety of behavioral characteristics. Excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the effects of singular excessive media consumption on sleep patterns and the linked implications for children. The aim of this study was to investigate the effects of singular excessive television and computer game consumption on sleep patterns and memory performance of children. Eleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance. Only computer game playing resulted in significantly reduced amounts of slow-wave sleep as well as significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after previous computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns. The results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of the negative influence of media consumption on children's sleep, learning, and memory.

  20. The Case for Higher Computational Density in the Memory-Bound FDTD Method within Multicore Environments

    Directory of Open Access Journals (Sweden)

    Mohammed F. Hadi

    2012-01-01

    It is argued here that alternate algorithms that are more accurate, though more compute-intensive, than certain computational methods deemed too inefficient and wasteful when implemented within serial codes can be more efficient and cost-effective when implemented in parallel codes designed to run on today's multicore and many-core environments. This argument is most germane to methods that involve large data sets with relatively limited computational density—in other words, algorithms with small ratios of floating-point operations to memory accesses. The examples chosen here to support this argument represent a variety of high-order finite-difference time-domain algorithms. It will be demonstrated that a three- to eightfold increase in floating-point operations due to higher-order finite differences will translate to only two- to threefold increases in actual run times using either graphical or central processing units of today. It is hoped that this argument will convince researchers to revisit certain numerical techniques that have long been shelved and reevaluate them for multicore usability.
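The run-time claim can be made concrete with a toy roofline-style model: when a kernel is memory-bound, extra floating-point work is partially hidden behind memory traffic. The peak-throughput, bandwidth, and operation counts below are invented for illustration, not measurements from the paper.

```python
def runtime(flops, bytes_moved, peak_gflops=1000.0, bw_gbs=100.0):
    """Simple roofline estimate: execution time is bounded by the
    slower of compute (flops / peak) and memory traffic (bytes / BW).
    Peak and bandwidth figures are hypothetical."""
    return max(flops / (peak_gflops * 1e9), bytes_moved / (bw_gbs * 1e9))

# Toy comparison: a higher-order stencil does 4x the floating-point
# work on essentially the same data, moving only slightly more memory.
base = runtime(flops=30e9, bytes_moved=40e9)    # low-order update
high = runtime(flops=120e9, bytes_moved=50e9)   # high-order update
print(round(high / base, 2))   # 1.25: 4x the flops, only 25% slower
```

Both configurations are limited by memory traffic rather than arithmetic, so the fourfold increase in operations costs far less than fourfold in time, mirroring the paper's three-to-eightfold versus two-to-threefold observation.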

  1. A Primer on Memory Consistency and Cache Coherence

    CERN Document Server

    Sorin, Daniel; Wood, David

    2011-01-01

    Many modern computer systems and most multicore chips (chip multiprocessors) support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence protocols that ensure that multiple cached
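The load/store rules such a consistency model governs are often illustrated with the classic store-buffering litmus test. The sketch below, written for illustration rather than taken from the primer, enumerates every sequentially consistent interleaving of two two-instruction threads and shows that the outcome (0, 0) never occurs under SC, although a machine with store buffers (e.g. x86-TSO) permits it.

```python
from itertools import permutations

# Store-buffering litmus test: T0 runs {x=1; r0=y}, T1 runs {y=1; r1=x}.
# Under sequential consistency every execution is some interleaving of
# the four operations that preserves each thread's program order.
OPS = [("T0", "store", "x"), ("T0", "load", "y"),
       ("T1", "store", "y"), ("T1", "load", "x")]

def sc_outcomes():
    results = set()
    for order in permutations(range(4)):
        # discard interleavings that violate per-thread program order
        if order.index(0) > order.index(1) or order.index(2) > order.index(3):
            continue
        mem, regs = {"x": 0, "y": 0}, {}
        for i in order:
            thread, kind, var = OPS[i]
            if kind == "store":
                mem[var] = 1
            else:
                regs[thread] = mem[var]
        results.add((regs["T0"], regs["T1"]))
    return results

print(sorted(sc_outcomes()))   # [(0, 1), (1, 0), (1, 1)] — never (0, 0)
```

A weaker model that lets each store sit in a private store buffer while the subsequent load reads memory would add (0, 0) to this set, which is exactly the kind of architecturally visible behavior a consistency model must pin down.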

  2. Logic and memory concepts for all-magnetic computing based on transverse domain walls

    Science.gov (United States)

    Vandermeulen, J.; Van de Wiele, B.; Dupré, L.; Van Waeyenberge, B.

    2015-06-01

    We introduce a non-volatile digital logic and memory concept in which the binary data is stored in the transverse magnetic domain walls present in in-plane magnetized nanowires with sufficiently small cross sectional dimensions. We assign the digital bit to the two possible orientations of the transverse domain wall. Numerical proofs-of-concept are presented for a NOT-, AND- and OR-gate, a FAN-out as well as a reading and writing device. Contrary to the chirality based vortex domain wall logic gates introduced in Omari and Hayward (2014 Phys. Rev. Appl. 2 044001), the presented concepts remain applicable when miniaturized and are driven by electrical currents, making the technology compatible with the in-plane racetrack memory concept. The individual devices can be easily combined to logic networks working with clock speeds that scale linearly with decreasing design dimensions. This opens opportunities to an all-magnetic computing technology where the digital data is stored and processed under the same magnetic representation.

  3. Logic and memory concepts for all-magnetic computing based on transverse domain walls

    International Nuclear Information System (INIS)

    Vandermeulen, J; Van de Wiele, B; Dupré, L; Van Waeyenberge, B

    2015-01-01

    We introduce a non-volatile digital logic and memory concept in which the binary data is stored in the transverse magnetic domain walls present in in-plane magnetized nanowires with sufficiently small cross sectional dimensions. We assign the digital bit to the two possible orientations of the transverse domain wall. Numerical proofs-of-concept are presented for a NOT-, AND- and OR-gate, a FAN-out as well as a reading and writing device. Contrary to the chirality based vortex domain wall logic gates introduced in Omari and Hayward (2014 Phys. Rev. Appl. 2 044001), the presented concepts remain applicable when miniaturized and are driven by electrical currents, making the technology compatible with the in-plane racetrack memory concept. The individual devices can be easily combined to logic networks working with clock speeds that scale linearly with decreasing design dimensions. This opens opportunities to an all-magnetic computing technology where the digital data is stored and processed under the same magnetic representation. (paper)

  4. Counterbalancing Regulation in Response Memory of a Positively Autoregulated Two-Component System.

    Science.gov (United States)

    Gao, Rong; Godfrey, Katherine A; Sufian, Mahir A; Stock, Ann M

    2017-09-15

    Fluctuations in nutrient availability often result in recurrent exposures to the same stimulus conditions. The ability to memorize the past event and use the "memory" to make adjustments to current behaviors can lead to a more efficient adaptation to the recurring stimulus. A short-term phenotypic memory can be conferred via carryover of the response proteins to facilitate the recurrent response, but the additional accumulation of response proteins can lead to a deviation from response homeostasis. We used the Escherichia coli PhoB/PhoR two-component system (TCS) as a model system to study how cells cope with the recurrence of environmental phosphate (Pi) starvation conditions. We discovered that "memory" of prior Pi starvation can exert distinct effects through two regulatory pathways, the TCS signaling pathway and the stress response pathway. Although carryover of TCS proteins can lead to higher initial levels of transcription factor PhoB and a faster initial response in prestarved cells than in cells not starved, the response enhancement can be overcome by an earlier and greater repression of promoter activity in prestarved cells due to the memory of the stress response. The repression counterbalances the carryover of the response proteins, leading to a homeostatic response whether or not cells are prestimulated. A computational model based on sigma factor competition was developed to understand the memory of stress response and to predict the homeostasis of other PhoB-regulated response proteins. Our insight into the history-dependent PhoBR response may provide a general understanding of how TCSs respond to recurring stimuli and adapt to fluctuating environmental conditions. IMPORTANCE Bacterial cells in their natural environments experience scenarios that are far more complex than are typically replicated in laboratory experiments. 
The architectures of signaling systems and the integration of multiple adaptive pathways have evolved to deal with such complexity.
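As a loose illustration only (not the authors' sigma-factor-competition model), a toy discrete-time simulation can show how a remembered repression term counteracts carryover of response protein across two stimulus pulses; every parameter below is invented.

```python
def simulate(pulses, steps=50, k_on=1.0, decay=0.1, repression=0.6):
    """Toy model: during a pulse, protein P is produced at a rate set
    by promoter activity and decays; between pulses it only decays.
    After the first pulse a remembered repression factor scales the
    promoter down. Parameters are illustrative, not fitted data."""
    p, history = 0.0, []
    for pulse in range(pulses):
        promoter = k_on * (repression if pulse > 0 else 1.0)
        for _ in range(steps):                 # stimulation
            p += promoter - decay * p
            history.append(p)
        for _ in range(steps):                 # recovery between pulses
            p -= decay * p
            history.append(p)
    return history

first_peak = simulate(1)[49]                   # end of first stimulation
second_peak = simulate(2)[149]                 # end of second stimulation
print(round(first_peak, 2), round(second_peak, 2))   # 9.95 5.97
```

Carryover alone would make the second response overshoot; in this sketch the repression memory pulls it back below the naive level, a crude analogue of the counterbalancing the abstract describes.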

  5. Time-Predictable Virtual Memory

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2016-01-01

    Virtual memory is an important feature of modern computer architectures. For hard real-time systems, memory protection is a particularly interesting feature of virtual memory. However, current memory management units are not designed for time-predictability and therefore cannot be used...... in such systems. This paper investigates the requirements on virtual memory from the perspective of hard real-time systems and presents the design of a time-predictable memory management unit. Our evaluation shows that the proposed design can be implemented efficiently. The design allows address translation...... and address range checking in constant time of two clock cycles on a cache miss. This constant time is in strong contrast to the possible cost of a miss in a translation look-aside buffer in traditional virtual memory organizations. Compared to a platform without a memory management unit, these two additional...
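The contrast drawn here can be sketched in Python: a segment-style translation runs in constant time (one bounds check, one addition), whereas paged translation has a fast TLB hit and a variable-cost miss path. Field names and layout below are hypothetical, not the paper's design.

```python
PAGE = 4096

def translate_segment(vaddr, base, limit):
    """Constant-time translation: one range check, one addition,
    analogous to the two-cycle bound claimed for the proposed MMU."""
    if vaddr >= limit:
        raise MemoryError("address out of range")
    return base + vaddr

def translate_paged(vaddr, tlb, page_table):
    """TLB hit is fast, but a miss falls back to a table lookup of
    unpredictable cost — the case hard real-time systems must bound."""
    vpn, offset = divmod(vaddr, PAGE)
    if vpn in tlb:
        return tlb[vpn] * PAGE + offset
    frame = page_table[vpn]            # the slow, variable-latency path
    tlb[vpn] = frame                   # refill the TLB
    return frame * PAGE + offset

print(hex(translate_segment(0x10, base=0x8000, limit=0x4000)))  # 0x8010
```

For worst-case execution-time analysis, the segment path has a single fixed cost, while the paged path forces the analysis to assume a miss on every access.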

  6. From Augustine of Hippo's Memory Systems to Our Modern Taxonomy in Cognitive Psychology and Neuroscience of Memory: A 16-Century Nap of Intuition before Light of Evidence.

    Science.gov (United States)

    Cassel, Jean-Christophe; Cassel, Daniel; Manning, Lilianne

    2013-03-01

    Over the last half century, neuropsychologists, cognitive psychologists and cognitive neuroscientists interested in human memory have accumulated evidence showing that there is not one general memory function but a variety of memory systems deserving distinct (but for an organism, complementary) functional entities. The first attempts to organize memory systems within a taxonomic construct are often traced back to the French philosopher Maine de Biran (1766-1824), who, in his book first published in 1803, distinguished mechanical memory, sensitive memory and representative memory, without, however, providing any experimental evidence in support of his view. It turns out, however, that what might be regarded as the first elaborated taxonomic proposal is 14 centuries older and is due to Augustine of Hippo (354-430), also named St Augustine, who, in Book 10 of his Confessions, by means of an introspective process that did not aim at organizing memory systems, nevertheless distinguished and commented on sensible memory, intellectual memory, memory of memories, memory of feelings and passion, and memory of forgetting. These memories were envisaged as different and complementary instances. In the current study, after a short biographical synopsis of St Augustine, we provide an outline of the philosopher's contribution, both in terms of questions and answers, and focus on how this contribution almost perfectly fits with several viewpoints of modern psychology and neuroscience of memory about human memory functions, including the notion that episodic autobiographical memory stores events of our personal history in their what, where and when dimensions, and from there enables our mental time travel. 
This is not to say that St Augustine's elaboration was the basis for the modern taxonomy, but the similarity is striking, and the architecture of our current viewpoints about memory systems might have preexisted as an outstanding intuition in the philosopher's work.

  7. Simulation of radiation effects on three-dimensional computer optical memories

    Science.gov (United States)

    Moscovitch, M.; Emfietzoglou, D.

    1997-01-01

    A model was developed to simulate the effects of heavy charged-particle (HCP) radiation on the information stored in three-dimensional computer optical memories. The model is based on (i) the HCP track radial dose distribution, (ii) the spatial and temporal distribution of temperature in the track, (iii) the matrix-specific radiation-induced changes that will affect the response, and (iv) the kinetics of transition of photochromic molecules from the colored to the colorless isomeric form (bit flip). It is shown that information stored in a volume of several nanometers radius around the particle's track axis may be lost. The magnitude of the effect is dependent on the particle's track structure.

  8. Simulation of radiation effects on three-dimensional computer optical memories

    International Nuclear Information System (INIS)

    Moscovitch, M.; Emfietzoglou, D.

    1997-01-01

    A model was developed to simulate the effects of heavy charged-particle (HCP) radiation on the information stored in three-dimensional computer optical memories. The model is based on (i) the HCP track radial dose distribution, (ii) the spatial and temporal distribution of temperature in the track, (iii) the matrix-specific radiation-induced changes that will affect the response, and (iv) the kinetics of transition of photochromic molecules from the colored to the colorless isomeric form (bit flip). It is shown that information stored in a volume of several nanometers radius around the particle's track axis may be lost. The magnitude of the effect is dependent on the particle's track structure. copyright 1997 American Institute of Physics

  9. A look at computer system selection criteria

    Science.gov (United States)

    Poole, E. W.; Flowers, F. L.; Stanley, W. I. (Principal Investigator)

    1979-01-01

    There is no difficulty in identifying the criteria involved in the computer selection process; complexity arises in objectively evaluating various candidate configurations against the criteria, based on the user's specific needs. A model for formalizing the selection process consists of two major steps: verifying that the candidate configuration is adequate to the user's programming requirements, and determining an overall system evaluation rating based on cost, usability, adaptability, and availability. A 36 step instruction for computer sizing evaluation is included in the appendix along with a sample application of the configuration adequacy model. Selection criteria and the weighting process are also discussed.
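The two-step selection model can be sketched as follows: first verify that a candidate configuration is adequate to the user's programming requirements, then compute a weighted overall rating from the four stated criteria. The weights, capability names, and scores below are invented for illustration.

```python
# Hypothetical weights over the four criteria named in the record:
# cost, usability, adaptability, availability (must sum to 1.0).
WEIGHTS = {"cost": 0.4, "usability": 0.25,
           "adaptability": 0.2, "availability": 0.15}

def evaluate(candidate, requirements):
    """Step 1: adequacy check against requirements.
    Step 2: weighted overall rating, or None if inadequate."""
    caps = candidate["capabilities"]
    if not all(caps.get(name, 0) >= need
               for name, need in requirements.items()):
        return None                    # fails the adequacy check
    return sum(WEIGHTS[c] * candidate["scores"][c] for c in WEIGHTS)

candidate = {"capabilities": {"memory_mb": 512, "mips": 10},
             "scores": {"cost": 7, "usability": 8,
                        "adaptability": 6, "availability": 9}}
rating = evaluate(candidate, {"memory_mb": 256, "mips": 5})
print(round(rating, 2))                # 7.35
```

Separating the hard adequacy gate from the soft weighted score keeps clearly undersized configurations from winning on price alone, which is the point of the two-step model.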

  10. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  11. Music Genre Classification Systems - A Computational Approach

    DEFF Research Database (Denmark)

    Ahrendt, Peter

    2006-01-01

    Automatic music genre classification is the classification of a piece of music into its corresponding genre (such as jazz or rock) by a computer. It is considered to be a cornerstone of the research area Music Information Retrieval (MIR) and closely linked to the other areas in MIR. It is thought...... that MIR will be a key element in the processing, searching and retrieval of digital music in the near future. This dissertation is concerned with music genre classification systems and in particular systems which use the raw audio signal as input to estimate the corresponding genre. This is in contrast...... to systems which use e.g. a symbolic representation or textual information about the music. The approach to music genre classification systems has here been system-oriented. In other words, all the different aspects of the systems have been considered and it is emphasized that the systems should...
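As a toy illustration of working directly from the raw audio signal (far simpler than the features a real genre classifier would use), short-time energy and zero-crossing rate can be computed straight from the samples. The synthetic signals below are invented stand-ins for audio.

```python
import math

def features(signal):
    """Two classic raw-audio descriptors: average energy and
    zero-crossing rate (a crude proxy for spectral brightness)."""
    energy = sum(s * s for s in signal) / len(signal)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / len(signal)
    return energy, zcr

SR = 8000                              # hypothetical sample rate, Hz
low = [math.sin(2 * math.pi * 100 * t / SR) for t in range(SR)]    # low hum
high = [math.sin(2 * math.pi * 3000 * t / SR) for t in range(SR)]  # bright tone
e_low, z_low = features(low)
e_high, z_high = features(high)
print(z_low < z_high)                  # the brighter signal crosses zero more
```

A genre classifier of the kind discussed would extract many such descriptors over short frames and feed them to a statistical model; the point here is only that useful numbers can be computed from samples alone, without any symbolic or textual representation of the music.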

  12. modeling workflow management in a distributed computing system

    African Journals Online (AJOL)

    Dr Obe

    extension in colour, time etc., have been applied successfully to model and ... The general characteristics are: (i) The absence of shared memory; (ii) Unpredictable inter-node communication delays; and (iii) Practically no global system state observable by component machines. Due to the lack of shared memory, inter-site.

  13. Nature-inspired computing for control systems

    CERN Document Server

    2016-01-01

    The book presents recent advances in nature-inspired computing, giving a special emphasis to control systems applications. It reviews different techniques used for simulating physical, chemical, biological or social phenomena at the purpose of designing robust, predictive and adaptive control strategies. The book is a collection of several contributions, covering either more general approaches in control systems, or methodologies for control tuning and adaptive controllers, as well as exciting applications of nature-inspired techniques in robotics. On one side, the book is expected to motivate readers with a background in conventional control systems to try out these powerful techniques inspired by nature. On the other side, the book provides advanced readers with a deeper understanding of the field and a broad spectrum of different methods and techniques. All in all, the book is an outstanding, practice-oriented reference guide to nature-inspired computing addressing graduate students, researchers and practi...

  14. Reactor safety: the Nova computer system

    International Nuclear Information System (INIS)

    Eisgruber, H.; Stadelmann, W.

    1991-01-01

    After instances of maloperation, the causes of defects, the effectiveness of the measures taken to control the situation, and possibilities to avoid future recurrences need to be investigated above all before the plant is restarted. The most important aspect in all these efforts is to check the sequence in time, and the completeness, of the control measures initiated automatically. For this verification, a computer system is used instead of time-consuming manual analytical techniques; it produces the necessary information almost in real time. The results are available within minutes after completion of the measures initiated automatically. As all short-term safety functions are initiated by automatic systems, their consistent and comprehensive verification results in a clearly higher level of safety. The report covers the development of the computer system, and its implementation, in the Gundremmingen nuclear power station. Similar plans are being pursued in Biblis and Muelheim-Kaerlich. (orig.) [de

  15. Computer control system of TARN-2

    International Nuclear Information System (INIS)

    Watanabe, S.

    1989-01-01

    The CAMAC interface system is employed in order to regulate the power supply, beam diagnostics and so on. Five CAMAC stations are located in the TARN-2 area and are linked with a serial highway system. The CAMAC serial highway is driven by a serial highway driver, Kinetic 3992, which is housed in the CAMAC powered crate and controlled in two ways: one by the minicomputer through the standard branch-highway crate controller, named Type-A2, and the other by the microcomputer through the auxiliary crate controller. The CAMAC serial highway comprises two-way optical cables with a total length of 300 m. Each CAMAC station has the serial and auxiliary crate controllers so as to realize alternative control with the local computer system. The interpreter INSBASIC is used on the main control computer. INSBASIC offers many kinds of 'device control functions'. Because a 'device control function' encapsulates the physical operating procedure of a device, only knowledge of the logical operating procedure is required. A touch panel system is employed to regulate the complicated control flow without any knowledge of the usage of the device. A rotary encoder system, which is analogous to potentiometer operation, is also available for smooth adjustment of setting parameters. (author)

  16. National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J. LLNL

    1998-01-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance

  17. Shape memory system with integrated actuation using embedded particles

    Science.gov (United States)

    Buckley, Patrick R [New York, NY; Maitland, Duncan J [Pleasant Hill, CA

    2009-09-22

    A shape memory material with integrated actuation using embedded particles. One embodiment provides a shape memory material apparatus comprising a shape memory material body and magnetic pieces in the shape memory material body. Another embodiment provides a method of actuating a device to perform an activity on a subject comprising the steps of positioning a shape memory material body in a desired position with regard to the subject, the shape memory material body capable of being formed in a specific primary shape, reformed into a secondary stable shape, and controllably actuated to recover the specific primary shape; including pieces in the shape memory material body; and actuating the shape memory material body using the pieces causing the shape memory material body to be controllably actuated to recover the specific primary shape and perform the activity on the subject.

  18. Shape memory system with integrated actuation using embedded particles

    Science.gov (United States)

    Buckley, Patrick R.; Maitland, Duncan J.

    2014-04-01

    A shape memory material with integrated actuation using embedded particles. One embodiment provides a shape memory material apparatus comprising a shape memory material body and magnetic pieces in the shape memory material body. Another embodiment provides a method of actuating a device to perform an activity on a subject comprising the steps of positioning a shape memory material body in a desired position with regard to the subject, the shape memory material body capable of being formed in a specific primary shape, reformed into a secondary stable shape, and controllably actuated to recover the specific primary shape; including pieces in the shape memory material body; and actuating the shape memory material body using the pieces causing the shape memory material body to be controllably actuated to recover the specific primary shape and perform the activity on the subject.

  19. Decomposability queueing and computer system applications

    CERN Document Server

    Courtois, P J

    1977-01-01

    Decomposability: Queueing and Computer System Applications presents a set of powerful methods for systems analysis. This 10-chapter text covers the theory of nearly completely decomposable systems upon which specific analytic methods are based.The first chapters deal with some of the basic elements of a theory of nearly completely decomposable stochastic matrices, including the Simon-Ando theorems and the perturbation theory. The succeeding chapters are devoted to the analysis of stochastic queuing networks that appear as a type of key model. These chapters also discuss congestion problems in
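A minimal example of a nearly completely decomposable stochastic matrix: two strongly coupled 2-state blocks with weak (epsilon) coupling between them, aggregated into a 2-state macro chain. Uniform within-block weighting is used here as a crude stand-in for the conditional stationary distributions the theory prescribes; the matrix entries are invented.

```python
eps = 0.01    # weak inter-block coupling

# Row-stochastic 4x4 matrix: states {0,1} and {2,3} form the two blocks.
P = [[0.59 - eps, 0.41,       eps,        0.0],
     [0.30,       0.70 - eps, 0.0,        eps],
     [eps,        0.0,        0.45,       0.55 - eps],
     [0.0,        eps,        0.60,       0.40 - eps]]

blocks = [(0, 1), (2, 3)]

def aggregate(P, blocks):
    """Macro transition probabilities: sum each inter-block mass and
    average uniformly over the source block's states."""
    k = len(blocks)
    A = [[0.0] * k for _ in range(k)]
    for i, src in enumerate(blocks):
        for j, dst in enumerate(blocks):
            A[i][j] = sum(P[s][d] for s in src for d in dst) / len(src)
    return A

A = aggregate(P, blocks)
print([[round(x, 3) for x in row] for row in A])   # [[0.99, 0.01], [0.01, 0.99]]
```

The macro chain switches blocks with probability of order epsilon, which is the structure the Simon-Ando theorems exploit: short-run dynamics happen within blocks, long-run dynamics between them.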

  20. Computer-controlled radiation monitoring system

    International Nuclear Information System (INIS)

    Homann, S.G.

    1994-01-01

    A computer-controlled radiation monitoring system was designed and installed at the Lawrence Livermore National Laboratory's Multiuser Tandem Laboratory (10 MV tandem accelerator from High Voltage Engineering Corporation). The system continuously monitors the photon and neutron radiation environment associated with the facility and automatically suspends accelerator operation if preset radiation levels are exceeded. The system has provided reliable real-time radiation monitoring over the past five years and has been a valuable tool for maintaining personnel exposure as low as reasonably achievable.
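The automatic-suspension behavior described here reduces to simple interlock logic: poll the detector channels and drop the accelerator-enable signal when any reading exceeds its preset level. The channel names and limits below are invented for illustration.

```python
# Hypothetical preset limits per monitoring channel (dose rate units).
LIMITS = {"photon_mrem_h": 2.0, "neutron_mrem_h": 1.0}

def over_limit(readings, limits=LIMITS):
    """Return the channels whose readings exceed their preset levels."""
    return [ch for ch, value in readings.items() if value > limits[ch]]

def step(readings, accelerator_on=True):
    """One polling cycle: suspend operation if any channel is over."""
    over = over_limit(readings)
    if over:
        accelerator_on = False         # automatic suspension
    return accelerator_on, over

print(step({"photon_mrem_h": 0.3, "neutron_mrem_h": 0.1}))
print(step({"photon_mrem_h": 2.5, "neutron_mrem_h": 0.1}))
```

A real installation would add latching (operation stays suspended until a manual reset), fail-safe handling of dead channels, and logging, but the core decision is this threshold comparison.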

  1. Very Dense High Speed 3u VPX Memory and Processing Space Systems, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — While VPX shows promise as an open standard COTS computing and memory platform, there are several challenges that must be overcome to migrate the technology for a...

  2. Computational perspectives in the history of science: to the memory of Peter Damerow.

    Science.gov (United States)

    Laubichler, Manfred D; Maienschein, Jane; Renn, Jürgen

    2013-03-01

    Computational methods and perspectives can transform the history of science by enabling the pursuit of novel types of questions, dramatically expanding the scale of analysis (geographically and temporally), and offering novel forms of publication that greatly enhance access and transparency. This essay presents a brief summary of a computational research system for the history of science, discussing its implications for research, education, and publication practices and its connections to the open-access movement and similar transformations in the natural and social sciences that emphasize big data. It also argues that computational approaches help to reconnect the history of science to individual scientific disciplines.

  3. Applicability of Computational Systems Biology in Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Hadrup, Niels; Audouze, Karine Marie Laure

    2014-01-01

    Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. Such information can be used to establish hypotheses on links between the chemical and human diseases, and can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method...

  4. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1988-09-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multi-user Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100K. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the efficient implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. 3 refs

  5. Distributed computer controls for accelerator systems

    Science.gov (United States)

    Moore, T. L.

    1989-04-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed.

  6. Distributed computer controls for accelerator systems

    International Nuclear Information System (INIS)

    Moore, T.L.

    1989-01-01

    A distributed control system has been designed and installed at the Lawrence Livermore National Laboratory Multiuser Tandem Facility using an extremely modular approach in hardware and software. The two-tiered, geographically organized design allowed total system implementation within four months with a computer and instrumentation cost of approximately $100k. Since the system structure is modular, application to a variety of facilities is possible. Such a system allows rethinking of the operational style of the facilities, making possible highly reproducible and unattended operation. The impact of industry standards, i.e., UNIX, CAMAC, and IEEE-802.3, and the use of a graphics-oriented controls software suite allowed the effective implementation of the system. The definition, design, implementation, operation and total system performance will be discussed. (orig.)

  7. Computer aided system engineering for space construction

    Science.gov (United States)

    Racheli, Ugo

    1989-01-01

    This viewgraph presentation covers the following topics. Construction activities envisioned for the assembly of large platforms in space (as well as interplanetary spacecraft and bases on extraterrestrial surfaces) require computational tools that exceed the capability of conventional construction management programs. The Center for Space Construction is investigating the requirements for new computational tools and, at the same time, suggesting the expansion of graduate and undergraduate curricula to include proficiency in Computer Aided Engineering (CAE) through design courses and individual or team projects in advanced space systems design. In the center's research, special emphasis is placed on problems of constructability and of the interruptability of planned activity sequences to be carried out by crews operating under hostile environmental conditions. The departure point for the planned work is the acquisition of the MCAE I-DEAS software, developed by the Structural Dynamics Research Corporation (SDRC), and its expansion to the level of capability denoted by the acronym IDEAS**2 currently used for configuration maintenance on Space Station Freedom. In addition to improving proficiency in the use of I-DEAS and IDEAS**2, it is contemplated that new software modules will be developed to expand the architecture of IDEAS**2. Such modules will deal with those analyses that require the integration of a space platform's configuration with a breakdown of planned construction activities and with a failure modes analysis to support computer aided system engineering (CASE) applied to space construction.

  8. The effect on memory of chronic prednisone treatment in patients with systemic disease.

    Science.gov (United States)

    Keenan, P A; Jacobson, M W; Soleymani, R M; Mayes, M D; Stress, M E; Yaldoo, D T

    1996-12-01

    There have been no systematic investigations of the effects of glucocorticoid treatment on memory in a clinical population, despite experimental and clinical evidence that such treatment could cause memory disturbance. We conducted both cross-sectional and longitudinal studies. In Study 1, we administered tests of both hippocampal-dependent explicit memory and hippocampal-independent implicit memory to 25 prednisone-treated patients with systemic disease without CNS involvement and 25 matched clinical controls. All treated patients were taking doses of 5 to 40 mg of prednisone daily for at least 1 year. The glucocorticoid-treated group performed worse than the controls on tests of explicit memory, but the groups did not differ on the implicit memory task. Multiple regression analyses suggested that elderly patients are more susceptible to memory impairment with less protracted treatment. The results of Study 2, a prospective, longitudinal study of the effects of prednisone on memory across 3 months of therapy, suggest that even acute treatment can adversely affect memory. The observed alteration in memory was not secondary to inattention, affective disturbance, generalized global cognitive decline, or severity of disease. Results reported here, combined with previous clinical and experimental reports, indicate that the risk of memory impairment should be carefully considered before initiating treatment with glucocorticoids. Conversely, use of glucocorticoids should be considered in the differential diagnosis of memory loss. Finally, the potential benefit of anti-inflammatory treatment in Alzheimer's disease might be counterbalanced by possible iatrogenic memory impairment, at least when synthetic glucocorticoids are considered.

  9. Checkpoint triggering in a computer system

    Science.gov (United States)

    Cher, Chen-Yong

    2016-09-06

    According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
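A minimal sketch of the triggering logic the abstract describes; the class name, the relative margin, and the re-arming policy are my assumptions for illustration, not the patent's claims:

```python
# Poll a task metric; when its value crosses a threshold derived from the
# metric's baseline, save a checkpoint of the task state for later restart.

class CheckpointTrigger:
    def __init__(self, read_metric, margin=0.2):
        self.read_metric = read_metric  # monitor associated with the task metric
        self.margin = margin            # assumed relative threshold margin
        self.baseline = None
        self.checkpoints = []

    def poll(self, task_state):
        """Read the monitor; checkpoint and return True if the threshold is crossed."""
        value = self.read_metric()
        if self.baseline is None:
            self.baseline = value                      # first reading sets baseline
            return False
        threshold = self.baseline * (1 + self.margin)  # threshold from metric value
        if value > threshold:
            self.checkpoints.append(dict(task_state))  # state data for restart
            self.baseline = value                      # re-arm at the new level
            return True
        return False
```

A usage sketch: feeding readings 10, 11, 13, 10, 20 through `poll` triggers checkpoints on the 13 (crosses 10 × 1.2) and on the 20 (crosses 13 × 1.2).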

  10. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Li, Chien-Ming

    2018-01-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidities and mortality. The authors intend to propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500–700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level in both end-sensing units and remote monitor devices via a wireless network and fog/cloud computing. The animal experimental results (pig blood) will demonstrate the feasibility. PMID:29515815

  11. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model.

    Science.gov (United States)

    Wu, Jian-Xing; Huang, Ping-Tzan; Lin, Chia-Hung; Li, Chien-Ming

    2018-02-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidities and mortality. The authors intend to propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500-700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level in both end-sensing units and remote monitor devices via a wireless network and fog/cloud computing. The animal experimental results (pig blood) will demonstrate the feasibility.
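The heteroassociative memory component can be sketched with classical Hebbian outer-product learning, a standard HAM construction; the sensor patterns and alarm codes below are invented for illustration and are not the authors' data:

```python
# Heteroassociative memory: store input->output pattern pairs in a weight
# matrix via Hebbian outer products, then recall outputs with sign(W x).

def outer_train(pairs):
    """Weights W[i][j] = sum over pairs of y[i]*x[j] (bipolar +/-1 patterns)."""
    m, n = len(pairs[0][1]), len(pairs[0][0])
    W = [[0] * n for _ in range(m)]
    for x, y in pairs:
        for i in range(m):
            for j in range(n):
                W[i][j] += y[i] * x[j]
    return W

def recall(W, x):
    """Map an input pattern to its associated output pattern via sign(Wx)."""
    return [1 if sum(W[i][j] * x[j] for j in range(len(x))) >= 0 else -1
            for i in range(len(W))]

# Toy association: 4-element photocell pattern -> 2-element alarm code.
pairs = [
    ([1, 1, -1, -1], [1, -1]),   # e.g. "wet on left sensors"  -> alarm code A
    ([1, -1, 1, -1], [-1, 1]),   # e.g. "wet on odd sensors"   -> alarm code B
]
W = outer_train(pairs)
```

Because the two input patterns are orthogonal, recall is exact, and a single flipped sensor bit still retrieves the correct alarm code.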

  12. Interaction between Neurogenesis and Hippocampal Memory System: New Vistas

    Science.gov (United States)

    Abrous, Djoher Nora; Wojtowicz, Jan Martin

    2015-01-01

    During the last decade, the questions on the functionality of adult neurogenesis have changed their emphasis from if to how the adult-born neurons participate in a variety of memory processes. The emerging answers are complex because we are overwhelmed by a variety of behavioral tasks that apparently require new neurons to be performed optimally. With few exceptions, the hippocampal memory system seems to use the newly generated neurons for multiple roles. Adult neurogenesis has given the dentate gyrus new capabilities not previously thought possible within the scope of traditional synaptic plasticity. Looking at these new developments from the perspective of past discoveries, the science of adult neurogenesis has emerged from its initial phase of being, first, a surprising oddity and, later, exciting possibility, to the present state of being an integral part of mainstream neuroscience. The answers to many remaining questions regarding adult neurogenesis will come along only with our growing understanding of the functionality of the brain as a whole. This, in turn, will require integration of multiple levels of organization from molecules and cells to circuits and systems, ultimately resulting in comprehension of behavioral outcomes. PMID:26032718

  13. Extended memory management under RTOS

    Science.gov (United States)

    Plummer, M.

    1981-01-01

    A technique for extended memory management in ROLM 1666 computers using FORTRAN is presented. A general software system is described for which the technique can be ideally applied. The memory manager interface with the system is described. The protocols by which the manager is invoked are presented, as well as the methods used by the manager.

  14. Testability and Fault Tolerance for Emerging Nanoelectronic Memories

    NARCIS (Netherlands)

    Haron, N.Z.B.

    2012-01-01

    Emerging nanoelectronic memories such as Resistive Random Access Memories (RRAMs) are possible candidates to replace the conventional memory technologies such as SRAMs, DRAMs and flash memories in future computer systems. Despite their advantages such as enormous storage capacity, low-power per unit

  15. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by - minimum normalized signal-to-noise ratio (SNRN), and - maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of their wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)

  16. TMX-U computer system in evolution

    International Nuclear Information System (INIS)

    Casper, T.A.; Bell, H.; Brown, M.; Gorvad, M.; Jenkins, S.; Meyer, W.; Moller, J.; Perkins, D.

    1986-01-01

    Over the past three years, the total TMX-U diagnostic data base has grown to exceed 10 megabytes from over 1300 channels; roughly triple the originally designed size. This acquisition and processing load has resulted in an experiment repetition rate exceeding 10 minutes per shot using the five original Hewlett-Packard HP-1000 computers with their shared disks. Our new diagnostics tend to be multichannel instruments, which, in our environment, can be more easily managed using local computers. For this purpose, we are using HP series 9000 computers for instrument control, data acquisition, and analysis. Fourteen such systems are operational with processed format output exchanged via a shared resource manager. We are presently implementing the necessary hardware and software changes to create a local area network allowing us to combine the data from these systems with our main data archive. The expansion of our diagnostic system using the parallel acquisition and processing concept allows us to increase our data base with a minimum of impact on the experimental repetition rate.

  17. Flash memory management system and method utilizing multiple block list windows

    Science.gov (United States)

    Chow, James (Inventor); Gender, Thomas K. (Inventor)

    2005-01-01

    The present invention provides a flash memory management system and method with increased performance. The flash memory management system provides the ability to efficiently manage and allocate flash memory use in a way that improves reliability and longevity, while maintaining good performance levels. The flash memory management system includes a free block mechanism, a disk maintenance mechanism, and a bad block detection mechanism. The free block mechanism provides efficient sorting of free blocks to facilitate selecting low use blocks for writing. The disk maintenance mechanism provides for the ability to efficiently clean flash memory blocks during processor idle times. The bad block detection mechanism provides the ability to better detect when a block of flash memory is likely to go bad. The flash status mechanism stores information in fast access memory that describes the content and status of the data in the flash disk. The new bank detection mechanism provides the ability to automatically detect when new banks of flash memory are added to the system. Together, these mechanisms provide a flash memory management system that can improve the operational efficiency of systems that utilize flash memory.
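The free block mechanism's "efficient sorting of free blocks to facilitate selecting low use blocks for writing" is essentially wear leveling. A minimal sketch (not Honeywell's implementation; names are illustrative) keeps free blocks in a min-heap keyed by erase count:

```python
# Wear-leveling free-block pool: always allocate the least-erased free
# block, so erase cycles spread evenly across the flash device.
import heapq

class FreeBlockPool:
    def __init__(self, block_ids):
        # Heap of (erase_count, block_id); heap order selects low-use blocks.
        self.heap = [(0, b) for b in block_ids]
        heapq.heapify(self.heap)

    def allocate(self):
        """Pop the least-erased free block for the next write."""
        erases, block = heapq.heappop(self.heap)
        return block, erases

    def release(self, block, erases):
        """Return an erased block to the pool with its incremented erase count."""
        heapq.heappush(self.heap, (erases + 1, block))
```

With this structure a just-erased block sinks behind never-erased blocks, so fresh blocks are consumed before worn ones are reused.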

  18. MEMORY MODULATION

    Science.gov (United States)

    Roozendaal, Benno; McGaugh, James L.

    2011-01-01

    Our memories are not all created equally strong: Some experiences are well remembered while others are remembered poorly, if at all. Research on memory modulation investigates the neurobiological processes and systems that contribute to such differences in the strength of our memories. Extensive evidence from both animal and human research indicates that emotionally significant experiences activate hormonal and brain systems that regulate the consolidation of newly acquired memories. These effects are integrated through noradrenergic activation of the basolateral amygdala which regulates memory consolidation via interactions with many other brain regions involved in consolidating memories of recent experiences. Modulatory systems not only influence neurobiological processes underlying the consolidation of new information, but also affect other mnemonic processes, including memory extinction, memory recall and working memory. In contrast to their enhancing effects on consolidation, adrenal stress hormones impair memory retrieval and working memory. Such effects, as with memory consolidation, require noradrenergic activation of the basolateral amygdala and interactions with other brain regions. PMID:22122145

  19. A computer-based purchase management system

    International Nuclear Information System (INIS)

    Kuriakose, K.K.; Subramani, M.G.

    1989-01-01

    The details of a computer-based purchase management system developed to meet the specific requirements of the Madras Regional Purchase Unit (MRPU) are given. However, it can be easily modified to meet the requirements of any other purchase department. It covers various operations of MRPU, starting from indent processing to preparation of purchase orders and reminders. In order to enable timely management action and control, facilities are provided to generate the necessary management information reports. The scope for further work is also discussed. The system is completely menu driven and user friendly. Appendices A and B contain the menus implemented and sample outputs respectively. (author)

  20. Tools for Embedded Computing Systems Software

    Science.gov (United States)

    1978-01-01

    A workshop was held to assess the state of tools for embedded systems software and to determine directions for tool development. A synopsis of the talk and the key figures of each workshop presentation, together with chairmen summaries, are presented. The presentations covered four major areas: (1) tools and the software environment (development and testing); (2) tools and software requirements, design, and specification; (3) tools and language processors; and (4) tools and verification and validation (analysis and testing). The utility and contribution of existing tools and research results for the development and testing of embedded computing systems software are described and assessed.

  1. Computational modeling of shallow geothermal systems

    CERN Document Server

    Al-Khoury, Rafid

    2011-01-01

    A Step-by-step Guide to Developing Innovative Computational Tools for Shallow Geothermal Systems Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than conventional fossil fuels. Shallow geothermal systems are increasingly utilized for heating and cooling of buildings and greenhouses. However, their utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. Projects of this nature are not getting the public support they deserve because of the uncertainties associated with

  2. Prestandardisation Activities for Computer Based Safety Systems

    DEFF Research Database (Denmark)

    Taylor, J. R.; Bologna, S.; Ehrenberger, W.

    1981-01-01

    Questions of technical safety are becoming more and more important. Due to the higher complexity of their functions, computer based safety systems have special problems. Researchers, producers, licensing personnel and customers have met on a European basis to exchange knowledge and formulate positions. The Commission of the European Community supports the work. Major topics comprise hardware configuration and self supervision, software design, verification and testing, documentation, system specification and concurrent processing. Preliminary results have been used for the draft of an IEC standard and for some...

  3. Radiation management computer system for Monju

    International Nuclear Information System (INIS)

    Aoyama, Kei; Yasutomo, Katsumi; Sudou, Takayuki; Yamashita, Masahiro; Hayata, Kenichi; Ueda, Hajime; Hosokawa, Hideo

    2002-01-01

    Radiation management at nuclear power research institutes, nuclear power stations and other such facilities is strictly controlled under Japanese laws and management policies. Recently, the important issues of more accurate radiation dose management and increased work efficiency have been discussed. Up to now, Fuji Electric Company has supplied a large number of radiation management systems to nuclear power stations and related nuclear facilities. We introduce a new radiation management computer system adopting WWW techniques for the Japan Nuclear Cycle Development Institute's MONJU Fast Breeder Reactor (MONJU). (author)

  4. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  5. Honeywell Modular Automation System Computer Software Documentation

    International Nuclear Information System (INIS)

    STUBBS, A.M.

    2000-01-01

    The purpose of this Computer Software Document (CSWD) is to provide configuration control of the Honeywell Modular Automation System (MAS) in use at the Plutonium Finishing Plant (PFP). This CSWD describes hardware and PFP-developed software for control of stabilization furnaces. The Honeywell software can generate configuration reports for the developed control software. These reports are described in the following section and are attached as addenda. This plan applies to the PFP Engineering Manager, Thermal Stabilization Cognizant Engineers, and the Shift Technical Advisors responsible for the Honeywell MAS software/hardware and administration of the Honeywell system.

  6. Implantation and use of a version of the GAMALTA computer code in the 3.500 M Lecroy system

    International Nuclear Information System (INIS)

    Auler, L.T.

    1984-05-01

    The GAMALTA computer code was implemented in the 3.500 M LeCroy system, creating an optional analysis function that is loaded into RAM from a diskette. The way to construct functions that form part of the system menu is explained, and a procedure for using the GAMALTA code is given. (M.C.K.) [pt

  7. Memory Overview - Technologies and Needs

    Science.gov (United States)

    LaBel, Kenneth A.

    2010-01-01

    As NASA has evolved its use of spaceflight computing, memory applications have followed as well. In this talk, we will discuss the history of NASA's memories, from magnetic core and tape recorders to current semiconductor approaches. We will briefly describe current functional memory usage in NASA space systems, followed by a description of potential radiation-induced failure modes along with considerations for reliable system design.

  8. Production Management System for AMS Computing Centres

    Science.gov (United States)

    Choutko, V.; Demakov, O.; Egorov, A.; Eline, A.; Shan, B. S.; Shi, R.

    2017-10-01

    The Alpha Magnetic Spectrometer [1] (AMS) has collected over 95 billion cosmic ray events since it was installed on the International Space Station (ISS) on May 19, 2011. To cope with the enormous flux of events, AMS uses 12 computing centers in Europe, Asia and North America, which have different hardware and software configurations. The centers participate in data reconstruction and Monte-Carlo (MC) simulation [2] (data and MC production) as well as in physics analysis. A data production management system has been developed to facilitate data and MC production tasks in AMS computing centers, including job acquiring, submitting, monitoring, transferring, and accounting. It was designed to be modular, lightweight, and easy to deploy. The system is based on the Deterministic Finite Automaton [3] model, and is implemented in the script languages Python and Perl with the built-in sqlite3 database on Linux operating systems. Different batch management systems, file system storage, and transfer protocols are supported. The details of the integration with the Open Science Grid are presented as well.
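A Deterministic Finite Automaton job model of the kind the abstract mentions can be sketched as a transition table; the states and events below are illustrative, not AMS's actual production states:

```python
# DFA job lifecycle: each (state, event) pair maps to exactly one next state,
# so a production job can never reach an ambiguous or undefined status.

TRANSITIONS = {
    ("acquired",     "submit"): "submitted",
    ("submitted",    "start"):  "running",
    ("running",      "finish"): "transferring",
    ("running",      "fail"):   "acquired",      # requeue failed jobs
    ("transferring", "done"):   "accounted",
}

class ProductionJob:
    def __init__(self):
        self.state = "acquired"

    def handle(self, event):
        """Apply one event; reject anything the DFA does not define."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state
```

The determinism is the point: monitoring and accounting components can rely on every job being in exactly one well-defined state, and illegal transitions fail loudly instead of corrupting bookkeeping.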

  9. Interactive computer-enhanced remote viewing system

    International Nuclear Information System (INIS)

    Tourtellott, J.A.; Wagner, J.F.

    1995-01-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This need for a task space model is most pronounced in the remediation of obsolete production facilities and underground storage tanks. Production facilities at many sites contain compact process machinery and systems that were used to produce weapons grade material. For many such systems, a complex maze of pipes (with potentially dangerous contents) must be removed, and this represents a significant D&D challenge. In an analogous way, the underground storage tanks at sites such as Hanford represent a challenge because of their limited entry and the tumbled profusion of in-tank hardware. In response to this need, the Interactive Computer-Enhanced Remote Viewing System (ICERVS) is being designed as a software system to: (1) provide a reliable geometric description of a robotic task space, and (2) enable robotic remediation to be conducted more effectively and more economically than with available techniques. A system such as ICERVS is needed because of the problems discussed below.

  10. Compact, open-architecture computed radiography system

    International Nuclear Information System (INIS)

    Huang, H.K.; Lim, A.; Kangarloo, H.; Eldredge, S.; Loloyan, M.; Chuang, K.S.

    1990-01-01

    Computed radiography (CR) was introduced in 1982, and its basic system design has not changed. Current CR systems have certain limitations: spatial resolution and signal-to-noise ratios are lower than those of screen-film systems, they are complicated and expensive to build, and they have a closed architecture. The authors of this paper designed and implemented a simpler, lower-cost, compact, open-architecture CR system to overcome some of these limitations. The open-architecture system is a manual-load, single-plate reader that can fit on a desktop. Phosphor images are stored on a local disk and can be sent to any other computer through standard interfaces. Any manufacturer's plate can be read, with a scanning time of 90 seconds for a 35 x 43-cm plate. The standard pixel size is 174 μm and can be adjusted for higher spatial resolution. The data resolution is 12 bits/pixel over an x-ray exposure range of 0.01-100 mR.

  11. Serotonergic modulation of spatial working memory: predictions from a computational network model

    Directory of Open Access Journals (Sweden)

    Maria Cano-Colino

    2013-09-01

    Serotonin (5-HT) receptors of types 1A and 2A are massively expressed in prefrontal cortex (PFC) neurons, an area associated with cognitive function. Hence, 5-HT could be effective in modulating prefrontal-dependent cognitive functions, such as spatial working memory (SWM). However, a direct association between 5-HT and SWM has proved elusive in psycho-pharmacological studies. Recently, a computational network model of the PFC microcircuit was used to explore the relationship between 5-HT and SWM (Cano-Colino et al. 2013). This study found that both excessive and insufficient 5-HT levels lead to impaired SWM performance in the network, and it concluded that analyzing behavioral responses based on confidence reports could facilitate the experimental identification of SWM behavioral effects of 5-HT neuromodulation. Such analyses may have confounds based on our limited understanding of metacognitive processes. Here, we extend these results by deriving three additional predictions from the model that do not rely on confidence reports. First, only excessive levels of 5-HT should result in SWM deficits that increase with delay duration. Second, an excessive 5-HT baseline concentration makes the network vulnerable to distractors at distances that were robust to distraction in control conditions, while the network still ignores distractors efficiently for low 5-HT levels that impair SWM. Finally, 5-HT modulates neuronal memory fields in neurophysiological experiments: neurons should be better tuned to the cued stimulus than to the behavioral report for excessive 5-HT levels, while the reverse should happen for low 5-HT concentrations. In all our simulations, agonists of 5-HT1A receptors and antagonists of 5-HT2A receptors produced behavioral and physiological effects in line with global 5-HT level increases. Our model makes specific predictions to be tested experimentally and advance our understanding of the neural basis of SWM and its neuromodulation.

  12. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    Science.gov (United States)

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data account for the so-called "effortful hypothesis", where distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. RXY/DRXY-a postprocessing graphical system for scientific computation

    International Nuclear Information System (INIS)

    Jin Qijie

    1990-01-01

    Scientific computing requires computer graphics capabilities for visualization. The development objectives and functions of a postprocessing graphical system for scientific computation are described, together with a brief account of its implementation

  14. Critical Problems in Very Large Scale Computer Systems

    Science.gov (United States)

    1989-09-30


  15. Computation in Dynamically Bounded Asymmetric Systems

    Science.gov (United States)

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney

    2015-01-01

    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross-inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore, we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation, and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645
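
    The dynamics described above can be illustrated with a toy simulation. The sketch below is a minimal, illustrative assumption (a three-unit winner-take-all circuit with invented parameters, not the paper's model): two linear threshold units with self-excitation above unity are locally unstable ("expansion"), yet shared cross-inhibition contracts the network onto a stable solution in which only the more strongly driven unit remains active.

    ```python
    # Illustrative sketch: linear threshold neurons f(v) = max(0, v) with
    # asymmetric connectivity (excitation one way, inhibition the other).
    # All parameter values are assumptions chosen for stability.

    def relu(v):
        return v if v > 0.0 else 0.0

    def simulate_wta(i1=1.0, i2=0.8, alpha=1.2, beta=3.0, gamma=0.25,
                     dt=0.05, steps=4000):
        """Euler-integrate dx/dt = -x + [W x + I]^+ and return final rates."""
        x1 = x2 = y = 0.0
        for _ in range(steps):
            dx1 = -x1 + relu(alpha * x1 - beta * y + i1)
            dx2 = -x2 + relu(alpha * x2 - beta * y + i2)
            dy = -y + relu(gamma * (x1 + x2))
            x1, x2, y = x1 + dt * dx1, x2 + dt * dx2, y + dt * dy
        return x1, x2, y

    # Self-excitation alpha > 1 makes each unit locally unstable, yet the
    # inhibitory unit y steers the dynamics onto a stable winner-take-all
    # solution: the unit with the stronger input survives, the other is silenced.
    x1, x2, y = simulate_wta()
    ```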

  16. A computer-aided continuous assessment system

    Directory of Open Access Journals (Sweden)

    B. C.H. Turton

    1996-12-01

    Full Text Available Universities within the United Kingdom have had to cope with a massive expansion in undergraduate student numbers over the last five years (Committee of Scottish University Principals, 1993; CVCP Briefing Note, 1994). In addition, there has been a move towards modularization and closer monitoring of a student's progress throughout the year. Since the price/performance ratio of computer systems has continued to improve, Computer-Assisted Learning (CAL) has become an attractive option (Fry, 1990; Benford et al., 1994; Laurillard et al., 1994). To this end, the Universities Funding Council (UFC) has funded the Teaching and Learning Technology Programme (TLTP). However, universities also have a duty to assess as well as to teach. This paper describes a Computer-Aided Assessment (CAA) system capable of assisting in grading students and providing feedback. In this particular case, a continuously assessed course (Low-Level Languages) of over 100 students is considered. Typically, three man-days are required to mark one assessed piece of coursework from the students in this class. Any feedback on how the questions were dealt with by the student is of necessity brief. Most of the feedback is provided in a tutorial session that covers the pitfalls encountered by the majority of the students.

  17. Ecosystem biophysical memory in the southwestern North America climate system

    International Nuclear Information System (INIS)

    Forzieri, G; Feyen, L; Vivoni, E R

    2013-01-01

    To elucidate the potential role of vegetation to act as a memory source in the southwestern North America climate system, we explore correlation structures of remotely sensed vegetation dynamics with precipitation, temperature and teleconnection indices over 1982–2006 for six ecoregions. We found that lagged correlations between vegetation dynamics and climate variables are modulated by the dominance of monsoonal or Mediterranean regimes and ecosystem-specific physiological processes. Subtropical and tropical ecosystems exhibit a one month lag positive correlation with precipitation, a zero- to one-month lag negative correlation with temperature, and modest negative effects of sea surface temperature (SST). Mountain forests have a zero month lag negative correlation with precipitation, a zero–one month lag negative correlation with temperature, and no significant correlation with SSTs. Deserts show a strong one–four month lag positive correlation with precipitation, a low zero–two month lag negative correlation with temperature, and a high four–eight month lag positive correlation with SSTs. The ecoregion-specific biophysical memories identified offer an opportunity to improve the predictability of land–atmosphere interactions and vegetation feedbacks onto climate. (letter)
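
    The lagged-correlation structure the abstract describes can be sketched on synthetic data. Everything below is an illustrative assumption (a made-up vegetation index responding to rainfall one month later), not the study's data or method:

    ```python
    # Illustrative sketch: Pearson correlation between a vegetation index and
    # precipitation at increasing monthly lags, using synthetic series.
    import math
    import random

    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb)

    def lagged_corr(ndvi, precip, lag):
        """Correlate NDVI at month t with precipitation at month t - lag."""
        return pearson(ndvi[lag:], precip[:len(precip) - lag])

    random.seed(0)
    months = 300  # 25 years of monthly data
    precip = [random.gauss(0.0, 1.0) for _ in range(months)]
    # Synthetic vegetation that responds to rainfall one month later, plus noise.
    ndvi = [0.0] + [0.8 * p + random.gauss(0.0, 0.3) for p in precip[:-1]]

    corrs = {lag: lagged_corr(ndvi, precip, lag) for lag in range(0, 4)}
    ```

    With this construction the correlation peaks at a one-month lag, the kind of ecosystem "memory" signature the study identifies for subtropical and tropical ecoregions.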

  18. Influences of multiple memory systems on auditory mental image acuity.

    Science.gov (United States)

    Navarro Cebrian, Ana; Janata, Petr

    2010-05-01

    The influence of different memory systems and associated attentional processes on the acuity of auditory images, formed for the purpose of making intonation judgments, was examined across three experiments using three different task types (cued-attention, imagery, and two-tone discrimination). In experiment 1 the influence of implicit long-term memory for musical scale structure was manipulated by varying the scale degree (leading tone versus tonic) of the probe note about which a judgment had to be made. In experiments 2 and 3 the ability of short-term absolute pitch knowledge to develop was manipulated by presenting blocks of trials in the same key or in seven different keys. The acuity of auditory images depended on all of these manipulations. Within individual listeners, thresholds in the two-tone discrimination and cued-attention conditions were closely related. In many listeners, cued-attention thresholds were similar to thresholds in the imagery condition, and depended on the amount of training individual listeners had in playing a musical instrument. The results indicate that mental images formed at a sensory/cognitive interface for the purpose of making perceptual decisions are highly malleable.

  19. Dynamical Systems Analysis Applied to Working Memory Data

    Directory of Open Access Journals (Sweden)

    Fidan eGasimova

    2014-07-01

    Full Text Available In the present paper we investigate weekly fluctuations in working memory capacity (WMC) assessed over a period of two years. We use dynamical systems analysis, specifically a second-order linear differential equation, to model weekly variability in WMC in a sample of 112 9th graders. To deal with missing data in our longitudinal series we use a B-spline imputation method. The results show a significant negative frequency parameter, indicating a cyclical pattern in weekly memory updating performance across time. We use a multilevel modeling approach to capture individual differences in model parameters and find that a higher initial performance level and slower improvement on the memory updating (MU) task are associated with a slower frequency of oscillation. Additionally, we conduct a simulation study examining the analysis procedure's performance using different numbers of B-spline knots and values of time-delay embedding dimensions. Results show that the number of knots in the B-spline imputation influences accuracy more than the number of embedding dimensions.
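
    The model class used here, a second-order linear differential equation x'' = eta*x + zeta*x', produces cycles whenever the frequency parameter eta is negative. The sketch below (parameter values are illustrative assumptions, not the study's estimates) simulates such an equation and recovers its period from zero crossings:

    ```python
    # Illustrative sketch of a damped linear oscillator x'' = eta*x + zeta*x'.
    # A negative eta yields cyclical behavior with period close to
    # 2*pi/sqrt(-eta) when damping is weak.
    import math

    def simulate(eta=-0.25, zeta=-0.02, x0=1.0, v0=0.0, dt=0.01, t_max=100.0):
        """Integrate x'' = eta*x + zeta*x' with semi-implicit Euler."""
        xs, x, v = [], x0, v0
        for _ in range(int(t_max / dt)):
            v += dt * (eta * x + zeta * v)
            x += dt * v
            xs.append(x)
        return xs

    def estimate_period(xs, dt=0.01):
        """Mean spacing between successive upward zero crossings."""
        crossings = [i * dt for i in range(1, len(xs))
                     if xs[i - 1] < 0.0 <= xs[i]]
        gaps = [b - a for a, b in zip(crossings, crossings[1:])]
        return sum(gaps) / len(gaps)

    xs = simulate()
    period = estimate_period(xs)  # close to 2*pi/sqrt(0.25) for weak damping
    ```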

  20. Massively Parallel Polar Decomposition on Distributed-Memory Systems

    KAUST Repository

    Ltaief, Hatem

    2018-01-01

    We present a high-performance implementation of the Polar Decomposition (PD) on distributed-memory systems. Building upon the QR-based Dynamically Weighted Halley (QDWH) algorithm, the key idea lies in finding the best rational approximation for the scalar sign function, which also corresponds to the polar factor for symmetric matrices, to further accelerate QDWH convergence. Based on the Zolotarev rational functions, introduced by Zolotarev (ZOLO) in 1877, this new PD algorithm, ZOLO-PD, converges within two iterations even for ill-conditioned matrices, instead of the original six iterations needed for QDWH. ZOLO-PD exploits the property of Zolotarev functions that optimality is maintained when two functions are composed in an appropriate manner. The resulting ZOLO-PD has a convergence order of up to seventeen, in contrast to the cubic convergence of QDWH. This comes at the price of higher arithmetic cost and memory footprint. These extra floating-point operations can, however, be processed in an embarrassingly parallel fashion. We demonstrate performance using up to 102,400 cores on two supercomputers. We demonstrate that, in the presence of a large number of processing units, ZOLO-PD is able to outperform QDWH by up to 2.3X speedup, especially in situations where QDWH runs out of work, for instance, in the strong scaling mode of operation.
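
    For intuition about the iteration family QDWH and ZOLO-PD belong to, the sketch below shows the much simpler classical Newton iteration for the polar factor, X_{k+1} = (X_k + X_k^{-T})/2, on a real 2x2 matrix. This is not the paper's algorithm (ZOLO-PD replaces this map with composed Zolotarev rational functions to cut the iteration count), just the textbook baseline it improves on:

    ```python
    # Illustrative sketch: Newton iteration X <- (X + X^{-T})/2 converges to
    # the orthogonal polar factor U of A = U*H for a nonsingular real matrix.

    def inv_t(m):
        """Inverse-transpose of a 2x2 matrix given as [[a, b], [c, d]]."""
        (a, b), (c, d) = m
        det = a * d - b * c
        inv = [[d / det, -b / det], [-c / det, a / det]]
        return [[inv[0][0], inv[1][0]], [inv[0][1], inv[1][1]]]  # transpose

    def polar_factor(a_mat, iters=20):
        x = [row[:] for row in a_mat]
        for _ in range(iters):
            y = inv_t(x)
            x = [[(x[i][j] + y[i][j]) / 2.0 for j in range(2)]
                 for i in range(2)]
        return x

    A = [[4.0, 1.0], [2.0, 3.0]]
    U = polar_factor(A)  # orthogonal; H = U^T A is the symmetric factor
    ```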

  1. Systems analysis of the space shuttle. [communication systems, computer systems, and power distribution

    Science.gov (United States)

    Schilling, D. L.; Oh, S. J.; Thau, F.

    1975-01-01

    Developments in communications systems, computer systems, and power distribution systems for the space shuttle are described. The use of high speed delta modulation for bit rate compression in the transmission of television signals is discussed. Simultaneous Multiprocessor Organization, an approach to computer organization, is presented. Methods of computer simulation and automatic malfunction detection for the shuttle power distribution system are also described.

  2. A computational model of fMRI activity in the intraparietal sulcus that supports visual working memory.

    Science.gov (United States)

    Domijan, Dražen

    2011-12-01

    A computational model was developed to explain a pattern of results of fMRI activation in the intraparietal sulcus (IPS) supporting visual working memory for multiobject scenes. The model is based on the hypothesis that dendrites of excitatory neurons are major computational elements in the cortical circuit. Dendrites enable formation of a competitive queue that exhibits a gradient of activity values for nodes encoding different objects, and this pattern is stored in working memory. In the model, brain imaging data are interpreted as a consequence of blood flow arising from dendritic processing. Computer simulations showed that the model successfully simulates data showing the involvement of inferior IPS in object individuation and spatial grouping through representation of objects' locations in space, along with the involvement of superior IPS in object identification through representation of a set of objects' features. The model exhibits a capacity limit due to the limited dynamic range for nodes and the operation of lateral inhibition among them. The capacity limit is fixed in the inferior IPS regardless of the objects' complexity, due to the normalization of lateral inhibition, and variable in the superior IPS, due to the different encoding demands for simple and complex shapes. Systematic variation in the strength of self-excitation enables an understanding of the individual differences in working memory capacity. The model offers several testable predictions regarding the neural basis of visual working memory.
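
    The competitive-queue idea in this abstract can be sketched in a few lines. The model below is an illustrative assumption (simple rate units with self-excitation and normalized lateral inhibition, not the paper's dendritic circuit): a graded input produces a surviving activity gradient, and inhibition imposes a capacity limit once too many items compete.

    ```python
    # Illustrative sketch of a competitive queue: each node receives a graded
    # input c_i, excites itself and inhibits the others. Parameters are
    # assumptions chosen so the damped fixed-point iteration converges.

    def relu(v):
        return v if v > 0.0 else 0.0

    def competitive_queue(inputs, self_exc=0.2, inhibition=0.25, iters=500):
        """Damped iteration of x_i = [c_i + a*x_i - b*sum_{j!=i} x_j]^+."""
        x = [0.0] * len(inputs)
        for _ in range(iters):
            total = sum(x)
            new = [relu(c + self_exc * xi - inhibition * (total - xi))
                   for c, xi in zip(inputs, x)]
            x = [0.5 * xi + 0.5 * ni for xi, ni in zip(x, new)]
        return x

    few = competitive_queue([1.0, 0.9, 0.8, 0.7])
    many = competitive_queue([1.0, 0.9, 0.8, 0.7, 0.6, 0.5])
    # With four items every node stays active and the activity gradient
    # mirrors the input order; with six, lateral inhibition pushes the
    # weakest item below threshold, i.e. a capacity limit emerges.
    ```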

  3. System administration of ATLAS TDAQ computing environment

    Science.gov (United States)

    Adeel-Ur-Rehman, A.; Bujor, F.; Benes, J.; Caramarcu, C.; Dobson, M.; Dumitrescu, A.; Dumitru, I.; Leahu, M.; Valsan, L.; Oreshkin, A.; Popov, D.; Unel, G.; Zaytsev, A.

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating on the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are matched by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of a centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment (including new multi-core blade servers), developing GUI tools for user authentication and role management, testing and validating 64-bit OSes, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  4. System administration of ATLAS TDAQ computing environment

    International Nuclear Information System (INIS)

    Adeel-Ur-Rehman, A; Bujor, F; Dumitrescu, A; Dumitru, I; Leahu, M; Valsan, L; Benes, J; Caramarcu, C; Dobson, M; Unel, G; Oreshkin, A; Popov, D; Zaytsev, A

    2010-01-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with administration of the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating on the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines as well. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are matched by a two-level NFS-based solution. The hardware and network monitoring systems of ATLAS TDAQ are based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an authentication and role management system based on LDAP. External access to the ATLAS online computing facilities is provided by means of gateways supplied with an accounting system as well. Current activities of the group include deployment of a centralized storage system, testing and validating hardware solutions for future use within the ATLAS TDAQ environment (including new multi-core blade servers), developing GUI tools for user authentication and role management, testing and validating 64-bit OSes, and upgrading the existing TDAQ hardware components, authentication servers and gateways.

  5. Neurocognitive systems related to real-world prospective memory.

    Directory of Open Access Journals (Sweden)

    Grégoria Kalpouzos

    Full Text Available BACKGROUND: Prospective memory (PM) denotes the ability to remember to perform actions in the future. It has been argued that standard laboratory paradigms fail to capture core aspects of PM. METHODOLOGY/PRINCIPAL FINDINGS: We combined functional MRI, virtual reality, eye-tracking and verbal reports to explore the dynamic allocation of neurocognitive processes during a naturalistic PM task where individuals performed errands in a realistic model of their residential town. Based on eye movement data and verbal reports, we modeled PM as an iterative loop of five sustained and transient phases: intention maintenance before target detection (TD), TD, intention maintenance after TD, action, and switching, the latter representing the activation of a new intention in mind. The fMRI analyses revealed continuous engagement of a top-down fronto-parietal network throughout the entire task, likely subserving goal maintenance in mind. In addition, a shift was observed from a perceptual (occipital) system while searching for places to go, to a mnemonic (temporo-parietal, fronto-hippocampal) system for remembering what actions to perform after TD. Updating of the top-down fronto-parietal network occurred at both TD and switching, the latter likely also being characterized by frontopolar activity. CONCLUSION/SIGNIFICANCE: Taken together, these findings show how brain systems interact in a complementary fashion during real-world PM, and support a more complete model of PM that can be applied to naturalistic PM tasks, which we named the PROspective MEmory DYnamic (PROMEDY) model because of its dynamics on both multi-phase iteration and the interactions of distinct neurocognitive networks.

  6. Verification Methodology of Fault-tolerant, Fail-safe Computers Applied to MAGLEV Control Computer Systems

    Science.gov (United States)

    1993-05-01

    The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev computer system has bee...

  7. Visual computing model for immune system and medical system.

    Science.gov (United States)

    Gong, Tao; Cao, Xinxue; Xiong, Qin

    2015-01-01

    The natural immune system is an intelligent, self-organizing and adaptive system comprising a variety of immune cells with different types of immune mechanisms. The mutual cooperation between the immune cells shows the intelligence of this immune system, and modeling it is of significance in medical science and engineering. In order to build a model of this immune system that is easier to understand through visualization than a traditional mathematical model, a visual computing model of the immune system is proposed in this paper and used to design a medical system incorporating the immune system. Visual simulations of the immune system were made to test the visual effect. The experimental results of the simulations show that the visual modeling approach can provide a more effective way of analyzing this immune system than traditional mathematical equations alone.

  8. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussions on research involving the use of data and digital images as an approach to understanding, analyzing and visualizing phenomena and experiments. The emphasis is put not only on graphically representing data as a way of increasing its visual analysis, but also on the imaging systems which contribute greatly to the comprehension of real cases. Scientific visualization and imaging systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology and Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented at the International Conference on Advanced Computational Engineering and Experimenting (ACE-X) conferences of 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  9. Epilepsy analytic system with cloud computing.

    Science.gov (United States)

    Shen, Chia-Ping; Zhou, Weizhi; Lin, Feng-Seng; Sung, Hsiao-Ya; Lam, Yan-Yu; Chen, Wei; Lin, Jeng-Wei; Pan, Ming-Kai; Chiu, Ming-Jang; Lai, Feipei

    2013-01-01

    Biomedical data analytic systems have played an important role in clinical diagnosis for several decades. Today, analyzing these big data to provide decision support for physicians is an emerging research area. This paper presents a parallelized web-based tool with a cloud computing service architecture to analyze epilepsy. Several modern analytic functions are cascaded in the system: wavelet transform, genetic algorithm (GA), and support vector machine (SVM). To demonstrate the effectiveness of the system, it has been verified with two kinds of electroencephalography (EEG) data: short-term EEG and long-term EEG. The results reveal that our approach achieves a total classification accuracy higher than 90%. In addition, the entire training is accelerated about 4.66 times, and prediction time also meets real-time requirements.
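
    The pipeline shape described here (wavelet features feeding a classifier) can be sketched in miniature. Everything below is an illustrative assumption, not the authors' system: a one-level Haar transform stands in for the wavelet stage and a nearest-centroid rule stands in for the GA/SVM cascade, on synthetic signals.

    ```python
    # Illustrative sketch: Haar wavelet energies as features, then a
    # nearest-centroid classifier, on synthetic "smooth" vs "spiky" signals.
    import math
    import random

    def haar_level1(signal):
        """One-level Haar DWT: (approximation, detail) coefficients."""
        approx = [(signal[i] + signal[i + 1]) / math.sqrt(2.0)
                  for i in range(0, len(signal) - 1, 2)]
        detail = [(signal[i] - signal[i + 1]) / math.sqrt(2.0)
                  for i in range(0, len(signal) - 1, 2)]
        return approx, detail

    def features(signal):
        """Energy of approximation and detail bands as a 2-D feature vector."""
        approx, detail = haar_level1(signal)
        return [sum(c * c for c in approx), sum(c * c for c in detail)]

    def train_centroids(examples):
        sums, counts = {}, {}
        for label, (f0, f1) in examples:
            s = sums.setdefault(label, [0.0, 0.0])
            s[0] += f0
            s[1] += f1
            counts[label] = counts.get(label, 0) + 1
        return {lab: [s[0] / counts[lab], s[1] / counts[lab]]
                for lab, s in sums.items()}

    def classify(cents, feats):
        return min(cents, key=lambda lab: (cents[lab][0] - feats[0]) ** 2
                                          + (cents[lab][1] - feats[1]) ** 2)

    random.seed(1)

    def make_signal(spiky):
        base = [math.sin(0.2 * t) for t in range(64)]
        noise_sd = 1.5 if spiky else 0.1  # spiky: high-frequency power
        return [b + random.gauss(0.0, noise_sd) for b in base]

    train = [("spiky", features(make_signal(True))) for _ in range(20)] + \
            [("smooth", features(make_signal(False))) for _ in range(20)]
    cents = train_centroids(train)
    ```

    High-frequency content concentrates in the detail-band energy, so the two classes separate cleanly in this toy feature space.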

  10. 2XIIB computer data acquisition system

    International Nuclear Information System (INIS)

    Tyler, G.C.

    1975-01-01

    All major plasma diagnostic measurements from the 2XIIB experiment are recorded, digitized, and stored by the computer data acquisition system. The raw data are then examined, correlated, and reduced, and useful portions are quickly retrieved to direct the future conduct of the plasma experiment. This is done in real time and on line while the data are current. The immediate availability of this pertinent data has accelerated the rate at which the 2XII personnel have been able to gain knowledge in the study of plasma containment and fusion interaction. The uptime of the experiment is being used much more effectively than ever before. This paper describes the hardware configuration of our data system in relation to the various plasma parameters measured, the advantages of powerful software routines to reduce and correlate the data, the present plans for expansion of the system, and the problems we have had to overcome in certain areas to meet our original goals

  11. Tutoring system for nondestructive testing using computer

    International Nuclear Information System (INIS)

    Kim, Jin Koo; Koh, Sung Nam; Shim, Yun Ju; Kim, Min Koo

    1997-01-01

    This paper introduces a multimedia tutoring system for nondestructive testing using a personal computer. Nondestructive testing, one of the chief methods for inspecting welds and many other components, is very difficult for NDT inspectors to understand at a technical level without wide experience, and considerable repeated education and training is necessary to maintain their knowledge. A tutoring system that can simulate NDT work is proposed to address this problem. The tutoring system presents the basic theories of nondestructive testing in a book-style format with video images and hyperlinks, and it offers practice sessions in which users can simulate the testing equipment. The book-style presentation and simulation practice provide an effective, individualized environment for learning nondestructive testing.

  12. Implementing a modular system of computer codes

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access by user input instruction or directions from automated procedures and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications, with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability, and is intended to be informative and useful to anyone developing a modular code system of comparable sophistication. Overall, this report summarizes, in a general way, the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background against which work on HTGR reactor physics is being carried out

  13. Strategic Priming with Multiple Antigens Can Yield Memory Cell Phenotypes Optimized for Infection with Mycobacterium tuberculosis: a Computational Study

    Directory of Open Access Journals (Sweden)

    Cordelia eZiraldo

    2016-01-01

    Full Text Available Lack of an effective vaccine results in 9 million new cases of tuberculosis (TB) every year and 1.8 million deaths worldwide. Although many infants are vaccinated at birth with BCG (an attenuated M. bovis), this does not prevent infection or development of TB after childhood. Immune responses necessary for prevention of infection or disease are still unknown, making development of effective vaccines against TB challenging. Several new vaccines are ready for human clinical trials, but these trials are difficult and expensive; especially challenging is determining the appropriate cellular response necessary for protection. The magnitude of an immune response is likely key to generating a successful vaccine. Characteristics such as numbers of central memory (CM) and effector memory (EM) T cells responsive to a diverse set of epitopes are also correlated with protection. Promising vaccines against TB contain mycobacterial subunit antigens (Ag) present during both active and latent infection. We hypothesize that protection against different key immunodominant antigens could require a vaccine that produces different levels of EM and CM for each Ag-specific memory population. We created a computational model to explore EM and CM values, and their ratio, within what we term Memory Design Space. Our model captures events involved in T cell priming within lymph nodes and tracks their circulation through blood to peripheral tissues. We used the model to test whether multiple Ag-specific memory cell populations could be generated with distinct locations within Memory Design Space at a specific time point post vaccination. Boosting can further shift memory populations to memory cell ratios unreachable by initial priming events. By strategically varying antigen load, properties of cellular interactions within the LN, and delivery parameters (e.g., number of boosts) of multi-subunit vaccines, we can generate multiple Ag-specific memory populations that cover a wide

  14. Strategic Priming with Multiple Antigens can Yield Memory Cell Phenotypes Optimized for Infection with Mycobacterium tuberculosis: A Computational Study.

    Science.gov (United States)

    Ziraldo, Cordelia; Gong, Chang; Kirschner, Denise E; Linderman, Jennifer J

    2015-01-01

    Lack of an effective vaccine results in 9 million new cases of tuberculosis (TB) every year and 1.8 million deaths worldwide. Although many infants are vaccinated at birth with BCG (an attenuated M. bovis), this does not prevent infection or development of TB after childhood. Immune responses necessary for prevention of infection or disease are still unknown, making development of effective vaccines against TB challenging. Several new vaccines are ready for human clinical trials, but these trials are difficult and expensive; especially challenging is determining the appropriate cellular response necessary for protection. The magnitude of an immune response is likely key to generating a successful vaccine. Characteristics such as numbers of central memory (CM) and effector memory (EM) T cells responsive to a diverse set of epitopes are also correlated with protection. Promising vaccines against TB contain mycobacterial subunit antigens (Ag) present during both active and latent infection. We hypothesize that protection against different key immunodominant antigens could require a vaccine that produces different levels of EM and CM for each Ag-specific memory population. We created a computational model to explore EM and CM values, and their ratio, within what we term Memory Design Space. Our model captures events involved in T cell priming within lymph nodes and tracks their circulation through blood to peripheral tissues. We used the model to test whether multiple Ag-specific memory cell populations could be generated with distinct locations within Memory Design Space at a specific time point post vaccination. Boosting can further shift memory populations to memory cell ratios unreachable by initial priming events. By strategically varying antigen load, properties of cellular interactions within the LN, and delivery parameters (e.g., number of boosts) of multi-subunit vaccines, we can generate multiple Ag-specific memory populations that cover a wide range of

  15. RASCAL: A Rudimentary Adaptive System for Computer-Aided Learning.

    Science.gov (United States)

    Stewart, John Christopher

    Both the background of computer-assisted instruction (CAI) systems in general and the requirements of a computer-aided learning system which would be a reasonable assistant to a teacher are discussed. RASCAL (Rudimentary Adaptive System for Computer-Aided Learning) is a first attempt at defining a CAI system which would individualize the learning…

  16. Stress and the engagement of multiple memory systems: integration of animal and human studies.

    Science.gov (United States)

    Schwabe, Lars

    2013-11-01

    Learning and memory can be controlled by distinct memory systems. How these systems are coordinated to optimize learning and behavior has long been unclear. Accumulating evidence indicates that stress may modulate the engagement of multiple memory systems. In particular, rodent and human studies demonstrate that stress facilitates dorsal striatum-dependent "habit" memory, at the expense of hippocampus-dependent "cognitive" memory. Based on these data, a model is proposed which states that the impact of stress on the relative use of multiple memory systems is due to (i) differential effects of hormones and neurotransmitters that are released during stressful events on hippocampal and dorsal striatal memory systems, thus changing the relative strength of and the interactions between these systems, and (ii) a modulatory influence of the amygdala which biases learning toward dorsal striatum-based memory after stress. This shift to habit memory after stress can be adaptive with respect to current performance but might contribute to psychopathology in vulnerable individuals. Copyright © 2013 Wiley Periodicals, Inc.

  17. Knowledge and intelligent computing system in medicine.

    Science.gov (United States)

    Pandey, Babita; Mishra, R B

    2009-03-01

    Knowledge-based systems (KBS) and intelligent computing systems have been used in medical planning, diagnosis, and treatment. KBS comprises rule-based reasoning (RBR), case-based reasoning (CBR), and model-based reasoning (MBR), whereas intelligent computing methods (ICM) encompass genetic algorithms (GA), artificial neural networks (ANN), fuzzy logic (FL), and others. Combinations of methods within KBS include CBR-RBR, CBR-MBR, and RBR-CBR-MBR, and combinations within ICM include ANN-GA, fuzzy-ANN, fuzzy-GA, and fuzzy-ANN-GA. Combinations of methods across KBS and ICM include RBR-ANN, CBR-ANN, RBR-CBR-ANN, fuzzy-RBR, fuzzy-CBR, and fuzzy-CBR-ANN. In this paper, we study 185 different singular and combined methods applied to the medical domain from the mid-1970s to 2008. The study is presented in tabular form, showing the methods and their salient features, processes, and application areas in the medical domain (diagnosis, treatment, and planning). It is observed that most of the methods are used in medical diagnosis, very few for planning, and a moderate number in treatment. The study and its presentation in this context should be helpful for novice researchers in the area of medical expert systems.

  18. An Applet-based Anonymous Distributed Computing System.

    Science.gov (United States)

    Finkel, David; Wills, Craig E.; Ciaraldi, Michael J.; Amorin, Kevin; Covati, Adam; Lee, Michael

    2001-01-01

    Defines anonymous distributed computing systems and focuses on the specifics of a Java, applet-based approach for large-scale, anonymous, distributed computing on the Internet. Explains the possibility of a large number of computers participating in a single computation and describes a test of the functionality of the system. (Author/LRW)

  19. Interface methods for using intranet portal organizational memory information system.

    Science.gov (United States)

    Ji, Yong Gu; Salvendy, Gavriel

    2004-12-01

    In this paper, an intranet portal is considered as an information infrastructure (organizational memory information system, OMIS) supporting organizational learning. The properties and hierarchical structure of information and knowledge in an intranet portal OMIS were identified as a problem for the navigation tools of an intranet portal interface. The problem relates to the navigation and retrieval functions of an intranet portal OMIS and is expected to adversely affect user performance, satisfaction, and usefulness. To address the problem, a conceptual model for the navigation tools of an intranet portal interface was proposed, and an experiment using a crossover design was conducted with 10 participants. In the experiment, a separate access method (tabbed tree tool) was compared to a unified access method (single tree tool). The results indicate that each information/knowledge repository for which a user has different structural knowledge should be handled with separate access, to increase user satisfaction and the usefulness of the OMIS and to improve user performance in navigation.

  20. Transactive memory system links work team characteristics and performance.

    Science.gov (United States)

    Zhang, Zhi-Xue; Hempel, Paul S; Han, Yu-Lan; Tjosvold, Dean

    2007-11-01

    Teamwork and coordination of expertise among team members with different backgrounds are increasingly recognized as important for team effectiveness. Recently, researchers have examined how team members rely on a transactive memory system (TMS; D. M. Wegner, 1987) to share their distributed knowledge and expertise. To establish the ecological validity and generality of TMS research findings, this study sampled 104 work teams from a variety of organizational settings in China and examined the relationships between team characteristics, TMS, and team performance. The results suggest that task interdependence, cooperative goal interdependence, and support for innovation are positively related to work teams' TMS and that TMS is related to team performance; moreover, structural equation analysis indicates that TMS mediates the team characteristics-performance links. Findings have implications both for team leaders seeking to manage their work teams effectively and for team members seeking to improve their team performance. (c) 2007 APA

  1. BLACKCOMB2: Hardware-software co-design for non-volatile memory in exascale systems

    Energy Technology Data Exchange (ETDEWEB)

    Mudge, Trevor [Univ. of Michigan, Ann Arbor, MI (United States)

    2017-12-15

    This work was part of a larger project, Blackcomb2, centered at Oak Ridge National Laboratory (Jeff Vetter, PI), to investigate the opportunities for replacing or supplementing DRAM main memory with non-volatile memory (NVmemory) in exascale memory systems. The goal was to reduce the energy consumed by future supercomputer memory systems and to improve their resiliency. Building on the accomplishments of the original Blackcomb project, funded in 2010, the goal for Blackcomb2 was to identify, evaluate, and optimize the most promising emerging memory, architecture, hardware, and software technologies, which are essential to provide the necessary memory capacity, performance, resilience, and energy efficiency in exascale systems. Capacity and energy are the key drivers.

  2. Non-volatile memory for checkpoint storage

    Science.gov (United States)

    Blumrich, Matthias A.; Chen, Dong; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan; Heidelberger, Philip; Jeanson, Mark J.; Kopcsay, Gerard V.; Ohmacht, Martin; Takken, Todd E.

    2014-07-22

    A system, method and computer program product for supporting system initiated checkpoints in high performance parallel computing systems and storing of checkpoint data to a non-volatile memory storage device. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card.

  3. Programming guidelines for computer systems of NPPs

    International Nuclear Information System (INIS)

    Suresh babu, R.M.; Mahapatra, U.

    1999-09-01

    Software quality is assured by systematic development and adherence to established standards. All national and international software quality standards have made it mandatory for the software development organisation to produce programming guidelines as part of software documentation. This document contains a set of programming guidelines for detailed design and coding phases of software development cycle. These guidelines help to improve software quality by increasing visibility, verifiability, testability and maintainability. This can be used organisation-wide for various computer systems being developed for our NPPs. This also serves as a guide for reviewers. (author)

  4. Memory conformity affects inaccurate memories more than accurate memories.

    Science.gov (United States)

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments, groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to these responses, participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate: inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible they are to memory distortion.

  5. Gilgamesh: A Multithreaded Processor-In-Memory Architecture for Petaflops Computing

    Science.gov (United States)

    Sterling, T. L.; Zima, H. P.

    2002-01-01

    Processor-in-Memory (PIM) architectures avoid the von Neumann bottleneck in conventional machines by integrating high-density DRAM and CMOS logic on the same chip. Parallel systems based on this new technology are expected to provide higher scalability, adaptability, robustness, fault tolerance and lower power consumption than current MPPs or commodity clusters. In this paper we describe the design of Gilgamesh, a PIM-based massively parallel architecture, and elements of its execution model. Gilgamesh extends existing PIM capabilities by incorporating advanced mechanisms for virtualizing tasks and data and providing adaptive resource management for load balancing and latency tolerance. The Gilgamesh execution model is based on macroservers, a middleware layer which supports object-based runtime management of data and threads allowing explicit and dynamic control of locality and load balancing. The paper concludes with a discussion of related research activities and an outlook to future work.

  6. Spectrometer user interface to computer systems

    International Nuclear Information System (INIS)

    Salmon, L.; Davies, M.; Fry, F.A.; Venn, J.B.

    1979-01-01

    A computer system for use in radiation spectrometry should be designed around the needs and comprehension of the user and his operating environment. To this end, the functions of the system should be built in a modular and independent fashion such that they can be joined to the back end of an appropriate user interface. The point that this interface should be designed rather than just allowed to evolve is illustrated by reference to four related computer systems of differing complexity and function. The physical user interfaces in all cases are keyboard terminals, and the virtues and otherwise of these devices are discussed and compared with others. The language interface needs to satisfy a number of requirements, often conflicting. Among these, simplicity and speed of operation compete with flexibility and scope. Both experienced and novice users need to be considered, and any individual's needs may vary from naive to complex. To be efficient and resilient, the implementation must use an operating system, but the user needs to be protected from its complex and unfamiliar syntax. At the same time the interface must allow the user access to all services appropriate to his needs. The user must also receive a sense of privacy in a multi-user system. The interface itself must be stable and exhibit continuity between implementations. Some of these conflicting needs have been overcome by the SABRE interface, with languages operating at several levels. The foundation is a simple semi-mnemonic command language that activates individual and independent functions. The commands can be used with positional parameters or in an interactive dialogue, the precise nature of which depends upon the operating environment and the user's experience. A command procedure or macro language allows combinations of commands with conditional branching and arithmetic features. Thus complex but repetitive operations are easily performed.

  7. Procedural Memory: Computer Learning in Control Subjects and in Parkinson’s Disease Patients

    Directory of Open Access Journals (Sweden)

    C. Thomas-Antérion

    1996-01-01

    Full Text Available We used perceptual motor tasks involving the learning of mouse control by looking at a Macintosh computer screen. We studied 90 control subjects aged between sixteen and seventy-five years. There was a significant time difference between the age groups, but improvement was the same for all subjects. We also studied 24 patients with Parkinson's disease (PD). We observed an influence of age and also of educational level. The PD patients had difficulties of learning in all tests, but they did not show differences in time when compared to the control group in the first learning session (Student's t-test). They learned two or four and a half times less well than the control group. In the first test, they had some difficulty in initiating the procedure and learned eight times less well than the control group. Performances seemed to be heterogeneous: patients with only tremor (seven) and patients without treatment (five) performed better than others but learned less. Success in procedural tasks for the PD group seemed to depend on the capacity to initiate the response and not on the development of an accurate strategy. Many questions still remain unanswered, and we have to study different kinds of implicit memory tasks to differentiate performance in control and basal ganglia groups.

  8. The Associative Memory System Infrastructure of the ATLAS Fast Tracker

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00525014; The ATLAS collaboration

    2016-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS experiment silicon tracker. The AM is the heart of FTK and is mainly based on ASICs (AM chips) designed specifically to execute pattern matching with a high degree of parallelism. It finds track candidates at low resolution that serve as seeds for full-resolution track fitting. The AM system implementation is based on a collection of boards, named “Serial Link Processor” (AMBSLP), built around a network of 900 2 Gb/s serial links to sustain the huge data traffic. The AMBSLP has high power consumption (~250 W), and the AM system needs custom power and cooling. This presentation reports on the integration of the AMBSLP inside FTK, the infrastructure needed to run and cool a system with many AMBSLPs in the same crate, and the performance of the produced prototypes tested in the global FTK integration, an important milestone to be satisfie...

  9. LSG: An External-Memory Tool to Compute String Graphs for Next-Generation Sequencing Data Assembly.

    Science.gov (United States)

    Bonizzoni, Paola; Vedova, Gianluca Della; Pirola, Yuri; Previtali, Marco; Rizzi, Raffaella

    2016-03-01

    The large amount of short read data that has to be assembled in future applications, such as metagenomics or cancer genomics, strongly motivates the investigation of disk-based approaches to indexing next-generation sequencing (NGS) data. Positive results in this direction stimulate the investigation of efficient external-memory algorithms for de novo assembly from NGS data. Our article is also motivated by the open problem of designing a space-efficient algorithm to compute a string graph using an indexing procedure based on the Burrows-Wheeler transform (BWT). We have developed a disk-based algorithm for computing string graphs in external memory: the light string graph (LSG). LSG relies on a new representation of the FM-index that is exploited to keep the main memory requirement independent of the size of the data set. Moreover, we have developed a pipeline for genome assembly from NGS data that integrates LSG with the assembly step of SGA (Simpson and Durbin, 2012), a state-of-the-art string-graph-based assembler, and uses BEETL for indexing the input data. LSG is open source software and is available online. We have analyzed our implementation on an 875-million-read whole-genome dataset, on which LSG built the string graph using only 1 GB of main memory (reducing memory occupation by a factor of 50 with respect to SGA), while requiring slightly more than twice the time of SGA. The analysis of the entire pipeline shows an important decrease in memory usage with only a moderate increase in running time.
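    The FM-index machinery that LSG builds on can be illustrated with a minimal in-memory sketch of the Burrows-Wheeler transform and backward search; this is only the underlying indexing principle, not LSG's disk-based representation, and all names here are ours:

```python
# Minimal Burrows-Wheeler transform and FM-index backward search.
# This toy version keeps everything in memory; LSG's contribution is
# performing equivalent queries with main memory independent of input size.

def bwt(text):
    """Burrows-Wheeler transform of text (must end with sentinel '$')."""
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def fm_index(bwt_str):
    """Precompute C (first-column offsets) and Occ (prefix rank tables)."""
    counts = {}
    occ = {c: [0] for c in set(bwt_str)}
    for ch in bwt_str:
        for c in occ:
            occ[c].append(occ[c][-1] + (1 if c == ch else 0))
        counts[ch] = counts.get(ch, 0) + 1
    c_table, total = {}, 0
    for ch in sorted(counts):
        c_table[ch] = total
        total += counts[ch]
    return c_table, occ

def count_occurrences(bwt_str, c_table, occ, pattern):
    """Backward search: count occurrences of pattern in the original text."""
    lo, hi = 0, len(bwt_str)
    for ch in reversed(pattern):        # extend the match right-to-left
        if ch not in c_table:
            return 0
        lo = c_table[ch] + occ[ch][lo]
        hi = c_table[ch] + occ[ch][hi]
        if lo >= hi:
            return 0
    return hi - lo

text = "abracadabra$"
b = bwt(text)
c_table, occ = fm_index(b)
print(count_occurrences(b, c_table, occ, "abra"))  # prints 2
```

    Overlap detection for string-graph construction reduces to many such backward-search queries over the read set, which is why a compact, query-efficient index representation is the crux of the problem.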

  10. How much of reinforcement learning is working memory, not reinforcement learning? A behavioral, computational, and neurogenetic analysis.

    Science.gov (United States)

    Collins, Anne G E; Frank, Michael J

    2012-04-01

    Instrumental learning involves corticostriatal circuitry and the dopaminergic system. This system is typically modeled in the reinforcement learning (RL) framework by incrementally accumulating reward values of states and actions. However, human learning also implicates prefrontal cortical mechanisms involved in higher level cognitive functions. The interaction of these systems remains poorly understood, and models of human behavior often ignore working memory (WM) and therefore incorrectly assign behavioral variance to the RL system. Here we designed a task that highlights the profound entanglement of these two processes, even in simple learning problems. By systematically varying the size of the learning problem and delay between stimulus repetitions, we separately extracted WM-specific effects of load and delay on learning. We propose a new computational model that accounts for the dynamic integration of RL and WM processes observed in subjects' behavior. Incorporating capacity-limited WM into the model allowed us to capture behavioral variance that could not be captured in a pure RL framework even if we (implausibly) allowed separate RL systems for each set size. The WM component also allowed for a more reasonable estimation of a single RL process. Finally, we report effects of two genetic polymorphisms having relative specificity for prefrontal and basal ganglia functions. Whereas the COMT gene coding for catechol-O-methyl transferase selectively influenced model estimates of WM capacity, the GPR6 gene coding for G-protein-coupled receptor 6 influenced the RL learning rate. Thus, this study allowed us to specify distinct influences of the high-level and low-level cognitive functions on instrumental learning, beyond the possibilities offered by simple RL models. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
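    The kind of hybrid model described — a slow incremental RL learner mixed with a fast, capacity-limited, decaying working-memory store — can be sketched as follows. The mixing rule, parameter values, and class name are illustrative assumptions, not the authors' exact specification:

```python
import math
import random

class RLWMModel:
    """Toy mixture of incremental Q-learning and a capacity-limited,
    decaying working-memory store, in the spirit of the model described.
    Parameters and the mixing rule are illustrative assumptions."""

    def __init__(self, n_actions, set_size, capacity=3, alpha=0.1,
                 wm_decay=0.9, beta=5.0):
        self.n_actions = n_actions
        self.alpha = alpha            # RL learning rate
        self.wm_decay = wm_decay      # WM forgetting per trial
        self.beta = beta              # softmax inverse temperature
        # WM dominates when the learning problem fits within capacity
        self.wm_weight = min(1.0, capacity / set_size)
        self.q = {}                   # slow, incremental RL values
        self.wm = {}                  # fast, decaying WM values

    def _values(self, store, stimulus):
        return store.setdefault(
            stimulus, [1.0 / self.n_actions] * self.n_actions)

    def choose(self, stimulus):
        q = self._values(self.q, stimulus)
        w = self._values(self.wm, stimulus)
        # Policy is a weighted mixture of the two systems
        mixed = [self.wm_weight * wv + (1 - self.wm_weight) * qv
                 for qv, wv in zip(q, w)]
        exp_vals = [math.exp(self.beta * v) for v in mixed]
        r = random.random() * sum(exp_vals)
        for action, ev in enumerate(exp_vals):
            r -= ev
            if r <= 0:
                return action
        return self.n_actions - 1

    def update(self, stimulus, action, reward):
        # RL: slow incremental update of the chosen action's value
        q = self._values(self.q, stimulus)
        q[action] += self.alpha * (reward - q[action])
        # WM: decay all other stimuli toward uniform (delay effect) ...
        for s, vals in self.wm.items():
            if s != stimulus:
                self.wm[s] = [v * self.wm_decay
                              + (1 - self.wm_decay) / self.n_actions
                              for v in vals]
        # ... and store the last outcome for this stimulus directly
        w = self._values(self.wm, stimulus)
        w[action] = reward
```

    The two manipulations in the paper map directly onto this sketch: increasing `set_size` lowers `wm_weight` (load effect), and more intervening trials between repetitions mean more decay steps (delay effect), so behavior falls back on the slower RL values.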

  11. Protein Degradation by Ubiquitin-Proteasome System in Formation and Labilization of Contextual Conditioning Memory

    Science.gov (United States)

    Fustiñana, María Sol; de la Fuente, Verónica; Federman, Noel; Freudenthal, Ramiro; Romano, Arturo

    2014-01-01

    The ubiquitin-proteasome system (UPS) of protein degradation has been evaluated in different forms of neural plasticity and memory. The role of the UPS in such processes is controversial. Several results support the idea that the activation of this system during memory consolidation is necessary to overcome negative constraints on plasticity. In this…

  12. Organizational memory and the completeness of process modeling in ERP systems

    NARCIS (Netherlands)

    van Stijn, E.J.; Wensley, A.K.P.

    2001-01-01

    Enterprise resource planning (ERP) systems not only have a broad functional scope promising to support many different business processes, they also embed many different aspects of the company’s organizational memory. Disparities can exist between those memory contents in the ERP system and related

  13. A single-system model predicts recognition memory and repetition priming in amnesia

    NARCIS (Netherlands)

    Berry, C.J.; Kessels, R.P.C.; Wester, A.J.; Shanks, D.R.

    2014-01-01

    We challenge the claim that there are distinct neural systems for explicit and implicit memory by demonstrating that a formal single-system model predicts the pattern of recognition memory (explicit) and repetition priming (implicit) in amnesia. In the current investigation, human participants with

  14. Railroad Classification Yard Technology Manual: Volume II : Yard Computer Systems

    Science.gov (United States)

    1981-08-01

    This volume (Volume II) of the Railroad Classification Yard Technology Manual documents the railroad classification yard computer systems methodology. The subjects covered are: functional description of process control and inventory computer systems,...

  15. Grid Computing BOINC Redesign Mindmap with incentive system (gamification)

    OpenAIRE

    Kitchen, Kris

    2016-01-01

    Grid Computing BOINC Redesign Mindmap with incentive system (gamification) this is a PDF viewable of https://figshare.com/articles/Grid_Computing_BOINC_Redesign_Mindmap_with_incentive_system_gamification_/1265350

  16. A computer-based spectrometry system for assessment of body radioactivity

    International Nuclear Information System (INIS)

    Venn, J.B.

    1985-01-01

    This paper describes a PDP-11 computer system operating under RT-11 for the acquisition and processing of pulse height spectra in the measurement of body radioactivity. SABRA (system for the assessment of body radioactivity) provides control of multiple detection systems from visual display consoles by means of a command language. A wide range of facilities is available for the display, processing and storage of acquired spectra and complex operations may be pre-programmed by means of the SABRE MACRO language. The hardware includes a CAMAC interface to the detection systems, disc cartridge drives for mass storage of data and programs, and data-links to other computers. The software is written in assembler language and includes special features for the dynamic allocation of computer memory and for safeguarding acquired data. (orig.)

  17. Computational Intelligence Techniques for Tactile Sensing Systems

    Science.gov (United States)

    Gastaldo, Paolo; Pinna, Luigi; Seminara, Lucia; Valle, Maurizio; Zunino, Rodolfo

    2014-01-01

    Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach for the classification of touch modalities; its main results consist in providing a procedure to enhance system generalization ability and architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted, and confirmed the validity of the approach. PMID:24949646

  18. Computer Information System For Nuclear Medicine

    Science.gov (United States)

    Cahill, P. T.; Knowles, R. J.; Tsen, O.

    1983-12-01

    To meet the complex needs of a nuclear medicine division serving an 1100-bed hospital, a computer information system has been developed in sequential phases. This database management system is based on a time-shared minicomputer linked to a broadband communications network. The database contains information on patient histories, billing, types of procedures, doses of radiopharmaceuticals, times of study, scanning equipment used, and the technician performing the procedure. These patient records are cycled through three levels of storage: (a) an active file of 100 studies for those patients currently scheduled, (b) a temporary storage level of 1000 studies, and (c) an archival level of 10,000 studies containing selected information. This information can be merged with reports, and various statistical analyses are possible. The first phase has been in operation for well over a year. The second phase is an upgrade of the size of the various storage levels by a factor of ten.
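    The three-level cycling described — active file, temporary storage, archival level with only selected information retained — can be sketched as a simple tiered store. Capacities, field names, and which fields survive archiving are illustrative assumptions:

```python
from collections import OrderedDict

class TieredStudyStore:
    """Toy three-level record store cycling studies from an active file
    through temporary storage to an archival level, as in the system
    described. Capacities and the 'selected information' retained at the
    archival level are illustrative assumptions."""

    def __init__(self, active_cap=100, temp_cap=1000, archive_cap=10000):
        self.active = OrderedDict()     # currently scheduled studies
        self.temporary = OrderedDict()  # recently completed studies
        self.archive = OrderedDict()    # selected fields only
        self.caps = (active_cap, temp_cap, archive_cap)

    def add_study(self, study_id, record):
        """Insert a new study; overflow cascades down the levels."""
        self.active[study_id] = record
        if len(self.active) > self.caps[0]:
            sid, rec = self.active.popitem(last=False)   # demote oldest
            self.temporary[sid] = rec
        if len(self.temporary) > self.caps[1]:
            sid, rec = self.temporary.popitem(last=False)
            # The archival level keeps only selected information
            self.archive[sid] = {k: rec[k]
                                 for k in ("patient", "procedure")
                                 if k in rec}
        if len(self.archive) > self.caps[2]:
            self.archive.popitem(last=False)  # drop oldest archived record

    def lookup(self, study_id):
        """Search levels from most to least active."""
        for level in (self.active, self.temporary, self.archive):
            if study_id in level:
                return level[study_id]
        return None
```

    With the capacities from the abstract (100 / 1000 / 10,000), a record's full detail is available while current or recent, and only a summary survives at the archival level.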

  19. Computational intelligence techniques for tactile sensing systems.

    Science.gov (United States)

    Gastaldo, Paolo; Pinna, Luigi; Seminara, Lucia; Valle, Maurizio; Zunino, Rodolfo

    2014-06-19

    Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach for the classification of touch modalities; its main results consist in providing a procedure to enhance system generalization ability and architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted, and confirmed the validity of the approach.

  20. Computational Intelligence Techniques for Tactile Sensing Systems

    Directory of Open Access Journals (Sweden)

    Paolo Gastaldo

    2014-06-01

    Full Text Available Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach for the classification of touch modalities; its main results consist in providing a procedure to enhance system generalization ability and architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted, and confirmed the validity of the approach.